Application of the QuO Quality-of-Service Framework to a Distributed Video Application


David A. Karr, Craig Rodrigues, Joseph P. Loyall, and Richard E. Schantz
BBN Technologies
Cambridge, MA, USA

Yamuna Krishnamurthy and Irfan Pyarali
Department of Computer Science, Washington University
St. Louis, MO, USA

Douglas C. Schmidt
Electrical & Computer Engineering Department, University of California
Irvine, CA, USA

Abstract

Adaptation of distributed software to maintain the best possible application performance in the face of changes in available resources is an increasingly important and complex problem. In this paper, we discuss the application of the QuO adaptive middleware framework and the CORBA A/V Streaming Service to the development of real-time embedded applications. We demonstrate a standards-based middleware platform for developing adaptive applications that are better architected and easier to modify and that can adapt to changes in resource availability to meet QoS requirements. These are presented in the context of an Unmanned Aerial Vehicle (UAV) video distribution application. The UAV application is developed using QuO and the A/V Streaming Service, and uses adaptive behavior to meet timeliness requirements in the face of restrictions in processing power and network bandwidth. We also present some experimental results we have gathered for this application.

1 Introduction

Middleware for Distributed Object Computing (DOC) is an emerging and increasingly accepted tool for development and implementation of a wide variety of software applications in a wide variety of environments. The application of DOC middleware to real-time, embedded software (RES) has resulted in the emergence of middleware support for the strict quality of service (QoS) requirements of RES use cases. For example, the Minimum CORBA specification [OMG00], the Real-time CORBA 1.0 specification [OMG00], and the Real-Time Specification for Java (RTSJ) [Mic] are examples of extensions and services that have grown out of a need to support embedded and real-time applications. Adaptation of distributed software to maintain the best possible application performance in the face of changes in available resources is an increasingly important and complex problem for RES applications. We have been developing QuO, a middleware framework supporting adaptive distributed-object applications.

In this paper, we apply QuO to the development of an Unmanned Aerial Vehicle (UAV) video distribution application, in which an MPEG video flow adapts to meet its mission QoS requirements, such as timeliness. We discuss three distinct behaviors that adapt to restrictions in processing power and network bandwidth: reduction of the video flow volume by dropping frames, relocation of a software component for load balancing, and bandwidth reservation to guarantee a level of network bandwidth. We have developed a prototype application which uses QuO, the TAO real-time ORB, and the TAO A/V Streaming Service to establish and adaptively control video transmission from a live video camera via a distribution process to viewers on computer displays. The TAO A/V Streaming Service is an implementation of the CORBA A/V Streams specification [OMG98], which grew out of the need to transmit multimedia data among distributed objects.

The application demonstrates a standards-based middleware platform for RES applications, and shows that adaptation controlled by a superimposed QuO contract can effectively regulate performance problems in the prototype that are induced by heavy processor load on the host of a critical component. Our experience also shows that the use of the QuO framework, in contrast to typical ad-hoc performance-optimization techniques, which become entangled with basic functionality, leads to a form of aspect-oriented programming with a beneficial separation of concerns. This results in code that is clearer and easier to modify, and promotes re-usability of software under different sets of requirements.

The rest of this paper is organized as follows. Section 2 provides a brief introduction to QuO. Section 3 describes the UAV implementation. Section 4 describes the way in which the UAV prototype can adapt to resource constraints using QuO while still delivering its live video feed in a timely manner using the A/V Streaming Service. Section 5 describes some of the domain-specific issues that we encountered in implementing the adaptive strategies, which are illustrative of corresponding issues that we would expect to encounter in other application domains. In Section 6, we present empirical results showing that while system loads can dramatically degrade software performance metrics of the application (in this case, timeliness), a QuO-controlled adaptation restores the metric to the needed value. In this section we also compare our experiences with modifying adaptive behavior through QuO with the more difficult task of modifying ad-hoc adaptations entangled with functional code. Section 7 discusses related work. Section 8 projects future work on this research. Finally, Section 9 presents some concluding remarks.

2 An Adaptive Framework for DOC

2.1 The Benefits of Adaptation

Except in systems whose deployment environment is extremely stable, operational DOC systems will encounter more-or-less temporary conditions that impinge on their available computing resources (e.g., processing power and network bandwidth) and have a consequent effect on the ability of the system to deliver the quality of service needed by users. For example, other applications may require some resources, or hardware may fail, or the network may even be reconfigured.

It is also possible in many cases that the desired quality of service may change depending on the usage patterns at any given time; e.g., some use cases may require a great volume of data (high precision) even if this comes at the expense of latency (timeliness); other use cases may reverse these priorities.

It is highly desirable, therefore, that software systems be able to adapt to these varying resources and needs. Because of this, many existing systems already incorporate specialized adaptations (typically ad-hoc and local to a subsystem) to adapt to at least some of these variations. In other cases, adaptive behaviors are encoded in more general-purpose software; for example, the TCP protocol will adjust its data rate upward as bandwidth becomes available to support transmission of more data, and downward when not enough bandwidth is available to transmit data at the current rate. Such adaptations tend to be “one size fits all,” however, and prove to be poor behaviors for certain applications.

2.2 The Benefits of a Framework

Adaptation is made more complicated by application-level issues. For example, consider a video-editing system in which the user can fast-forward to a desired section of the video and then copy that section frame by frame to a new file. During the fast-forward mode, the most important performance characteristic may be that the position in the video (measured in seconds since the beginning) advance at a constant rate. The number of frames actually transmitted during any given period of real time is less critical, provided the user is shown enough frames to detect what scene or action is being shown. Once the “copy” mode is entered, however, it is critical to copy every frame, even if it is not possible to do so at the normal speed of motion of the video.

Protocols such as TCP and UDP can relatively easily be selected and configured to adapt within a reasonable set of parameters in either one of the two application modes described above. The difficulty is that this application must be able to switch between one mode and the other during its execution. This complicates the application’s interface to network facilities; the code that implements the “fast forward” and “copy” functions must be tangled up with code to achieve the needed QoS at any instant in the communications protocols. The complexity of this greatly increases when other considerations (e.g., running in different computing environments, or other user preferences) are taken into account. It is therefore desirable to separate the concerns of the program’s functional specification and these condition-dependent optimizations of QoS. Separation of concerns is the primary objective of Aspect-Oriented Programming [KIL96].

The use of an appropriate framework to handle these QoS concerns alongside the functional code created by the application developer enables a separation of concerns. As we will see, this results in code that is much clearer, is developed at a greater speed, and is easily modifiable to meet new performance requirements.

2.3 The Benefits of QuO

Quality Objects (QuO) is a distributed object computing (DOC) framework designed to develop distributed applications that can specify (1) their QoS requirements, (2) the system elements that must be monitored and controlled to measure and provide QoS, and (3) the behavior for adapting to QoS variations that occur at run-time. By providing these features, QuO opens up distributed object implementations [Kic96] so that the implementation strategies of the application’s functions can be controlled in an adaptive manner.

In a client-to-object logical method call over a typical DOC toolkit, a client makes a logical method call to a remote object. In a traditional CORBA application, the client does this by invoking the method on a local ORB proxy. The proxy marshals the argument data, which the local ORB then transmits across the network. The ORB on the server side receives the message call, and a remote proxy (i.e., a skeleton) then unmarshals the data and delivers it to the remote servant. Upon method return, the process is reversed.

A method call in the QuO framework is a superset of a traditional DOC call, and includes the following components:

- Contracts specify the level of service desired by a client, the level of service an object expects to provide, operating regions indicating possible measured QoS, and actions to take when the level of QoS changes.

- Delegates act as local proxies for remote objects. Each delegate provides an interface similar to that of the remote object stub, but adds locally adaptive behavior based upon the current state of QoS in the system, as measured by the contract.

- System condition objects provide interfaces to resources, mechanisms, objects, and ORBs in the system that need to be measured and controlled by QuO contracts.
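To make the delegate concept concrete, here is a minimal C++ sketch of the pattern. All class names and interfaces below are hypothetical stand-ins invented for illustration (QuO's actual delegates are generated by its toolkit, and its contract API differs); the point is only that the delegate mirrors the stub's interface while routing each call according to the contract's current region.

    #include <memory>

    // Hypothetical stand-ins for QuO entities, for illustration only.
    enum class Region { NormalLoad, HighLoad, ExcessLoad };

    struct Contract {                     // tracks the current operating region
        Region current = Region::NormalLoad;
    };

    struct FrameSink {                    // interface of the remote object stub
        virtual void send_frame(const char* data, char type) = 0;  // type: 'I', 'P', 'B'
        virtual ~FrameSink() = default;
    };

    // The delegate exposes the same interface as the stub, but adds
    // region-dependent behavior: under load it drops B-frames, and under
    // severe load it forwards only I-frames.
    class FrameSinkDelegate : public FrameSink {
    public:
        FrameSinkDelegate(std::shared_ptr<FrameSink> stub,
                          std::shared_ptr<Contract> contract)
            : stub_(std::move(stub)), contract_(std::move(contract)) {}

        void send_frame(const char* data, char type) override {
            switch (contract_->current) {
            case Region::NormalLoad: stub_->send_frame(data, type); break;
            case Region::HighLoad:   if (type != 'B') stub_->send_frame(data, type); break;
            case Region::ExcessLoad: if (type == 'I') stub_->send_frame(data, type); break;
            }
        }
    private:
        std::shared_ptr<FrameSink> stub_;
        std::shared_ptr<Contract> contract_;
    };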

In addition, QuO applications may use property managers and specialized ORBs. Property managers are responsible for managing a given QoS property (such as the availability property via replication management [CRS98] or controlled throughput via RSVP reservation management [BBN98]) for a set of QuO-enabled server objects on behalf of the QuO clients using those server objects. In some cases, the managed property requires mechanisms at lower levels in the protocol stack. To support this, QuO includes a gateway mechanism [SZK99], which enables special purpose transport protocols and adaptation below the ORB.

In addition to traditional application developers (who develop the client and object implementations) and mechanism developers (who develop the ORBs, property managers, and other distributed resource control infrastructure), QuO applications involve another group of developers, namely QoS developers. QoS developers are responsible for defining QuO contracts, system condition objects, callback mechanisms, and object delegate behavior. To support the added role of QoS developer, we are developing a QuO toolkit, described in earlier papers [LBS98, LSZB98, VZL98], and consisting of the following components:

- Quality Description Languages (QDL) for describing the QoS aspects of QuO applications, such as QoS contracts (specified by the Contract Description Language, CDL) and the adaptive behavior of objects and delegates (specified by the Structure Description Language, SDL). CDL and SDL are described in [LBS98, LSZB98].

- The QuO runtime kernel, which coordinates evaluation of contracts and monitoring of system condition objects. The QuO kernel and its runtime architecture are described in detail in [VZL98].

- Code generators that weave together QDL descriptions, the QuO kernel code, and client code to produce a single application program. Runtime integration of QDL specifications is discussed in [LBS98].

The QuO contract offers a number of powerful abstractions for programming QoS in a DOC application. These include regions, which abstract the notion of regions of operation which may depend on momentary user preferences or on the condition of the computing environment (as reflected by the system conditions). The QuO contract may also contain states, an abstraction on which one can program a state machine whose inputs are the changing system conditions.

We currently have implementations of QuO to support CORBA applications over C++ and Java, as well as Java RMI applications.

3 The Unmanned Air Vehicle application

As part of an activity for the US Navy at the Naval Surface Warfare Center in Dahlgren, Virginia, USA, we have been developing a prototype concept application for use with an Unmanned Air Vehicle (UAV). A UAV is a remote-controlled aircraft that is launched in order to obtain a view of an engagement, performing such functions as spotting enemy movements or locating targets. A UAV can receive remote-control commands from a ship in order to perform such actions as changing its direction of flight or directing a laser at a target. Several UAVs might be active during an engagement, as depicted in Figure 1.

Figure 1: Artist’s depiction of UAVs

The prototype supports the UAV concept of operation by disseminating data from a UAV throughout a remotely located ship. As shown in Figure 2, there are several steps to this process:

1. Video feed from off-board source (UAV).

2. Distributor sends video to hosts on ship’s network.

3. Viewers’ hosts receive video and display it.

4. Users analyze the data and (optionally) send commands to the UAV to control it.

Our prototype simulates the first three of these steps. The command phase of the fourth step is observed as a requirement to be able to control the timeliness of data displayed on a user’s video monitor: if the data is too stale, it will not represent the current situation of the physical UAV and the scene it is observing, and the user cannot control the UAV appropriately. Hence, for example, for such uses it is not acceptable to suspend the display during a period of network congestion and resume the display from the same point in the video flow when bandwidth is restored.


Figure 2: Sequence of events in typical UAV operation

Figure 3: UAV Prototype Architecture


3.1 Prototype architecture

Figure 3 illustrates the initial architecture of the demonstration. It is a three-stage pipeline, with an off-board UAV sending MPEG video to an on-board video distribution process. The off-board UAV is simulated by a process that continually reads an MPEG file and sends it to the distribution process. The video distribution process sends the video frames to video display processes throughout the ship, each with their own mission requirements.

All remote method calls in this architecture are made via TAO, the real-time ORB developed by the Distributed Object Computing Group at Washington University in St. Louis [SLM98]. However, in early versions of the prototype, ad-hoc TCP connections were made between the processes in order to transmit video data. This made reconfiguration of the system processes (e.g., changing the number of processes or their locations) difficult, as it was necessary for each process to know the specific hosts and ports that would be used to establish each connection. In current versions, we have replaced this flow connection setup between the various processes with TAO’s implementation of the CORBA A/V Streaming Service stream setup. This service is discussed further in Section 3.2.

QuO adaptation is used as part of an overall system concept to provide load-invariant performance. Some of the video displays located throughout the ship must display the current images observed by the UAV with acceptable fidelity, regardless of the network and host load, in order for the shipboard operators to achieve their missions (e.g., flying the UAV or tracking a target). There are several ways to achieve this goal by appropriate adaptations to various conditions of the system. Among the possible adaptive strategies are:

- Send a reduced amount of data, for example by dropping frames of the video. The resultant video appears as if the camera had simply captured fewer images per second, without affecting the speed at which objects in the scene move.

- Move the distributor from an overloaded host to a different host where more performance is available.

- Use a bandwidth reservation protocol to ensure that the distributor is able to send the necessary data to the viewers through the network, even when the network is congested.

We discuss these adaptations in more detail in Section 4.

3.2 A/V Streams transport

As described in Section 3.1, we are connecting the components of the UAV example using the TAO A/V Streaming Service [MSS99]. This is an implementation of the CORBA A/V Streaming Service [Obj97], which is intended to support multimedia applications, such as video-on-demand. The TAO A/V Streaming Service is layered over TAO and ACE [SS94], which handle flow control processing and media transfer, respectively.

The CORBA A/V Streaming Service controls and manages the creation of streams between two or more media devices. Although the original intent of this service was to transmit audio and video streams, it can be used to send any type of data. Applications control and manage A/V streams using the A/V Streaming Service. Streams are terminated by endpoints that can be distributed across networks and are controlled by a stream control interface, which manages the behavior of each stream.

The CORBA A/V Streaming Service combines (1) the flexibility and portability of the CORBA object-oriented programming model with (2) the efficiency of lower-level transport protocols. The stream connection establishment and management is performed via conventional CORBA operations. In contrast, data transfer can be performed directly via more efficient lower-level protocols, such as ATM, UDP, TCP, and RTP. This separation of concerns addresses the needs of developers who want to leverage the language and platform flexibility of CORBA, without incurring the overhead of transferring data via the standard CORBA inter-operable inter-ORB protocol (IIOP) operation path through the ORB.
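As a concrete illustration, the following C++ fragment sketches point-to-point stream establishment in this style. The bind_devs operation on StreamCtrl is defined by the CORBA A/V specification, but the header path, the way the device references are obtained, and other details here are assumptions made for the sketch rather than a verbatim TAO example.

    #include "orbsvcs/AV/AVStreams_i.h"  // TAO A/V Streaming Service (header path may vary by version)

    // Establish a stream between a supplier device (the distributor) and a
    // consumer device (a viewer).  The MMDevice references would typically
    // be resolved via the CORBA Naming Service; here they arrive as
    // parameters.
    void connect_stream(AVStreams::MMDevice_ptr distributor_dev,
                        AVStreams::MMDevice_ptr viewer_dev)
    {
      AVStreams::streamQoS qos;    // per-flow QoS parameters (empty: best effort)
      AVStreams::flowSpec  flows;  // flow names and directions (empty: all flows)

      // bind_devs carries out stream establishment via CORBA operations;
      // the media data itself then travels over the configured lower-level
      // transport (e.g., TCP or UDP) rather than over IIOP.
      TAO_StreamCtrl* stream_ctrl = new TAO_StreamCtrl;
      stream_ctrl->bind_devs(distributor_dev, viewer_dev, qos, flows);
    }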

4 Adaptation in UAV

In this section, we discuss some performance issues in our UAV concept application, and adaptive behaviors that address these issues.


A bottleneck may occur in the application because at some point along the video transport path there are not enough resources to send the entire video to the viewers in real time. For example, the distributor host may not have enough CPU power available to dispatch video frames to all viewers at that rate, or there may be insufficient bandwidth in the network path to one or more viewers. In either of these cases, one of the following methods can be used to detect the bottleneck, depending on the system architecture:

- Track the number of frames received by the distributor and the number of frames displayed by each viewer, and compare the numbers. This can detect bottlenecks that “back up” the distribution paths as well as overload conditions that slow down the distributor or viewer or that cause frames to be lost.

- If the transport provides some form of back pressure, for example if TCP is used, measure the rate at which the distributor is able to send frames to viewers. This will detect not only overload conditions at the distributor, but also network congestion downstream and overload at viewers. When those conditions prevent the viewer from reading frames at the requested rate, back pressure will force the distributor’s sending rate (the measured condition) to decrease.

In our current implementations of the UAV application, we used the TCP transport protocol, and so were able to control adaptation using the second detection method (measured only at the distributor). We are in the process of creating a new implementation over a UDP transport layer; this implementation requires the first detection method (comparing the frames at distributor and viewer). We now present three examples of adaptation that are used in our current UAV application.
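A minimal C++ sketch of the second detection method follows. The SysCond interface is a hypothetical stand-in for a QuO system condition object (in the prototype this is a CORBA call on the syscond); the sketch shows only the data flow: count the frames actually written, and once per second report the count as the measured frame rate.

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Hypothetical stand-in for a QuO system condition object.
    struct SysCond {
        virtual void longValue(long v) = 0;
        virtual ~SysCond() = default;
    };

    class RateMonitor {
    public:
        explicit RateMonitor(SysCond& frame_rate_syscond)
            : syscond_(frame_rate_syscond) {}

        // Called each time a frame is actually written to a viewer
        // connection; TCP back pressure throttles how often this happens.
        void frame_sent() { frames_.fetch_add(1, std::memory_order_relaxed); }

        // Once per second, report the number of frames sent in the previous
        // second and reset the counter.  This is the measured condition that
        // the contract in Figure 11 evaluates against its region predicates.
        void run() {
            for (;;) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                syscond_.longValue(frames_.exchange(0));
            }
        }
    private:
        SysCond& syscond_;
        std::atomic<long> frames_{0};
    };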

One adaptation to these conditions is simply to reduce the amount of data being sent. Depending on user requirements, it may be possible to omit some frames of the video entirely, resulting in an end-user video that displays the motion of the scene in real time (that is, objects that move across the real-life scene at constant speed appear to move at constant speed in the video), but without the total illusion of continuously displayed motion that can be attained at frame rates of 24 frames or more per second.

Figure 4: Adaptation by filtering frames

Figure 4 shows a mechanism for reducing the number of frames in the video stream. For example, if the distributor receives a video at 30 frames per second, it can reduce resource usage by deleting two of every three frames to produce video output at 10 frames per second. This particular example was selected as one of the adaptations in our UAV prototype, for reasons that are described in detail in Section 4.2.

Alternatively to reducing the number of frames, the amount of data per frame might be reduced. This would typically reduce the image quality of each frame.

A second adaptation is to move the distributor to a host that does not suffer the bottleneck, either because of better network location or because of greater available CPU resources. Figure 5 illustrates this adaptation. A new instance of the distributor must be started on the new host, and new communication paths must be formed between the UAV, the new distributor instance, and the viewer to replace the corresponding paths that led through the old distributor instance. The old distributor can then be halted and its paths torn down.

A third adaptation applies when the bottleneck is due to competing data flows that take up some of the network bandwidth needed by the UAV. This adaptation reserves a certain amount of network bandwidth (through an Integrated Services (IntServ) protocol) for the distributor’s communication paths so that a sufficient rate of data can be transmitted. Figure 6 illustrates this adaptation.

It is also possible to combine these adaptations in various ways. For example, it might be necessary not only to move the distributor to a new host, but also to send a reduced video flow (e.g., fewer frames) to certain viewers through RSVP-enabled links.

Which adaptations should be used can depend on the conditions prevailing in the system at a given time.

4.1 The MPEG video format

In this paper we use only MPEG-1, and will refer to it simply as “MPEG.”

The MPEG video format renders a sequence of images (frames of a video) into a highly compressed string of bytes. It was designed to be able to transmit broadcast-quality video over a link of 1.5 Mbit bandwidth. A typical application of MPEG-1 is to store a video in a file in this format, and to play it back by retrieving the file, decoding it incrementally and displaying it on a monitor. The format is, however, not limited to retrieved files; for example, devices exist that take in a direct video feed and produce MPEG-encoded output. In our prototype, a live video feed such as this was not available, so it was simulated by reading pre-recorded data from a file at a fixed rate.

A video flow consists primarily of a sequence of frames that was (presumably) recorded by a camera at some fixed frame rate (selected from a small set of standard rates used by modern film or video equipment); when displayed at this same rate, the frames accurately show the images and motion of the recorded scene. The video flows we used in testing the UAV application were typically captured at the rate of 30 frames per second. The remainder of the video data consists of various control blocks interspersed among the frames.

In order to increase the compression of the video flow, MPEG uses three distinct types of frame. The first and most essential type of frame is the I-frame (standing for “intraframe”). An I-frame is a compressed image of a single frame. For our purposes here, the most important feature of the I-frame is that it is independent of other frames; that is, it can be displayed correctly without knowing anything about the image contained in any other frame of the video.

The second type of frame is the P-frame (standing for “predictive”). A P-frame contains only the data necessary to correctly extrapolate the image in question from the image of a previously-displayed frame (which may have been encoded as an I-frame or as another P-frame, and which may or may not have been the immediately preceding frame), as depicted in Figure 7. In this figure, frames of a video are shown in increasing time order from left to right; the P-frame is shown as a grid, and the arrow indicates the dependence of the displayed image on the previous I- or P-frame. Various algorithms such as motion-detection are used to express the extrapolation concisely.

Figure 7: Predictive (P) Frame

The third type of frame is the B-frame (standing for “bidirectional”). To correctly decode a B-frame requires the images from two other frames (each of which may have been encoded either as an I-frame or as a P-frame), one displayed prior to the B-frame and one displayed after the B-frame, as shown in Figure 8. In this figure, the B-frame is shown as a grid, and arrows in both directions indicate the dependence of the B-frame’s image on the images of frames displayed before and after. The image of the B-frame is then constructed by a combination of data interpolated between the two other frames and data encoded in the B-frame itself.

Figure 8: Bidirectional (B) Frame

Because the format allows different parameters for the compression algorithms, and because the compression ratio can be dependent on the contents of frames, the number of bytes in a frame can vary substantially from one video file to another and even among frames of the same type in the same video. But in general, I-frames require by far the most bytes to transmit, because they cannot make use of the data in other frames; P-frames are much smaller because most of the necessary data is contained in a prior frame; and B-frames are typically the smallest of all, because they need only record the deviations from a straight-line interpolation between two other frames. (In most of the MPEG files we have examined, a few B-frames were slightly larger than the smallest P-frames in the same video, so size is not absolutely correlated with frame type.)


The MPEG format groups frames into two higher-level structures: a group of pictures (GOP) consists of a header followed by one or more frames, and a sequence consists of a header followed by one or more groups of pictures. In a typical 30-frame-per-second MPEG encoding, each GOP consists of a single I-frame, four P-frames, and ten B-frames. These frames are to be displayed in the order matching Figure 9, that is, every fifteenth frame is an I-frame, every third frame after the I-frame is a P-frame, and each I- or P-frame is immediately followed by two B-frames. The arrows in the figure show the dependencies of P-frames and B-frames on other frames.

Figure 9: Sequence of display of frames in MPEG

Note that P-frames may depend on other P-frames. Hence to display the image of a P-frame, it is first necessary to have received the previous I-frame and all other P-frames (after that I-frame) on which the current P-frame depends. To display a B-frame, it is necessary to have received the I- or P-frame preceding the B-frame in display order and the I- or P-frame following the B-frame in display order, as well as all other I- and P-frames on which those two frames depend.

Within a GOP, the frames can be encoded in a different order from the order in which they are to be displayed. The first frame is always an I-frame. Typically, the first two B-frames are to be displayed before this I-frame, even though they occur later in the encoding. (An obvious rationale for this is that the I-frame can be decoded by itself but the B-frames can be decoded only when the I-frame is known.) Similarly, the two B-frames that are displayed before each P-frame are encoded after the P-frame. So the 15 frames of a GOP are encoded as shown in Figure 10, where the numbers under the frames indicate the order in which these frames are to be displayed.
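The reordering can be made concrete with a short C++ example that maps the encoded order of a 15-frame GOP to its display order, using the GOP pattern described above and shown in Figure 10 below (the array values are a transcription of that figure, not output of any MPEG tool):

    #include <array>
    #include <cstdio>

    int main() {
        // Encoded order of a typical 15-frame GOP; each I- or P-frame is
        // encoded before the two B-frames that are displayed ahead of it.
        const std::array<char, 15> encoded = {
            'I','B','B','P','B','B','P','B','B','P','B','B','P','B','B'};
        // Display position (0-based) of each encoded frame.
        const std::array<int, 15> display = {
            2, 0, 1, 5, 3, 4, 8, 6, 7, 11, 9, 10, 14, 12, 13};

        // Reorder into display sequence.
        std::array<char, 15> shown{};
        for (int i = 0; i < 15; ++i)
            shown[display[i]] = encoded[i];

        for (char c : shown)
            std::printf("%c ", c);
        std::printf("\n");  // prints: B B I B B P B B P B B P B B P
        return 0;
    }

The two B-frames at the start of the display sequence are the ones encoded immediately after the I-frame.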

Figure 10: Sequence of frames in an MPEG file. One GOP is encoded in the order I B B P B B P B B P B B P B B (preceded by sequence and GOP headers), with display positions 2 0 1 5 3 4 8 6 7 11 9 10 14 12 13.

4.2 Adaptation in a video domain

At the beginning of Section 4, we presented three adaptive behaviors. Of these behaviors, load balancing and network reservation can be implemented without regard for the details of the video encoding; an alternative encoding can be employed without needing to change these behaviors. The implementation of data filtering to reduce the volume of video data, however, is highly dependent on the video encoding format itself.

In order to perform data filtering in the UAV prototype, we employ the technique of reducing the frame rate transmitted from the distributor to the viewer. Similar techniques can be applied elsewhere in the data path, of course, in particular between the UAV itself and the distributor. (To reduce the quality of the individual frames displayed, it is necessary to trans-code the contents of the frames themselves; tools to perform this task exist but are not yet used in our application.) But the frame rate must not be reduced in such a way as to create a “slow motion effect”; that is, a vehicle that crossed the field of view of the UAV camera in 2.5 seconds should cross the application display in 2.5 seconds, and so forth for all other action in the video, in order that the display continue to present a true and up-to-the-moment view from the UAV itself. For the purposes of experiments performed on our prototype, therefore, we assumed that the UAV transmits video data at the rate of 30 frames per second, which is received by the distributor at that rate (when system resources permit), but the distributor implements an adaptive behavior that sends out a smaller number of frames representing the action that occurs during each second. The subset to be sent is selected by dropping (eliminating) some frames from the video, and sending out the remaining frames at a reduced rate.

Our options for efficiently dropping frames are limited by the MPEG encoding format, however, and so we must follow a set of application-specific adaptation strategies. For example, if we could simply drop every second frame, we would be left with 15 frames out of every 30, and could send the video at the rate of 15 frames per second without affecting the apparent speed of motion of the scene. (At this rate the human eye would be able to detect a slight flickering or stroboscopic effect as one image was replaced by the next, because the 1/15 second interval between images is a little longer than the threshold for distinguishing successive still images from true continuous change in the scene.) This particular example is impractical in the videos we worked with, however, because some of the frames dropped would have been I-frames, but 16 other consecutive frames (all other frames in the same GOP, and the first two B-frames in the next GOP) depend directly or indirectly on each I-frame, so dropping a single I-frame results in more than 0.5 second of the video being lost.

Further, in support of the application, one of whose requirements is to track moving images as continuously as possible, it is highly desirable to minimize any intervals in the video during which the image remains still. Hence it is not desirable to display, say, the I-frame and nine subsequent frames in a GOP, and drop the remaining six frames. While this scheme incurs the load entailed in transmitting 18 frames per second, it suffers intervals of 7/30 second during which no motion is seen; this is a longer interval than if the video were displayed at a steady rate of only 5 frames per second (6/30 second between frames). From the human-factors point of view, it is desirable that the intervals between the correct display times of frames be as uniform as possible.

Because of these issues and the dependencies between frames, the best frame-dropping protocols drop B-frames when only a few frames are to be dropped (because a missing P-frame implies an interval of at least 1/5 second — six frames — between the correct times of displayed frames). There are 20 B-frames in each second of video, so this technique can bring the sending rate down to 10 frames per second. To drop more frames, P-frames can then be dropped. I-frames should be dropped only if intervals of 1 second or more between images are acceptable.

If each frame in the MPEG format contained an exact timestamp showing the time at which it was supposed to be displayed (e.g., expressed in milliseconds since the instant when the video started to be displayed), it would be relatively easy to delete frames from the video one at a time without causing jitter (frames displayed sooner or later than the correct time). But the frames do not include such a timestamp, so there are fewer good options for dropping frames. For example, if we were to drop one frame out of every three, we would be left with a video in which, to minimize jitter, some images in a sequence should be displayed 1/30 second after the previous image, and some 1/15 second after. But the typical MPEG video player is designed to read a single frame rate from the sequence header and to display all frames in the sequence at that constant rate; indeed the MPEG format is designed to support nothing more sophisticated.

A technique for “dropping” frames without incurring this display-timing difficulty is to replace B- or P-frames with “dummy” frames rather than dropping them entirely [HSG00]. The dummy frame contains only the minimal information to allow the viewer to display an image similar to the preceding (or following) image, but omits all the new image information that the B- or P-frame itself would have contained; hence this frame is very small, and the bandwidth required to transmit the video is reduced, although not as much as if the frame were eliminated entirely.

For our current implementation we chose to drop frames entirely, in such a way that the remaining frames were to be displayed at a constant rate. This implementation provided us with three significantly different levels of QoS among which to adapt the application, as determined by the frame rate (a sketch of the corresponding filtering logic follows the list):

- 30 frames per second. This is done by transmitting the video intact. When this rate is achieved it represents the highest level of QoS.

- 10 frames per second. This is done by dropping all B-frames from the video, and transmitting all the I- and P-frames. At this level of QoS, most perception of motion in the video scene is preserved, but a careful human observer can detect by eye the transitions from one image to the next.

- 2 frames per second. This is done by dropping all P- and B-frames from the video, and transmitting all I-frames. At this level of QoS, the image changes frequently enough for some motion to be judged, but finer details of motion (and some very short-lived actions) can be lost entirely.
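A minimal C++ sketch of this filtering logic follows; the type and function names are illustrative rather than the prototype's actual code. Because I- and P-frames occupy regular positions in the GOP, the frames that survive each filter remain evenly spaced in display time.

    enum class FrameType { I, P, B };

    // Decide whether a frame is transmitted at a given quality level:
    // 30 fps sends everything, 10 fps drops B-frames, and 2 fps keeps
    // only I-frames.
    bool should_send(FrameType type, int target_fps) {
        switch (target_fps) {
        case 30: return true;
        case 10: return type != FrameType::B;
        case 2:  return type == FrameType::I;
        default: return true;  // unknown level: fail open and send the frame
        }
    }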

It is then possible to adaptively switch among these three frame rates by assigning each frame rate to a different region of a QuO contract, and setting the frame-dropping protocol at any given time according to the current region. (We also implemented frame rates of 1 frame per second and slower, but due to their extremely low levels of fidelity we excluded these from our adaptive behaviors on human-factors grounds.)

In actual MPEG videos we examined, mean sizes of I-frames were in the range between 11500 and 14000 bytes, of P-frames in the range between 4900 and 7000 bytes, and of B-frames in the range between 2900 and 3400 bytes. In the sample video provided by the Navy and selected (for reasons unrelated to our UAV implementation or adaptations) as the test case for the UAV application, I-frames averaged approximately 13800 bytes, P-frames approximately 5000 bytes, and B-frames approximately 2900 bytes. The approximate size in bits of two average GOPs (i.e., one second of video: 2 I-frames, 8 P-frames, and 20 B-frames) is therefore

8 × (2 × 13800 + 8 × 5000 + 20 × 2900) = 1,004,800

(i.e., near the capacity of a 1.5 Mbit link). This is the bandwidth requirement of sending one second of the video at the full rate of 30 frames per second.

If we drop the rate to 10 frames per second by eliminating the B-frames, the bandwidth required, in bits per second, falls to approximately

8 × (2 × 13800 + 8 × 5000) = 540,800

and if we drop the rate to 2 frames per second by eliminating the P-frames as well, the required bandwidth in bits per second falls to approximately

8 × (2 × 13800) = 220,800

That is, reducing the frame rate from 30 to 10 (a 67 percent reduction) reduces the bit rate by 46 percent, and reducing the frame rate from 30 to 2 (a 93 percent reduction) reduces the bit rate by 78 percent. These are substantial reductions of bandwidth and other system requirements, so it is not hard to find system conditions under which the full bandwidth is not supportable, but one of the reduced-bandwidth adaptations is. The reduction in bit rate is not proportional to the reduction in frame rate, because the frames that must be dropped first are precisely those frames that have the greatest dependency on other frames (and the fewest frames depending on them), and consequently the encoded sizes of these dropped frames are relatively small. On the other hand, reduction in the perceived value of the reduced-frame-rate display to a human viewer also is not proportional to the reduction in frame rate, judging from the informal reactions of people who have watched demonstrations of the application adapting.

4.3 QuO Contract

Figure 11 shows a QuO contract that adapts the distributor to available resources by increasing or decreasing the rate at which frames are transmitted to viewers. We show it to illustrate the high-level adaptation-oriented abstraction that QuO provides, and to illustrate the isolation of these adaptive behavior aspects from the rest of the code. This contract is substantially similar to the one used in the prototype, but with certain details of syntax and non-essential functionality elided in order to fit it legibly within a single column of text. The contract divides the operational conditions of the distributor into three QuO regions:

NormalLoad: This region is entered when resources are adequate to transmit the video to the viewers at the full bit rate. In this region, the distributor sends 30 frames per second.

HighLoad: This region is entered when there are not adequate resources to transmit the video at the full bit rate. In this region, the distributor sends 10 frames per second. Intermediate frames of the video are dropped so that the remaining frames, displayed at the rate of 10 per second, depict normal-speed motion.

ExcessLoad: This region is entered when there are not adequate resources to transmit the video even at the reduced bit rate required for 10 frames per second. In this region, the distributor sends 2 frames per second, again dropping intermediate frames in order to preserve the speed of motion.

This contract communicates with the rest of the system in several ways. First, the system condition object actualFrameRate is set periodically by the distributor; its value is the actual number of frames sent in the previous second.

    contract UAVdistrib(
        syscond quo::ValueSC timeInRegion,
        syscond quo::ValueSC actualFrameRate,
        callback InstrCallback instrControl,
        callback SourceCtrlCallback sourceControl)
    {
        region NormalLoad (actualFrameRate >= 27) {}

        region HighLoad ((actualFrameRate < 27 and
                          actualFrameRate >= 8)) {
            state Duty until (timeInRegion >= 3)
                (timeInRegion >= 30 -> Test) {}
            state Test until (timeInRegion >= 3)
                (true -> Duty) {}
            transition any->Duty {
                sourceControl.setFrameRate(10);
                timeInRegion.longValue(0);
            }
            transition any->Test {
                sourceControl.setFrameRate(30);
                timeInRegion.longValue(0);
            }
        }

        region ExcessLoad (actualFrameRate < 8) {
            state Duty until (timeInRegion >= 3)
                (timeInRegion >= 30 -> Test) {}
            state Test until (timeInRegion >= 3)
                (true -> Duty) {}
            transition any->Duty {
                sourceControl.setFrameRate(2);
                timeInRegion.longValue(0);
            }
            transition any->Test {
                sourceControl.setFrameRate(10);
                timeInRegion.longValue(0);
            }
        }

        transition any->NormalLoad {
            instrControl.setRegion("NormalLoad");
            sourceControl.setFrameRate(30);
        }
        transition any->HighLoad {
            instrControl.setRegion("HighLoad");
        }
        transition any->ExcessLoad {
            instrControl.setRegion("ExcessLoad");
        }
    };

Figure 11: QuO Contract for UAV

The contract uses this system condition to gauge the amount of data that the distributor has resources to send. This particular measurement is quite general-purpose; whether the restricted resource is bandwidth, an I/O device, or CPU time, to the extent that this restriction affects the ability of the distributor to transmit video at its desired frame rate, the deficiency will be detected in the form of a reduced actual frame rate. The prototype implementation averages the rate over a period of one second; a shorter period may be practical, but the rate must be averaged over some period in order to be measurable and accurate. An alternative that might provide a shorter reaction time would be to monitor more basic system conditions such as processor load and network load, and to predict when the achievable frame rate is likely to be reduced rather than merely observing it, but such schemes entail tradeoffs, such as the greater complexity of calibrating the predictions, and the failure to detect performance problems caused by conditions that are not measured.

A disadvantage of estimating the capacity to send frames by measuring only the frames actually sent is that it is difficult to detect when there is excess capacity (and when the frame rate might safely be increased). This contract addresses this problem by occasionally attempting to send the video flow at the next higher frame rate from its current setting. The frequency and duration of these “tests” is controlled by a state machine within the current region, which alternates between the Duty and Test states at certain intervals of time. The time in any state is measured by virtue of the system condition timeInRegion, which is set to zero every time there is a state transition and is thereafter incremented once per second.

Based on the value of actualFrameRate, then, the contract selects the correct region from among NormalLoad, HighLoad, and ExcessLoad, which in turn controls the frame rate via execution of the sourceControl.setFrameRate callback, which is called on transition to the NormalLoad region or to the Duty states of the other two regions. Then, if the contract is in a Duty state, after the value of timeInRegion passes a predetermined threshold the contract will transition to the Test state for a few seconds, at which time it sets a higher frame rate. The until clause of each state prevents any transitions out of that state for a few seconds, ensuring that a stable measurement of the new achievable frame rate is made; at the end of this time, if the test succeeds (that is, if the actual frame rate is observed to be at the requested rate) a contract reevaluation results in a change of regions. Otherwise, the contract begins a new Duty cycle in the same region (unless, of course, insufficient resources for the HighLoad region force a transition down to the ExcessLoad region).

The instrControl callback enables the contract to communicate with a resource manager that monitors and controls the resource usage and location of application processes. We installed the prototype in an environment controlled by such a resource manager. The transition into the ExcessLoad region caused the contract to execute the code in transition any->ExcessLoad, which in turn transmitted an indicator of the region to the resource manager. The resource manager then restarted the distributor on a different host where more resources were available.
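For concreteness, here is a sketch of what the distributor's side of the sourceControl callback might look like in C++. The class name and threading details are assumptions made for this sketch (the actual callback is a CORBA object tied to the contract); the point is that a contract transition simply stores a new target rate, which the distributor's send loop consults on every frame.

    #include <atomic>

    // Hypothetical implementation of the SourceCtrlCallback from Figure 11.
    class SourceCtrlCallbackImpl {
    public:
        // Invoked by the contract on region or state transitions.
        void setFrameRate(int fps) { target_fps_.store(fps); }

        // Consulted by the distributor's send loop to choose the current
        // frame-dropping protocol (30, 10, or 2 frames per second).
        int target_fps() const { return target_fps_.load(); }

    private:
        std::atomic<int> target_fps_{30};
    };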

5 Domain-specific issues

We encountered various issues with the video format and components of our UAV concept prototype that affected the ability to build an application with the functional and adaptive properties we describe. We discuss two of the main issues below. These domain-specific issues illustrate the multiple levels at which adaptation interacts with specific features of an application domain, and the difficulty of composing current off-the-shelf software.

5.1 Designed-in latency

The interpolation of B-frames between the other frames of an MPEG video significantly increases the compression of the video compared to what is practically possible using only I- and P-frames. This is simply because the average B-frame is smaller than the average P-frame (the next best candidate for replacement). This is why typical MPEG videos use the format described in Section 4.1. The price of this compression is the fact that some video frames are dependent on other frames, not just frames that are displayed earlier but also frames that must be displayed later.

Figure 12 shows how this affects the latency of an application such as the UAV. A B-frame cannot be decoded until the I- or P-frame that follows it in display order has been received, so the display of some frames must wait on frames captured later in scene time, even though a live source can capture, encode, and transmit an I-frame in just 1/30 second. When one frame is delayed 1/10 second — and in fact when this happens to one frame out of every three — the entire video display must be delayed by 1/10 second in order to avoid jitter. And this is merely the delay imposed by the video format itself, not including any latency that inevitably occurs in the actual transmission of the frames from one stage of the system to the next.

5.1.1 Partial solution

One approach that would reduce the inherent latency of the MPEG format is to encode the MPEG format using only I- and P-frames, and no B-frames. That is, where currently we are using MPEG video flows in which each 1/2 second of time that passes in the scene is encoded as shown in Figure 10, instead we might encode the same 15 images as one I-frame followed by 14 P-frames.

Simply replacing all B-frames by P-frames has the disadvantage, however, that adaptation by frame-dropping becomes harder. We cannot simply delete two of every three P-frames, for example, because all P-frames after the first P-frame in a GOP are dependent on that first P-frame. So in order for the distributor to drop frames, either it must trans-code all the P-frames it sends (i.e., recompute P-frames by decoding and re-encoding the video, or otherwise composing the sequences of P-frames received), or we must send it more I-frames. For example, by sending the original 30-frames-per-second video as a repeating sequence of IPP, we could adapt to 10 frames per second simply by dropping P-frames, and to lower rates by dropping some I-frames as well.

Either of the suggested re-encodings carries a bandwidth and I/O penalty due to the replacement of more compactly encoded frames by ones that require a longer encoding. Specifically, consider the frame sizes of our sample video (described in Section 4.2) and assume that the average P-frame size remains the same. Then the number of bits in the encoding of one second of video, consisting of 2 I-frames and 28 P-frames (i.e., merely replacing all B-frames in the existing encoding by P-frames), is

8 × (2 × 13800 + 28 × 5000) = 1,340,800

and the cost of sending 10 I-frames and 20 P-frames (a repeating IPP sequence) is

8 × (10 × 13800 + 20 × 5000) = 1,904,000

which figures are, respectively, 33 percent and 89 percent higher than the cost of the encoding that we used.

5.2 Viewer timing idiosyncrasies

Because the MPEG format is often used for applications with very different performance characteristics than the UAV prototype, there are issues that arise with respect to the MPEG viewer. The viewer that we used to obtain our current results (a program derived originally from Berkeley mpeg_play) buffered frames internally and displayed them in bursts, on the order of 20 frames with a 1/30 second interval between frames, and a much longer interval between bursts.

A second difficulty contributing to this bursty display was that the program set its frame rate at the beginning of each run, immediately on reading the first frame rate indicator in the MPEG flow, and never updated it thereafter. This is a natural design for the typical MPEG application, in which a file really contains video recorded at one and only one frame rate, and these frames are all to be displayed. Our application, however, required special code to be inserted in order to modify certain internal variables of the viewer at any time during a run.

In the latest versions of the prototype based on this viewer, we smoothed out the burstiness of the display by continually modifying the requested display rate to match the rate at which the viewer proxy forwarded frames to the viewer. This workaround was not entirely satisfactory, however. In particular, the difficulty of ascertaining how many frames were buffered at any given instant caused difficulty when the frame rate changed adaptively: for example, if frames sent at 30 frames per second are still in the buffer when the viewer is told to reduce the rate to 10 frames per second, the viewer displays these frames in slow motion; on the other hand, if we wait too long to tell the viewer to change the frame rate, some of the 10-frame-per-second frames will be displayed in fast motion.

These difficulties are in sharp contrast to the experiences in modifying program behaviors that were designed to be controlled by the QuO framework. We revisit this point in Section 6.2.

For ongoing work, we were finally induced to make what might be termed a radical “compile-time adaptation”: we have completely replaced our MPEG viewer with another implementation, the DVDview player [Far]. Preliminary observations of this player indicated that it contributes a relatively small amount of buffer-induced latency, and it appears to display frames at the rate it receives them. That is, because it does not implement a fixed set of complex adaptive strategies within the basic functional code itself, this viewer is more reusable, and in particular is suitable as a QuO component in order to achieve better end-to-end behavior in a variety of circumstances. While further testing is required, this viewer appears promising for the purposes of our prototype application.

6 Results

6.1 Adaptation controls latency

We performed experiments to test the effectiveness of our adaptive behavior in the UAV application. The three stages were run on three Linux boxes, each with a 200 MHz processor and 128 MB of memory. The video transport was TCP sockets.

At the start of the run the distributor started, and shortly after this the video began to flow. At three subsequent instants, three special load-simulating processes were started on the same host as the distributor, each attempting to use 20 percent of the maximum processing load (a total of 60 percent additional processing load). This reduced the distributor’s share of processing power below what it needed to transmit video at 30 frames per second. The load was later removed, and the experiment terminated a few minutes after that. The basic premise is that the full load was applied for a duration of one minute, starting after the pipeline had had time to “settle in,” and ending a few minutes before the end of measurement so we could observe any trailing effects.

This scenario was run twice, once without QuO attached and without any adaptation (the control case) and once with a QuO contract causing adaptation (the experimental case). For the purposes of this experiment, the only adaptation enabled was to reduce bandwidth by dropping frames.

Figure 13 shows the effect of the increased load on the latency of the video stream. In this graph, the x-axis represents the passage of time in the scene being viewed by the UAV, and the y-axis represents the “lateness” of each image, that is, the additional latency (in delivery to the viewer) caused by the system load. That is, if all images were delivered with the same latency, the graph would be a constant zero. The label “Load” indicates the period of time during which there was contention for the processor; without QuO adaptation, the video images fall progressively further behind starting when the contention first occurs, and the video does not fully recover until some time after the contention disappears.
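One plausible way to compute such per-frame "lateness" values is sketched below in C++, under the assumption that a capture time and a display time are available for each measured frame; the paper does not give the exact formula, so this illustrates the metric as described rather than the actual analysis code.

    #include <algorithm>
    #include <vector>

    // Lateness of each frame: its delivery latency in excess of the
    // smallest latency observed in the run, so that a run with uniform
    // latency graphs as a constant zero.
    std::vector<double> lateness(const std::vector<double>& capture_time,
                                 const std::vector<double>& display_time) {
        std::vector<double> lat(capture_time.size());
        for (size_t i = 0; i < lat.size(); ++i)
            lat[i] = display_time[i] - capture_time[i];
        const double base = *std::min_element(lat.begin(), lat.end());
        for (double& v : lat)
            v -= base;  // shift so the best-case latency maps to zero
        return lat;
    }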

Figure 14 summarizes these results. The lateness values in all these figures are based on the timing of the I-frames, which occur 2 times per second and (in the ideal case) would be displayed at exactly regular intervals.

6.2 Software engineering with QuO

The effectiveness of QuO as a software engineering framework is exemplified in Figure 15. In this table, the concern of “display time of frames” refers to the issues described in Section 5.2 concerning the rate at which frames are displayed. These issues arose because the viewer implementation mixed QoS concerns in an ad-hoc fashion with application function code. Because of this manual tangling of concerns, the code was unnecessarily complex and it was very difficult to modify or even fully understand its behavior. In other words, adaptation of the program to new performance criteria (i.e., fast local decoding and display hardware but variable quality of video input, as opposed to its original domain in which the video input was of uniform quality but decoding and display could suffer delays) required an unacceptably high investment of effort by programmers. On the other hand, changing the parameters of behaviors controlled by QuO often took only minutes (for a simple change in, say, a threshold value) to a day (for more substantial changes in the behaviors exhibited).

7 Related work

Hemy et al. present an adaptive MPEG transmission application that also does frame-dropping, but in a slightly different way [HSG00]. Where we delete frames entirely, they insert “dummy” frames into the MPEG flow in order to replace the dropped frames. In this way they achieve most of the possible reduction in bandwidth without changing the frame rate used by the viewer (although the rate of sending new images is reduced by the same factor as in the UAV prototype).

The Agilos middleware project implements a hierarchical adaptive QoS control architecture [LN00], paralleling QuO in some ways and differing in others. For example, while QuO requires no direct knowledge of other applications that share common resources (though it is possible to link these together via common system condition objects), Agilos incorporates all applications in an environment, including possibly unrelated applications, into a common framework of application-neutral resource control.

8 Future work

A number of extensions and additions to the current work are in progress and are being planned.

The results in Section 6 were obtained from a version of the UAV prototype that used TCP as its communication protocol. We are currently developing a prototype in which A/V Streams uses the UDP protocol instead.

We are also in the process of implementing bandwidth reservation for the MPEG flows over A/V Streams. This will enable QuO contracts for the UAV to use a richer set of adaptations, as described in Section 4, and will afford opportunities to test UAV adaptation under conditions in which multiple system conditions (such as CPU load and network congestion) vary independently, and in which multiple adaptive strategies (such as bandwidth reservation and frame dropping) may be combined.

In future versions of the UAV application, the distributor will be able to support multiple viewers, dynamically created and connected at run time, and independently adaptable. Moreover, we plan to support multiple distinct video flows in the system (possibly involving multiple sources and distributors). These features will allow the investigation of the interaction of multiple adapting entities in a complex system.

QuO itself is still an evolving framework — for example, new syntax for states and for “locking down” regions was added to the contract language in order to more clearly express contract features used by the contract of the UAV prototype — and the development of the UAV application will continue interacting with the evolution of QuO, both as a testing ground for improvements to the framework and as a driver of more innovations.

9 Conclusion

In this paper, we described a prototype multimedia application, the UAV video distribution system, and the standards (MPEG video and CORBA A/V Streams streaming transport) over which this prototype is implemented. We described QuO, a framework for distributed object computing designed to enable adaptive optimization of distributed software systems. We demonstrated how QuO can interact with a distributed-object (CORBA) application — even one in which the communication path to be adapted is not a client-server method call and return — in order to implement application- and implementation-specific adaptations to system performance issues. Further, we presented empirical results that show that QuO can perform these adaptations effectively and efficiently. Moreover, our experience was that the use of the QuO framework made the implementation and redesign of adaptive behaviors easier for developers than the ad-hoc methods typically used. These methods involve extensive manual entanglement of code to perform basic functions with code to optimize performance under specific conditions, resulting in code that is difficult to develop, understand, and maintain. In contrast, the QuO-based adaptive behaviors are separated from the basic video functions, are easy to understand by inspection, and are easily modifiable.

Acknowledgements

This work is sponsored by DARPA and US AFRL under contract nos. F30602-98-C-0187 and F33615-00-C-1694. The authors would like to gratefully acknowledge the support of Dr. Gary Koob, Thomas Lawrence, and Mike Masters for this research. The authors would also like to acknowledge the contributions of Michael Atighetchi, Tom Mitchell, John Zinky, and James Megquier, the Naval Surface Warfare Center (NSWC) Dahlgren, VA (in particular Mike Masters, Paul Werme, Karen O’Donoghue, David Alexander, Wayne Mills, and Steve Brannan), and the DOC group at Washington University, St. Louis, and University of California, Irvine, to the research described in this paper.


References

[BBN98] BBN Distributed Systems Research Group, DIRM project team. DIRM technical overview. Internet URL http://www.dist-systems.bbn.com/projects/DIRM, 1998.

[CRS98] M. Cukier, J. Ren, C. Sabnis, D. Henke, J. Pistole, W. Sanders, D. Bakken, M. Berman, D. Karr, and R. E. Schantz. AQuA: An adaptive architecture that provides dependable distributed objects. In Proceedings of the 17th IEEE Symposium on Reliable Distributed Systems, pages 245–253, October 1998.

[Far] Dirk Farin. DVDview: Software-only MPEG-1/2 video decoder. Internet URL http://webrum.uni-mannheim.de/math/farin/dvdview.

[Gal91] Didier Le Gall. MPEG: a video compression standard for multimedia applications. Communications of the ACM, April 1991.

[HSG00] Michael Hemy, Peter Steenkiste, and Thomas Gross. Evaluation of adaptive filtering of MPEG system streams in IP networks. In IEEE International Conference on Multimedia and Expo 2000, New York, New York, 2000.

[Kic96] Gregor Kiczales. Beyond the black box: Open implementation. IEEE Software, 1996.

[KIL96] Gregor Kiczales, John Irwin, John Lamping, Jean-Marc Loingtier, Cristina Videira Lopes, Chris Maeda, and Anurag Mendhekar. Aspect-oriented programming. ACM Computing Surveys, 28(4es), 1996.

[LBS98] Joseph P. Loyall, David E. Bakken, Richard E. Schantz, John A. Zinky, David Karr, Rodrigo Vanegas, and Ken R. Anderson. QoS Aspect Languages and Their Runtime Integration. Springer-Verlag, 1998.

[LN00] Baochun Li and Klara Nahrstedt. QualProbes: Middleware QoS Profiling Services for Configuring Adaptive Applications. Springer-Verlag, 2000.

[LSZB98] Joseph P. Loyall, Richard E. Schantz, John A. Zinky, and David E. Bakken. Specifying and measuring quality of service in distributed object systems. In Proceedings of the 1st IEEE International Symposium on Object-oriented Real-time distributed Computing (ISORC), April 1998.

[Mic] Sun Microsystems. The Real-Time Specification for Java. Internet URL http://java.sun.com/aboutJava/communityprocess/first/jsr001/.

[MSS99] Sumedh Mungee, Nagarajan Surendran, and Douglas C. Schmidt. The Design and Performance of a CORBA Audio/Video Streaming Service. In Proceedings of the Hawaiian International Conference on System Sciences, January 1999.

[Obj97] Object Management Group. Control and Management of Audio/Video Streams: OMG RFP Submission, 1.2 edition, March 1997.

[OMG98] OMG. Control and Management of Audio/Video Streams, OMG RFP Submission (Revised), OMG Technical Document 98-10-05. Object Management Group, Framingham, MA, October 1998.

[OMG00] OMG. CORBA 2.4 Specification, OMG Technical Document 00-10-33. Object Management Group, Framingham, MA, October 2000.

[SLM98] Douglas C. Schmidt, David L. Levine, and Sumedh Mungee. The Design and Performance of Real-Time Object Request Brokers. Computer Communications, 21(4):294–324, April 1998.

[SS94] Douglas C. Schmidt and Tatsuya Suda. An Object-Oriented Framework for Dynamically Configuring Extensible Distributed Communication Systems. IEE/BCS Distributed Systems Engineering Journal (Special Issue on Configurable Distributed Systems), 2:280–293, December 1994.

[SZK99] Richard E. Schantz, John A. Zinky, David A. Karr, David E. Bakken, James Megquier, and Joseph P. Loyall. An object-level gateway supporting integrated-property quality of service. In Proceedings of the 2nd IEEE International Symposium on Object-oriented Real-time distributed Computing (ISORC), May 1999.

[VZL98] Rodrigo Vanegas, John A. Zinky, Joseph P. Loyall, David Karr, Richard E. Schantz, and David E. Bakken. QuO's runtime support for quality of service in distributed objects. In Proceedings of Middleware 98, the IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing, September 1998.
