

of the square matrix can be formed by adding R̄_k times the row of each user k. The problem formulation is

P_T = min Σ_{k=1}^{N} Σ_{n=1}^{N} P^1_{k,n} ρ_{k,n},   ρ_{k,n} ∈ {0, 1}                    (21.33)

and the constraints become

Σ_{n=1}^{N} ρ_{k,n} = 1  ∀k ∈ {1, . . . , N}   and   Σ_{k=1}^{N} ρ_{k,n} = 1  ∀n ∈ {1, . . . , N}

Although the Hungarian method has computational complexity O(n^4) in the allocation problem with fixed modulation, it may serve as a base for adaptive modulation.
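As an illustration only, the fixed-modulation assignment step can be prototyped with an off-the-shelf Hungarian-method solver. The cost-matrix construction (each user's row repeated R̄_k times so that the matrix is square) follows the description above; the function and variable names, and the use of SciPy's linear_sum_assignment, are implementation assumptions rather than the exact procedure in the text.

# Sketch: fixed-modulation subcarrier assignment via the Hungarian method.
# Assumes a cost matrix P1[k][n] = power needed by user k on subcarrier n at the
# fixed modulation level; rows are duplicated Rbar[k] times so that the expanded
# matrix is square (sum(Rbar) == N). Names are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_fixed_modulation(P1, Rbar):
    """P1: (K, N) per-subcarrier power costs; Rbar: subcarriers owed to each user."""
    K, N = P1.shape
    assert sum(Rbar) == N, "expanded matrix must be square"
    cost = np.repeat(P1, Rbar, axis=0)            # repeat each user's row Rbar[k] times
    rows, cols = linear_sum_assignment(cost)      # Hungarian method
    owner = np.repeat(np.arange(K), Rbar)         # expanded row index -> user index
    rho = np.zeros((K, N), dtype=int)
    rho[owner[rows], cols] = 1                    # rho[k, n] = 1 if subcarrier n goes to k
    total_power = cost[rows, cols].sum()
    return rho, total_power

# Example: 3 users, 6 subcarriers, user k receives Rbar[k] subcarriers.
rng = np.random.default_rng(0)
P1 = rng.uniform(0.1, 1.0, size=(3, 6))
rho, PT = assign_fixed_modulation(P1, Rbar=[2, 2, 2])
print(rho, PT)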

The bit loading algorithm (BLA) is used after the subcarriers have been assigned to the users, so that each user k has at least R̄_k bits assigned. The bit loading procedure simply increments the bits of the users' assigned subcarriers until P_T ≤ P_max. If ΔP_{k,n}(c_{k,n}) = [f(c_{k,n} + 1) − f(c_{k,n})]/α²_{k,n} is the additional power needed to add one bit to the nth subcarrier of the kth user, where α_{k,n} is the corresponding channel gain, then the bit loading algorithm assigns one bit at a time, in a greedy fashion, to the subcarrier given by {arg min_{k,n} ΔP_{k,n}(c_{k,n})} (a code sketch follows the listing below).

BL algorithm

(1) For all n, set c_{k,n} = 0, compute ΔP_{k,n}(c_{k,n}), and set P_T = 0;
(2) Select n̄ = arg min_n ΔP_{k,n}(c_{k,n});
(3) Set c_{k,n̄} = c_{k,n̄} + 1 and P_T = P_T + ΔP_{k,n̄}(c_{k,n̄});
(4) Update ΔP_{k,n̄}(c_{k,n̄});
(5) Check P_T ≤ P_max and R_k for every k; if not satisfied, GOTO STEP 2;
(6) Finish.
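A minimal sketch of the BL step, assuming the incremental-power helper ΔP_{k,n} built from f and the channel gains α_{k,n} as above; the data structures (a list of assigned (k, n) pairs and per-user bit targets) are illustrative choices, not the book's implementation.

# Sketch of greedy bit loading (BL): repeatedly add the cheapest next bit over
# the already assigned (user, subcarrier) pairs until the power budget or the
# users' rate targets are reached. f(c) is the required power for c bits at
# unity gain (assumed supplied); alpha[k][n] is the channel gain (assumption).
def bit_loading(assigned, alpha, f, R_target, P_max, c_max=8):
    """assigned: list of (k, n) pairs; returns bits c[(k, n)] and total power."""
    c = {kn: 0 for kn in assigned}
    remaining = dict(R_target)              # bits still owed to each user
    P_T = 0.0

    def delta_p(k, n):                      # extra power for one more bit on (k, n)
        return (f(c[(k, n)] + 1) - f(c[(k, n)])) / alpha[k][n] ** 2

    while any(r > 0 for r in remaining.values()):
        candidates = [kn for kn in assigned
                      if remaining[kn[0]] > 0 and c[kn] < c_max]
        if not candidates:
            break
        k, n = min(candidates, key=lambda kn: delta_p(*kn))   # greedy choice
        cost = delta_p(k, n)
        if P_T + cost > P_max:              # power budget exhausted
            break
        c[(k, n)] += 1
        remaining[k] -= 1
        P_T += cost
    return c, P_T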

The Hungarian approach and LP approach with bit loading appear as two different suboptimal solutions to the resource allocation with adaptive modulation. In the sequel they will be referred to as GreedyHungarian (GH) and GreedyLP (GLP).

21.4.1 Iterative solution

The GreedyLP and GreedyHungarian methods both first determine the subcarriers and then increment the number of bits on them according to the rate requirements of the users. This may not be a good schedule in some cases, for example a user with only one good subcarrier and a low rate requirement. The best solution for that user is to allocate its good subcarrier with a high number of bits. However, if GreedyLP or GreedyHungarian is used, the user may be allocated more than one subcarrier with a lower number of bits and, in some cases, its good subcarrier is never selected. Consider another scenario where a user does not have any good subcarrier (i.e. it may have a bad channel or be at the edge of the cell). In this case, rather than pushing more bits onto fewer subcarriers, as in GreedyLP and GreedyHungarian, the opposite strategy is preferred, since fewer bits on a higher number of subcarriers give a better result. Another difficulty arises in providing fairness. Since GreedyLP and GreedyHungarian are based on a greedy approach, the user in the worst condition usually suffers. In any event, these are complex schemes and simpler schemes are needed to finish the allocation within the coherence time. To cope with these challenges, in the following a simple, efficient and fair subcarrier allocation scheme with iterative improvement is introduced [53].

The scheme is composed of two modules, referred to as the scheduling and improvement modules. In the scheduling module, bits and subcarriers are distributed to the users; the allocation is then passed to the improvement module, where it is improved iteratively by bit-swapping and subcarrier-swapping algorithms.

The fair scheduling algorithm starts the allocation procedure with the highest-level modulation scheme. In this way, it tries to find the best subcarrier of a user on which to allocate the highest number of bits. In Koutsopoulos and Tassiulas [55] the strategy is described by an analogy: 'The best strategy to fill a case with stones, pebbles and sand is as follows. First fill the case with the stones, then fill the gaps left by the stones with pebbles and, in the same way, fill the gaps left by the pebbles with sand. Filling in the opposite order may leave the stones or pebbles outside.' With this strategy more bits can be allocated and the scheme becomes immune to uneven QoS requirements. The fair scheduling algorithm (FSA) runs the greedy release algorithm (GRA) if there are nonallocated subcarriers after the lowest-modulation turn and the rate requirement is not satisfied. GRA decrements one bit of a subcarrier to gain a power reduction, which is used to assign a higher number of bits to the users overall. FSA is described as follows (a code sketch follows the listing).

FS algorithm

(1) Set c = M, select a user k, and set P_T = 0;
(2) Find n̄ = arg min_n P^c_{k,n};
(3) Set R_k = R_k − c and ρ_{k,n̄} = 1, update P_T, shift to the next k;
(4) If P_T > P_max, step out and set c = c − 1, GOTO STEP 2;
(5) If R_k < c for all k, set c = c − 1, GOTO STEP 2;
(6) If {c == 1}, Σ_{k=1}^{K} Σ_{n=1}^{N} ρ_{k,n} < N and P_T > P_max, run "greedy release" and GOTO STEP 2;
(7) Finish.
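A compact sketch of the FS pass under the 'stones first' strategy described above. The greedy-release refinement of step (6) is omitted, and the indexing P[c][k][n] as well as the loop structure are assumptions made for illustration.

# Sketch of the fair scheduling (FS) pass: starting from the highest modulation
# level M, each user in turn takes its cheapest remaining subcarrier at that
# level. Step (6) (greedy release) is not included in this sketch.
def fair_schedule(P, R, M, P_max):
    """P[c][k][n]: power for c bits (c = 1..M, e.g. a dict keyed by c) on
    subcarrier n of user k; R[k]: bit target of user k."""
    K = len(R)
    N = len(P[1][0])
    rho = {}                                 # subcarrier n -> (user k, bits c)
    need = list(R)
    P_T = 0.0
    for c in range(M, 0, -1):                # stones, then pebbles, then sand
        placed = True
        while placed:
            placed = False
            for k in range(K):
                if need[k] < c:
                    continue                 # this level is too coarse for user k
                free = [n for n in range(N) if n not in rho]
                if not free:
                    return rho, P_T
                n_best = min(free, key=lambda n: P[c][k][n])
                if P_T + P[c][k][n_best] > P_max:
                    continue                 # cannot afford this level for user k
                rho[n_best] = (k, c)
                need[k] -= c
                P_T += P[c][k][n_best]
                placed = True
    return rho, P_T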

 

 

 

The greedy release algorithm aims to fill the unallocated subcarriers. It releases one bit from the most expensive subcarrier to gain a power reduction that drives the process. GRA works in the opposite direction to BLA. GRA is described as follows (a code sketch follows the listing).

GR algorithm

(1) Find {k̄, n̄, c̄_{k̄,n̄}} = arg max_{k,n,c} P^c_{k,n} ρ_{k,n};
(2) Set c_{k̄,n̄} = c_{k̄,n̄} − 1 and P_T = P_T − ΔP_{k̄,n̄}(c_{k̄,n̄});
(3) Set c = c_{k̄,n̄} − 1;
(4) Finish.
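A sketch of a single GR step, assuming the same incremental-power helper as in the bit-loading sketch; the dictionary holding the bit allocation is an illustrative choice.

# Sketch of one greedy-release (GR) step: remove the single most power-hungry
# bit currently allocated, reclaiming its power so that unallocated subcarriers
# can be filled. delta_p(k, n, c) is the incremental power of the (c+1)-th bit
# on subcarrier n of user k (assumed available from the bit-loading machinery).
def greedy_release(bits, delta_p):
    """bits: dict (k, n) -> allocated bit count; returns the reclaimed power."""
    loaded = [(k, n) for (k, n), c in bits.items() if c > 0]
    if not loaded:
        return 0.0
    # Most expensive last bit over all allocated (user, subcarrier) pairs.
    k, n = max(loaded, key=lambda kn: delta_p(kn[0], kn[1], bits[kn] - 1))
    reclaimed = delta_p(k, n, bits[(k, n)] - 1)
    bits[(k, n)] -= 1
    return reclaimed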

 

 


The horizontal swapping algorithm (HSA) aims to smooth the bit distribution of a user. Once the subcarriers are distributed, the bit weight per subcarrier can be adjusted to reduce power. One bit of a subcarrier may be shifted to another subcarrier of the same user if there is a power reduction gain. In this way the variation of the power allocation per subcarrier is reduced and a smoother transmission is obtained. HSA is described as follows (a code sketch follows the listing).

HS algorithm

(1) Set ΔP_C = ∞;
(1a) Find {k̄, n̄, c̄_{k̄,n̄}} = arg max_{k,n,c} (P^c_{k,n} ρ_{k,n}) subject to P^c_{k,n} < ΔP_C;
(2) Define the set S_k̄ = {n : ρ_{k̄,n} == 1};
(3) Set Δ_ṅ = max_{ṅ ∈ S_k̄} [ΔP_{k̄,n̄}(c_{k̄,n̄} − 1) − ΔP_{k̄,ṅ}(c_{k̄,ṅ})];
(4) Set ΔP_C = P^{c̄}_{k̄,n̄};
(4a) If Δ_ṅ > 0, set P_T = P_T − Δ_ṅ;
(4b) Set c_{k̄,n̄} = c_{k̄,n̄} − 1, c_{k̄,ṅ} = c_{k̄,ṅ} + 1, GOTO STEP (1a);
(5) If {ΔP_C == min_{k,n,c} (P^c_{k,n} ρ_{k,n})}, finish.
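A sketch of one HS step for a single user, again assuming an incremental-power helper delta_p(k, n, c); the decision rule (remove the costliest last bit, re-add it where it is cheapest, only if the net gain is positive) mirrors steps (1a)-(4b) above in simplified form.

# Sketch of one horizontal-swap (HS) step: move one bit from the user's most
# expensive loaded subcarrier to another of its subcarriers if that reduces
# total power. delta_p(k, n, c) is an assumed helper as in the earlier sketches.
def horizontal_swap(k, bits, subcarriers, delta_p):
    """bits: dict n -> bit count for user k; subcarriers: user k's set S_k."""
    loaded = [n for n in subcarriers if bits[n] > 0]
    if len(loaded) < 2:
        return 0.0
    # Power released by removing the last bit of the costliest subcarrier n_bar.
    n_bar = max(loaded, key=lambda n: delta_p(k, n, bits[n] - 1))
    release = delta_p(k, n_bar, bits[n_bar] - 1)
    # Cheapest place to re-add that bit among the user's other subcarriers.
    others = [n for n in subcarriers if n != n_bar]
    n_dot = min(others, key=lambda n: delta_p(k, n, bits[n]))
    gain = release - delta_p(k, n_dot, bits[n_dot])
    if gain > 0:                     # shift the bit only if power is reduced
        bits[n_bar] -= 1
        bits[n_dot] += 1
    return max(gain, 0.0)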

The vertical swapping algorithm (VSA) performs vertical swapping for every pair of users. In each iteration, users try to swap their subcarriers such that the power allocation is reduced. There are different types of vertical swapping. For instance, in triple swapping, user i gives its subcarrier to user j and, in the same way, user j to user k and user k to user i. In Koutsopoulos and Tassiulas [55], pairwise swapping is modified to cope with the adaptive modulation case. In this case, there is more than one class, where each class is defined by its modulation (i.e. the number of bits loaded onto a subcarrier), and swapping is only allowed within a class. Each pair of users swaps those of their subcarriers that belong to the same class if there is a power reduction. In this way, the adjustment of subcarriers is done across users, to try to approximate the optimal solution. VSA is described as follows (a code sketch follows the listing).

VS algorithm

(1) ∀ pair of users {i, j}:
(1a) Find ΔP_{i,j}(n) = P^ċ_{i,n} − P^ċ_{j,n} and ΔP^n̂_{i,j} = max_n ΔP_{i,j}(n), n ∈ S_i;
(1b) Find ΔP_{j,i}(n) = P^ċ_{j,n} − P^ċ_{i,n} and ΔP^ň_{j,i} = max_n ΔP_{j,i}(n), n ∈ S_j;
(1c) Set Δ^{n̂,ň}P_{i,j} = ΔP^n̂_{i,j} + ΔP^ň_{j,i};
(1d) Add Δ^{n̂,ň}P_{i,j} to the {Δ} list;
(2) Select Δ = max_{(i,n̂),(j,ň)} Δ^{n̂,ň}P_{i,j};
(3) If Δ > 0, switch the subcarriers, set P_T = P_T − Δ and GOTO STEP (1a);
(4) If Δ ≤ 0, finish.
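A sketch of one pairwise VS step restricted to a single modulation class, as described above. The power lookup P[k][n][c] and the allocation dictionaries are assumptions; an outer loop over user pairs and classes, and the stopping rule of steps (2)-(4), would wrap around this.

# Sketch of one pairwise vertical-swap (VS) step: users i and j exchange two
# subcarriers loaded with the same number of bits (the same "class") when the
# exchange lowers total power. P[k][n][c] is the power user k needs to carry c
# bits on subcarrier n (an assumed lookup).
def vertical_swap(i, j, alloc, P):
    """alloc: dict user -> {subcarrier: bits}; returns the power saved (>= 0)."""
    best = (0.0, None, None)
    for n_i, c in alloc[i].items():
        for n_j, c_j in alloc[j].items():
            if c_j != c:
                continue                     # only swap within the same class
            before = P[i][n_i][c] + P[j][n_j][c]
            after = P[i][n_j][c] + P[j][n_i][c]
            if before - after > best[0]:
                best = (before - after, n_i, n_j)
    saved, n_i, n_j = best
    if saved > 0:                            # commit the best swap, if any
        alloc[i][n_j] = alloc[i].pop(n_i)
        alloc[j][n_i] = alloc[j].pop(n_j)
    return saved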

21.4.2 Resource allocation to maximize capacity

Suppose there are no fixed rate requirements per symbol and the aim is to maximize capacity. It has been shown in Viswanath et al. [60] that, for point-to-point links, a fair allocation strategy maximizes the total capacity and the throughput of each user in the long run when the users' channel statistics are the same. The idea underlying the proposed fair scheduling algorithm is to exploit this multiuser diversity gain.

With a slight modification, the fair scheduling algorithm for point-to-point communication was extended to an algorithm for point-to-multipoint communication [53]. Suppose the user's time-varying data rate requirement R_k(t) is sent by the user to the base station as feedback on the channel condition. We treat the symbol time as the time slot, so t is discrete, representing the number of symbols. We keep track of the average throughput t_{k,n} of each user on each subcarrier over a past window of length t_c. The scheduling algorithm will schedule a subcarrier n̄ to a user k̄ according to the criterion

{k̄, n̄} = arg max_{k,n} (r_{k,n}/t_{k,n})

where t_{k,n} can be updated using an exponentially weighted low-pass filter described in Viswanath et al. [60]. Here, we are confronted with determining the r_{k,n} values. We can set r_{k,n} to R_k/N, where N is the number of subcarriers. With this setting, the peaks of the channel on a given subcarrier can be tracked. The algorithm schedules a user to a subcarrier when the channel quality on that subcarrier is high relative to its average condition on that subcarrier over the time scale t_c. When we consider all subcarriers, the fairness criterion matches the point-to-point case as

k̄ = arg max_k R_k/T_k,   where T_k = Σ_{n=1}^{N} t_{k,n}

The theoretical analysis of the fairness property of the above relation for point-to-point communication is derived in Viswanath et al. [60]. Those derivations can be applied to point-to-multipoint communication.
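A sketch of the per-subcarrier proportional-fair rule and the exponentially weighted throughput update with window t_c. Setting r_{k,n} = R_k/N follows the text; the EWMA form and the array layout are assumptions.

# Sketch of the per-subcarrier proportional-fair rule: for each subcarrier pick
# the user maximizing r_{k,n} / t_{k,n}, then update the throughput averages
# with an exponentially weighted low-pass filter of window t_c.
import numpy as np

def pf_schedule_symbol(r, t, t_c=1000.0):
    """r, t: (K, N) arrays of nominal rates and average throughputs.
    Returns the chosen user per subcarrier and the updated averages."""
    K, N = r.shape
    chosen = np.argmax(r / np.maximum(t, 1e-9), axis=0)   # user k_bar per subcarrier
    served = np.zeros_like(r)
    served[chosen, np.arange(N)] = r[chosen, np.arange(N)]
    # EWMA update: t <- (1 - 1/t_c) t + (1/t_c) * served rate
    t_new = (1.0 - 1.0 / t_c) * t + (1.0 / t_c) * served
    return chosen, t_new

# Example with 4 users and 8 subcarriers, r_{k,n} = R_k / N as in the text.
K, N = 4, 8
R = np.array([192.0, 96.0, 96.0, 48.0])
r = np.tile((R / N)[:, None], (1, N))
t = np.full((K, N), 1.0)
for _ in range(100):
    chosen, t = pf_schedule_symbol(r, t)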

21.4.2.1 Performance example

The required transmission power for c bits/subcarrier at a given BER with unity channel gain is [57]:

 

f(c, BER) = (N_0/3) [Q^{−1}(BER/4)]² (2^c − 1)

where Q^{−1}(x) is the inverse of the function Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt.
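A quick numerical check of this formula; Q^{-1} is expressed through the inverse complementary error function, Q^{-1}(x) = sqrt(2)*erfcinv(2x), and N_0 is normalized to 1 purely for illustration.

# Required power for c bits/subcarrier at a target BER (unity channel gain).
import numpy as np
from scipy.special import erfcinv

def required_power(c, ber, N0=1.0):
    """Evaluate f(c, BER) = (N0/3) [Q^{-1}(BER/4)]^2 (2^c - 1)."""
    q_inv = np.sqrt(2.0) * erfcinv(2.0 * ber / 4.0)   # Q^{-1}(BER/4)
    return (N0 / 3.0) * (q_inv ** 2) * (2.0 ** c - 1.0)

# Example: power (relative to N0) for 1..6 bits/subcarrier at BER = 1e-4.
print([round(required_power(c, 1e-4), 2) for c in range(1, 7)])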

Figure 21.9 shows the average data rate per subcarrier vs the total power constraint when there are four users. Each user has a rate requirement of 192 b/symbol (maximum rate) and a BER requirement of 10^{−4}. The performance of the iterative approach is close to that of the optimal one, and the difference between the suboptimal and iterative approaches decreases as the total transmit power increases.

21.5 PREDICTIVE FLOW CONTROL AND QoS

Even if the dimensioning of network resources has been done correctly and the admission control mechanism is good, the network may go into periods of congestion due to transient oscillations in the network traffic. For this reason it is necessary to develop a mechanism to quickly reduce the congestion or pre-empt it, so as to cause the least possible degradation of QoS to the underlying applications [61–81].


 

Figure 21.9 Spectral efficiency (b/subcarrier) vs total transmission power (dB) for the GreedyHungarian, GreedyLP and Iterative schemes.

 

Figure 21.10 Individual link-level model: link l with service rate μ_l, queue length q_l(n), available capacity C_l(n) and aggregate arrivals V_l(n) + a_l(n).

4G networks will carry a mixture of real-time (RT) traffic, like video or voice, and nonreal-time (NRT) traffic, like data. One approach to controlling the NRT traffic is to predict the RT traffic (at the link of interest) at some time in the future and then, based on this prediction, control the NRT traffic.

In Figure 21.10, V_l(n) and a_l(n) correspond to the aggregate RT traffic and NRT traffic, respectively, arriving at a link of interest (link l with capacity μ_l) at time n. One can then estimate C_l(n), the link capacity available for NRT traffic, at some time in the future. This information is then used at the network level to distribute the available link capacities among the NRT flows.

At the network level the available link capacities for the NRT flows are then distributed to maximize throughput (or, more generally, some utility function), subject to appropriate fairness requirements. An example network is shown in Figure 21.11, where flows traverse links whose capacities available to the NRT flows are calculated at the individual link level. In Chapter 7, the network-level problem was investigated for the case where the available link capacity for NRT flows at each node is constant. The problem remains open when the available capacity is time-varying.


Figure 21.11 Network-level model: flows 1–3 traverse links with available capacities C_1(n), . . . , C_6(n).

 

Figure 21.12 System diagram of predictive flow control: RT sources V_1, . . . , V_L and NRT sources a_1, . . . , a_N feed the node of interest (queue q(n), service rate μ); a predictor and a controller feed the explicit rates {a_1(n), . . . , a_N(n)} back through the network.

21.5.1 Predictive flow control model

In this section we focus on the individual link-level problem, a single multiplexing point in the network which consists of a link and an associated buffer that serve both RT and NRT traffic. The multiplexing point in the network could be an output port of a router/switch or a multiplexer. The system diagram is shown in Figure 21.12.

In the following, V(n) will represent the aggregate amount of RT traffic that arrives at the queue of interest at time n. V_max := sup_{n>0}{V(n)} will be assumed finite and V(n) stationary in the mean, i.e. V̄ := E{V(n)}. The goal is to control the NRT traffic based on predicting the aggregate RT traffic arrival rate at the queue. a_i(n) will refer to the available link capacity for the ith NRT traffic source computed at time n based on the predicted value of the RT traffic rate. This explicit rate information is sent back to the ith NRT traffic source.

If N is the number of NRT traffic sources and n_i, i = 1, . . . , N, is the round-trip delay between the ith NRT source and the destination, then a(n) = Σ_{i=1}^{N} a_i(n − n_i) is the aggregate NRT traffic arriving at the queue at time n. A control message is propagated from the queue of interest to the destination and back to the source. V̂_i(n) will represent the predicted value of V(n) based on the history of V before time n − n_i. We assume that the predictor is linear; a simple example can be represented as V̂_i(n + n_i) = Σ_k h^i_k V(n − k). If V(z) is the z-transform of V(n) − V̄ and V̂_i(z) the z-transform of V̂_i(n) − V̄, then V̂_i(z) = z^{−n_i} H_i(z) V(z), where H_i(z), for NRT traffic source i, is a causal, stable, linear, time-invariant system [74]. It should be noted that H_i(z) will be the same for all sources with the same RTT n_i. For example, if there is only one NRT flow with a round-trip delay of 5, a possible predictor (in the time domain) could be V̂_1(n + 5) = (1/2)V(n) + (1/2)V(n − 1). In this case, we will have H_1(z) = (1/2)(1 + z^{−1}).
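A sketch of this example predictor in the time domain; the trace, the delay n_1 = 5 and the coefficient handling are illustrative.

# Sketch of the example predictor H_1(z) = (1/2)(1 + z^{-1}) for a single NRT
# flow with round-trip delay n_1 = 5: the forecast of V(n) is formed from the
# samples available n_1 steps earlier.
import numpy as np

def predict(V, n1=5, h=(0.5, 0.5)):
    """Return V_hat with V_hat[n] predicting V[n] from V[n-n1], V[n-n1-1], ..."""
    V_hat = np.zeros_like(V, dtype=float)
    for n in range(len(V)):
        V_hat[n] = sum(h[k] * V[n - n1 - k] for k in range(len(h))
                       if n - n1 - k >= 0)
    return V_hat

V = 100.0 + 10.0 * np.random.default_rng(2).standard_normal(50)
V_hat = predict(V)                      # V_hat[n] ~= (V[n-5] + V[n-6]) / 2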

 

 

The queue length q(n) at time n at the queue of interest will be determined by a(n), V(n) and the service rate (link rate) μ of the queue. We assume that the queue process begins at time n = 0 and q(0) = 0. For stability, we also require that V̄ < μ. The feedback control scheme is as follows: we predict the aggregate RT traffic rate and use the predicted value to compute a_i(n), 1 ≤ i ≤ N.

21.5.1.1 Predictive flow control algorithm for single NRT traffic model

To start with, we assume that there is only one NRT traffic source a_1(n) (or a group of NRT traffic loops with the same round-trip delay n_1), so, by definition, a(n) = a_1(n − n_1). Ideally, what we want to achieve is a(n) + V(n) = μ at all times n. However, there are two difficulties in achieving this. First, since we do not know V(n) in advance, we need to estimate its value through prediction, with a corresponding possibility of error. Second, V(n) could be greater than μ, but since a(n) cannot be negative, the sum a(n) + V(n) cannot then be made equal to μ. Having in mind the possibility of prediction error and the possibility that V(n) > μ, the NRT traffic a_1(n) will be controlled as

a_1(n) = [pμ − V̂_1(n + n_1)]^+                                          (21.34)

where p is the percentage of the output link capacity that we would like to utilize [p > V̄/μ] and [x]^+ = x if x ≥ 0, [x]^+ = 0 otherwise. Consider now the situation of perfect prediction, and let V(n) have exceeded μ for some time. During the period that V(·) has exceeded μ, the above equation correctly sets a_1(·) to zero; but even after V(·) is no longer larger than μ, there could still be a substantial backlog in the queue, during which time a_1(·) should remain zero. However, according to Equation (21.34), the moment V(n) drops below μ, the NRT source is allowed to transmit, thus potentially causing unnecessary congestion at the queue. So, what we will attempt to do is to keep the queue length at the node of interest small, while maintaining a certain level of throughput given by

 

lim_{n→∞} (1/n) Σ_{j=1}^{n} [a(j) + V(j)] = pμ                          (21.35)

For this reason we define the control algorithm [81] as follows.

 

 


Control algorithm (N = 1 case)

(1) Define a virtual queueing process q_1(n) and set q_1(0) = 0.
(2) q_1(n) = [q_1(n − 1) + V̂_1(n) − pμ]^+. For n ≤ 0, we let V(n) = 0.
(3) a_1(n) = [pμ − V̂_1(n + n_1) − q_1(n + n_1 − 1)]^+. For n ≤ 0, we let a_1(n) = 0.

For N NRT traffic sources the algorithm is modified as follows.

Control algorithm (N > 1 case)

(1) Set q_i(0) = 0, 1 ≤ i ≤ N.
(2) q_i(n) = [q_i(n − 1) + V̂_i(n) − pμ]^+. For n ≤ 0, we let V(n) = 0.
(3) a_i(n) = [pμ − V̂_i(n + n_i) − q_i(n + n_i − 1)]^+/N.
(4) For n ≤ 0, we let a_i(n) = 0.
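A sketch of the N = 1 control law above, with the virtual queue computed recursively and memoized; the predictor is passed in as a function, and the constant prediction in the example is purely illustrative.

# Sketch of the N = 1 virtual-queue control law: q1 tracks how much the
# predicted RT traffic has exceeded the target rate p*mu, and the NRT rate a1
# is throttled by that backlog. V_hat_fn(n) is the predictor output (assumed).
def nrt_rate(n, n1, p, mu, V_hat_fn, q1_state):
    """q1_state: dict caching the virtual queue q1 indexed by time."""
    def q1(m):
        if m <= 0:
            return 0.0
        if m not in q1_state:            # q1(m) = [q1(m-1) + V_hat(m) - p*mu]^+
            q1_state[m] = max(q1(m - 1) + V_hat_fn(m) - p * mu, 0.0)
        return q1_state[m]
    if n <= 0:
        return 0.0
    # a1(n) = [p*mu - V_hat(n + n1) - q1(n + n1 - 1)]^+
    return max(p * mu - V_hat_fn(n + n1) - q1(n + n1 - 1), 0.0)

# Example: mu = 200 kb/s, p = 0.98, RTT n1 = 5, constant prediction of 100 kb/s.
state = {}
rates = [nrt_rate(n, 5, 0.98, 200.0, lambda m: 100.0, state) for n in range(1, 11)]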

21.5.1.2 Performance example

Three different alternatives for the predictor are used [81]. In A1, the fixed low-pass filter is chosen as H_LPF(z) = (1/4)(1 + z^{−1} + z^{−2} + z^{−3}). The MMSE predictor is designed as follows. First, the low-pass filter A1 is applied to the high-priority RT traffic. Next, a standard minimum mean square error linear predictor H_MMSE(z) of the form

H_MMSE(z) = Σ_{m=0}^{M_i} B^{(i)}_m z^{−n_i−m}

is calculated based on the low-frequency part of the RT traffic. The final MMSE predictor is z^{−n_i} H_i(z) = H_LPF(z) H_MMSE(z). Note that H_MMSE(z) requires explicit knowledge of the per-flow round-trip delay information n_i. In A3 that information is approximated by an average value n_0 for all flows. The prediction error is defined as ε(j) = V(j) − V̂_1(j).
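A sketch of how an n_i-step-ahead linear predictor of this kind can be fitted by least squares on a (low-pass-filtered) trace; the filter order M and the use of numpy.linalg.lstsq are implementation assumptions, not the design of [81].

# Fit coefficients b so that v[t + ni] ~= sum_m b[m] * v[t - m] by least squares.
import numpy as np

def fit_predictor(v, ni, M=4):
    rows, targets = [], []
    for t in range(M, len(v) - ni):
        rows.append(v[t - M + 1:t + 1][::-1])   # v[t], v[t-1], ..., v[t-M+1]
        targets.append(v[t + ni])
    b, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return b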

The queue length at the node of interest is bounded by [79]

q(n) ≤ q_0(n) + sup_{0≤n_0≤n} Σ_{j=n_0+1}^{n} ε(j) − inf_{0≤n_0≤n} Σ_{j=n_0+1}^{n} ε(j)

 

Under the definitions and the predictive flow control algorithm defined above, if V_max < ∞ and [(pμ − V̄)/(μ − V̄)]H_1(1) ≤ 1, we have [81] q(n) − q_0(n) ≤ 2C_1, where C_1 is a constant that does not depend on n.

 

 

For l = n − n_0 we define the accumulated error as

X_{n,l} = Σ_{j=n−l+1}^{n} ε(j) = Σ_{j=n_0+1}^{n} ε(j)

 

where E{X_{n,l}} = 0 (because the predictor is unbiased) and, with C(·) the autocovariance function of ε,

Var{X_{n,l}} = Var{Σ_{j=n−l+1}^{n} ε(j)} = Σ_{j_1=n−l+1}^{n} Σ_{j_2=n−l+1}^{n} C(j_1 − j_2)
             = Σ_{j_1=−l+1}^{0} Σ_{j_2=−l+1}^{0} C(j_1 − j_2)                    (21.36)
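Equation (21.36) can be evaluated directly from a given error autocovariance; the exponentially decaying C(k) in the example is illustrative only.

# Direct evaluation of Equation (21.36) from a (symmetric) autocovariance
# function of the prediction error; C_eps is assumed to accept integer lags.
import numpy as np

def var_accumulated_error(C_eps, l):
    """Var{X_{n,l}} = sum over j1, j2 in {-l+1, ..., 0} of C_eps(j1 - j2)."""
    j = np.arange(-l + 1, 1)
    lags = np.abs(j[:, None] - j[None, :])       # |j1 - j2|, using symmetry
    return float(np.sum(C_eps(lags)))

# Example with an exponentially decaying error covariance (illustrative only).
print(var_accumulated_error(lambda k: 10.0 * 0.9 ** k, l=20))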

For the results shown below, V(n) is a generated Gaussian process which is multi-timescale-correlated, with C_v(k) = 479.599 × 0.999^{|k|} + 161.787 × 0.99^{|k|} + 498.033 × 0.9^{|k|} and V̄ = 100 kb/s.


Figure 21.13 Var{X_{n,l}} vs l (ms) for the MMSE, A1 and A3 predictors. (Reproduced by permission of IEEE [81].)

Figure 21.14 Tail (cumulative queue length) probabilities P(Q > x) vs x (b) with different control algorithms. (Reproduced by permission of IEEE [81].)

This type of source has often been used to represent the multiple time-scale correlation in network traffic [71, 80]. The link capacity is 200 kb/s, and the utilization is set to 98%. The time unit is 1 ms and the unit of the queue length is 1 bit [81]. Var{X_{n,l}} is shown in Figure 21.13. We can see that, for the A1 and A3 predictors, Var{X_{n,l}} converges to some constant when l is large enough. The asymptotic variance of A3 is also smaller than that of A1. In Figure 21.14, we compare the above control algorithm with the control algorithm that uses Equation (21.34). As we mentioned before, when V̂_1(n) ≤ pμ for all n, the two control algorithms reduce to the same linear equation. However, when this condition is not true, the two algorithms are quite different. We can see this difference in performance in Figure 21.14. In this simulation, there is only one NRT source with a round-trip delay of 5 ms. The RT source and the link capacity are still the same as in Figure 21.13. To see the difference when the condition V̂_1(n) ≤ pμ is violated, we use the same A3 predictor in both control algorithms. In the figure, the control algorithm is marked C1, and the one that uses Equation (21.34) is marked C2. The utilization is set to p = 98%. Note that when using control algorithm C2, given p, the utilization is not equal to p. In this simulation, we set p such that the measured utilization for C2 is 98%, which is the same as for C1. From this figure, using the same predictor, we can see that the improved control algorithm outperforms the one that uses Equation (21.34).
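For reference, a trace with exactly the covariance C_v(k) above can be generated as the sum of three independent AR(1) processes (one per time scale) shifted to the 100 kb/s mean; generating it this way is an assumption about how such a source can be realized, not the method of [81].

# Multi-timescale RT source: a sum of three independent AR(1) processes has
# covariance 479.599*0.999^|k| + 161.787*0.99^|k| + 498.033*0.9^|k|.
import numpy as np

def multiscale_source(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    params = [(479.599, 0.999), (161.787, 0.99), (498.033, 0.9)]
    v = np.full(n_samples, 100.0)                 # mean rate, kb/s
    for var, a in params:
        x = np.zeros(n_samples)
        x[0] = rng.normal(0.0, np.sqrt(var))      # start in the stationary law
        innov_std = np.sqrt(var * (1.0 - a ** 2))
        for t in range(1, n_samples):
            x[t] = a * x[t - 1] + rng.normal(0.0, innov_std)
        v += x
    return v

V = multiscale_source(10000)                      # one sample per 1 ms time unit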

REFERENCES

[1] R. Dugad, K. Ratakonda and N. Ahuja, A new wavelet-based scheme for watermarking images, in Proc. IEEE Int. Conf. Image Processing, Chicago, IL, 4–7 October 1998, pp. 419–423.

[2] D. Kundur and D. Hatzinakos, Digital watermarking using multiresolution wavelet decomposition, in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, vol. 5, 1998, pp. 2969–2972.

[3] H. Inoue, A. Miyazaki and T. Katsura, An image watermarking method based on the wavelet transform, in Proc. IEEE Int. Conf. Image Processing, Kobe, 25–28 October 1999, pp. 296–300.

[4] H.J.M. Wang, P.C. Su and C.C.J. Kuo, Wavelet-based blind watermark retrieval technique, in Proc. SPIE Conf. Multimedia System Applications, vol. 3528, Boston, MA, November 1998.

[5] P. Campisi, A. Neri and M. Visconti, A wavelet based method for high frequency subbands watermark embedding, in Proc. SPIE Multimedia System Applications III, Boston, MA, November 2000.

[6] M.M. Yeung and F. Mintzer, An invisible watermarking technique for image verification, in Proc. IEEE Int. Conf. Image Processing, Santa Barbara, CA, 1997, pp. 680–683.

[7] D. Kundur and D. Hatzinakos, Toward a telltale watermarking technique for tamperproofing, in Proc. IEEE Int. Conf. Image Processing, Chicago, IL, 4–7 October 1998, pp. 409–413.

[8] R.H. Wolfgang and E.J. Delp, Fragile watermarking using the VW2D watermark, in Proc. SPIE, Security Watermarking Multimedia Contents, vol. 3657, San Jose, CA, January 1999.

[9] P.D.F. Correira, S.M.M. Faria and P.A.A. Assunção, Matching MPEG-1/2 coded video to mobile applications, in Proc. Fourth Int. IEEE Symp. Wireless Personal Multimedia Communications, Aalborg, 9–12 September 2001.

[10] Information technology – coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mb/s – Part 2: Video. ISO, ISO/IEC 11172-2, 1993.

[11] F. Yong Li, N. Stol, T.T. Pham and S. Andresen, A priority-oriented QoS management framework for multimedia services in UMTS, in Proc. Fourth Int. IEEE Symp. Wireless Personal Multimedia Communications, Aalborg, 9–12 September 2001.
[11]F. Yong Li, N. Stol, T.T. Pham and S. Andresen, A priority-oriented QoS management framework for multimedia services in IJMTS, in Proc. Fourth Int IEEE Symp. Wireless Personal Multimedia Communication, Aalborg, 9–12, September 2001.