We continue the review of the performance results from the study of the hyper proxy design for streaming media. We saw that with the workload labeled WEB, the hyper proxy provides the best continuous streaming service to the client compared to the proxy hit and proxy startup hit schemes. Hyper proxy reduces proxy jitter by nearly 50% when the cache size is about 20% of the total object size.
Similar results are observed for the PART workload, as shown in Figure 3. When the cache size is nearly 20% of the object size, hyper proxy reduces proxy jitter by 50% while giving up less than 5% in byte hit ratio. For the delayed startup ratio, the proxy startup hit scheme achieves the best performance. This result is expected, since that scheme explicitly targets the reduction of the delayed startup ratio. Hyper proxy, by contrast, aggressively reduces proxy jitter by keeping more segments in the cache, so cache space may be occupied by media objects whose sessions terminate early; this lowers hyper proxy's effectiveness on the delayed startup ratio. Finally, with the REAL workload, hyper proxy performs best both on each individual metric and overall: it reduces proxy jitter and delayed startup more effectively while keeping the degradation in byte hit ratio within tolerable limits.
In conclusion, proxy designs that target byte hit ratio can be improved by targeting proxy jitter instead, because byte hit ratio does not capture continuous media delivery, which matters more for streaming. The authors of this paper contribute an optimization model that reduces proxy jitter at the cost of a small decrease in byte hit ratio, a tradeoff elaborated in the previous discussions. Using this model, they propose an active prefetching method that determines which segment to bring into the cache, and when. Finally, by combining prefetching with proxy caching schemes, they propose a hyper proxy system that performs well across all the performance studies mentioned.
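To make the active prefetching idea concrete, here is a minimal sketch of the kind of scheduling decision it implies. This is a hypothetical illustration, not the authors' exact model: it assumes uniform segment sizes and durations, a known server-to-proxy bandwidth, and playback at the encoding rate, and it computes when each uncached segment's fetch must start so the segment arrives before playback reaches it.

```python
# Hypothetical sketch of an active prefetching scheduler (names and
# parameters are illustrative, not from the paper). A segment must
# finish downloading before its playback deadline to avoid proxy jitter.

def prefetch_schedule(num_segments, cached, seg_duration, seg_bytes,
                      bandwidth, safety_margin=0.5):
    """Return (segment_index, fetch_start_time) pairs for uncached segments.

    seg_duration: playback time of one segment, in seconds
    seg_bytes:    size of one segment, in bytes
    bandwidth:    server-to-proxy bandwidth, in bytes per second
    safety_margin: start each fetch this many seconds early
    """
    fetch_time = seg_bytes / bandwidth      # time to pull one segment
    schedule = []
    for i in range(num_segments):
        if i in cached:
            continue                        # segment already on the proxy
        deadline = i * seg_duration         # playback reaches segment i here
        start = max(0.0, deadline - fetch_time - safety_margin)
        schedule.append((i, start))
    return schedule

# Example: 6 segments of 10 s / 1 MB each, segments 0 and 1 cached,
# a 500 KB/s link, so each fetch takes 2 s.
plan = prefetch_schedule(6, {0, 1}, 10.0, 1_000_000, 500_000)
for seg, t in plan:
    print(f"segment {seg}: start fetch at t={t:.1f}s")
```

The key point the sketch captures is that prefetching trades cache space for continuity: starting fetches early enough eliminates jitter, but the fetched segments occupy space that a pure byte-hit-ratio policy would have allocated differently.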