Friday, April 24, 2015

Today we continue discussing the remaining modules of the Hyper Proxy system and the results. Two kinds of synthetic workloads were used. The first varied the lengths of the media objects, and the second varied the access durations of media objects so that a session could close before the full object was downloaded. In addition, a third workload captured from real traffic on a server was also used. Since these three workloads show different characteristics, two metrics are used to evaluate them: the delayed startup ratio and the byte hit ratio. The first is the total number of startup-delayed requests normalized by the total number of requests. The second is the amount of data delivered to clients from the proxy cache divided by the total amount of data demanded by all the clients. We also want to reduce the jitter byte ratio, i.e., the data the proxy fails to deliver in time for continuous playback.
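As a minimal sketch (not from the paper), the two metrics could be computed from a simulated request log as below; the record fields and function names are hypothetical illustrations of the definitions above.

def delayed_startup_ratio(requests):
    # Startup-delayed requests normalized by the total number of requests.
    delayed = sum(1 for r in requests if r["startup_delayed"])
    return delayed / len(requests)

def byte_hit_ratio(requests):
    # Bytes served to clients from the proxy cache divided by the
    # total bytes demanded by all clients.
    cached = sum(r["bytes_from_cache"] for r in requests)
    demanded = sum(r["bytes_demanded"] for r in requests)
    return cached / demanded

# Example log: three requests, one of which had a delayed startup.
log = [
    {"startup_delayed": True,  "bytes_from_cache": 0,    "bytes_demanded": 1000},
    {"startup_delayed": False, "bytes_from_cache": 800,  "bytes_demanded": 1000},
    {"startup_delayed": False, "bytes_from_cache": 1000, "bytes_demanded": 1000},
]
print(delayed_startup_ratio(log))  # 0.333...
print(byte_hit_ratio(log))         # 0.6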
We now evaluate the performance on these workloads, which we label Web for the first, Part for the second, and Real for the third. The proxy cache system was also varied across three schemes. Proxy Hit represents adaptive lazy segmentation with active prefetching. Proxy Startup Hit represents the improved lazy segmentation scheme with active prefetching. Lastly, Proxy Jitter represents the Hyper Proxy system.
For the Web workload, Hyper Proxy provides the best continuous streaming service to the clients, while the Proxy Hit scheme performs worst since it focuses on increasing the byte hit ratio. This is most notable when the cache size is 20% of the total object size, in which case Hyper Proxy reduces proxy jitter by nearly 50%.
Hyper Proxy achieves the lowest delayed startup ratio, followed closely by the Proxy Startup Hit scheme.
Hyper Proxy achieves a relatively low byte hit ratio, which means a smaller reduction in network traffic.
