Sunday, April 12, 2015

We continue our coverage of JavaScript libraries, resuming with the next one in the list: groupie. groupie provides the semantics of a group, where all functions are executed at once, and of a chain, where they are executed in the declared order. Function registration for a group or a chain is similar to what we have seen from the previous libraries in the list.
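A hypothetical sketch of the registration style (the names group and chain come from the description above; the exact groupie signatures, and the fetchA/fetchB tasks, are assumptions for illustration):

var groupie = require('groupie');

// group: all registered functions are kicked off at once
groupie.group(function (done) { fetchA(done); },
              function (done) { fetchB(done); },
              function () { console.log('group complete'); });

// chain: functions run in the declared order
groupie.chain(function (done) { fetchA(done); },
              function (done) { fetchB(done); },
              function () { console.log('chain complete'); });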
The next library in the list is continuables, which exposes the semantics that a unit, the continuable, can be fulfilled. Consequently, many of them can be grouped or chained. What sets this library apart is its ease of use for node.js developers. For example:
var continuables = require('continuables');

var async_fn = function (val) {
    var continuable = continuables.create();
    process.nextTick(function () {
        continuable.fulfill(val);
    });
    return continuable;
};
Now the continuables can be chained. If the chain ends in an error, the error will be thrown; to prevent this, the continuable must return something. Error and success cases can therefore be differentiated based on the presence of return values, and separate callbacks for the two states can be invoked via continuables.either, as sketched below.
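A hedged sketch of such a chain (the invocation style and the continuables.either signature are assumptions based on the description above):

async_fn(42)(function (val) {
    return val + 1;                    // returning a value continues the chain
})(continuables.either(
    function success(val) { console.log('ok: ' + val); },
    function error(err) { console.log('failed: ' + err); }
));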
Slide exposes semantics similar to the Async library: registered functions should not throw an error but pass it to the callback instead. node.js has similar constructs at a low level, but this library is purportedly easier. The convention introduces two kinds of functions: actors, which take action, and callbacks, which get results. Callbacks handle all errors, and the error is therefore the first argument. Callbacks can trap/call other callbacks. Actors take a callback as the last argument, must not throw, and their return value is ignored. The library has a construct called asyncMap, which is similar to the group functionality mentioned earlier: essentially it waits for all the registered actors and their callbacks to complete. It also has a chain construct that enables one-by-one continuation.
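A short sketch of the actor/callback convention with asyncMap, assuming the usual slide signature asyncMap(list, actor, callback):

var asyncMap = require('slide').asyncMap;
var fs = require('fs');

// actor: takes the callback last, never throws, return value ignored
function readIt(file, cb) {
    fs.readFile(file, cb);   // errors are passed to cb, not thrown
}

asyncMap(['a.txt', 'b.txt'], readIt, function (er, contents) {
    if (er) return console.error(er);  // the callback handles all errors first
    console.log('read ' + contents.length + ' files');
});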
Step is another library that additionally enables parallel execution, with similar error handling.
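Step's registration looks like the following sketch (close to the example in its README; within each step, this is the callback for the next step):

var Step = require('step');
var fs = require('fs');

Step(
    function readSelf() {
        fs.readFile(__filename, this);          // 'this' continues to the next step
    },
    function capitalize(err, text) {
        if (err) throw err;                     // thrown errors go to the next step
        return text.toString().toUpperCase();   // synchronous return passes the value on
    },
    function showIt(err, newText) {
        if (err) throw err;
        console.log(newText);
    }
);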

Saturday, April 11, 2015



There are quite a few patterns in the use of JavaScript. Here we cover a few. 

1) Async. This pattern flattens the nested callbacks usually seen when making one call after another.

a. For example, if we have nested calls like:

function getMoney(err, success) {
    if (err || !success)
        throw Error('Oops!');
    callSweetheart(function (err, success) {
        // ... each dependent call nests one level deeper
    });
}
b. Then this can be serialized with  

Async.chain(
    function getMoney(callback) {
        earn();
        callback(null);
    },
    function callSweetheart(callback) {
        dial();
        callback(null);
    },
    function (err) {    // final callback
        if (err) {
            console.log(err);
        }
    });

c. Chain also does more than serialization and consolidation of callbacks. It passes the result of one function as a parameter into the next. The parameters are entirely dependent on the previous function, except for the last one, which must be a callback.
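A sketch of that result passing, under the same assumed Async.chain semantics as the example above (earn and dial remain placeholders):

Async.chain(
    function getMoney(callback) {
        callback(null, earn());                  // pass the earnings forward
    },
    function callSweetheart(amount, callback) {  // receives what getMoney passed
        callback(null, dial(amount));
    },
    function (err, result) {                     // final callback
        if (err) console.log(err);
        else console.log(result);
    });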

d. Async.series is also available. This takes multiple functions in series. Each task takes a callback as a parameter. When all the functions have run, or if there is an error, the final function is called with the combined results of all tasks in the order they were run.

var counter = 0;

Async.series([
    function (done) {
        console.log(counter++); // == 0
        done(null, 1);
    },
    function (done) {
        console.log(counter++); // == 1
        done(null, 2, 3);
    }],
    function (err, one, two) {
        console.log(err); // == null
        console.log(one); // == 1
        console.log(two); // == [2, 3]
    });

e. Async.parallel is also available. The tasks are started together, so they may not complete in the order they appear in the array.
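A sketch in the same style as the series example (the signatures are assumed from the text): both tasks start together, and the completion order can differ from the declaration order.

Async.parallel([
    function (done) { setTimeout(function () { done(null, 'slow'); }, 200); },
    function (done) { setTimeout(function () { done(null, 'fast'); }, 100); }],
    function (err, first, second) {
        console.log(first);   // 'slow' - results keep declaration order
        console.log(second);  // 'fast' - even though it finished first
    });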

2) Flow.js defines capabilities very similar to the above. flow.exec is a convenience function that defines a flow and executes it immediately, passing no arguments to the first function.

a. flow.exec(function () {
        doSomething(this);
    }, function (err) {
        if (err) throw err;
        doSomethingElse(this);
    }, function (err, result) {
        if (err) throw err;
        console.log(result);
    });


b. Sometimes a step in a flow may need to initiate several asynchronous tasks and wait on all of them before proceeding to the next step. This is called multiplexing and is achieved by passing this.MULTI() instead of this as the callback parameter.

flow.exec(function () {
    doSomething(this);
}, function (param1, param2) {
    doSomethingDifferent(param1, this.MULTI());
    doSomethingDifferent(param2, this.MULTI());
}, function () {
    okWeAreDone();       // runs only after both MULTI() callbacks fire
});

c. There is another convenience function called serialForEach, which can be used to apply an asynchronous function to each element in an array of values serially, as sketched below.
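A hedged sketch of serialForEach (the exact flow.js signature is an assumption: an element callback, a per-step result callback, and a completion callback; doubleAsync is a placeholder):

flow.serialForEach([1, 2, 3], function (val) {
    doubleAsync(val, this);          // 'this' advances to the next element
}, function (err, doubled) {
    if (err) throw err;
    console.log(doubled);            // runs as each element completes
}, function () {
    console.log('all done');
});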

3) Next we discuss the following libraries:
funk,
futures,
groupie,
node-continuables,
slide,
step,
node-inflow.
Funk is a software module that provides the following syntax to serialize and parallelize callbacks.
var funk = require('funk')('serial'); // constructor; 'serial' or 'parallel'
funk.set('foo', 'bar'); // save results to be used in run()

// add callbacks to be executed either in series or parallel:
setTimeout(funk.add(function (err, success) {
}), 200);
setTimeout(funk.add(function (err, success) {
}), 100);

setTimeout(funk.nothing(), 200);
setTimeout(funk.nothing(), 100);

funk.run(); // both timeouts will be called

Future or FuturesJS is another asynchronous toolkit. It provides constructs like:
- join
- forEachAsync
- arrayAsync:
          - someAsync
          - filterAsync
          - everyAsync
          - mapAsync
          - reduceAsync

Join joins any number of asynchronous calls, similar to how pthread_join works or how a promise's then() works.

var join = Futures.join(); // assuming the FuturesJS join constructor
setTimeout(join.add(), 200);
setTimeout(join.add(), 100);

join.notify(function (index, args) {
    console.log("callback #" + index + " " + args);
});

ArrayAsync provides an asynchronous counterpart for each of the Array iteration methods. For example:
(function () {
    filterAsync(['dogs', 'cats', 'octocats'], function (next, element) {
        doYouLikeIt(element, function (likesIt) { // doYouLikeIt: a placeholder async check
            next(likesIt);
        });
    }).then(function (newArr) {
        displayLikes(newArr);
    });
})();

Friday, April 10, 2015

Today we continue reading the paper on the design of streaming proxy systems. We discussed the uniformly and exponentially segmented media objects.
We talked about prefetching and the minimum buffer size for such media. The minimum buffer size ensures low resource usage. Prefetching gives the scheduling point, but it doesn't mean that jitter can be avoided in all cases. The uniformly segmented media object has an advantage over the exponentially segmented one: it enables in-time prefetching, which can begin at a later stage. Even so, continuous media streaming is not guaranteed. One suggestion is to keep enough segments cached. This leads us to define the prefetching length as the minimum length of data that must be cached in the proxy to guarantee continuous delivery when Bs > Bt, where Bs is the average encoding rate and Bt is the average network bandwidth; prefetching is not necessary when Bs < Bt. The prefetching length aggregates the cached segment lengths without breaks, and from it we calculate the number of segments m needed for continuous delivery. For uniformly segmented media objects each segment length is the same, while for exponentially segmented media objects each cached segment is twice the length of the previous one. We then review the tradeoff between low proxy jitter and high byte-hit ratio, and between byte-hit ratio and delayed startup ratio.
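To make the calculation of m concrete, here is a small sketch (our own illustration, not code from the paper) that counts how many cached segments cover a required prefetching length P:

// uniform segmentation: every segment has the same length L
function segmentsNeededUniform(P, L) {
    return Math.ceil(P / L);
}

// exponential segmentation: segment i has length L1 * 2^(i-1)
function segmentsNeededExponential(P, L1) {
    var m = 0, total = 0, len = L1;
    while (total < P) { total += len; len *= 2; m++; }
    return m;
}

console.log(segmentsNeededUniform(50, 10));     // 5
console.log(segmentsNeededExponential(50, 10)); // 3, since 10 + 20 + 40 >= 50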
#codingexercise

double GetEvenNumberRangeSumCubeRootPowerFourteen(double[] A)
{
    if (A == null) return 0;
    return A.EvenNumberSumCubeRootPowerFourteen();
}
We will take a short break now.

Thursday, April 9, 2015

Today we will continue our discussion on the design of streaming proxy systems. We were discussing active prefetching. Prefetching schemes can reduce proxy jitter by fetching uncached segments before they are accessed. We discussed the cases of both uniformly and exponentially segmented media objects. For the uniformly segmented scheme, the segments take an equal amount of time, and consequently the segments up to the ratio Bs/Bt cause proxy jitter. This threshold is determined by the latest point at which a segment must be fetched. Recall that this position is chosen such that the time it takes to prefetch the segment does not exceed the time it takes to deliver the rest of the cached data plus the fetched data. The minimum buffer size is calculated accordingly as (1 - Bt/Bs) L1. This holds for all three ranges, namely the first cached segment, the cached segments up to the threshold, and the cached segments beyond the threshold.
In the case of the exponentially segmented object, a similar analysis can be done. Here we assume Bs <= 2Bt; when that does not hold, no prefetching of uncached segments can be in time for exponentially segmented objects. If n is the number of cached segments, then for n = 0 we have to prefetch up to the segment at position 1 + log2(1/(2 - Bs/Bt)) to avoid proxy jitter thereafter. The minimum buffer size is calculated by using this threshold in the same kind of calculation as above. For n > 0 and less than the threshold, the proxy starts to prefetch the threshold segment once the client starts to access the object. Jitter is unavoidable between the (n+1)th segment and the threshold segment, and the minimum buffer size is Li * Bt/Bs, where Li is the length of the threshold segment. For segments beyond the threshold, the prefetching of the (n+1)th segment starts when the client accesses the first 1 - 2^n/(2^n - 1) * (Bt/Bs - 1) portion of the first n cached segments. The minimum buffer size is L(n+1) * Bt/Bs and increases exponentially for later segments.
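A sketch translating these thresholds into code (our own illustration, not code from the paper; Bs is the average encoding rate, Bt the average network bandwidth):

function uniformJitterThreshold(Bs, Bt) {
    return Math.ceil(Bs / Bt);          // segments before this can cause jitter
}

function exponentialJitterThreshold(Bs, Bt) {
    if (Bs >= 2 * Bt) return Infinity;  // no in-time prefetching is possible
    return Math.ceil(1 + Math.log2(1 / (2 - Bs / Bt)));
}

console.log(uniformJitterThreshold(1.5, 1.0));      // 2
console.log(exponentialJitterThreshold(1.5, 1.0));  // 2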

Wednesday, April 8, 2015

We continue reading the paper on the design of high quality streaming proxy systems. We were reviewing active prefetching. For a media object with uniform segmentation, we calculated the minimum buffer length to be the same in all three cases: the first segment, the segments up to Bs/Bt, and the segments thereafter. We also found that proxy jitter is unavoidable up to the threshold.
We now do active prefetching for the exponentially segmented object. Here we assume Bs < 2Bt, i.e., the average encoding rate of a segment is less than twice the average network bandwidth. When Bs >= 2Bt, no prefetching of uncached segments can be in time for exponentially segmented objects.
For the case with no segment cached, proxy jitter is inevitable.
For the case where the number of cached segments n is between 0 and the threshold 1 + log2(1/(2 - Bs/Bt)), the proxy starts to prefetch the next segment once the client starts to access the object. When the client accesses the segments between n+1 and the threshold, proxy jitter becomes inevitable, and the minimum buffer size is the length of the threshold segment times Bt/Bs.

#codingexercise
double GetAllNumberRangeSumCubeRootPowerFourteen(double[] A)
{
    if (A == null) return 0;
    return A.AllNumberSumCubeRootPowerFourteen();
}

#codingexercise
double GetAllNumberRangeProductCubeRootPowerSixteen(double[] A)
{
    if (A == null) return 0;
    return A.AllNumberProductCubeRootPowerSixteen();
}

Tuesday, April 7, 2015

We discuss active prefetching from the paper "designs of high quality streaming proxy systems" by Chen, Wee and Zhang. The objective of active prefetching is to determine when to fetch which uncached segment so that proxy jitter is minimized. The paper assumes that media objects are segmented, that the bandwidth is sufficient to stream the object smoothly, and that each segment can be fetched over a unicast channel. Each media object has its inherent encoding rate - this is the playback rate and is denoted by its average value. Data transmission rates from prior sessions are recorded.
For a requested media object with n segments cached in the proxy, the objective is to schedule the prefetching of the (n+1)th segment so that proxy jitter is avoided. At playback position x, the length of the data still to be delivered from a cached segment of length L is L - x. With Bs as the average playback rate and Bt as the average data transfer rate, proxy jitter is avoided when the time to deliver the sum of the remaining lengths L - x over the n cached segments plus the (n+1)th segment, at rate Bs, is at least the time to fetch the (n+1)th segment at rate Bt.
This is a way of saying that the prefetch time must not exceed the delivery time. From the equation above, the position can be varied such that the latest prefetch scheduling point is one where the arrival is just sufficient to meet demand. The buffer size would then reach minimum.
Determining the prefetching scheduling point should then be followed by a prefetching scheme and resource requirements.
If the media object is uniformly segmented, we can determine the minimum buffer size required to avoid proxy jitter. There are three ranges of interest: the first segment; the segments from the second up to the ratio Bs/Bt; and the segments thereafter. They have the same minimum buffer size, but may or may not be able to avoid proxy jitter. The threshold for the number of segments we need to prefetch is Bs/Bt.
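A sketch of the scheduling point for the uniform case (our own reading of the inequality above; n cached segments of equal length L, playback offset x measured from the start of the cached data):

// In time when (n*L - x + L) / Bs >= L / Bt; solving at equality for the
// latest offset x at which prefetching of segment n+1 may begin:
function latestPrefetchOffset(n, L, Bs, Bt) {
    return (n + 1) * L - (Bs / Bt) * L;
}

console.log(latestPrefetchOffset(3, 10, 1.5, 1.0)); // 25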

Sunday, April 5, 2015

Http(s) Video Proxy and VPN services 
  
Video content from providers such as Netflix, Hulu or even YouTube is not available everywhere. Furthermore, we may want anonymity when viewing the video. Together these pose a challenge: getting high quality video streaming with the same level of privacy as, say, the Tor project. 
  
In this document, we will review some options and talk about the designs of a streaming proxy.  
  
First of all, we should be specific about the video content. Most providers only do an IP level check to  determine whether they should restrict the viewing. This can easily be tricked by one of the following methods: 
1)   VPN – We can easily create an IP tunnel over the internet to the domain where we can access the video content. This poses little or no risk over the internet, and the quality may be as good as local, given the ubiquity of such a technique in the workplace. 
2)   Proxying – We can hide our IP address from the machine serving the video content, and verify this on sites that offer to look up our IP address. By doing so, we trick the providers into thinking we are local to a country where the service is unrestricted. 
However both of these are not necessarily guaranteed to be a working option in most cases for reasons such as: 
1)   we may not be at liberty to use the workplace VPN service to watch internet content that is not related to workplace 
2)   even if we do hide our IP address, most Internet service providers may have issues with such a strategy, or there might already be address translations that affect our viewing.  
3)   They may require buffering or caching and this does not work well for live video. 
4)   Even proxy caching strategies such as segment based are actually partially caching video content. 
5)   We may still see startup latency or be required to start/stop the content again and again. 
6)   And then the delay by the proxy aka proxy jitter affects continuous streaming 
  
Let us now look at some strategies to overcome this. 
There are really two problems to tackle: 
First, media content broken into segments requires segment-based proxy caching strategies. Some of these strategies reduce the startup latency seen by the client; they do so by giving higher priority to caching the beginning segments of media objects. The other type of strategy aims to improve the operational efficiency of the proxy by improving the byte-hit ratio. The highest byte-hit ratio can be assumed to be achieved when segmentation is delayed as long as possible, until some realtime access information can be collected. 
None of the segmentation strategies can automatically ensure continuous streaming delivery to the client. Such a proxy has to fetch and relay the uncached segments whenever necessary, and any delay results in proxy jitter, something that affects the client right away and is very annoying. 
Reducing this proxy jitter is the foremost priority. This is where the different prefetching schemes come in. One way is to keep a prefetching window and fill in the missing data.  
The trouble is that improving the byte-hit ratio and reducing proxy jitter conflict with each other. Proxy jitter occurs if the prefetching of uncached segments is delayed; aggressive prefetching, on the other hand, reduces proxy efficiency, and prefetched segments may even be thrown away. That is why there is a tendency to prefetch uncached segments as late as possible. Secondly, improving the byte-hit ratio also conflicts with reducing the delayed startup ratio. 
Chen, Wee and Zhang in their paper “designs of high quality streaming proxy systems” discuss an active prefetching technique that they use to solve proxy jitter. They also improve the lazy segmentation scheme, which addresses the conflict between startup latency and byte-hit ratio.  
#codingexercise
double GetAllNumberRangeProductCubeRootPowerFourteen(double[] A)
{
    if (A == null) return 0;
    return A.AllNumberRangeProductCubeRootPowerFourteen();
}