Tuesday, December 1, 2015

Today we review the paper "Optimal Distributed Online Prediction Using Mini-Batches" by Dekel, Gilad-Bachrach, Shamir, and Xiao. The paper observes that online prediction methods are typically presented as serial algorithms running on a single processor, but to keep up with ever-growing data sets they need to be parallelized. In this context the authors present a distributed mini-batch algorithm: a method that converts many serial gradient-based online prediction algorithms into distributed algorithms. The method also accounts for communication latencies between nodes in a distributed environment. The authors show an asymptotically linear speedup for distributed stochastic optimization on multiple processors, and they demonstrate the advantages of their approach on a web-scale prediction problem.
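To make the scheme concrete, here is a minimal sketch of the distributed mini-batch idea, not the authors' code: several workers each average gradients over their share of a mini-batch, and the averaged gradient drives one serial-style update, so only one vector per worker crosses the network per batch. I assume squared loss with a linear predictor for illustration; names such as distributed_minibatch_sgd and stream are hypothetical.

import numpy as np

def distributed_minibatch_sgd(stream, dim, num_workers, per_worker, lr=0.1):
    # w is the shared model; in a real deployment each worker holds a copy.
    w = np.zeros(dim)
    for x_batch, y_batch in stream:
        grads = []
        for k in range(num_workers):
            # each worker sees only its own slice of the mini-batch
            xs = x_batch[k * per_worker:(k + 1) * per_worker]
            ys = y_batch[k * per_worker:(k + 1) * per_worker]
            # squared-loss gradient of a linear predictor (illustrative choice)
            grads.append(xs.T @ (xs @ w - ys) / len(ys))
        # average the workers' gradients and apply a single update,
        # exactly as a serial algorithm would on the full mini-batch
        w -= lr * np.mean(grads, axis=0)
    return w

Because the update uses the average gradient over the whole mini-batch, the trajectory matches what a serial mini-batch algorithm would compute on the same data, which is the intuition behind the linear speedup.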
We will get into the details, but first let us take a short break for the interview question below.
Given a string of letters and a smaller set of letters, find the smallest window in the larger string that contains all the elements of the smaller set.
One way to do this is with a sliding window: for each feasible starting position, find the shortest window that covers the smaller set, then take the minimum of those lengths, as sketched below.
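A standard way to realize this is the two-pointer sliding window: grow the window on the right until it covers every required letter, then shrink it from the left while coverage still holds, recording the shortest window seen. Here is a sketch in Python; the function name min_window is my own.

from collections import Counter

def min_window(s, required):
    # need[c] > 0 means c is still missing; negative means surplus copies
    need = Counter(required)
    missing = len(required)
    best = ""
    left = 0
    for right, ch in enumerate(s):
        if need[ch] > 0:
            missing -= 1
        need[ch] -= 1
        if missing == 0:
            # shrink from the left past surplus characters
            while need[s[left]] < 0:
                need[s[left]] += 1
                left += 1
            if not best or right - left + 1 < len(best):
                best = s[left:right + 1]
            # drop the leftmost required char to search for a smaller window
            need[s[left]] += 1
            missing += 1
            left += 1
    return best

For example, min_window("ADOBECODEBANC", "ABC") returns "BANC". The scan advances each pointer at most len(s) times, so the whole search runs in linear time.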
