Thursday, December 24, 2015

int GetFirstEvenCountOccurence(List<int> numbers)
{
    // Count how many times each value appears in the list.
    var countable = new Dictionary<int, int>();
    for (int i = 0; i < numbers.Count; i++)
    {
        if (countable.ContainsKey(numbers[i]))
            countable[numbers[i]] += 1;
        else
            countable.Add(numbers[i], 1);
    }

    // Return the first value whose total count is even.
    for (int i = 0; i < numbers.Count; i++)
    {
        if (countable[numbers[i]] % 2 == 0)
            return numbers[i];
    }

    return -1; // no value occurs an even number of times
}

Today we continue discussing the DMB algorithm, in which a deterministic update rule is applied in a distributed environment. This technique resembles the serial mini-batch technique described earlier and is therefore called the distributed mini-batch algorithm (DMB).
The algorithm processes the input stream in batches j = 1, 2, ..., where each batch contains b + mu consecutive inputs: b is the batch size and mu is the number of additional inputs that arrive while the nodes synchronize. In each batch, all of the nodes use a common predictor wj. While observing the first b inputs of a batch, the nodes compute and accumulate the stochastic gradients of the loss function f at wj. Once the nodes have accumulated b gradients altogether, they start a distributed vector-sum operation to compute the sum of these b gradients. While the vector sum completes in the background, mu additional inputs arrive and the system keeps serving them with the same predictor wj; the gradients of these mu inputs are discarded. Once the vector sum completes, each node holds the sum of the b gradients collected during batch j. Each node divides this sum by b to obtain the average gradient, denoted gj-bar, and feeds it to the update rule phi, which returns a new synchronized predictor wj+1. Although only the first b gradients (roughly b/k per node, where k is the number of nodes) are used to compute the next predictor, all b + mu inputs are counted in the regret calculation. If the DMB algorithm is run over a spanning tree of the network, the aggregation is performed at the root and the new predictor is broadcast back down the tree.
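A minimal single-process sketch of this batching logic may help. The gradient oracle for f, the update rule phi, and the round-robin assignment of inputs to nodes are all placeholder assumptions here; the distributed vector-sum is simulated by combining k per-node partial sums in memory rather than over a network.

using System.Collections.Generic;

class DistributedMiniBatchSketch
{
    // Placeholder delegates: a stochastic gradient of the loss f at w for one input,
    // and the update rule phi that maps (w, average gradient) to the next predictor.
    public delegate double[] GradientOracle(double[] w, double[] input);
    public delegate double[] UpdateRule(double[] w, double[] averageGradient);

    // Simulates DMB with k nodes, batch size b, and mu extra inputs that arrive
    // while the vector sum runs. Returns the final synchronized predictor.
    public static double[] Run(IEnumerable<double[]> stream, double[] w0,
                               int k, int b, int mu,
                               GradientOracle grad, UpdateRule phi)
    {
        var w = (double[])w0.Clone();
        var batch = new List<double[]>(b + mu);

        foreach (var input in stream)
        {
            batch.Add(input);
            if (batch.Count < b + mu) continue;   // keep collecting the current batch j

            // Each node accumulates gradients (at the common predictor wj)
            // for the inputs assigned to it among the first b of the batch.
            var partial = new double[k][];
            for (int n = 0; n < k; n++) partial[n] = new double[w.Length];
            for (int i = 0; i < b; i++)
            {
                var node = i % k;                 // assumed round-robin assignment
                var g = grad(w, batch[i]);
                for (int d = 0; d < w.Length; d++)
                    partial[node][d] += g[d];
            }

            // Distributed vector-sum (simulated): combine the k partial sums.
            // The last mu inputs of the batch were served with the same wj
            // during this phase, and their gradients are discarded.
            var sum = new double[w.Length];
            for (int n = 0; n < k; n++)
                for (int d = 0; d < w.Length; d++)
                    sum[d] += partial[n][d];

            // Average the b gradients and feed the result to phi to obtain wj+1.
            var avg = new double[w.Length];
            for (int d = 0; d < w.Length; d++)
                avg[d] = sum[d] / b;
            w = phi(w, avg);

            batch.Clear();
        }
        return w;
    }
}

In an actual cluster the partial sums would be combined by an all-reduce, or aggregated up a spanning tree to the root and the new predictor broadcast back down, as described above.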
