We continue our discussion of conjugate gradient methods. First we discuss the method of steepest descent, because in our earlier description we hinted at choosing a descent direction but were not clear about that choice or about the step length. When the matrix A is symmetric positive definite, the function f(x) = (1/2) x^T A x - b^T x is a paraboloid, and we want to minimize it. In the method of steepest descent we start at an arbitrary point x0 and slide down to the bottom of the paraboloid, taking a series of steps, each defined by a direction and a step length. We choose the direction in which f decreases most quickly, which is the direction opposite the gradient f'(x). At each step there is an error, the vector from the current iterate to the solution. Multiplying the error by -A turns it into the residual r = b - Ax, which equals -f'(x) and therefore points in the direction of steepest descent.
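To make the residual/gradient relationship concrete, here is a minimal numerical check; the matrix, right-hand side, and starting point are illustrative values, not taken from the text:

```python
import numpy as np

# Illustrative 2x2 symmetric positive definite system (assumed values).
A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])

x0 = np.array([-2.0, -2.0])   # arbitrary starting point

gradient = A @ x0 - b         # f'(x) for f(x) = (1/2) x^T A x - b^T x
residual = b - A @ x0         # r = b - Ax

# The residual is exactly the negative gradient: the steepest descent direction.
print(np.allclose(residual, -gradient))  # True
```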
Next we choose the step length, and here we see an interesting observation. We want the step length that minimizes f along the search line; we are restricted to points on the intersection of a vertical plane and the paraboloid. The new point is new = old + alpha * residual: we have picked the residual as the direction, but we do not yet know how big a step alpha to take along it. At the bottom of the paraboloid the gradient is zero and the function cannot be reduced any further, so the step length is zero and the iteration terminates. Elsewhere, the step length minimizes f exactly when the directional derivative along the search line is zero. That directional derivative is the dot product of the gradient with the residual, so setting it to zero says the two vectors are orthogonal. As we move along the search line, the rate of change of f is the projection of the gradient onto the line, and that projection is zero at the minimum; solving this condition gives alpha = (r^T r) / (r^T A r). We can then conveniently pick the next direction as the one orthogonal to the search line at that minimum. The iteration therefore traces a zig-zag path, because each new gradient is orthogonal to the previous one.
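The whole iteration described above can be sketched in a few lines. This is a sketch assuming f(x) = (1/2) x^T A x - b^T x with a symmetric positive definite A; the small test system is an illustrative example, not one given in the text:

```python
import numpy as np

def steepest_descent(A, b, x, tol=1e-10, max_iter=1000):
    """Minimize f(x) = (1/2) x^T A x - b^T x for SPD A by steepest descent."""
    for _ in range(max_iter):
        r = b - A @ x                    # residual = steepest descent direction
        if np.linalg.norm(r) < tol:      # gradient ~ 0: bottom of the paraboloid
            break
        alpha = (r @ r) / (r @ (A @ r))  # step length zeroing the directional derivative
        x = x + alpha * r                # new = old + alpha * residual
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x = steepest_descent(A, b, np.zeros(2))
# Successive residuals are orthogonal, which produces the zig-zag path.
print(np.allclose(A @ x, b))  # True
```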
#codingexercise:
LDAP is the directory in which we want to put all devices and applications. How do we authenticate a user using LDAP?
import io
import ldap
import ldif

def load_user(id):
    # Assumes a Users(name, uid, authenticated) class is defined elsewhere.
    ld = ldap.initialize('ldap://localhost:1389')
    name = id
    ld.simple_bind_s()  # anonymous bind
    basedn = "ou=people,dc=example,dc=com"
    search_filter = "(|(cn=*" + name + "*)(sn=*" + name + "*))"
    results = ld.search_s(basedn, ldap.SCOPE_SUBTREE, search_filter)
    # Serialize the search results to LDIF text in an in-memory buffer.
    buf = io.StringIO()
    ldif_writer = ldif.LDIFWriter(buf)
    for dn, entry in results:
        ldif_writer.unparse(dn, entry)
    output = buf.getvalue()
    uname = ""
    uid = ""
    if id in output:
        for line in output.splitlines(True):
            if "dn:" in line:
                # e.g. "dn: uid=jdoe,ou=people,dc=example,dc=com"
                kvp = line.split("dn: ")[1].split(',')
                uid = "".join(i.replace("uid=", "") for i in kvp if 'uid=' in i).strip()
            if "cn:" in line:
                uname = line.replace("cn: ", "").strip()
            if uname and uid:
                ld.unbind_s()
                return Users(uname, uid, True)
    ld.unbind_s()
    return Users('Anonymous', 'unknown', False)