Today Mom worked 8 - 4. When Mom was at work, I got to watch two shows, just like Mom told Patti and Patti told me. Dad gave me some cursive writing, so I did that. I played with some Legos. After Mom came back, I rode bikes for some time. First Kailey rode bikes with me, then she went inside. At night, we all played Chutes and Ladders.
Saturday, July 13, 2013
The client logon process
We look at how domain controllers (DCs) and clients communicate during the logon process, and also at how a read-only domain controller (RODC) obtains passwords from a writable domain controller and caches them locally. Each RODC has its own krbtgt account. The krbtgt account credentials are used by DCs to encrypt Kerberos messages. An RODC keeps only its own krbtgt password locally, whereas writable DCs keep the passwords for all krbtgt accounts. Consider an example where a user logs on to his PC and there are two DCs: one read-write (RWDC) and one read-only (RODC). First, the user's machine contacts the RODC with a Kerberos authentication service request packet. When the RODC receives this KRB_AS_REQ packet, it checks its local database to see if it already has the user's password. If it does not, it forwards the KRB_AS_REQ it received to the RWDC. The RWDC generates a KRB_AS_REP - a Kerberos authentication service reply - and returns it to the RODC, which relays it to the user's machine. At this point, the user's machine has a Kerberos ticket-granting ticket (TGT) signed with the domain krbtgt account. Two additional steps are performed next: the RODC requests the RWDC to replicate the machine account's credentials to it, and the RWDC verifies that the password replication policy permits this and replicates the password.
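The forward-and-replicate flow can be sketched as a toy model. Everything below - the class names, the account names, the dict-based "database" and replication policy - is invented for illustration; it is in no way a real Kerberos implementation, only the decision logic described above.

```python
# Illustrative sketch of how an RODC services a KRB_AS_REQ: answer from its
# local cache if it can, otherwise forward to the writable DC and then ask
# for the credential to be replicated so future logons are served locally.

class RWDC:
    def __init__(self):
        # A writable DC holds every password.
        self.passwords = {"ALICE-PC$": "secret1", "alice": "secret2"}
        # Accounts whose passwords the policy allows an RODC to cache.
        self.replication_policy = {"ALICE-PC$", "alice"}

    def handle_as_req(self, principal):
        return ("KRB_AS_REP", principal)   # it can always reply

    def replicate(self, principal):
        # Replicate only if the password replication policy permits it.
        if principal in self.replication_policy:
            return self.passwords[principal]
        return None

class RODC:
    def __init__(self, rwdc):
        self.rwdc = rwdc
        self.cache = {}                    # locally cached passwords

    def handle_as_req(self, principal):
        if principal in self.cache:
            return ("KRB_AS_REP", principal)       # answered locally
        # Not cached: forward the KRB_AS_REQ to the writable DC ...
        reply = self.rwdc.handle_as_req(principal)
        # ... then request replication of the credential.
        password = self.rwdc.replicate(principal)
        if password is not None:
            self.cache[principal] = password
        return reply

rodc = RODC(RWDC())
print(rodc.handle_as_req("ALICE-PC$"))   # first logon: forwarded, then cached
print("ALICE-PC$" in rodc.cache)         # second logon would be local
```

The second request for the same principal never leaves the RODC, which is the whole point of the caching step.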
Now let's look at the process of user authentication. The following are the steps taken when the user attempts to log on to his machine.
The user presents his credentials to his machine, which sends a KRB_AS_REQ to the RODC. When the RODC receives this request, it checks its local database to see if the user's password is cached; otherwise it proceeds through the same steps it took for the machine account. However, the user logon is still not complete. Before the user can use his workstation, he must obtain a Kerberos service ticket (TGS) for his PC. The user's machine sends a KRB_TGS_REQ including the user's TGT from the previous exchange. The RODC is unable to decrypt this TGT, since it is encrypted with the domain krbtgt account, so it transmits the KRB_TGS_REQ to the RWDC, which replies with the corresponding response. The RODC now holds a valid KRB_TGS_REP. At this point, instead of forwarding it to the user's machine, the RODC may decide to send an error indicating that the ticket has expired. Since the RODC now holds cached credentials for the user, it can construct a new KRB_AS_REP locally - and thus a new TGT for the user - encrypt it with its local krbtgt account, and transmit it to the user's machine. The user's machine then sends a new TGS request to the RODC including the new TGT, which the RODC can decrypt, and the RODC constructs a TGS response permitting the user to use his PC. After these steps complete, the user is logged on.
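The ticket-swap trick can be sketched in a few lines. The function and key names below are invented for illustration; the point is only the branch structure: a TGT under the domain krbtgt key is opaque to the RODC, so once the user's secret is cached the RODC answers with an "expired" error to make the client re-authenticate and obtain a TGT the RODC itself issued.

```python
# Illustrative sketch (not a real Kerberos implementation) of the RODC's
# handling of a KRB_TGS_REQ, depending on which krbtgt key sealed the TGT.
DOMAIN_KRBTGT = "domain-krbtgt-key"   # only writable DCs can use this
LOCAL_KRBTGT = "rodc-krbtgt-key"      # the RODC's own krbtgt account

def rodc_handle_tgs_req(tgt_key, user_is_cached):
    if tgt_key == LOCAL_KRBTGT:
        return "KRB_TGS_REP"              # decryptable: issue the service ticket
    if user_is_cached:
        return "KRB-ERR-TKT-EXPIRED"      # force the client to start over locally
    return "forward KRB_TGS_REQ to RWDC"  # no cached secret: let the RWDC answer

# First attempt: the TGT came from the RWDC, sealed with the domain key.
print(rodc_handle_tgs_req(DOMAIN_KRBTGT, user_is_cached=True))
# The client redoes KRB_AS_REQ; the RODC issues a TGT under its own key.
print(rodc_handle_tgs_req(LOCAL_KRBTGT, user_is_cached=True))
```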
Reports, XSLT and load test runs
XSLT transformation enables test results to be displayed. This is how we prepare the data for display or for mailing out to subscribers; the resulting XHTML is easy to share. First we get the results from a .trx file or from a stored procedure execution against a database. This gives us the data in the form of XML or a dataset. Then we create the XSLT with the summary we would like to see. Note that Visual Studio has a default summary view and results view that you can open from a load test run using the "Open and Manage Results" button on the toolbar. This already converts the summary and the results to HTML that can be cut and pasted into any application using object linking and embedding technology. The views we create with XSLT merely define a customized view, using headings, rows and columns to summarize the data.
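The shaping step can be sketched in code. The `<run>`/`<result>` element names below are invented for illustration - a real .trx file has its own schema - and in practice the same shaping is expressed as an XSLT stylesheet rather than imperative code; this just shows the kind of table the transform produces.

```python
# Sketch: turn result XML (as from a .trx file or stored procedure) into an
# XHTML table of the kind an XSLT summary view would emit.
import xml.etree.ElementTree as ET

results_xml = """
<run name="nightly-load">
  <result test="Login" avgMs="120" passed="true"/>
  <result test="Checkout" avgMs="480" passed="false"/>
</run>
"""

def to_xhtml(xml_text):
    run = ET.fromstring(xml_text)
    rows = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            r.get("test"), r.get("avgMs"), r.get("passed"))
        for r in run.findall("result"))
    return ("<table><caption>{}</caption>"
            "<tr><th>Test</th><th>Avg (ms)</th><th>Passed</th></tr>"
            "{}</table>".format(run.get("name"), rows))

html = to_xhtml(results_xml)
print(html)   # XHTML fragment, ready to paste into a mail body
```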
The results from a .trx file or a stored procedure execution need not be wrapped in HTML. They can be converted to an XML or Excel file with a load test plugin. The load test plugin simply has an event handler invoked at the end of the relevant test execution, and can be written in C#.
Likewise, the XSLT transform and database mail can be written as a SQL stored procedure, so that new additions of test runs can trigger database mail. This also scales well to enterprise load, where the runs and the results could be stored in the order of gigabytes. It is easier to design the HTML and the transforms using tools such as Report Manager and Word before moving them inside a stored procedure.
Reports can be generated for all kinds of runs. For performance testing, these runs are usually load tests, stress tests and capacity tests. The load test determines the throughput required to support the anticipated peak production load, the adequacy of a hardware environment and the adequacy of a load balancer. It also detects functionality errors under load and collects data for scalability and capacity planning. However, it is not designed to measure speed of response. The stress test determines whether data can be corrupted by overstressing the system, provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness, allows establishing application monitoring triggers to warn of impending failures, and helps to decide what kinds of failures are most valuable to plan for. The capacity test provides information about how the workload can be handled to meet business requirements, provides actual data that capacity planners can use to validate or enhance their models or predictions, and determines the current usage and capacity of the existing system as well as trends to aid in capacity planning. Note that in practice the most frequently used test is the smoke test, the initial run of the performance test to see whether your application can perform its operations under normal load. For all these runs, report generation and subscription is broadly similar.
Friday, July 12, 2013
Publishing load test results.
In Visual Studio, when we open a load test, we see an "Open and Manage Results" option in the toolbar. This brings up a dialog box that lists the results associated with the load test. Each of these results can be selected and opened. Opening a result brings up the summary view by default. This view can then be cut and pasted into the body of an e-mail for reporting. Alternatively, it can be exported to a file on a file share.
SQL Server Reporting Services' Report Manager provides great functionality for designing custom reports. These reports can draw data using SQL queries. They can also be subscribed to via e-mail registration.
Team Foundation Server enables automation of a performance test cycle. The steps involved in a performance test cycle are as follows:
1. Understand the process and compliance criteria
2. Understand the system and the project plan
3. Identify performance acceptance criteria
4. Plan performance-testing activities
5. Design tests
6. Configure the test environment
7. Implement the test design
8. Execute the work items
9. Report results and archive data
10. Modify the plan and gain approvals for modifications
11. Return to activity 5
12. Prepare the final report.
The first step involves getting buy-in on the performance testing prior to the testing, and complying with standards, if any. The second step is to determine the use-case scenarios and their priority. The third step is to determine the requirements and goals for performance testing, as established with stakeholders, project documentation, usability studies and competitive analysis. The goals should be articulated in a measurable way and recorded. Next, add work items to project plans and schedule them accordingly; this planning is required to line up the activities ahead of time. Designing performance tests involves identifying usage scenarios and user variances and generating test data. Tests are designed based on real operations and data, to produce more credible results and enhance the value of performance testing; they include component-level testing. Next, configure the environments using load-generation and application monitoring tools and isolated network environments, and ensure compatibility, all of which takes time. Test designs are then implemented to simulate a single or virtual user. Next, work items are executed in the order of their priority, and their results are evaluated, recorded and communicated and the test plan adapted. Results are then reported and archived; even runs that are not all usable are sometimes archived with appropriate labels. After each testing phase, it is important to review the performance test plan: mark the test plan items that have been completed and evaluated, and submit for approval. Repeat the iterations. Finally, prepare a report to be submitted to the relevant stakeholders for acceptance.
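The iterative shape of the cycle - activities 5 through 10 repeat, since activity 11 returns to activity 5, until the acceptance criteria are met - can be sketched as a simple loop. The iteration count standing in for "criteria met" is an artificial knob for illustration.

```python
# Sketch of the performance test cycle: activities 1-4 happen once,
# activities 5-10 repeat until acceptance, then the final report is prepared.
ACTIVITIES = [
    "understand process and compliance criteria",
    "understand system and project plan",
    "identify performance acceptance criteria",
    "plan performance-testing activities",
    "design tests",                      # activity 5: start of the loop
    "configure the test environment",
    "implement the test design",
    "execute the work items",
    "report results and archive data",
    "modify the plan and gain approvals",
]

def run_cycle(criteria_met_after_iterations):
    log = ACTIVITIES[:4]                 # activities 1-4, once
    for _ in range(criteria_met_after_iterations):
        log += ACTIVITIES[4:]            # activities 5-10, then "return to 5"
    log.append("prepare the final report")
    return log

log = run_cycle(criteria_met_after_iterations=2)
print(len(log))   # 4 one-time + 2 * 6 looped + 1 final = 17 activities
```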
Thursday, July 11, 2013
Best practices for mailing data and database objects:
Data can be attached to mail in several different file formats, including Excel, RTF, CSV, text and HTML.
Data can also be included in the body of the mail.
You need access to MAPI or SMTP to send mail.
When sending a data access page, share the database so that users can interact with the page.
Create the data access page using UNC paths so that it is not mapped to local drives.
Store the database and the page on the same server.
Publish from a trusted intranet security zone.
Send a pointer instead of a copy of the HTML source code.
For intranet users, UNC paths and domains alleviate security considerations, while the same mechanisms can be used to demand permissions from external users.
Always send the page to yourself and view the code before mailing it to others.
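The SMTP route for mailing data can be sketched as follows: the same CSV goes both into the message body and as an attachment. The host name and addresses are placeholders, and the actual send is commented out since it needs a reachable SMTP server.

```python
# Sketch: build a mail carrying data both in the body and as a CSV attachment.
import smtplib
from email.message import EmailMessage

csv_data = "test,avg_ms\nLogin,120\nCheckout,480\n"

msg = EmailMessage()
msg["Subject"] = "Load test results"
msg["From"] = "reports@example.com"      # placeholder address
msg["To"] = "subscribers@example.com"    # placeholder address
msg.set_content("Results in the body:\n" + csv_data)   # data in the mail body
msg.add_attachment(csv_data.encode(), maintype="text",
                   subtype="csv", filename="results.csv")

# with smtplib.SMTP("mail.example.com") as s:   # placeholder SMTP host
#     s.send_message(msg)
print(msg["Subject"], len(msg.get_payload()))   # multipart: body + attachment
```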
System-generated mails for periodic activities or alerts are common practice in most workplaces. There are several layers from which such mails can be generated. SQL Server has an extended stored procedure, xp_sendmail (superseded in later versions by Database Mail's sp_send_dbmail), that can send messages through a mail server. It needs to be enabled via server configuration. It can be invoked directly from stored procedures, which are very close to the data.
SSRS is another layer from which well-formed reports can be mailed out; these are designed and sent out from SSRS itself. TFS, or source control generally, is another place that can send mail. Automated performance reports can also be sent out this way.
Tuesday, July 9, 2013
REST API : resource versus API throttling
REST APIs should associate cost with the resources rather than with the APIs, because there is no limit on the number of calls that can be made per API. If response sizes can be reduced with an inline filter, that translates directly to savings for both the sender and the receiver.
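A minimal sketch of resource-cost throttling, assuming a budget denominated in response bytes rather than in call counts (the `CostBudget` class and the numbers are invented for illustration, not a known library): each request debits the budget by the size of the response it generates, so a caller who filters inline gets more requests through the same budget.

```python
# Sketch: throttle by resource cost (response size), not by API call count.
class CostBudget:
    def __init__(self, budget_bytes):
        self.remaining = budget_bytes

    def charge(self, response_bytes):
        if response_bytes > self.remaining:
            return False                  # would surface as HTTP 429 in a real API
        self.remaining -= response_bytes
        return True

full = 4_000        # unfiltered response size, bytes
filtered = 500      # same resource with an inline filter applied

budget = CostBudget(budget_bytes=10_000)
print(sum(budget.charge(full) for _ in range(5)))       # only 2 full responses fit

budget = CostBudget(budget_bytes=10_000)
print(sum(budget.charge(filtered) for _ in range(5)))   # all 5 filtered ones fit
```

The asymmetry in the two printed counts is the "savings for both sender and receiver" the paragraph describes.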
Some performance degradations occur due to:
premature optimization
guessing
caching everything
fighting the framework
Performance can be improved with
1) finding the target baseline
2) knowing the current state
3) profiling to find bottlenecks
4) removing bottlenecks
5) repeating the above
Request distribution per hour, most-requested endpoints, HTTP statuses returned, request durations, failed requests and so on all help with the analysis. Server logs can carry all this information, and tools that parse the logs for it help. Process ID and memory usage can be added directly to the server logs. Server-side and client-side performance metrics help to isolate issues.
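Extracting those metrics from a log can be sketched in a few lines. The log format below (timestamp, method, path, status, duration in ms) is invented for illustration; adapt the parsing to your server's actual format.

```python
# Sketch: derive per-hour distribution, most-requested path, status counts
# and average duration from space-delimited server log lines.
from collections import Counter

LOG = """\
2013-07-09T10:01:02 GET /api/users 200 41
2013-07-09T10:15:30 GET /api/users 200 39
2013-07-09T11:02:11 POST /api/orders 500 410
2013-07-09T11:40:00 GET /api/users 200 88
"""

by_hour, by_path, by_status, durations = Counter(), Counter(), Counter(), []
for line in LOG.splitlines():
    ts, method, path, status, ms = line.split()
    by_hour[ts[:13]] += 1          # request distribution per hour
    by_path[path] += 1             # most requested
    by_status[status] += 1         # HTTP statuses returned
    durations.append(int(ms))      # request duration

print(by_hour.most_common())
print(by_path.most_common(1))                # [('/api/users', 3)]
print(by_status["500"])                      # failed requests: 1
print(sum(durations) / len(durations))       # average duration: 144.5
```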
Benchmarks are available for performance testing of APIs. A CDN should not matter in performance measurements. Use a static file return as the baseline. Separate out I/O-bound and CPU-bound processes.
Courtesy: Rails performance best practices