Saturday, August 3, 2013
Dual Mode operation
Dual mode operation is common in computing. Take edge routers, for instance: unlike core routers, edge routers additionally perform admission control. To provide more context on routers and admission control, let's review quality of service.
The Internet has traditionally offered a single class of service, called best-effort service, which makes no assurances about delivery. Assurances based on rate and throughput are what define quality of service (QoS). In the absence of QoS, applications have to tolerate losses and delays and adapt to congestion. Should we modify the applications to be more adaptive, or should we modify the Internet to support QoS?
Applications that adapt to the performance of the network are called "tolerant real-time", while those that demand hard limits on performance are called "intolerant real-time".
Quality of service guarantees are based on studies of sharing and congestion. To study congestion, packets are marked so that routers can distinguish between different classes, and new router policies treat packets of different classes differently. QoS guarantees provide isolation of one class from the others. When a fixed bandwidth is allocated to each application flow, bandwidth may not be used efficiently; therefore another tenet of QoS guarantees is that, while providing isolation between classes, resources should be utilized as efficiently as possible. Since every link has a capacity that cannot be exceeded, there is also a need to control admission: applications declare their requirements, and the network can block a flow if it cannot provide the resources.
Quality of service is therefore provided through four mechanisms: packet classification; isolation via scheduling and policing; high resource utilization; and call admission.
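As a rough illustration of the call-admission idea only (not any particular router's implementation), the sketch below tracks reservations against a single link's capacity and blocks a flow whose declared rate no longer fits. All names are illustrative.

using System;

// A minimal sketch of call admission on one link: flows declare a bandwidth
// requirement and are admitted only while the link capacity can cover it.
public class AdmissionController
{
    private readonly double linkCapacityMbps;
    private double reservedMbps;

    public AdmissionController(double linkCapacityMbps)
    {
        this.linkCapacityMbps = linkCapacityMbps;
    }

    // Admit the flow only if its declared rate still fits within the capacity.
    public bool TryAdmit(double requestedMbps)
    {
        if (reservedMbps + requestedMbps > linkCapacityMbps)
            return false;              // block: the network cannot provide the resources

        reservedMbps += requestedMbps; // reserve bandwidth for this flow
        return true;
    }

    // Release the reservation when the flow terminates.
    public void Release(double releasedMbps)
    {
        reservedMbps = Math.Max(0, reservedMbps - releasedMbps);
    }
}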
IntServ and DiffServ are two models for providing quality of service. IntServ stands for Integrated Services and DiffServ for Differentiated Services. In the IntServ model, QoS is applied on a per-flow basis and addresses concerns such as the business model and charging. Even in mobile phone networks this is evident when certain billing options are desirable but not possible. In Differentiated Services, the emphasis is on scalability, a flexible service model, and simpler signaling. Scalability here means we do not track resources for each of the possibly large number of flows; the service model means we provide bands of service such as platinum, gold, and silver.
Courtesy: lecture notes CMU
Friday, August 2, 2013
OData continued
To create an OData service, here are some of the steps to take:
1) Have the data ready, say in a database, and test the connection string.
2) Next, create a web project with an Entity Data Model; WCF Data Services works well with Entity Framework.
3) Update the model from the database.
4) For the API you want to add, create an association by right-clicking on the entity and choosing Add Association.
5) Right-click on the project, choose Add New Item, and add a WCF Data Service.
6) In the service class, specify the context type as the generic parameter of the DataService base class, and in the InitializeService method call config.SetEntitySetAccessRule (see the sketch after these steps).
7) Add the JSONPSupportBehavior attribute to the service class if desired.
8) Test the service by navigating to http://<yoursite>/service.svc
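A minimal sketch of such a service class follows, assuming an Entity Framework context named NorthwindEntities with Categories and Products entity sets (hypothetical names; substitute your own model):

using System.Data.Services;
using System.Data.Services.Common;

// [JSONPSupportBehavior]  // optional, from the community JSONP sample, if JSONP is desired
public class NorthwindService : DataService<NorthwindEntities>
{
    // Called once to set service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose only the entity sets you choose; "*" would expose everything.
        config.SetEntitySetAccessRule("Categories", EntitySetRights.AllRead);
        config.SetEntitySetAccessRule("Products", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}

Navigating to service.svc should then return the service document listing only the exposed entity sets.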
OData is not limited to SQL databases, and it is definitely not about putting your database on the web: you choose which entities are accessed over the web, and you can expand the reach through the OASIS standard. OASIS is a global consortium that drives the development, convergence, and adoption of web standards.
OData is not limited to WCF services.
In fact, the ASP.NET Web API framework makes it easy to author RESTful APIs out of the box. While the WCF DataService wraps the context and locks down the interactions, so that we only set permissions on the DbSets exposed from the context, the ASP.NET Web API lets us define the interactions ourselves and put the logic in the methods; therefore we need not use Entity Framework at all. This is the key difference when choosing between a service and an API to expose OData: CRUD functionality alone calls for a service, and anything more calls for an API. Experts suggest that if you are working from the data upwards to expose it, you can simply use the service; if you are working from the clients via methods and want to control access to the model, you choose the API. In the Web API project, the App_Data, Models, and Views folders can be deleted.
Using ASP.NET Web API, endpoints are created easily. Many endpoints can be created, and they can be hosted side by side with non-OData endpoints. The framework does not limit control over any of the layers, such as the data model, the back-end business logic, and the data layer.
To create an endpoint with this framework, the following steps should be taken (a sketch of the resulting controller and route registration follows these steps):
Create a controller the same way as a standard one, using the scaffolding options to point to the model and the data context class.
The above step generates the Web API controller with the corresponding GET api/entity methods.
JSON is the default response type unless XML is specifically requested.
PUT methods are also provided to update the entities.
Next, we convert the controller into an OData controller by deriving from EntitySetController and specifying the entity as the associated type.
Next, we add the methods we want to expose and leave out the ones we want to restrict. Notice that this is different from the SetEntitySetAccessRule we use with the DataService.
Next, we define a route for the OData endpoint with config.Routes.MapODataRoute("ODataRoute", "odata", model);
Finally, we rename the controller class. The data can then be consumed from clients in, say, JavaScript, PHP, or Python, and tested with Fiddler or curl.
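A minimal sketch of the resulting controller and route registration, assuming a Category entity with an int key and a NorthwindContext Entity Framework context (both hypothetical names) and the 2013-era Microsoft ASP.NET Web API OData package, might look like this:

using System.Linq;
using System.Web.Http;
using System.Web.Http.OData;
using System.Web.Http.OData.Builder;
using Microsoft.Data.Edm;

// The OData controller: expose only the methods you want to support.
public class CategoriesController : EntitySetController<Category, int>
{
    private readonly NorthwindContext db = new NorthwindContext();

    // GET /odata/Categories
    public override IQueryable<Category> Get()
    {
        return db.Categories;
    }

    // GET /odata/Categories(1)
    protected override Category GetEntityByKey(int key)
    {
        return db.Categories.Find(key);
    }
}

// Route registration, typically called from WebApiConfig.Register.
public static class ODataConfig
{
    public static void Register(HttpConfiguration config)
    {
        var modelBuilder = new ODataConventionModelBuilder();
        modelBuilder.EntitySet<Category>("Categories");
        IEdmModel model = modelBuilder.GetEdmModel();
        config.Routes.MapODataRoute("ODataRoute", "odata", model);
    }
}

Leaving out an override (for example Post or Delete) is what restricts that operation, in contrast to the access rules used with the DataService.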
Thursday, August 1, 2013
OData
The Open Data Protocol (OData) is a data access protocol for the web. OData provides a uniform way to query and manipulate data sets through CRUD operations, and OData endpoints can be created for a subset of these operations. OData differs from its predecessors such as ODBC, OLE DB, ADO.NET, and JDBC in that it provides data access over the web: ODBC provided data access from C, OLE DB was COM-based, ADO.NET was a .NET API, and JDBC was for Java. The need for standardized data access is not restricted to relational data, where it is served by SQL; structured data also needs to be accessible over the web so that it can be integrated with SharePoint 2010, the WCF Data Services framework, IBM's WebSphere, and the Open Government Data Initiative.
OData is built on a standard referred to as the Atom Publishing Protocol, which itself is built on top of the Atom Syndication Format.
Atom is a simple way of exposing a "feed" of data. A feed consists of many entries, and both feed and entry have required and optional pieces of metadata. The top-level feed element has title, link, updated, author, and id child elements, all of which are required except link. The id associated with an entry is unique within that feed. If a link attribute is provided, it can be absolute or relative. A content element of type xhtml enables the content to be packed into the entry itself.
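As a rough illustration (using .NET's System.ServiceModel.Syndication types rather than hand-written XML, with made-up ids and URLs), a feed with one entry can be produced like this:

using System;
using System.ServiceModel.Syndication;
using System.Xml;

class AtomSample
{
    static void Main()
    {
        // Feed-level metadata: title, id, updated; author is added below, link is optional.
        var feed = new SyndicationFeed(
            "Sample Categories Feed",                   // title
            "Illustrative feed of product categories",  // description
            new Uri("http://example.com/categories"))   // alternate link
        {
            Id = "urn:uuid:11111111-1111-1111-1111-111111111111",
            LastUpdatedTime = DateTimeOffset.UtcNow
        };
        feed.Authors.Add(new SyndicationPerson("editor@example.com"));

        // One entry; its id must be unique within the feed, and text or xhtml
        // content can be embedded directly in the entry.
        var entry = new SyndicationItem(
            "Beverages",
            SyndicationContent.CreatePlaintextContent("Soft drinks, coffees, teas"),
            new Uri("http://example.com/categories/1"),
            "urn:uuid:22222222-2222-2222-2222-222222222222",
            DateTimeOffset.UtcNow);
        feed.Items = new[] { entry };

        // Write the feed out in Atom 1.0 format.
        using (var writer = XmlWriter.Create(Console.Out, new XmlWriterSettings { Indent = true }))
        {
            new Atom10FeedFormatter(feed).WriteTo(writer);
        }
    }
}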
The Categories table of the Northwind database maps naturally onto this Atom schema; in fact, the entire Northwind database is available via OData endpoints.
Atom feed documents are organized into collections, collections are grouped into workspaces, and workspaces are described by service documents.
CRUD operations are performed via GET, POST, PUT, and DELETE. HTTP and XML support is sufficient to publish OData endpoints.
Entries that are POSTed can be reviewed via the Location header returned for the newly created category.
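As a small client-side illustration, the public read-only Northwind sample service (assumed here to live at services.odata.org) can be queried over plain HTTP; this sketch just fetches the Categories feed:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ODataClientSample
{
    static async Task Main()
    {
        // Assumption: the public sample service is reachable at this URL.
        var client = new HttpClient { BaseAddress = new Uri("http://services.odata.org/Northwind/Northwind.svc/") };

        // Atom/XML is the default representation; ask for "application/json" to get JSON instead.
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/atom+xml"));

        // GET the Categories entity set; each Atom <entry> is one Category row.
        string feed = await client.GetStringAsync("Categories");
        Console.WriteLine(feed);

        // A POST of a new entry (where the service allows writes) would return
        // 201 Created with a Location header pointing at the newly created category.
    }
}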
ASP.NET Web API is a framework that makes it easy to write RESTful applications. A Web API can be hosted within an ASP.NET application or inside a separate process (self-hosting).

The workflow is roughly as follows. The HttpServer receives request messages, which are converted to HttpRequestMessage objects. Message handlers then process these messages; custom handlers can be chained here, and handlers can be global or per-route, the latter being configured along with the routes. If a message handler does not create a response on its own, the message continues through the pipeline to the controller. Controllers are discovered because they derive from a well-known class or implement a well-known interface, so the appropriate controller is instantiated and the corresponding action is selected. If authorization fails, an error response is created and the rest of the pipeline is skipped. Next, model binding takes place, and OnActionExecuting and OnActionExecuted run before and after the action is invoked. The results are then converted and flow back out through the handlers and the server. Error responses and exceptions bypass the rest of the pipeline.

In the model-binding stage, the request message is used to create values for the parameters of the action; these values are passed to the action when it is invoked. The request message is broken into the URI, the headers, and the entity body. Binders and value providers convert the URI and query string into simple types, while formatters read the message body to produce complex types.

To convert the result, the return value of the action is checked. If the return type is HttpResponseMessage, it is passed through as-is. If the return type is void, a response with status 204 (No Content) is created. For all other return types, a media-type formatter serializes the value and writes it to the message body.
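For instance, a custom message handler that sits early in this pipeline can be sketched as follows (the handler name and the logging it does are illustrative):

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// A minimal sketch of a custom message handler: it runs before the controller
// and can either short-circuit with its own response or pass the request along.
public class RequestLoggingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Trace.WriteLine("Incoming: " + request.Method + " " + request.RequestUri);

        // Continue down the pipeline (routing, controller selection, model binding, action).
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        Trace.WriteLine("Outgoing: " + (int)response.StatusCode);
        return response;
    }
}

// Registration as a global handler, typically in WebApiConfig.Register:
// config.MessageHandlers.Add(new RequestLoggingHandler());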
Wednesday, July 31, 2013
SQL Server datetime conversions include the following functions.
The CONVERT function is a good general choice, but DATENAME and DATEPART provide more flexibility.
SET DATEFORMAT controls how ambiguous date strings are interpreted. For example, with SET DATEFORMAT mdy the following cast succeeds:
SELECT CAST('7/31/2013' AS DATETIME)
You can also use different styles to convert to and from text:
SELECT CONVERT(VARCHAR, CONVERT(DATETIME, '31-Jul-2013', 106), 106)
will display 31 Jul 2013
The allowed styles include:
Style 0, 100 (default): mon dd yyyy hh:mi AM (or PM)
Style 101 (USA): mm/dd/yyyy
Style 102 (ANSI): yyyy.mm.dd
Style 103 (British/French): dd/mm/yyyy
Style 104 (German): dd.mm.yyyy
Style 105 (Italian): dd-mm-yyyy
Style 106: dd mon yyyy
Style 107: mon dd, yyyy
Style 108: hh:mi:ss
Style 9, 109: default with milliseconds
Style 110 (USA): mm-dd-yyyy
Style 111 (Japan): yyyy/mm/dd
Style 112 (ISO): yyyymmdd
Style 13, 113: Europe default with milliseconds and a 24-hour clock
Style 114: hh:mi:ss:mmm with a 24-hour clock
Style 20, 120 (ODBC canonical, 24-hour clock): yyyy-mm-dd hh:mi:ss
Style 21, 121 (ODBC canonical with milliseconds, 24-hour clock): yyyy-mm-dd hh:mi:ss.mmm
A look at some of the SQL constructs in no particular order:
1) Common table expression (CTE) - a temporary, named result set that exists for the duration of a single statement such as a SELECT, INSERT, UPDATE, or DELETE.
It can be thought of as a substitute for a view that does not require the view definition to be stored in the metadata. It follows the syntax WITH CTE_Name (Col1, Col2, Col3) AS (SELECT query that populates the columns), followed by the outer query that references CTE_Name.
A CTE can be written recursively, as follows:
WITH CTE_name (column_name) AS
(
    CTE_query_definition  -- anchor member
    UNION ALL
    CTE_query_definition  -- recursive member, which references CTE_name
)
-- statement using the CTE
SELECT * FROM CTE_name
For example:
WITH DirectReports (ManagerID, EmployeeID, Level) AS
(
    SELECT e.ManagerID, e.EmployeeID, 0 AS Level FROM dbo.Employee AS e WHERE e.ManagerID IS NULL
    UNION ALL
    SELECT e.ManagerID, e.EmployeeID, d.Level + 1 FROM dbo.Employee AS e INNER JOIN DirectReports AS d ON e.ManagerID = d.EmployeeID
)
SELECT ManagerID, EmployeeID, Level FROM DirectReports
2) Some query hint options include:
HASH GROUP or ORDER GROUP: specifies whether aggregations use hashing or ordering.
MERGE UNION, HASH UNION, or CONCAT UNION: specifies which strategy should be used for a union.
LOOP JOIN, MERGE JOIN, or HASH JOIN: specifies which strategy should be used for joins.
FAST number_rows: optimizes for fast retrieval of the first number_rows rows.
FORCE ORDER: specifies that the join order indicated by the query syntax is preserved.
MAXDOP: overrides the max degree of parallelism setting; 0 means use the maximum.
OPTIMIZE FOR UNKNOWN: tells the query optimizer to use statistical data instead of the initial values for variables.
RECOMPILE: discards the plan generated for the query after it executes, forcing the query optimizer to recompile next time.