Monday, January 24, 2022

Predicate push-down for OData clients (continued)...

 

Predicates are expected to evaluate the same way regardless of which layer they are implemented in. If a set of predicates is combined with an OR clause rather than an AND clause, each predicate produces its own result set, and the same records may appear in more than one of them. When we filter on one predicate but also allow matches on another, the result sets must be merged before the combined result is returned to the caller. Because the result sets may contain duplicates, the merge may have to return only the distinct elements, which is easily done by comparing the unique identifiers of the records in each result set.
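As a minimal LINQ sketch of that merge-and-distinct step, assuming an illustrative Record type with a unique Id property and two made-up predicates:

// Hedged sketch: two predicate result sets combined as an OR, de-duplicated by unique Id.
var byFirstPredicate  = records.Where(r => r.City == "Seattle");
var bySecondPredicate = records.Where(r => r.Quantity > 40);
var merged = byFirstPredicate
    .Concat(bySecondPredicate)      // union of both result sets, duplicates included
    .GroupBy(r => r.Id)             // compare unique identifiers
    .Select(g => g.First())         // keep only the distinct records
    .ToList();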

 

The selection of the result is required before determining the section that needs to be returned to the user. This section is determined by the start and offset pair in the enumeration of the results. If the query remains the same over time and requests vary only in the paging parameters, we can even cache the result and return only the paged section. The API persists the predicate and its result set in a cache so that subsequent calls for paging yield the same responses. This can also be done as part of predicate evaluation by passing the well-known limit and offset parameters directly in the SQL query. In the enumerator we do this with Skip and Take, and the OData client drives its own paging with the $skip and $top query options.
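For illustration, the enumerator-side paging and the client-driven paging can be sketched as follows; the ordering property and the page sizes are placeholders:

// Enumerator side: return only the requested section with Skip and Take.
var section = results.OrderBy(r => r.Id).Skip(offset).Take(limit).ToList();

// OData client side: Skip and Take translate into the $skip and $top query options.
var page = oDataClient.Resource
    .OrderBy(x => x.Name)
    .Skip(100)   // $skip=100
    .Take(50)    // $top=50
    .ToList();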

When the technology involved merely wants to expose the database to the web, as OData is popularly (albeit incorrectly) used, each SQL object is exposed directly over the web API as a resource. Some queries are more difficult to write in OData than others. For example,

oDataClient.Resource.Where(x => x.Name.GetHashCode() % ParallelWorkersCount == WorkerIndex).ToList()

will not achieve the desired partition of a lengthy list of resources for faster, efficient parallel data access

and must be rewritten as something like:

oDataClient.Resource.Where(x => x.Name.StartsWith("A")).ToList()

...

oDataClient.Resource.Where(x => x.Name.StartsWith("Z")).ToList()

The system query options involved here are $filter, $select, $orderby, $count, $top, and $expand, where the last one helps with joins. Although a great deal of parity can be achieved between SQL and OData with the help of these query options, the REST interface is not a replacement for the analytical queries possible with language options such as those available from U-SQL, LINQ or Kusto. Those have their place higher up in the stack, at the business or application logic layer, but at the lower levels close to the database, where a web interface separates concerns between the stored data and its access, these primitives present both a challenge and an opportunity.
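As a rough sketch of how a single client query can exercise several of these options (the Orders entity set and its properties are hypothetical names, not part of any real service):

// $expand, $filter, $orderby and $top expressed through the OData client LINQ provider.
var topOrders = oDataClient.Orders
    .Expand("Customer")                 // $expand=Customer (join-like navigation)
    .Where(o => o.Total > 100)          // $filter=Total gt 100
    .OrderByDescending(o => o.Total)    // $orderby=Total desc
    .Take(10)                           // $top=10
    .ToList();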

Let us look at how the OData service is written. We begin with a database that can be accessed with a connection string and that stores data in the form of tables for entities. A web project with an entity data model is then written to prepare a data model from the database. The web project can be implemented with SOAP-based WCF or with REST-based web APIs and Entity Framework. Each API is added by creating an association between the entity and the API. Taking the WCF example further, since it provides terminology for all parts of the service albeit an older technology, a type is specified deriving from the base DataService, and in its InitializeService method config.SetEntitySetAccessRule is specified. Then the JSONPSupportBehavior attribute is added to the service class so that end users can get the data in a well-known, readable format. The service definition, say http://<odata-endpoint>/service.svc, can be requested in JSON or XML format to allow clients to build applications using those objects representing entities. The observation here is that it uses a data model which is not limited to SQL databases, so the problem is isolated away from the database and narrowed down to the operations over the data model. In fact, OData has never been about just exposing the database on the web. We choose which entities are accessed over the web, and we can expand the reach with the OASIS standard. OASIS is a global consortium that drives the development, convergence, and adoption of web standards. Another observation is that we need not even use Entity Framework for the data model. Some experts argue that OData's main use case is the create, update, and delete of entities over the web, and that querying should be facilitated by APIs from web services, where rich programmability already exists for writing queries. While it is true that language-based options can come in the compute layer formed by the web services, the exposure remains a common theme of REST API design, whether the REST API sits over a service or over a database. The filter predicate used in those APIs will eventually be pushed into the data persistence layer. In our case, we chose the example of a GetHashCode() operator that is language-based rather than a notion native to the database. As demonstrated with the SQL statement example earlier, supporting such a hash on an entity involves adding a computed column to its persistence. Once that is available, the predicate can automatically be pushed into the database for maximum performance and scalability.
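A minimal sketch of such a service definition, assuming an Entity Framework context named MyEntities and the JSONP support attribute mentioned above (the names are illustrative, not a drop-in implementation):

// Illustrative WCF Data Service exposing the entity data model as an OData endpoint.
[JSONPSupportBehavior]
public class MyDataService : DataService<MyEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Choose which entities are accessed over the web; "*" with AllRead keeps it read-only.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}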

The manifestation of data to support simpler queries and their execution is not purely a technical challenge. The boundary between data and compute is complicated by claims to ownership, responsibilities, and jurisdictions. In practice, clients writing OData applications are often forced to work without any changes to the master data. At this point, there are two options for these applications. The first involves translating the queries into ones that can work on the existing data, such as the example shown above. The second involves scoping down the size of the data retrieved with techniques such as incremental update polling, paging, and sorting, and then performing the complex query operations in memory on that limited set of data. Either option is usually sufficient to alleviate the problem.
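A sketch of the second option, where a simple server-side filter scopes the data down and the language-level operation runs in memory (the Modified property and the seven-day window are assumptions for illustration):

// Scope down on the server with a translatable $filter, then compute locally.
var recent = oDataClient.Resource
    .Where(x => x.Modified > DateTime.UtcNow.AddDays(-7))   // pushed to the service as $filter
    .ToList();                                              // materialize the limited set

var byWorker = recent
    .GroupBy(x => x.Name.GetHashCode() % ParallelWorkersCount)  // in-memory, language-only step
    .ToDictionary(g => g.Key, g => g.ToList());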

The strategic problem, for the case where the data is large and the queries are arbitrarily complex for OData clients, can be resolved with the help of a partition function and scatter-gather processing by the clients themselves. This is comparable to the partition key that forms part of the URI path qualifier in REST interfaces to the CosmosDB store.
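A scatter-gather sketch along these lines, partitioning on the leading letter of the name and merging distinct results (it assumes the Resource entity exposes Name and Id properties and that the code runs inside an async method):

// Scatter: one query per partition of the name space; gather: merge and de-duplicate.
var prefixes = Enumerable.Range('A', 26).Select(c => ((char)c).ToString());
var tasks = prefixes.Select(p => Task.Run(() =>
    oDataClient.Resource.Where(x => x.Name.StartsWith(p)).ToList()));
var partitions = await Task.WhenAll(tasks);
var all = partitions
    .SelectMany(list => list)
    .GroupBy(r => r.Id)
    .Select(g => g.First())
    .ToList();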

OData also provides the ability to batch requests. The HTTP specification must be followed when composing the batch requests and their responses. A batch handler must be created and passed when mapping the route for the OData service; this enables batching, which consolidates multiple operations into a single request and response.
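With ASP.NET Web API OData, for example, enabling batching is roughly a matter of passing a batch handler while mapping the route; the sketch below assumes an existing HttpConfiguration named config and an ODataConventionModelBuilder named builder:

// Register the OData route with a batch handler so clients can POST to ~/odata/$batch.
config.MapODataServiceRoute(
    "odata",                                                  // route name
    "odata",                                                  // route prefix
    builder.GetEdmModel(),                                    // EDM model
    new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer));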

 

Sunday, January 23, 2022

Predicate push-down for OData clients:


Abstract:

Data access is an important operational consideration for application performance, but it is often not given enough attention on architecture diagrams. The trouble with data access is that it is often depicted as a straight-line arrow between a source and a destination on data path diagrams. But the size of the data and the queries run over it can result in a vast temporal and spatial spread of bytes transferred and incur varying processing delays. When this is overlooked, it can bring about additional architectural components such as background processors, polling mechanisms and redundant technology stacks for the fast path. This article discusses some of the challenges and remediations as they pertain to OData, which exposes data to the web.

Description:

Improving query performance mostly involves pushing predicates down into the database, and even further into its query execution and optimization layer, so that the optimizer has a chance to determine the best query plan.

 

In the absence of a database, the web-service layer must emulate the query execution work that the database would have done, and this is unlikely to be efficient or consistent in all cases simply because it works only over an enumeration-based data structure in the web-service layer.

 

On the other hand, the database is closest to the storage; it indexes and organizes the records so that they are looked up more efficiently. Query plans can be compared and the most efficient one chosen. An in-memory, iteration-only data structure will only limit us and will not scale to the size of the data when query processing is handled at the service layer rather than at the data layer.
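The contrast can be sketched with an assumed Entity Framework context: the same predicate is either translated into SQL or evaluated over an in-memory enumeration, depending on where it is applied:

// Pushed down: the predicate becomes part of the SQL statement, so the database
// optimizer can choose an index-based plan.
var pushedDown = db.Orders.Where(o => o.Total < 100).ToList();

// Not pushed down: AsEnumerable() materializes rows first, and the predicate then
// runs over an iteration-only, in-memory sequence in the service layer.
var inMemory = db.Orders.AsEnumerable().Where(o => o.Total < 100).ToList();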

 

Predicates are expected to evaluate the same way regardless of which layer they are implemented in. If a set of predicates is combined with an OR clause rather than an AND clause, each predicate produces its own result set, and the same records may appear in more than one of them. When we filter on one predicate but also allow matches on another, the result sets must be merged before the combined result is returned to the caller. Because the result sets may contain duplicates, the merge may have to return only the distinct elements, which is easily done by comparing the unique identifiers of the records in each result set.

 

Saturday, January 22, 2022

Azure private connectivity for Key Vaults and storage accounts:

The private connectivity for Azure resources might be surprisingly hard to guarantee if the following instructions are not followed. These include:

1. Private Link and VNet Integration monitoring: The purpose of private connectivity is to prevent data exfiltration. Private links provide in-depth protection against the threat. 

2. To privately connect to a service, create a private endpoint.

3. To privately expose a service, create a private link service or a private resource. The existing service must be behind a load balancer.

4. When we create a private endpoint, it must be in the same region as the VNet from which the connections originate.

5. The virtual network and the subnet from which the connections originate can be added to the resource's firewall. Trusted Microsoft services are already allowed to bypass this firewall.

6. When a private endpoint is added, the following connections can no longer be made

a. Connections required from clients and workstations that are part of the organization, a home office, or via Hypernet. This can be mitigated by adding an entry to the DNS maintained by the organization.

b. Applications and services that were allowed based on service tags. There is no inbuilt support for service tags in the firewall configuration.

c. 3rd party solutions that are using any custom script or tooling to access the resource.

7. Item 6a can be addressed by adding the DNS records to the organizational DNS. The name records can be looked up using the DIG web interface, and they might look like the following:

myvault.azure.net@8.8.4.4 (Default):

myvault.azure.net. 60 IN CNAME data-prod-wu2.vaultcore.azure.net.

data-prod-wu2.vaultcore.azure.net. 44 IN CNAME data-prod-wu2-region.vaultcore.azure.net.

data-prod-wu2-region.vaultcore.azure.net. 44 IN CNAME azkms-prod-wu2-b.trafficmanager.net.

azkms-prod-wu2-b.trafficmanager.net. 10 IN A 52.151.47.4

azkms-prod-wu2-b.trafficmanager.net. 10 IN A 51.143.6.21

azkms-prod-wu2-b.trafficmanager.net. 10 IN A 52.158.236.253

Here the CNAME record from above is used to add a new CNAME record to the DNS of the organization, with the zone specified as that of the private link, privatelink.vaultcore.azure.net, and the FQDN as data-prod-wu2.vaultcore.azure.net in this case. Alternatively, an A record with the corresponding IP address can also be used.

8. For any connections that were excluded from switching to private connectivity, their originating IP addresses can be added to the firewall exception.

9. Disabling public access is an option that can guarantee no public access to the resource, but the above steps will help if that option is found too restrictive or unacceptable.



Friday, January 21, 2022

 

Public versus private connectivity for Azure Resources:

Azure resources are globally unique. They can be reached from anywhere on the internet by name or by their public IP addresses. A variety of clients such as mobile phones, personal laptops and remote departments can connect to them. As cloud resources, they bring best practices for deployment, availability and service-level agreements.

Azure cloud resources are also used internally by organizations as their Information Technology resources. They must be protected from external access. One way to secure these resources from undesirable access is to use private connectivity.

Private connectivity avoids all relays over the internet. Azure resources no longer need public IP addresses in this case and can be reached on their private IP addresses. All the firewall rules that restrict ports for public IP address access go away. The result is cleaner, lower-latency network access that is both safe and secure.

When the Azure resource is a storage account or a key vault, the access is not straightforward because there could be many applications and services, on premises or remote, that use them. In these cases, those consuming services may have their own virtual network. When we visit the portal to review the storage account or the key vault, its networking section shows the option to disable internet access. When this option is selected, the consuming services can still reach the resource if they are on the same virtual network, but all external access is prohibited. This helps eliminate internet-based access to these resources. A less restrictive option is to list all the virtual networks from which accesses may originate, so that these resources are no longer accessible from anywhere else, including the internet, except from those virtual networks. The consuming services on the registered virtual networks can still access the resource over the internet; their public connectivity is not disrupted.

Another option is to dedicate a private endpoint or link to the resource so that the private connectivity is established. This option is helpful for both on-premises and cloud services that use these resources. Their usage is narrowed down to just these resources.

When the services and the resources are in different regions, they must have VNet peering, because VNets do not stretch between regions. Peering helps services and resources connect across virtual networks.

Finally, remote clients that want to access the Azure resources and cannot avoid the internet can do so over VPN. In this case, a VPN gateway and point-to-site connectivity are required.

Thursday, January 20, 2022

 

Data Import and Export from a database:

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent networking discussions on Azure DNS, which is a full-fledged general availability service. This article focuses on data import and export from a database.

Data migration is a necessity when archiving, moving from one platform to another, or providing redundancy. Since databases hold relational data, import and export must take the schema into account. When only the schema needs to be transferred, all the SQL objects can be serialized as XML, and this is contained in a single DACPAC file. A data-tier application is a logical database management entity that defines all the SQL Server objects. With the data included alongside the schema, the exported format is a BACPAC file. It can be stored in Azure Blob Storage or on premises.

The advantage of using data-tier application formats is that the DACPAC enables a DBA to list and validate behaviors across different sources and targets. It offers a chance to determine failures and data loss in the case of, say, an upgrade. DAC tools can provide an upgrade plan; a script-driven exercise does not offer that visibility. DAC also supports versioning to help the developer who authors it and the DBA who uses it to maintain and manage the database lineage through its lifecycle. The primary use case for a DACPAC is the propagation of an existing database through development, test and production environments, or in the reverse direction. The BACPAC is used to move a source database to a new database, even on another server.

For an export to be transactionally consistent, there must be no write activity during the export, or the data must be exported from a transactionally consistent copy. If the BACPAC file is stored in Azure Blob Storage, there is a maximum size limit of 200 GB. It cannot be exported to Azure premium storage, to storage behind a firewall, or to immutable storage. The fully qualified file name must be limited to 128 characters and must exclude certain special characters. If the operation exceeds 20 hours, it may be canceled. The user must be a member of the dbmanager role or be assigned create database permissions.

Azure SQL Managed Instance does not support exporting a database to a BACPAC file using the portal or PowerShell; the SqlPackage utility or SQL Server Management Studio must be used instead. This does not mean that there is no such functionality in the portal or PowerShell in general. For example, New-AzSqlDatabaseExport and New-AzSqlDatabaseImport are available, but exports or imports of large databases take a long time and may fail for many reasons. The SqlPackage utility is best for scale and performance. Data Migration Service can migrate a database from a SQL Server to an Azure SQL database. The bcp utility ships with SQL Server and can also be used to back up and restore data.

Like export, import speed can be maximized by providing more and faster resources and by scaling up the database and compute size during the import process. It can be scaled down after the import has succeeded. SqlPackage, the portal and PowerShell help with import just as they do with export.

The DAC operations that are supported include extract, deploy, register, unregister, and upgrade. A BACPAC supports primarily import and export operations.

 

Wednesday, January 19, 2022

 

This is a continuation of the sample queries written for Azure Public Cloud for diagnostic purposes. The topic was introduced in this article earlier.

Sample Kusto queries:

1)      When log entries do not have function names, scopes or duration of calls:

source

| where description contains "<string-before-scope-of-execution>"

| project SessionId, StartTime=timestamp

| join (source

| where description contains "<string-after-scope-of-execution>"

| project StopTime=timestamp, SessionId)

on SessionId

| project SessionId, StartTime, StopTime, duration = StopTime - StartTime

 

| summarize count() by duration = bin(duration / 1s, 10)

 

| sort by duration asc

 

| render barchart

 

2)      Since the duration column is also relevant to other queries later

source | extend duration = endTime - sourceTime

 

3)      When the log entries do not have an exact match for a literal:

source

| where EventText contains "NotifyPerformanceCounters"

| extend Tenant = extract("tenantName=([^,]+),", 1, EventText)

 

4)      If we wanted to parse multiple values out of EventText:

source
| parse EventText with * "resourceName=" resourceName ", totalSlices=" totalSlices:long * "releaseTime=" releaseTime:date ")" *
| where valid in~ ("true", "false")

5)      If we wanted to read signin logs:

source                            

| evaluate bag_unpack(LocationDetails)

| where RiskLevelDuringSignIn == 'none'

   and TimeGenerated >= ago(7d)

| summarize Count = count() by city

| sort by Count desc

| take 5

        6) If we wanted to time bucket the log entries:

source

| where Modified > ago(7d)

| summarize count() by bin(Modified, 1h)

| render columnchart

 

        7) If we wanted to derive just the ids:

Source

| where Modified > ago(7d)

| project  Id

 

        8) If we wanted to find the equivalent of a SQL query, we could use an example as follows:

             EXPLAIN

SELECT COUNT_BIG(*) as C FROM StormEvents

Which results in

StormEvents

| summarize C = count()

| project C

 

This works even for wildcard characters, such as:

EXPLAIN

SELECT * FROM dependencies WHERE type LIKE 'Azure%'

Which results in

dependencies

| where type startswith "Azure"

 

And for aggregations as follows:

EXPLAIN

SELECT operationName as Name, AVG(duration) as AvgD FROM dependencies GROUP BY name

Which results in

dependencies

| summarize AvgD = avg(duration) by Name = operationName

     9)  If we wanted to process JsonPath, there are KQL Functions that process dynamic objects. For example:

datatable(input :dynamic)

[

dynamic({'key1': 123, 'key2': 'abc'}),

dynamic({'key1': 456, 'key3': 'fgh'}),

]

| extend result = bag_remove_keys(input, dynamic(['key2', 'key4']))

The keys to remove can also be specified in JSONPath notation:

| extend result = bag_remove_keys(input, dynamic(['$.key1']))

which, for the first row, results in

{'key2': 'abc'}


Tuesday, January 18, 2022

 Disaster recovery using Azure DNS and Traffic Manager:

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent networking discussions on Azure DNS, which is a full-fledged general availability service.

This article focuses on disaster recovery using Azure DNS and Traffic Manager. The purpose of disaster recovery is to revive functionality after a severe loss for the application. The level of revival may be graded as unavailable, partially available or fully available. A multi-region architecture provides some fault tolerance and resiliency against application or infrastructure failures by facilitating failover. Region redundancy helps achieve failover and high availability, but the approaches to disaster recovery vary from business to business. The following are some of the options.

-          Active-passive with cold standby: In this failover solution, the VMs and other applications in the standby environment are not active until there is a need for failover. Backups, VM images and Resource Manager templates continue to be replicated, usually to a different region. This is cost-effective but takes time to complete a failover.

-          Active-passive with pilot light: This failover solution sets up a standby environment with minimal configuration. The setup has only the necessary services running to support a minimal, critical set of applications. The scenario can only execute minimal functionality, but it can scale up and launch more services to take the bulk of the production load if a failover occurs. Data mirroring can be set up with a site-to-site VPN.

-          Active-passive with warm standby: This solution is set up such that it can take a base load and initiate scaling until all instances are up and running. The solution is not scaled to take the full production workload, but it is functional. It is an enhancement over the previous approach, though short of a full-blown one.

Two requirements that come from this planning deserve callouts. First, a deployment mechanism must be used to replicate instances, data and configurations between the primary and standby environments. The recovery can be done natively or with third-party services. Second, a solution must be developed to divert network/web traffic from the primary site to the secondary site. This type of disaster recovery can be achieved via Azure DNS, Traffic Manager for DNS, or third-party global load balancers.

The Azure DNS manual failover solution for disaster recovery uses the standard DNS mechanism to failover to the backup site. It assumes that both the primary and the secondary endpoints have static IP addresses that don’t change often, an Azure DNS zone exists for both the primary and secondary site and that the TTL is at or below the RTO SLA set in the organization. Since the DNS Server is outside the failover or disaster zone, it does not get impacted by any downtime. The user is merely required to make a flip. The solution is scripted and the low TTL set against the zone ensures that no resolver around the world caches it for long periods. For cold standby and pilot light, since some prewarming activity is involved, enough time must be given before making the flip. The use of Azure traffic manger automates this flip when both the primary and the secondary have a full deployment complete with cloud services and a synchronized database. The traffic manager routes the new requests to the secondary region on service disruption. By virtue of the inbuilt probes for various types of health checks, the Azure Traffic Manager falls back to its rules engine to perform the failover.