Saturday, July 31, 2021

 

Introduction: This is a continuation of a series of articles on Azure services, starting with the SignalR service mentioned here. Azure Analysis Services is next in the portfolio of Azure cloud services we started to review. It is a fully managed platform-as-a-service that provides enterprise-grade data models in the cloud. It uses advanced mashup and modeling features to combine data from multiple data sources, define metrics, and secure the data in a single trusted tabular semantic data model. The data model provides an easier and faster way for users to perform ad-hoc data analysis using tools like Power BI and Excel.

Azure Analysis Services differs from other services in that it provides a one-stop shop for all tabular data models across hybrid data sources, even at higher compatibility levels than others. Tabular models are key concepts of relational modeling and are articulated by definitions in the tabular model scripting language as well as the Tabular Object Model. With the help of tables, we can define partitions, views, row-level security, bidirectional relationships, and translations. Since the data can be accumulated across cloud and on-premises data sources, the service provides a consistent, enterprise-grade semantic data model that can be used with tools like Power BI and Excel.
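As a rough illustration of the Tabular Object Model mentioned above, the following C# sketch connects to a server and requests a full refresh of a model. The server URI, database name, credentials, and the exact client package name are assumptions and would need to be substituted for a real deployment.

using Microsoft.AnalysisServices.Tabular; // Analysis Services client libraries; package name may vary

class RefreshModel
{
    static void Main()
    {
        var server = new Server();
        // Placeholder connection string; substitute your own server URI and credentials.
        server.Connect("Provider=MSOLAP;Data Source=asazure://<region>.asazure.windows.net/<server>;User ID=<user>;Password=<password>");

        // Look up the tabular database and ask for a full refresh of its model.
        Database database = server.Databases.FindByName("adventureworks");
        database.Model.RequestRefresh(RefreshType.Full);
        database.Model.SaveChanges(); // sends the refresh request to the server

        server.Disconnect();
    }
}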

The performance of Analysis Services is well known: its support for partitioning enables loads to be increased incrementally, queries to be run in parallel, and memory to be consumed efficiently. There are also advanced data modeling features such as calculated tables and all DAX functions. Regardless of the size of the data that is transferred to the server, the service supports in-memory models that can be refreshed with cached data or with data directly from the sources. It recognizes Azure logged-in users and service principals so that querying and analysis can run under their security context, which enables all the auditing benefits. Background operations and unattended refresh can be automated, and the service offers a variety of ways to integrate via its SDK, REST APIs, and PowerShell cmdlets. In comparison to a standalone SQL Server Analysis Services deployment, this cloud service offers unprecedented virtualization of both relational databases and warehouses, which makes querying and analysis quite easy.
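To make the unattended-refresh point concrete, here is a minimal C# sketch of calling the Azure Analysis Services REST refresh endpoint; the region, server, model, refresh options, and the bearer token acquisition are placeholders, not values from the original post.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TriggerRefresh
{
    static async Task Main()
    {
        // Placeholder values; substitute the real region, server, model, and token.
        var refreshUri = "https://<region>.asazure.windows.net/servers/<server>/models/<model>/refreshes";
        var accessToken = "<token acquired for the Analysis Services resource>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Request a full, unattended refresh of the model.
        var body = new StringContent("{ \"Type\": \"Full\", \"MaxParallelism\": 2 }",
            Encoding.UTF8, "application/json");
        var response = await client.PostAsync(refreshUri, body);
        Console.WriteLine(response.StatusCode); // 202 Accepted indicates the refresh was queued
    }
}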

The service is hosted on a single server, and it can scale to several servers with simple deployment techniques that are common to many Azure services, where the infrastructure can be provisioned declaratively using ARM resource templates. Azure Analysis Services is supported in regions throughout the world with varying types of SKUs and pricing options. With the help of replicas, the service can scale out queries in a distributed manner and still maintain relatively low latency in query responses. It does this by creating a query pool with up to seven additional query replicas, and these replicas are assigned to the same region. It is quite possible to dramatically increase performance with premium and larger deployment sizes.

The analysis service stands out among its peers of analysis services and pipelines assembled from a combination of heterogeneous products by being a native cloud service. It conforms to the strict and stringent demands placed on the cloud computing provider and thus meets industry standards and government compliance requirements in terms of privacy, security, and data protection.

Friday, July 30, 2021

 

Introduction: This article is a continuation of the previous articles on Azure services. In this article we talk about the Content Delivery Network on Azure. This is a distributed network of servers that delivers web content to users, typically resources for web pages such as JavaScript, stylesheets, and HTML that are downloaded from the content delivery network. The CDN nodes closest to the application or clients are used so that there is little or no latency. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging networking optimizations such as point-of-presence (POP) locations and route optimization via the border gateway protocol. Benefits of using Azure CDN include better performance, large-scale delivery, and distribution of user requests.

The design of Azure CDN is very similar to that of object storage. Both perform geo-replication and automatic synchronization between virtual datacenters, a term used to denote a shared-nothing collection of servers or clusters. Both leverage some form of synchronization with the help of, say, a message-based consensus protocol. Azure Storage is also a service that provides BLOB storage, but the CDN is hosted as its own service and comes with its own ARM resource that can be used to provision one or more CDNs. As with all Azure services, the CDN service also provisions an Azure resource backed by an Azure Resource Manager template. When the resource is provisioned, it can be used to download content from the network. ARM templates are infrastructure-as-code and policy-as-code, so they can be used for achieving a desired state of the infrastructure and for orchestration.

Azure CDN is used for a variety of purposes, including the following:

1) delivering static resources for client applications, as described earlier for websites

2) delivering public static and shared content to devices

3) serving entire websites that consist only of public static content

4) streaming video files to clients on demand

5) enabling faster access to public resources from Azure CDN POP locations

6) improving the experience for users who are further away from data centers

7) supporting the Internet of Things by scaling to a huge number of devices that can access content

8) handling traffic surges without requiring the application to scale

Some of the challenges involved when planning a CDN concern deployment considerations about where to deploy the CDN, along with a few others. For example, these include versioning and cache control of the content, testing of the resources independent of their publication, search engine optimization, and content security. In addition, the CDN service must provide disaster recovery and backup options so that the data is not lost and remains highly available. Systems engineering design sometimes looks down upon a CDN because of the costs involved: if it is easier to scale the servers without requiring the planning of a content delivery network, costs are saved because the resources are co-located and there are easier options to scale. Otherwise, the customer would integrate the publication of their content, which can be done with the help of the CDN.
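As a small illustration of the cache-control concern above, the following C# sketch assumes an ASP.NET Core application whose static resources are fronted by the CDN and sets an explicit Cache-Control header on them; the max-age value and the hosting setup are only examples, not anything prescribed by the original post.

using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Serve static files with an explicit Cache-Control header so that the CDN
// edge servers and browsers know how long the content may be cached.
app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        ctx.Context.Response.Headers["Cache-Control"] = "public, max-age=86400";
    }
});

app.Run();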

 

Thursday, July 29, 2021

Introduction: This article is a continuation of a series of articles on Azure services, starting with the SignalR service we described earlier. The Azure communication service is a cloud-based service that brings communication into your application in the form of voice and video calling, rich text chat, and SMS. Applications are relieved from knowing the media encodings and real-time networking requirements of using these communication technologies in a do-it-yourself approach and are instead onboarded to a welcoming SDK. Custom client endpoints, services, and even publicly switched telephone networks can be connected to this communications application. Even phone numbers can be acquired directly. Services can make use of the Session Initiation Protocol (SIP) and session border controllers which connect to PSTN carriers.

Applications that make use of the Azure communication services client libraries typically fall into one of two common scenarios: 1) the business-to-consumer scenario and 2) the consumer-to-consumer scenario. The B2C scenario is focused on voice, video, and text chat available from a custom browser or mobile application for individuals' interaction with a business. It operates with a voice response system as well as integration with Microsoft Teams, which is a communication and collaboration tool that facilitates employees of an organization communicating with one another. The consumer-to-consumer scenario is built on engaging social spaces with voice, video, and rich text chat. As with all Azure services, the communication services also provision an Azure resource declared via an Azure Resource Manager template. When the resource is provisioned, it can be used to get a phone number or to send an SMS from the application. The first user access token allows the clients to authenticate; afterward, it is just renewed. The use of an ARM resource template helps with the standardization of this resource in the Azure service portfolio. Like other services, it provides connection strings and a resource object to manage and use. The resource group name and the subscription are required for this resource to be provisioned; settings may vary but they can be specified as parameters to the template. Cleanup is as easy as removing the resource group to which the resource belongs, which also removes all dependencies.
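To make the user access token step concrete, here is a minimal C# sketch assuming the Azure.Communication.Identity client library; it creates a user identity and issues its first access token scoped for chat. The connection string is a placeholder taken from the provisioned resource.

using System;
using Azure.Communication.Identity;

class IssueToken
{
    static void Main()
    {
        // Placeholder connection string from the provisioned Communication Services resource.
        var identityClient = new CommunicationIdentityClient("<communication-services-connection-string>");

        // Create a new identity and issue its first access token for the chat scope.
        var user = identityClient.CreateUser();
        var token = identityClient.GetToken(user.Value, scopes: new[] { CommunicationTokenScope.Chat });

        Console.WriteLine($"Token expires on {token.Value.ExpiresOn}");
    }
}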

When the application is used to get a phone number, the provisioned resource allows the selection of a number type and the capabilities associated with the number; geographic and toll-free are the two types of numbers. A toll-free number helps with outbound calling and with inbound and outbound SMS features, and is slightly more expensive than the geographic number type. Phone numbers can be customized and even purchased. Registration of a phone number, its lookup, and reverse lookup can be tried via the Azure portal.

The application can also send SMS messages. A simple object model can be used to represent the resource in the C# language: the SMS client can be instantiated with the help of a connection string and authenticated with the server, and then it is just a matter of calling the send method on the SMS client to send a direct message. It is also possible to broadcast a message by including multiple phone numbers as a parameter to the send method.
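A minimal sketch of that flow, assuming the Azure.Communication.Sms client library, might look like the following; the connection string and the phone numbers are placeholders.

using System;
using Azure.Communication.Sms;

class SendSms
{
    static void Main()
    {
        // Placeholder connection string and phone numbers.
        var smsClient = new SmsClient("<communication-services-connection-string>");

        // Send a direct message from the acquired toll-free number to a single recipient.
        var result = smsClient.Send(
            from: "+18005551234",
            to: "+14255550123",
            message: "Hello from Azure Communication Services.");

        Console.WriteLine($"Message id: {result.Value.MessageId}");

        // Broadcasting is the same call with a collection of recipient numbers.
        smsClient.Send(
            from: "+18005551234",
            to: new[] { "+14255550123", "+14255550124" },
            message: "Broadcast message.");
    }
}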

Integration with other applications such as Microsoft Teams makes this service uniquely appealing for collaboration and communication scenarios. For example, an application can join a Teams meeting with a UI control that declares a text box on a form to take the Teams meeting link. With the click of a button, the application can connect to the Teams meeting. JavaScript callbacks can be registered for events such as isRecordingActive changes, call state changes, and others. The benefits of voice, video, and rich text chat cannot be overemphasized in gaining user attention.



Wednesday, July 28, 2021

The following section details some of the benefits of ARM resource templates, which include the following:

1) Repeatable results: The templates define the desired state, so the invocations are idempotent and deterministic. Entire Azure infrastructures can be described by ARM templates.

2) Orchestration: Operations can be ordered and resources can be deployed in parallel. The deployment can occur with one touch rather than a sequence of imperative commands.

3) Modular files: These break the template into smaller reusable components so that the costs are driven down in favor of composition and reusability.

4) Extensibility: The deployment script is an extension of the templates, so a variety of automation can be introduced into the workflow.

5) Testing: The ARM test toolkit can validate the templates with the execution of a PowerShell script. This reduces error and saves time.

6) Tracked deployments: The history of the deployment as well as the parameter passing can be reviewed. This makes it easy to troubleshoot.

7) Governance: A policy-as-code framework allows enforcement of policies and provides remediations for non-compliant resources. The templates support this.

8) Export: Templates can be exported, allowing the same resource to be provisioned in different regions or even cloud types.

9) Integration with pipelines: CI/CD can be supported by the integration of pipelines that facilitate application and infrastructure updates.

Tuesday, July 27, 2021

When the product is a foundational service, a term used to refer to a service hosted in the base of a public cloud provider, the concerns for claim provisioning, token service, managed service identities, secrets management, and security configuration dashboards must be addressed in modules dedicated to this layer. Any service that sits on top of the foundational service has the luxury of utilizing the cloud provider's published resource manager templates and can use them interchangeably for different technologies representing these functionalities of an IAM. The choice of technologies used in the foundational layer becomes rather restricted and almost a do-it-yourself approach. The services built in the cloud, on the other hand, enjoy rich functionalities and are interchangeable with those from competing vendors. The costs of the DIY approach are known to be high, but the flexibility and resource efficiency are undeniable. We review some of the essential functions that these technologies must implement and the support for their automation.

With an identity claim model, an application or web service is no longer responsible for the following: 1) authenticating users, 2) storing user accounts and passwords, 3) calling membership providers like enterprise directories to look up user information, 4) integrating with identity systems from other organizations, and 5) providing implementations for several protocols to be compliant with industry standards and business practice. All the identity-related decisions are based on claims supplied by the user. An identity is a set of attributes that describe a principal. A claim is a piece of identity information. The more slices of information an application receives, the more complete the pie representing the individual. Instead of the application looking up the identity, the claims are serialized and handed to it by the external system. A security token is a serialized set of claims that is digitally signed by the issuing authority. It gives the assurance that the user did not make up the claim. An application might receive the claim via the security header of the SOAP envelope of the service. For a browser-based web application, the token arrives through an HTTP POST from the user's browser and may later be cached in a cookie if a session is enabled. The manner might vary depending on the clients and the medium, but the claim can be generalized with a token. Open standards, including some well-known frameworks, are great at creating and reading security tokens. A security token service is the plumbing that builds, signs, and issues security tokens. It might implement several protocols for creating and reading security tokens, but that is hidden from the application.

The relying parties are the claims-aware applications and the claims-based applications. These can also be web applications and services, but they are usually different from the issuing authorities. When they get a token, the relying parties extract claims from the token to perform specific identity-related tasks.

A claim is a combination of a claim type, a right, and a value. A claim set is a set of claims issued by an issuing authority. A claim can be of DNS, email, hash, name, RSA, sid, SPN, system, thumbprint, Uri, or X500DistinguishedName type. An evaluation context is the context in which an authorization policy is evaluated; it contains properties and claim sets. An authorization policy is a set of rules for mapping a set of input claims to a set of output claims, and once the policies are evaluated, the resulting authorization context has a set of claim sets and zero or more properties. An identity claim in an authorization context makes a statement about the identity of the entity. A group of authorization policies can be compared to a machine that makes keys.
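As a rough C# sketch of the claims vocabulary above, assuming the .NET System.Security.Claims types rather than any particular product, claims can be grouped into an identity and then inspected by a relying party as follows.

using System;
using System.Linq;
using System.Security.Claims;

class ClaimsExample
{
    static void Main()
    {
        // A claim is a typed piece of identity information; several claims describe one principal.
        var claims = new[]
        {
            new Claim(ClaimTypes.Name, "alice"),
            new Claim(ClaimTypes.Email, "alice@example.com"),
            new Claim(ClaimTypes.Role, "Reader")
        };

        // The issuing authority would normally sign these into a token; here we simply
        // build the identity and principal that a relying party would reconstruct.
        var identity = new ClaimsIdentity(claims, authenticationType: "ExampleIssuer");
        var principal = new ClaimsPrincipal(identity);

        // The relying party extracts claims to make identity-related decisions.
        var email = principal.FindFirst(ClaimTypes.Email)?.Value;
        var isReader = principal.Claims.Any(c => c.Type == ClaimTypes.Role && c.Value == "Reader");
        Console.WriteLine($"{email} reader={isReader}");
    }
}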

Monday, July 26, 2021

Continued from previous post...

 

 

Example for server-side implementation:

using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Web;

namespace SignalR.EditFile
{
    // This static class stores the connection ids of the users connected at the same time.
    public static class UserHandler
    {
        public static HashSet<string> ConnectedIds = new HashSet<string>();
    }

    [HubName("editFile")]   // the name the client uses to refer to this hub
    public class EditFileHub : Hub
    {
        // This method is called from the client when the user edits a file.
        public void editFile(int x, int y)
        {
            // Send the x, y coordinates to every user except the one making the edit.
            Clients.Others.fileEdited(x, y);
        }

        // Override OnConnected, OnReconnected and OnDisconnected to track connected users.
        public override Task OnConnected()
        {
            UserHandler.ConnectedIds.Add(Context.ConnectionId); // remember this connection id
            Clients.All.usersConnected(UserHandler.ConnectedIds.Count()); // tell all clients how many users are connected
            return base.OnConnected();
        }

        public override Task OnReconnected()
        {
            UserHandler.ConnectedIds.Add(Context.ConnectionId);
            Clients.All.usersConnected(UserHandler.ConnectedIds.Count());
            return base.OnReconnected();
        }

        public override Task OnDisconnected()
        {
            UserHandler.ConnectedIds.Remove(Context.ConnectionId);
            Clients.All.usersConnected(UserHandler.ConnectedIds.Count());
            return base.OnDisconnected();
        }
    }
}
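For completeness, a client for this hub could be sketched in C# with the Microsoft.AspNet.SignalR.Client package roughly as follows; the hub URL is a placeholder, and the original example assumed a browser client rather than this console one.

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class EditFileClient
{
    static async Task Main()
    {
        // Placeholder URL of the site hosting the hub.
        var connection = new HubConnection("http://localhost:8080/");
        IHubProxy proxy = connection.CreateHubProxy("editFile");

        // React to notifications pushed by the server.
        proxy.On<int, int>("fileEdited", (x, y) => Console.WriteLine($"Edited at {x},{y}"));
        proxy.On<int>("usersConnected", count => Console.WriteLine($"{count} users connected"));

        await connection.Start();

        // Tell the server (and therefore the other clients) about an edit.
        await proxy.Invoke("editFile", 10, 20);
    }
}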

Sunday, July 25, 2021

An article on using Azure SignalR service:

 


Introduction:

This is an article about the Azure SignalR service. It simplifies the process of adding real-time web functionality to applications over HTTP, allowing services to push content updates to connected clients. The payload can be single-page web or mobile application content that is transferred and updated without the need to poll the server or submit new HTTP requests for updates, in a way that keeps devices in sync from a single web server over HTTP. The devices can be connected to the server via the control plane, which represents those devices to the SignalR service as entities to which it sends notifications.

The scenario is one of synchronization or web update, and it is common to many applications and services where data is pushed from the server to the client in real time. The benefits cannot be overemphasized when it concerns actions such as gaming, voting, polling, or auctions. Dashboards, financial market data, sales updates, and multiplayer game leaderboards can all be maintained with this service. It can support chat and chatbot applications, real-time shopping assistance, messengers, and location services. It is also very helpful for targeted ads, collaborative applications, push notifications, and real-time broadcasting, among other scenarios in which the Azure SignalR service can be used. Finally, automation is a core component in many workflows, and that can also make use of triggers for upstream events.

The idea behind SignalR is the building of real-time web applications using WebSocket, which is an optimal transport for service and events, and it avoids having the client poll the server. It provides a native programming experience with both ASP.NET Core and ASP.NET. The synchronization functionality of web servers can now be offloaded to its own module but remain a core component of web applications and services. Blazor is used on the server side. This service can be used with a wide range of clients spanning mobile applications to IoT devices. The transport, as well as its programmability with a variety of languages, makes it convenient to use and integrate with other cloud services such as Azure Functions and Event Grid. By itself, or when used together with other cloud-scale traffic, it can scale to multiple instances and millions of client connections. Switching to the SignalR service removes the need to manage backplanes that handle scale and client connections at the same time. It also provides the compliance and security that Azure is known for. It is even possible to use just Azure Functions and SignalR, without any web application, to build a service. Real-time applications can be supported in multiple languages, enabling interoperability. Finally, SignalR features support a wide range of management routines with respect to notifications and the clients that receive them.

Let’s compare this with a Do-it-yourself approach:

Saturday, July 24, 2021

 

This article continues from the previous one on claim provisioning and is dedicated to the security token service.

A security token service can relieve this end-to-end workflow by performing authentication for clients, including services and users, and providing security tokens for clients to present to the applications. It can support authentication federation for passive clients as well as a trust protocol for the active clients. It can implement a variety of authentication and authorization protocols, including OpenID and OAuth, while remaining aligned with enterprise authentication guidelines. It can provide a control plane for other services to integrate, and this plane can be internal without any risk of allowing customer access to identity providers. It can be provided regionally for application affinity and for isolated deployments. It can feature in an organization's service inventory and be leveraged for interacting with other services.

One of the instances of the Security Token Service can be used to support dialtone service requests. This follows the same routine as any other Security Token Service instances but with the dedicated purpose of providing dialtone response to other services. Any application using this service must support the right affinity to the dialtone instance. A dialtone service is a self-contained instance with a backup that is running on an infrastructure separate from that of others.

A dialtone service contributes towards resiliency. The local authentication supports cached security group memberships to provide continuity of DevOps account authentication if the Active Directory is not available. In such a case, the client manually selects the local dsts authentication option when requesting a security token.

The issuing authority for a security token does not have to be of the same type as the consumer. Domain controllers issue Kerberos tickets and X.509 certificate authorities issue chained certificates. A token that contains claims is issued by a web application or web service that is dedicated to this purpose. This plays a significant role in the identity solution.

Failover for active clients is achieved automatically if the authentication client is making simultaneous token request calls to both the primary and backup instances, with a preference based on the wait time for the primary. Passive clients can achieve this using a manager which detects that the primary issuance endpoint is unavailable and routes traffic to the backup within half a minute.
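A rough C# sketch of that active-client failover pattern, with a hypothetical RequestTokenAsync call standing in for the real token request against each instance, might look like this.

using System;
using System.Threading.Tasks;

class TokenFailover
{
    // Hypothetical token request against a given STS endpoint; stands in for the real network call.
    static async Task<string> RequestTokenAsync(string endpoint)
    {
        await Task.Delay(100);
        return $"token-from-{endpoint}";
    }

    static async Task<string> GetTokenWithFailoverAsync(string primary, string backup)
    {
        // Call both instances simultaneously.
        var primaryCall = RequestTokenAsync(primary);
        var backupCall = RequestTokenAsync(backup);

        // Prefer the primary if it answers within its allowed wait time.
        var preferredWait = Task.Delay(TimeSpan.FromSeconds(2));
        if (await Task.WhenAny(primaryCall, preferredWait) == primaryCall)
        {
            return await primaryCall;
        }

        // Otherwise take whichever of the two answers first.
        var winner = await Task.WhenAny(primaryCall, backupCall);
        return await winner;
    }

    static async Task Main()
    {
        Console.WriteLine(await GetTokenWithFailoverAsync("primary-sts", "backup-sts"));
    }
}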

The relying parties are the claims-aware applications and the claims-based applications. These can also be web applications and services, but they are usually different from the issuing authorities. When they get a token, the relying parties extract claims from the token to perform specific identity-related tasks.

Interoperability between issuing authorities and relying parties is maintained by a set of industry standards. A policy for the interchange is retrieved with the help of a metadata exchange, and the policy itself is structured. Sample standards include the Security Assertion Markup Language (SAML), an industry-recognized XML vocabulary to represent claims.

A claim to token conversion service is common to an identity foundation. It extracts the user principal name as a claim from heterogeneous devices, applications and services and generates an impersonation token granting user level access to those entities.

Friday, July 23, 2021

<#
.SYNOPSIS
This script provisions a new service claim identity as discussed in the previous post.
#>

[CmdletBinding()]
param(
    [Parameter(Mandatory = $True, HelpMessage="The claim type to use")]
    [string]$claimType, # example: "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"

    [Parameter(Mandatory = $True, HelpMessage="The claim value to use")]
    [string]$claimValue,

    [Parameter(Mandatory = $True, HelpMessage="The service identity name")]
    [string]$sciName, # example: "dmsi-wus3-prod01-srs1"

    [Parameter(Mandatory = $True, HelpMessage="The service tree identifier")]
    [string]$serviceTreeId,

    [Parameter(Mandatory = $True, HelpMessage="The subscription identifier")]
    [string]$subscriptionId,

    [Parameter(Mandatory = $True, HelpMessage="The custom instance")]
    [string]$customInstance,

    [Parameter(Mandatory = $True, HelpMessage="The service account")]
    [string]$serviceAccount,

    [Parameter(Mandatory = $True, HelpMessage="The region")]
    [string]$region,

    [Parameter(Mandatory = $False, HelpMessage="If the claim is scoped.")]
    [bool]$isScoped = $False,

    [Parameter(Mandatory = $False, HelpMessage="The azure environment/cloud name")]
    [string]$environmentName = "prod"
)


Ipmo \\location\client.dll
Connect-ProviderActiveClient Prod

$request = New-ProviderCreateManagedServiceClientIdentity
$request.Name = $sciName
$request.ServiceTreeId = $serviceTreeId
$request.CustomInstance = $customInstance
$request.ClaimProvisionings = @()

$claim = New-ProviderClaimProvisioning
$claim.ClaimInstance.Type = $claimType
$claim.ClaimInstance.Value = $claimValue
$claim.ScopedToServiceAccount = $serviceAccount
$claim.IsUnscoped = $isScoped
$request.ClaimProvisionings.Add($claim)

$request.Region = $region
$request.Subscriptions = @()
$request.Subscriptions.Add($subscriptionId)

$request | Add-ProviderManagedServiceClientIdentity


#codingexercise:
Q: An array A of N elements has each element within the range 0 to N-1. Find the smallest index P such that every value that occurs in A also occurs in the sequence A[0], A[1], ..., A[P].

For example, A = [2,2,1,0,1] and the smallest value of P is 3 where elements 2,2,1,0 contain all values that occur in A. 

A:   

public int getPrefix(int[] A) {
    int prefix = Integer.MIN_VALUE;
    int n = A.length;
    int[] visited = new int[n];
    for (int i = 0; i < n; i++) {
        if (visited[A[i]] == 0) {
            visited[A[i]] = 1;
            prefix = i; // last position at which a value is seen for the first time
        }
    }
    return prefix;
}