Monday, April 30, 2018

I have been working on a user interface for customers to upload content to a server for processing. Users may have to upload large files, and both the upload and the processing can take time. I came across an interesting jQuery plugin for showing a progress bar.
The jQuery-File-Upload plugin gives an example as follows:
$(function () {
    $('#fileupload').fileupload({
        dataType: 'json',
        // called once the upload completes; list the uploaded file names
        done: function (e, data) {
            $.each(data.result.files, function (index, file) {
                $('<p/>').text(file.name).appendTo(document.body);
            });
        }
    });
});

The upload progress bar is indicated this way:
$('#fileupload').fileupload({
    // ... other options ...
    progressall: function (e, data) {
        // overall progress across all files, as a whole percentage
        var progress = parseInt(data.loaded / data.total * 100, 10);
        // assumes markup such as <div id="progress"><div class="bar"></div></div>
        $('#progress .bar').css(
            'width',
            progress + '%'
        );
    }
});

Notice that the callback depends on the data argument. The same notion can be carried over to the server side: given a request id, the server can report progress, which the client polls for as shown below:
function updateProgress() {
    if (stopProgressCheck) return;
    var webMethod = progressServiceURL + "/GetProgress";
    // requestId identifies the upload whose progress is being queried
    var parameters = JSON.stringify({ requestId: requestId }); // send well-formed JSON

    $.ajax({
        type: "POST",
        url: webMethod,
        data: parameters,
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (msg) {
            if (msg.d != "NONE") { // add any necessary checks
                // add code to update the progress bar using the value in msg.d
                statusTimerID = setTimeout(updateProgress, 100); // set the polling interval as required
            }
        },
        error: function (x, t, m) {
            alert(m);
        }
    });
}
Courtesy : https://stackoverflow.com/questions/24608335/jquery-progress-bar-server-side
The alternative to this seems to be to use the HTTP 102 (Processing) status code.
#application : https://1drv.ms/w/s!Ashlm-Nw-wnWtkN701ndJWdxfcO4

Sunday, April 29, 2018

We were discussing the benefits of a managed RDS instance. Let us now talk about cost and performance optimization in RDS.
RDS supports multiple engines including Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. Being a managed service, it handles provisioning, patching, scaling, replicas, and backup/restore for all of these engines. It supports multiple Availability Zones. As a managed service it lowers TCO and frees teams to focus on differentiation, which makes it attractive for instances big or small.
The storage type may be selected between GP2 and IO1. The former is general purpose; the latter is provisioned IOPS for consistent, high performance.
Depending on the volume size, the burst rate and the IOPS rate on GP2 need to be monitored. GP2 imposes a burst-credit limit, and as long as we have credit against this limit, general purpose storage serves well.
Compute or memory can be scaled up or down. Storage can be scaled up to 16 TB, and there is no downtime for storage scaling.
Failovers are automatic and replication is synchronous. Multi-AZ deployment is inexpensive and enabled with one click. Read replicas relieve pressure on the source database with additional read capacity.
Backups are managed with automated and manual snapshots. Transaction logs are stored every 5 minutes. There is no performance penalty for backups. Snapshots can be copied across regions. A backup restores to an entirely new database instance.
New volumes can be populated from Amazon S3. A VPC allows network isolation. Resource-level permission control is based on IAM access control. There is encryption at rest and SSL protection in transit, with no performance penalty for encryption. Moreover, key management is centralized, with access control and auditing of key activity.
Access grants and revokes are maintained with an IAM user for everyone, including the admin. Multi-factor authentication may also be set up.
CloudWatch helps with monitoring. The metrics usually cover CPU, storage, memory, swap usage, reads and writes, latency, throughput, and replica lag. CloudWatch alarms are similar to on-premise monitoring tools. Additional performance insight can be gained by measuring active sessions, identifying sources of bottlenecks with an available tool, discovering problems with log analysis, and windowing timelines.
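As a rough illustration, here is a minimal sketch of pulling one such metric with the AWS SDK for JavaScript (v2); the region and the instance identifier "mydb" are placeholders, not values from this discussion:
var AWS = require('aws-sdk');
var cloudwatch = new AWS.CloudWatch({ region: 'us-west-2' });

var params = {
    Namespace: 'AWS/RDS',
    MetricName: 'CPUUtilization',
    Dimensions: [{ Name: 'DBInstanceIdentifier', Value: 'mydb' }], // hypothetical instance
    StartTime: new Date(Date.now() - 3600 * 1000), // last hour
    EndTime: new Date(),
    Period: 300,                                   // 5-minute buckets
    Statistics: ['Average']
};

// Pull averaged CPU utilization datapoints for the RDS instance.
cloudwatch.getMetricStatistics(params, function (err, data) {
    if (err) { console.error(err); return; }
    data.Datapoints.forEach(function (point) {
        console.log(point.Timestamp, point.Average + '%');
    });
});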
Billing is usually in the form of GB months.
#application : https://1drv.ms/w/s!Ashlm-Nw-wnWtkN701ndJWdxfcO4

Saturday, April 28, 2018

We were discussing the benefits of managed RDS instance:

The managed database instance types offer a range of CPU and memory selections. Moreover, their storage is scalable on demand. Automated backups have a retention period of up to 35 days, and manual snapshots are stored in S3 for durability. An Availability Zone is a physically distinct, independent infrastructure, and multi-AZ deployments come with database synchronization, so they are better prepared for failures. Read replicas help offload read traffic. The entire database may be snapshotted and copied across regions for greater durability. The compute, storage, and IOPS that are provisioned constitute the bill.

Performance is improved by offloading read traffic to replicas, putting a cache in front of RDS, and scaling up the storage or resizing the instances. CloudWatch alerts and DB event notifications enable databases to be monitored.

In short, RDS allows developers to focus on app optimization: schema design, query construction, and query tuning, while all infrastructure and maintenance are handled as managed services.

RDS alone may not scale in a distributed manner. Therefore, software such as ScaleBase allows creation of a distributed relational database where database instances are scaled out. A single-instance database can be transformed into a multiple-instance distributed relational database. The benefits of such a distributed database include massive scale, instant deployment, retention of all single-instance RDS benefits, automatic load balancing (especially with replica lag and the splitting of reads and writes), and increased ROI with no app code changes.

Does a multi-model cloud database instance lose fidelity and performance compared to a dedicated relational database?
The answer is probably no, because a cloud scales horizontally, and what the database server did to manage partitions, the cloud does too. A matrix of database servers acting as a distributed database comes with its own coordination activities, while a cloud database seamlessly provides a big table. Can the service-level agreement of a big table match the service-level agreement of a distributed query on a SQL Server? The answer is probably yes, because the partitions of data and the corresponding processing are now flattened.

Are developers encouraged to use cloud databases as their conventional development database, which they then move to production? This answer is also probably yes: the technology that does not require a change of habit is more likely to be adopted, and all the tenets of cloud-scale processing only improve traditional processing. Moreover, queries are standardized in language, as opposed to writing custom map-reduce logic and maintaining a library of it as a distributable package, as NoSQL users must.

#codingexercise https://ideone.com/Ar5cOO

Friday, April 27, 2018

The previous post was on Events as a measurement of Cloud Database performance, with a focus on the pricing of the cloud database. We factored in the advantages of a public-cloud managed RDS over a self-managed AWS instance, namely: upgrades, backup, and failover are provided as services; there is more infrastructure and DB security; the database appears as a managed appliance; and failover is a packaged service.

The managed database instance types offer a range of CPU and memory selections. Moreover, their storage is scalable on demand. Automated backups have a retention period of up to 35 days, and manual snapshots are stored in S3 for durability. An Availability Zone is a physically distinct, independent infrastructure, and multi-AZ deployments come with database synchronization, so they are better prepared for failures. Read replicas help offload read traffic. The entire database may be snapshotted and copied across regions for greater durability. The compute, storage, and IOPS that are provisioned constitute the bill.

Performance is improved by offloading read traffic to replicas, putting a cache in front of RDS, and scaling up the storage or resizing the instances. CloudWatch alerts and DB event notifications enable databases to be monitored.

In short, RDS allows developers to focus on app optimization: schema design, query construction, and query tuning, while all infrastructure and maintenance are handled as managed services.

RDS alone may not scale in a distributed manner. Therefore, software such as ScaleBase allows creation of a distributed relational database where database instances are scaled out. A single-instance database can be transformed into a multiple-instance distributed relational database. The benefits of such a distributed database include massive scale, instant deployment, retention of all single-instance RDS benefits, automatic load balancing (especially with replica lag and the splitting of reads and writes), and increased ROI with no app code changes.

#codingexercise https://ideone.com/pyiZ7C 

Thursday, April 26, 2018

The previous post was on Events as a measurement of Cloud Database performance with a focus on the pricing of the cloud database. 
With the help of managed services in the cloud, deployments are simple and fast, and so is scaling. Moreover, patching, backups, and replication are handled appropriately. It is compatible with almost all applications, and it has fast, predictable performance. There is no cost to get started, and we pay for what we use.
Initially, the usage costs are a fraction of the costs incurred by on-premise solutions. Moreover, for the foreseeable future, the usage expenses may not exceed on-premise costs. However, neither the cloud provider nor the cloud consumer controls the volume of data, which only grows over time.
Traditionally, disks kept up with these storage requirements and there was a cost for the infrastructure, but since usage is proportional to the incoming data, there is no telling when managed database service costs will exceed the alternatives.
The metering of cloud database usage and the corresponding bills will eventually become significant enough for corporations to take notice as more and more data finds its way to the cloud. Storage has been touted as zero cost in the cloud, but compute is not, and any amount of incoming data is paired with compute requirements. While the costs may be granular per request, they aggregate when billions of requests are processed in smaller and smaller durations. Consequently, the cloud may no longer be the least expensive option. Higher metering and more monitoring, along with a dramatic increase in traffic, add up to substantial costs while there is no fallback option to keep the costs under the bar.
The fallback option to cut costs and make them more manageable is itself a very important challenge. What will money-starved public cloud tenants do when the usage only accrues with the explosion of data, regardless of economic cycles? This in fact is the motivating factor to consider a sort of disaster management plan for when the public cloud becomes unaffordable, even while that may be considered unthinkable today.
I'm not touting the on-premise solution or hybrid cloud technology as the fallback option. I don't even consider them a viable option, regardless of economic times or technological advancement. The improvements to cloud services have leaped far ahead of such competition, and cloud consumers are savvy about what they need from the cloud. Instead, I focus on some kind of easing of traffic into less expensive options, primarily with the adoption of quality of service for cloud service users and their migration to more flat-rate options. Corporate spending is not affected, but there will be real money saved when the traffic to the cloud is kept sustainable.
Let us take a look at this Quality of Service model. When all the data goes into the database and the data is measured only by size and number of requests, there is no distinction between personally identifiable information and regular chatty traffic. In the networking world, the ability to perform QoS depends on the ability to differentiate traffic using the <src-ip, src-port, dest-ip, dest-port, protocol> tuple. With the enumeration and classification of traffic, we can have traffic serviced under different QoS agreements using the IntServ or DiffServ architecture. Similar QoS managers could differentiate data so that companies not only have granularity but also take better care of their more important categories of data, as sketched below.
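A hypothetical sketch of such differentiation follows; the service classes, thresholds, and record attributes are illustrative only, not a real product API:
var SERVICE_CLASSES = {
    PII:       { priority: 0, rateLimitPerSec: Infinity }, // never shed
    BUSINESS:  { priority: 1, rateLimitPerSec: 5000 },
    TELEMETRY: { priority: 2, rateLimitPerSec: 500 }       // regular chatty traffic
};

function classify(record) {
    // Differentiate on attributes of the data itself, in the spirit of the
    // <src-ip, src-port, dest-ip, dest-port, protocol> tuple used for packets.
    if (record.containsPii) return 'PII';
    if (record.source === 'device-heartbeat') return 'TELEMETRY';
    return 'BUSINESS';
}

function admit(record, countersPerSecond) {
    var cls = classify(record);
    countersPerSecond[cls] = (countersPerSecond[cls] || 0) + 1;
    // Lower-priority classes are shed, or diverted to a cheaper flat-rate
    // batch path, once their per-second budget is exhausted.
    return countersPerSecond[cls] <= SERVICE_CLASSES[cls].rateLimitPerSec;
}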

New:
#codingexercise: https://1drv.ms/u/s!Ashlm-Nw-wnWtjSc1V5IlYwWFY88

Wednesday, April 25, 2018

The previous post was on Events as a measurement of Cloud Database performance. Today we focus on the pricing of the cloud database. 
It may be true that the cost of running a cloud database under high-volume load is significantly cheaper than the cost of running the same database and connections on-premise. This comes from the following managed services:
Scaling
High Availability
Database backups
DB s/w patches
DB s/w installs
OS patches
OS installation
Server Maintenance
Rack & Stack
Power, HVAC, net
However, I argue that the metering of cloud database usage and the corresponding bills will eventually become significant enough for corporations to take notice. This is because the pay-as-you-go model, albeit granular, is expected to grow dramatically as usage grows. Consequently, the cloud may no longer be the least expensive option. Higher metering and more monitoring, along with a dramatic increase in traffic, add up to substantial costs while there is no fallback option to keep the costs under the bar.
The fallback option to cut costs and make them more manageable is itself a very important challenge. What will money-starved public cloud tenants do when the usage only accrues with the explosion of data, regardless of economic cycles? This in fact is the motivating factor to consider a sort of disaster management plan for when the public cloud becomes unaffordable, even while that may be considered unthinkable today.
I'm not touting the on-premise solution or hybrid cloud technology as the fallback option. I don't even consider them a viable option, regardless of economic times or technological advancement. The improvements to cloud services have leaped far ahead of such competition. Instead, I focus on some kind of easing of traffic into less expensive options, primarily with the adoption of quality of service for cloud service users and their migration to more flat-rate options. Corporate spending is not affected, but there will be real money saved when the traffic to the cloud is kept sustainable.
#codingexercise: https://ideone.com/F6QWcu 
#codingexercise: https://ideone.com/pKE68t

Tuesday, April 24, 2018


Events as a measurement of Cloud Database performance
In the previous post, we talked about cloud databases. They come with the benefits of the cloud and can still offer performance on par with an on-premise database, and perhaps better service-level agreements. Today we talk about a test case to measure the performance of a cloud database, say Azure Cosmos DB, using it as a queue. I could not find a comparable study to match this test case.
Test case: Let us say I stream millions of events per minute to Cosmos DB and each bears a tuple <EventID, Byte[] payload>. The payload is irrelevant other than forcing a data copy operation for each event, thereby introducing processing time, so I use the same payload with each event.
The EventIDs range from 1 to Integer.MaxValue. I want to store these in a big table, utilizing the 255-byte row key to support horizontal scaling of the table.
Then I want to run a query every few seconds to tell me the top 50 events by count in the table.
The query will be:
SELECT events.ID AS id,
       COUNT(*) OVER (PARTITION BY events.ID) AS count
FROM events
ORDER BY count DESC;
which semantically has the same meaning as a GROUP BY with COUNT(*).
This is a read-only query so the writes to the table should not suffer any performance impact. In other words, the analysis query is separate from the transactional queries to insert the events in a table.
In order to tune the performance of the Cloud Database, I set the connection policy to
·         Direct mode
·         Protocol to TCP
·         Avoid startup latency on first request
·         Collocate clients in same Azure region for performance
·         Increase number of threads/tasks
This helps to squeeze the processing time of the events as they make their way to the table. I will attempt to push the number of Request Units (RUs) consumed beyond 100,000.
This high throughput creates a load under which the analysis query may become slower over time with the massive increase in data size.
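A minimal sketch of the write path follows, assuming the @azure/cosmos JavaScript SDK; the endpoint, key, and database/container names are placeholders, and the Direct/TCP connection-mode tuning above is exposed by the .NET SDK rather than this one:
const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({ endpoint: "https://<account>.documents.azure.com", key: "<key>" });
const container = client.database("events-db").container("events");

// The same payload for every event, only there to force a data copy.
const payload = Buffer.alloc(1024).toString("base64");

async function writeEvents(startId, count) {
    const tasks = [];
    for (let i = 0; i < count; i++) {
        const eventId = startId + i;
        // The id doubles as the row key so the table can scale horizontally.
        tasks.push(container.items.create({ id: String(eventId), eventId: eventId, payload: payload }));
    }
    await Promise.all(tasks); // many concurrent tasks to drive up RU consumption
}

writeEvents(1, 1000).catch(console.error);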
Conclusion – This test case and its pricing will help determine whether it is indeed practical to store high-volume traffic, such as that from a mobile fleet or IoT devices, in the database.


Monday, April 23, 2018

Databases can scale up when deployed on-premise. How about the migration to cloud databases? What are some of the differences?
This post tries to find those answers. An on-premise database can scale only under constraints of hardware resources, manpower, and costs. That said, an on-premise database can work without internet access, provide high TPS, keep the data and applications in house, and store sensitive data.
Database servers such as SQL Server have also become cloud-ready and even provide mission-critical cloud performance, while lately they have been focusing on advanced analytics and rich visualizations. Higher-end database servers that require, say, 64 CPUs are often an on-premise-only choice.
We don't talk about data warehouses in this section because they are different from databases. That said Azure SQL DB and DW both work in the cloud as relational stores.
Azure Data Lake Analytics and Azure Data Lake Store are non-relational, and while they are hosted in the cloud, the corresponding on-premise analytics role is usually fulfilled by Hadoop.
Analytics includes such things as federated query, Power BI, Azure Machine Learning, and Azure Data Factory.
SQL Server on an Azure VM is considered IaaS, while virtualized databases in the cloud are considered PaaS. In the latter case, Azure cloud services manage scale and resilience while allowing application developers to focus on the application and data. Regardless of the choice, consistent tools work across all the database options above.
Migrating a database from single servers to Azure VMs is straightforward, but MDMs and other databases that are company assets involve more chores.

#sqlexercise : https://goo.gl/Zz1yoV

Sunday, April 22, 2018

I came across an interesting discussion on how to efficiently search with multiple predicates and support paging.
The resolution is that we almost always have to push predicates down into the database, and even into the query itself, so that the optimizer has a chance to determine the best query plan.
In the absence of a database, we would be emulating the work of query execution inside the web service and would still not be efficient or consistent in all cases, simply because we have only an enumeration-based data structure in the web-service layer.
On the other hand, the database is closest to the storage; it indexes and organizes the records so that they are looked up more efficiently. Query plans can be compared and the most efficient one chosen. Having an in-memory, iteration-only data structure will only limit us and will not scale to the size of the data.
That said, the predicates are expected to evaluate the same way regardless of which layer they are implemented in. If we have a set of predicates separated by OR clauses, as opposed to AND clauses, then we will likely have a result set from each predicate, and they may involve the same records. If we filter on one predicate and also allow matches on another, the two result sets may then be merged into one before the result is returned to the caller. The result sets may have duplicates, so the merge has to return only the distinct elements. This can easily be done by comparing the unique identifiers of each record in the result set.
The selection of the result is required prior to determining the section that needs to be returned to the user. This section is determined by the (start, offset) pair in the enumeration of the results. If the queries remain the same over time and the request only varies in the paging parameters, then we can even cache the result and return only the paged section. The API will likely persist the predicates and result sets in a cache so that subsequent calls for paging return the same responses. This can even be done as part of predicate evaluation by simply passing the well-known LIMIT and OFFSET parameters directly in the SQL query. In an enumerator we do this with Skip and Take, as sketched below.
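The following sketch illustrates both steps with hypothetical result sets; the record shape and the SQL text are illustrative:
// Merge the result sets from OR'ed predicates, keeping only distinct records.
function mergeDistinct(resultSetA, resultSetB) {
    var seen = {};
    return resultSetA.concat(resultSetB).filter(function (record) {
        if (seen[record.id]) return false; // duplicate across predicates
        seen[record.id] = true;
        return true;
    });
}

// Paging in the web-service layer: the Skip and Take equivalent.
function page(records, start, offset) {
    return records.slice(start, start + offset);
}

// Pushed down to the database, the same thing becomes part of the query, e.g.
// "SELECT * FROM t WHERE p1 OR p2 ORDER BY id LIMIT ? OFFSET ?", so the
// optimizer sees the predicates and the paging window together.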

#codingexercise : https://1drv.ms/u/s!Ashlm-Nw-wnWti-TEUQKcdNRtPgJ
#sqlexercise : https://goo.gl/Zz1yoV

Saturday, April 21, 2018


Reconstructing a sequence of numbers from separate smaller sub-sequences.
Let us consider a sequence 1,3,7,2,4,5,9,8
And we are given a set of sub-sequences as
1,3,2,4,9
1,7,4,5,9,8
There are duplicates, but they appear only once in each list. The elements appear in each subsequence in the same relative order as in the main sequence.
The question is: can we reconstruct the original sequence from the given subsequences deterministically?
On the surface, we know that we can interleave the elements of one subsequence into the gaps of the other, excluding duplicates. That gives us a large number of combinations, and these combinations do not indicate a unique master sequence of distinct numbers.
Moreover, the number of entries from one sequence that go into a gap of the second sequence can vary from one to all of the remaining elements. The choice of elements for these gaps follows the well-known stars and bars problem, which yields a binomial-coefficient number of combinations.
Consequently, it does appear that there may be no solution to this problem.
If the elements were sorted, then this problem would have been a merge of two lists, where we pick the element for the destination based on the smaller of the elements from the first and the second array.
To make a choice of picking one element from one of the arrays, we need to visit the elements in both arrays sequentially, advancing one element in either array each time. In order to do so, our choice has to follow the same order as the master sequence.
We may give some weight to the choice of the element in both arrays. This assignment of weight has to factor in the index of the element if it is available in both arrays. For example, the number 4 appears in both arrays above, but since its index is lower in the second array than in the first, it will be picked there and the other 4 will be ignored. Moreover, the first few elements of the first subsequence before the number 4 in the first array will be picked before the element 4 in the second array is encountered. We could use this availability of duplicates as a way to make progress, as long as there are sufficient duplicates in both to indicate the choice of array.
Therefore, if the element 7 in the subsequence above were known to be preceded by a 3, the master sequence could have been reconstructed up to the point 1,3,7,2,4.
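Whatever interleaving we choose, it must preserve the order of every given subsequence. Here is a small helper to check that, shown against the example above:
// Verify that a candidate master sequence preserves the order of a subsequence.
function preservesOrder(master, subsequence) {
    var i = 0;
    for (var j = 0; j < master.length && i < subsequence.length; j++) {
        if (master[j] === subsequence[i]) i++; // advance only on a match
    }
    return i === subsequence.length;
}

var master = [1, 3, 7, 2, 4, 5, 9, 8];
console.log(preservesOrder(master, [1, 3, 2, 4, 9]));    // true
console.log(preservesOrder(master, [1, 7, 4, 5, 9, 8])); // true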


Friday, April 20, 2018

Challenges cited from health information collaboration in Information Technology projects
The design of IT systems for collaboration in data sharing is itself quite a challenge given that publishers and subscribers do not adhere to any standard. In the health information industry, such designs are considered even more difficult because they also have to satisfy the demands of sponsors and users. 
In the law enforcement industry, legal requests and responses may even have a uniform standard across participating companies and law enforcement agencies. The goal there is timeliness and availability of data. Health information sharing standards have a slightly different area of emphasis: there is plenty of data when it comes to collections, but privacy and boundary rules have many manifestations. Although they may not be subject to the same rules as personally identifiable information (PII), health records have their own conformance requirements, which have different interpretations and implementations.
The techniques for information sharing include:
1) A gateway between two heterogeneous systems, such as across companies
2) A message bus between enterprise-wide organizations with different areas of emphasis
3) Web services within an organization for sharing data that is not exposed directly from data stores
4) Exposed data sources, such as databases and their connectors

The techniques matter only to the extent that they can be applied in a given context. However, the time-honored tradition has been to store the data in a database or archive it in a data warehouse and connect those data stores via web services.
The pull and push between publishers and subscribers do not necessarily come with a protocol. Participating systems can choose whatever works for them.
In the absence of any protocol or enforcement, information sharing falls largely on the level of collaboration between participants. Such collaboration is particularly relevant in coalescing health information across boundaries. 

#codingexercise 
https://ideone.com/XHJ2II

Thursday, April 19, 2018

Standard Query Operators in all programming languages
We were discussing standard query operations 
There may be some new queries in practice today that were not as prevalent earlier. These queries may originate from machine learning, where they augment the current data mining practice of grouping, ranking, searching, and sorting. Such queries call for some new algorithms, which are expressed in the form of code. Such logic, together with the query operators above, augments the query language to form a newer language.
When we are missing the standard query operators in a programming language, while a certain language has taken the lead with expanded libraries to cover the newer querying techniques, we have two ways to overcome it. Some plugins may provide these operators, but such plugins often change the denominator over which the original language was executable.
First, write code in the existing language with the broader reach, keeping the same kind of primitives in a shared module so that they can be replaced with whatever comes next in the language revisions.
Second, adopt the plugins and upgrade the language and toolset to take advantage of better expressiveness, testability, and brevity in the logic artifacts that we keep in version control.
Both these techniques require extra work over the development that we currently do for the sake of meeting business goals. Consequently, the use of standard query operators might incur cost, but the benefits far outweigh the costs, since we are aligning ourselves with the general direction in which queries are being expressed.
It may be interesting to note that queries get a lot more variety from systems other than online transactional processing. Analytical processing, business intelligence stacks, and reporting systems come with far more complex queries. Data warehouses provide immense datasets to exercise the queries. Data mining and other techniques are consistently expanding the definition of queries. In addition, the notions of batch processing and stream processing have been introduced. Finally, clustering algorithms, decision trees, SVMs, and neural-net-based machine learning techniques are adding query logic that was not anticipated in the SQL query language. A universal query language is now trying to broaden the breadth of existing query languages and standardize it. With these emerging trends, the standard query operators do not appear to be anywhere close to being discontinued; instead they are likely to be expanded where possible.

Wednesday, April 18, 2018

Standard Query Operators in all programming languages
Let us say we have a list of records. We want to perform some queries on the list to find information such as which records match a given criterion. Other well-known queries include grouping, eliminating duplicates, aggregating, finding one record, counting records, and so on. If we have two lists, we may want to join them, find the intersection, or find the records that appear in one but not the other.
These are called standard query operations. They are standard because databases have traditionally described these operations in great detail, and the operators in their query languages are not only thorough but are also emulated up the layers of the software stack, where the database usually powers all the layers above it.
Many complex queries are often translated to these standard query operators to make them simpler to understand. Consequently these standard query operators become primitives and a convenience to standardize across several business areas where queries are used. 
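As a rough sketch, these primitives can be expressed over plain arrays of records in JavaScript; the operator names mirror the usual ones but are otherwise illustrative:
function where(records, predicate)   { return records.filter(predicate); }
function select(records, projection) { return records.map(projection); }

// Eliminate duplicates by a caller-supplied key.
function distinct(records, keyFn) {
    var seen = {};
    return records.filter(function (r) {
        var k = keyFn(r);
        if (seen[k]) return false;
        seen[k] = true;
        return true;
    });
}

// Group records by a caller-supplied key.
function groupBy(records, keyFn) {
    return records.reduce(function (groups, r) {
        var k = keyFn(r);
        (groups[k] = groups[k] || []).push(r);
        return groups;
    }, {});
}

// Join two lists on a common key.
function innerJoin(left, right, keyFn) {
    var out = [];
    left.forEach(function (l) {
        right.forEach(function (r) {
            if (keyFn(l) === keyFn(r)) out.push({ left: l, right: r });
        });
    });
    return out;
}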
There may be some new queries in practice today that were not as prevalent earlier. These queries may originate from machine learning, where they augment the current data mining practice of grouping, ranking, searching, and sorting. Such queries call for some new algorithms, which are expressed in the form of code. Such logic, together with the query operators above, augments the query language to form a newer language.
Yet not all programming languages come with libraries to facilitate even the standard query operators, while a certain language has taken the lead with expanded libraries to cover the newer querying techniques. Some plugins may provide these operators, but such plugins often change the denominator over which the original language was executable.
There are two ways to mitigate such discordance:
First, write code in the existing language with the broader reach, keeping the same kind of primitives in a shared module (such as the sketch above) so that they can be replaced with whatever comes next in the language revisions.
Second, adopt the plugins and upgrade the language and toolset to take advantage of better expressiveness, testability, and brevity in the logic artifacts that we keep in version control.
Both these techniques require extra work over the development that we currently do for the sake of meeting business goals. Consequently, the use of standard query operators might incur cost, but the benefits far outweigh the costs, since we are aligning ourselves with the general direction in which queries are being expressed.
#codingexercise http://js.do/code/209859 


Tuesday, April 17, 2018

We were discussing why interface description languages don't become popular compared to their alternatives. We said that interfaces or contracts provide the comfort for participants to work independently, offload validation, and determine functional and non-functional requirements, but the alternative of working with granular, stateless, well-documented requests is a lot more appealing.
We also discussed the trade-offs between state-driven sessions and stateless APIs when discussing client-side application development. We noted that there is a wonderful ecosystem of browser-based client-side software development using standard jargon and tools.
Today we see that there are differences between client-side and server-side software development. For example, client-side JavaScript applications need to mitigate security threats that are fundamentally different from those in server-side code. Those client scripts could run anywhere and on any device. Threats such as cross-site scripting, man-in-the-middle attacks, SQL injection, and misuse of cross-origin resource sharing are all vulnerabilities exploited from the client side.
Consistency in the way we describe our markup, stylesheets, and scripts helps smooth out the organization of code on the client side. However, it helps only so much; there is nothing preventing the developer from moving logic forward from the server to the client side. When we did have fat clients, they were touted as sophisticated tools. With the help of the Model-View-Controller architecture, we seemed to move to different view-models for tabs on the same web page and remove the restriction between rendering and content for the page. UI developers suggested that view-models make a page rather limited and highly restrictive in workflows, requiring more navigation and reducing usability in general. The logic on the client side can also be further simplified without compromising rendering or usability; for example, we could make one call versus several calls to the API and data source, or the entire page may be loaded directly as a response from the backend. While the API may shape how the user interface is implemented, there is little need for the user interface to depend on any state-based interface. Therefore the argument holds even in front-end development.

#codingexercise : water retained in an elevation map :  https://ideone.com/HBEn3F

Monday, April 16, 2018

We were discussing why interface description languages don't become popular compared to their alternatives. We said that interfaces or contracts provide the comfort for participants to work independently, offload validation, and determine functional and non-functional requirements, but the alternative of working with granular, stateless, well-documented requests is a lot more appealing.
Contracts are verbose. They take time to write. They are also brittle when business needs change and the contract requirements change with them.
Contracts are also static and binding for the producer and consumer. Changes to the contract involve escalations and involvement from both sides.
Contracts apply whether they describe services or component interactions. They are generally replaced by technologies where we use pre-determined, well-accepted verbs and a stateless design.
The most important advantage of a stateless design and call syntax is that each request is granularly authenticated, authorized, and audited. By making a one-to-one relationship between a request and a response, and not having the client span multiple requests with shared state, we avoid dangling resources on the server side and state management on clients. Clients must do more preparation for each request, but this happens to be a convenience when all the requests share the same chores.
The other advantage of stateless design is that most requests now follow well-established protocols. These protocols come with important benefits. First, they are widely accepted in the industry. Second, they move away from company-specific proprietary contracts. Third, the wide acceptance fosters a community of developers, tools, and ecosystems.
The popularity of a technology is determined not only by the simplicity of the product but by the audience using it. When tools, development environments, and products promote stateless design over stateful contracts, then no matter how much we invest in an interface specification language, its adoption will depend very much on an audience that will generally go with its convenience.
Protocols can be stateful or stateless, and there is no doubt that either may come in useful under different circumstances. Contracts still come in useful as a descriptor for a service in its three-part description: address, binding, and contract. But when each service behaves the same as every other cookie-cutter service, with a well-known binding and contract, there may not be anything more needed than the address.
Leaving the communications aside, let us look at the proposed benefit of client-side state management. Certainly, it improves the performance of numerous intermediary requests if the setup and teardown are what take the biggest performance hit. Also, the client has an opportunity to keep a reference to the server-side resource for as long as it wants and the underlying communication channel holds. Proponents argue that a stateful design may result in higher performance for trunk or bulk callers, as opposed to retail individual callers, since this reduces the number of clients to a manageable few. While this may be true, it underscores the stateful design as more of an exception than the norm.
#codingexercise: https://ideone.com/3ZY49O