Sunday, April 15, 2018

Why don't interface description languages become popular?
Software is designed with contracts primarily because it is difficult to build it all at once. Interfaces form a scaffolding that lets components be designed independently. A language used to define interfaces is enhanced with many keywords to derive maximum benefit from describing the contract. Software engineers love the validation that can be offloaded once the criteria are declared this way. Yet interfaces and contracts have not become popular. Technologies that used such contracts, such as the Component Object Model (COM) and Windows Communication Foundation (WCF), were replaced with simpler, finer-grained, and stateless mechanisms. This writeup tries to enumerate the reasons for this unpopularity.
Contracts are verbose. They take time to write. They are also brittle: when business needs change, the contract requirements change with them. Moreover, they become difficult to read and orthogonal to the software engineer's effort on the component implementation. On the other hand, tests improve, because the success and failure of the components, as well as their functional and non-functional requirements, can now be determined. If the software does not work as expected, it may turn out to be a missing or incorrect specification in the contract.
Contracts are also static and binding for both the producer and the consumer. Because they are static, they are easy to ship to the consumer for offline software development. At the same time, the consumer might need to articulate changes to the contract if the interface is not sufficient. This brings us to the second drawback: changes to the contract involve escalations and the involvement of both parties.
Contracts, whether for describing services or for component interactions, are generally replaced by technologies that use predetermined and well-accepted verbs, as REST does with GET, PUT, POST and DELETE, so that the lingo stays the same while the payloads differ. Here we can even browse and determine the sequence of operations based on the requests made to the server. This dynamic discoverability of the necessary interactions helps eliminate the need for a contract. Documentation also reduces the need for explicit contracts and the chores required to maintain them.
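As an illustration, contrast a WCF-style contract with a plain verb-based request. The sketch below is hypothetical; IPlayerService and the URL are made up:

using System;
using System.Net.Http;
using System.ServiceModel;
using System.Threading.Tasks;

// WCF-style: every operation must be declared up front in the contract.
[ServiceContract]
public interface IPlayerService
{
    [OperationContract]
    string GetPlayer(int id);
}

public static class Demo
{
    // Verb-based alternative: a well-known verb plus a payload; no contract to ship.
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var response = await client.GetAsync("https://example.com/players/1");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}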
Conclusion: Contracts provide the comfort for participants to work independently, offload validation, and determine functional and non-functional requirements, but the alternative of working with granular, stateless, well-documented requests is a lot more appealing.

#sqlexercise
Consider a set of players each of whom belongs to one and only one league. Each player may have several wins and losses as part of the games between leagues. A league may have any number of players.
Define a suitable schema and list all those players who have more wins than losses.
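One possible schema, as a sketch (the table and column names are illustrative):

CREATE TABLE League  (LeagueID INT PRIMARY KEY, Name VARCHAR(64));
CREATE TABLE Player  (PlayerID INT PRIMARY KEY, Name VARCHAR(64),
                      LeagueID INT NOT NULL REFERENCES League(LeagueID)); -- one and only one league
CREATE TABLE Games   (GameID   INT PRIMARY KEY,
                      PlayerID INT NOT NULL REFERENCES Player(PlayerID),
                      Result   VARCHAR(8) CHECK (Result IN ('win', 'loss')));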
Write a table-valued function A for: SELECT PlayerID, COUNT(*) AS wins FROM Games WHERE Result = 'win' GROUP BY PlayerID
Write a table-valued function B for: SELECT PlayerID, COUNT(*) AS losses FROM Games WHERE Result = 'loss' GROUP BY PlayerID
Use the results A and B from above to compare the counts:
SELECT A.PlayerID, A.wins, B.losses FROM A INNER JOIN B ON A.PlayerID = B.PlayerID WHERE A.wins - B.losses > 0;
Note that the INNER JOIN drops players who have wins but no losses; a LEFT JOIN with COALESCE(B.losses, 0) would include them. Alternatively, we could use the PIVOT operator with an aggregate, as sketched below.
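A minimal PIVOT version, assuming the Games schema above (T-SQL syntax):

SELECT PlayerID, [win] AS wins, [loss] AS losses
FROM (SELECT PlayerID, GameID, Result FROM Games) AS src
PIVOT (COUNT(GameID) FOR Result IN ([win], [loss])) AS pvt
WHERE [win] > [loss];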



Saturday, April 14, 2018

Today I'm taking a break from my previous post below on Java Interfaces to discuss a small coding question I came across, and I'm posting it here for thoughts:
/*
Implement a class with the following methods:
    put(key, value);
    delete(key);
    getRandom(); // returns one of the stored values, chosen at random
Additional requirement: no duplicate keys - put on an existing key updates its value.
*/
using System;
using System.Collections.Generic;

public class MyNoDuplicateContainerWithGetRandom
{
    // map-based access by key
    private Dictionary<Object, Object> dict;
    // keys kept in a list so that a random index maps to a key in O(1)
    private List<Object> keys;
    // a single Random instance; creating one per call can repeat values
    private Random random = new Random();

    public MyNoDuplicateContainerWithGetRandom() {
        dict = new Dictionary<Object, Object>();
        keys = new List<Object>();
    }

    public void put(Object key, Object value)
    {
        if (dict.ContainsKey(key)) {
            // no duplicates: an existing key has its value updated in place
            dict[key] = value;
        } else {
            // the map and the key list must be updated together to stay consistent
            dict.Add(key, value);
            keys.Add(key);
        }
    }

    public Object getRandom() {
        if (keys.Count == 0) {
            throw new InvalidOperationException("container is empty");
        }
        int index = random.Next(keys.Count);
        Object key = keys[index];
        return dict[key];
    }

    public Object delete(Object key) {
        Object value;
        if (dict.TryGetValue(key, out value))
        {
            dict.Remove(key);
            keys.Remove(key); // O(n); see the note below for a constant-time variant
            return value;
        }
        return null;
    }
}
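A quick usage sketch (keys and values are made up):

var container = new MyNoDuplicateContainerWithGetRandom();
container.put("a", 1);
container.put("b", 2);
container.put("a", 3);                     // updates the existing key
Console.WriteLine(container.getRandom());  // prints 3 or 2 with equal probability
container.delete("b");

To make delete constant-time, the usual trick is to also store each key's list index in the dictionary, swap the deleted key with the last element of the list, and remove from the end.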

  • https://1drv.ms/w/s!Ashlm-Nw-wnWtheIEHpU4Jua0V79

Friday, April 13, 2018

Interface annotator:
Abstraction is one of the pillars of Object-Oriented programming. When applied to behavioral representation, it is demonstrated with the use of interfaces in a language. An interface enables any class to implement a certain abstraction of behavior via the contract it describes. Unit tests are often written by mocking these interfaces, which lets tests be written even before the classes are implemented; substituting a mock for the real implementation is an application of the Liskov Substitution Principle. The purpose of this writeup is to describe a user library that gathers statistics on call usage automatically.
Description: API monitoring and its mechanics are well understood because we often leverage them in production support of critical API implementations. These monitoring services are configured to read the number of calls made and the failures associated with them, and to record metrics of availability and latency. Rules and alerts may also be written to take actions on the associated records. The purpose and pervasiveness of API monitoring is never questioned. However, the work involved is significant and is therefore applied only to external-facing services.
Developers generally do not have such instrumentation and call statistics, other than profiling, which comes at a very high cost. Something simpler, in terms of just success/failure and invocation counts, cannot easily be determined from the runtime without some kind of instrumentation. Some developers write annotations that can be added to methods to capture this information; however, decorating each and every method is generally tiresome.
On the other hand, if every interface could gather such statistics and the results could be persisted from a run of the application, that data would become valuable to query. Collecting this information automatically on interfaces could be supported with the help of a library. In other words, monitoring, as opposed to profiling, could be helpful even when pushed down to the methods on interfaces within a service, and not just the methods that are externally facing and invoked via APIs.
Interfaces serve several techniques, one of which is the inversion of control principle, which lets implementations be invoked via their interfaces so that they are automatically available within the service. Consequently, unlike classes, interfaces are heavily shared, and their statistics are more appealing because the usage is not limited or scoped.
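As a minimal sketch of such a library, the call interception could be done with System.Reflection.DispatchProxy, which generates a runtime proxy for any interface. The counter store and the Wrap helper below are illustrative, not part of an existing library:

using System;
using System.Collections.Concurrent;
using System.Reflection;
using System.Threading;

public class CountingProxy<T> : DispatchProxy where T : class
{
    private T target;
    // method name -> [invocations, failures], shared per proxied interface type
    private static readonly ConcurrentDictionary<string, int[]> stats =
        new ConcurrentDictionary<string, int[]>();

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        var counters = stats.GetOrAdd(targetMethod.Name, _ => new int[2]);
        Interlocked.Increment(ref counters[0]); // invocation count
        try
        {
            return targetMethod.Invoke(target, args);
        }
        catch (TargetInvocationException e)
        {
            Interlocked.Increment(ref counters[1]); // failure count
            throw e.InnerException;
        }
    }

    // Wraps a real implementation so that every call through the interface is counted.
    public static T Wrap(T implementation)
    {
        var proxy = Create<T, CountingProxy<T>>();
        ((CountingProxy<T>)(object)proxy).target = implementation;
        return proxy;
    }
}

A hypothetical use, with IOrderService standing in for any interface in the service: var service = CountingProxy<IOrderService>.Wrap(new OrderService()); an inversion-of-control container could apply Wrap at registration time so that every resolved interface is counted transparently.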
Conclusion: Simple statistics on interfaces, if collected automatically, could provide significant insight without the encumbrance of profiling.
Courtesy: http://users.ics.aalto.fi/kepa/publications/KahLamHelNie-RV2009.pdf
https://ideone.com/saidLd

Thursday, April 12, 2018

Today I'm going to continue discussing Microsoft Dynamics AX. SAP, ERP and CRM solutions are a world of their own. We have not looked at these in detail earlier, so we start with Dynamics. The user interface for Dynamics provides a variety of tools for viewing and analyzing business data. Data is usually presented in reports, which can be standard reports, auto reports or ad hoc reports. Tasks might take longer to execute; sometimes it's best to let them run elsewhere and at some other time, when they can be prioritized, queued and executed. Documents may be attached to data, which is helpful in cases where we need to add notes and attachments. The enterprise portal enables web-based access to the Dynamics instance on a server installed in the enterprise network. Access is role-based, for roles such as employees, sales representatives, consultants, vendors and customers. This completes a brief overview of the user interface.
The General Ledger is probably the core financial section of the user interface. It is used to record fiscal activities for accounts for a certain period. For example, it has accounts receivable and accounts payable sections.  The entries in the ledger may have several parameters, defaults, sequences, dimensions, dimension sets and hierarchies. In addition, tax information and currencies are also included.
Cost accounting is another key functionality within Dynamics AX. The cost of overheads and the indirect costs to the entities can be set up as cost categories, which can be defined for use within cost accounting. These cost categories can have a link to the ledger accounts; this way, ledger transactions show up under the appropriate category. Costs can even be redistributed among other categories and dimensions so that the cost structure of the business can be analyzed. The dimensions of a cost usually include the cost center and the purpose.
Dynamics AX also has a bank section. We can create and manage company bank accounts and associate activities and artifacts such as deposit slips, checks, bills of exchange and promissory notes. We can create bank groups and bank transaction types, list the bank accounts that the company has in each bank group, and check the layouts for the bank accounts. Standard query operations may then be applied to the bank data. Reconciling, generating summaries and printing statements are also made possible. We can make a payment by check, set up a method of payment for checks, delete checks, create a deposit slip, handle unposted checks, draw a check on a ledger account, or reconcile bank accounts.
As we review the financial sections within Dynamics AX, we note that the product is dedicated to whatever helps the business process. This is what differentiates it from related products. Dynamics is all about convenience.
#codingexercise
https://ideone.com/BiivdE

Wednesday, April 11, 2018

Implementation of a circular queue

using System;
using System.Collections.Generic;
using System.Linq;

public class CircularQueue {

    private List<int> items;
    private int head;     // index where the next value is written
    private int tail;     // index where the oldest value is read
    private int size;     // number of values currently stored
    private int capacity;

    public CircularQueue(int capacity)
    {
        this.capacity = capacity;
        // pre-fill so that indices 0..capacity-1 are valid
        items = new List<int>(Enumerable.Repeat(0, capacity));
        head = 0;
        tail = 0;
        size = 0;
    }

    public void Enqueue(int val)
    {
        if (size == capacity)
        {
            // queue is full: advance tail to overwrite the oldest value
            tail = (tail + 1) % capacity;
            size--;
        }
        items[head] = val;
        head = (head + 1) % capacity;
        size++;
    }

    public int Dequeue()
    {
        if (size == 0)
        {
            throw new InvalidOperationException("queue is empty");
        }
        int val = items[tail];
        tail = (tail + 1) % capacity;
        size--;
        return val;
    }
}



Starting from head = tail = 0, operation sequences to exercise the queue (E = Enqueue, D = Dequeue): E, D, DE, DD, EE, ED, EDE, EED, DEE, EEE, DDD, DDE.
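A quick driver for one such sequence, assuming a capacity of 2:

var q = new CircularQueue(2);
q.Enqueue(1);                   // E: stored at index 0
q.Enqueue(2);                   // E: queue is now full
q.Enqueue(3);                   // E: overwrites the oldest value (1)
Console.WriteLine(q.Dequeue()); // D: prints 2
Console.WriteLine(q.Dequeue()); // D: prints 3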

#minesweeper

https://ideone.com/kPGtZ5

https://ideone.com/ACLtJs

Tuesday, April 10, 2018


System Center Manager for Azure and AWS
Introduction: Enterprise on-premise server management was a well-known routine that was arguably facilitated very well by System Center Operations Manager (SCOM). It involved monitoring an asset using several sources on that computer and providing a central console to view this information across assets. It enabled consolidation of tools while bringing a single console to monitor services and assets from the simple to the complex. With the move to the cloud, the assets exploded in number and regions, including but not limited to a variety of servers, platforms, hardware and vendors. We discuss briefly how this was overcome technically.
Description: The System Center Management Pack for Windows Azure lets us instrument Azure assets for availability and performance monitoring and reporting using SCOM. The Azure Fabric management pack discovers PaaS and IaaS components from Azure subscriptions and communicates directly with an Azure web service to deliver cloud-based management data to SCOM.
At the heart of this technology are the notion of a cloud web service and agents on the inventory. The SCOM agent is already fluent in gathering data from a variety of sources, and this was made easier by operating system features that remained consistent, if not improved, across versions. With the explosion of data sources across assets, a central cloud service scales up to meet the challenge. The SCOM console continues to provide a holistic picture while offering the detailed drill-down that it always did for every single asset.
However, organizations were known to have heterogeneous environments consisting of a mix of Windows and flavors of Linux. Having multiple monitoring and alerting technologies on different operating system flavors was traditionally a maintenance challenge, and the need to scale to a public cloud made it more so. The SCOM agent from Windows Server allowed monitoring UNIX/Linux flavors in the existing infrastructure with just a few more steps:
1) Create accounts to be used for monitoring and agent maintenance.
2) Add the created accounts to the appropriate UNIX/Linux profiles.
This was facilitated by the Management Packs for UNIX and Linux from Microsoft, along with the published guidance.
With the move to the cloud, the SCOM agents for these OS flavors were now deployed to Azure VM instances, and with the discovery of those instances, the Azure Fabric management pack would facilitate both the inside and the outside perspectives of the cloud-hosted VMs. If an application went down, the failure could quickly be narrowed down to the VM OS, the Azure service or Azure storage.
However, the same could not be said for multiple public clouds such as Azure and AWS. With the availability of Amazon EC2 Systems Manager, that has changed too, and now we can automatically collect software inventory, apply OS patches, create system images, and configure both Windows and Linux operating systems.
With the new-era convention of making all applications and services programmatically accessible, there are now APIs to talk to either cloud's systems manager. Thus organizations can choose to leverage and promote a consistent terminology for both clouds.
Even with hybrid resources such as containers and clusters, production support may be standardized with a systems manager spanning these resources as well. Containers and clusters provide their own monitoring and health management solutions. From a systems center perspective, it makes no difference whether we query the information directly from the operating system via our own agent or via a host that makes an API available, so long as the information can be standardized. System centers already work with different versions of operating system flavors and technologies from the hybrid cloud. At the same time, well-known container vendors and proprietary cluster managers already make web APIs ubiquitous. Mashing up the APIs to give the layer above a uniform and consistent view then becomes straightforward. However, care must be taken to tolerate the behavior and quirks of the vendor systems. Whether it is from the System Center to the layer below or within the containers-and-clusters layer, the techniques for monitoring and for rotations involving setup and teardown are similar, and best practices for both may suitably be shared.
Conclusion: Systems center managers are now available at the cloud level, doing away with proprietary and custom software, whether built in-house or purchased, for managing the IT assets of the organization.

#codingexercise

Rotate an n x n matrix by 90 degrees (clockwise, in place):

        // Rotates the square matrix A clockwise by 90 degrees, one ring at a time,
        // using a four-way swap per element. (r0, c0) is the top-left corner of the
        // current ring and (rt, ct) the bottom-right; the top-level call is
        // matrixRotate(ref A, 0, 0, n - 1, n - 1).
        static void matrixRotate(ref List<List<int>> A, int r0, int c0, int rt, int ct)
        {
            if (r0 >= rt || c0 >= ct) return;
            for (int i = 0; i < ct - c0; i++)
            {
                int top = A[r0][c0 + i];         // save top
                A[r0][c0 + i] = A[rt - i][c0];   // left -> top
                A[rt - i][c0] = A[rt][ct - i];   // bottom -> left
                A[rt][ct - i] = A[r0 + i][ct];   // right -> bottom
                A[r0 + i][ct] = top;             // saved top -> right
            }
            matrixRotate(ref A, r0 + 1, c0 + 1, rt - 1, ct - 1); // inner ring
        }

// Before:
1 2 3
4 5 6
7 8 9

// After:
7 4 1
8 5 2
9 6 3

// Before
1 2
3 4

// After
3 1
4 2
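A small driver reproducing the first example (assuming the function above is in scope):

var A = new List<List<int>> {
    new List<int> {1, 2, 3},
    new List<int> {4, 5, 6},
    new List<int> {7, 8, 9}
};
matrixRotate(ref A, 0, 0, A.Count - 1, A.Count - 1);
foreach (var row in A)
    Console.WriteLine(string.Join(" ", row)); // prints 7 4 1 / 8 5 2 / 9 6 3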

Some others: https://ideone.com/IdbbBp
https://ideone.com/690TtN