Tuesday, December 12, 2017

We continue discussing software improvements from earlier posts. We were talking about services available via APIs. Their versioning and organization have industry-wide recognized best practices and are therefore quite stable. At the same time, the presence of a user interface in the API responses is also a viable option for improving interactions with end users.
As services morph from old to new, they may also change their dependencies, both in version and in kind. While a user interface may be delivered as part of the response body, it is often better to separate the UI as a layer from the responses, so that the service behind the API can be consolidated while the UI is customized on different clients. This keeps customization logic out of the services while letting clients vary across plugins, browsers, applications and devices. Some responses may continue to carry their UI because they accept no other form of data transfer, but the XMLHttpRequest that moves data from the browser to the server is still a client-side technology and does not need to be part of the server's response body.
Another reason servers include user-facing forms and controls in their responses is to enforce same-origin policy and strict client registrations and redirections. By requiring some of their APIs to be internal, they also restrict others from making similar calls. APIs have natural address, binding and contract properties that allow their endpoints to be secured; client technologies, on the other hand, do not require such hard and fast rules. Moreover, with services relaying data via calls, strict origin registration and control remain feasible.
Another way to work with services is to write an SDK so that clients don't have to make the REST calls directly. This is a best practice favored by many cloud providers, who know their cloud computing capabilities will be called from a variety of clients and devices. By exposing the resources directly as objects and methods, they enable far easier integration than individually crafting the API calls. Moreover, a variety of programming languages can be used to provide the SDK, which widens the client base. The SDKs are provided as a mere convenience, so this could be something the developer community initiates and maintains. Another benefit of using an SDK is that the API versioning and service maintenance onus is somewhat reduced as more customers use the SDK rather than the wire calls. SDKs also support platform-independent and interoperable programming, which helps test and automation.
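As a rough illustration, here is a minimal SDK sketch in Python, assuming a hypothetical identity service at api.example.com with a token-authenticated accounts endpoint; the URL, endpoint and field names are made up for illustration only.

import requests

class IdentityClient:
    # Minimal SDK sketch: resources are exposed as methods so callers
    # never craft the wire calls by hand.
    def __init__(self, base_url, api_key):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def get_account(self, account_id):
        # The REST call is an implementation detail hidden behind the method.
        resp = self.session.get(f"{self.base_url}/v2/accounts/{account_id}")
        resp.raise_for_status()
        return resp.json()

# Usage: client = IdentityClient("https://api.example.com", "MY-KEY")
#        account = client.get_account("12345")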
Similarly, a command-line interface (CLI) also improves the customer base. Tests and automations can make use of commands and scripts to drive the service. This is a very powerful technique for administrators as well as for day-to-day usage by end users. Command-line usage has one additional benefit: it can be used for troubleshooting and diagnostics. While SDKs provide an additional layer and are hosted on the client side, a command-line interface may be available on the server side and offers fewer variables to go wrong. With detailed logging and request-response capture, the CLI and SDK both ease the calls made to the services.
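A minimal sketch of such a diagnostic CLI follows, again against the hypothetical accounts endpoint above; the --verbose flag surfaces wire-level logs that help with troubleshooting.

import argparse
import json
import logging

import requests

def main():
    # Hypothetical diagnostic CLI: one positional argument per resource lookup.
    parser = argparse.ArgumentParser(description="service diagnostics (sketch)")
    parser.add_argument("account_id", help="account to look up")
    parser.add_argument("--verbose", action="store_true",
                        help="capture the request-response exchange for troubleshooting")
    args = parser.parse_args()
    if args.verbose:
        # DEBUG level surfaces urllib3's connection logs for the call below.
        logging.basicConfig(level=logging.DEBUG)
    resp = requests.get(f"https://api.example.com/v2/accounts/{args.account_id}")
    print(json.dumps(resp.json(), indent=2))

if __name__ == "__main__":
    main()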

Monday, December 11, 2017

We continue discussing software improvements from earlier posts.
API versioning and organization have industry-wide recognized best practices and are therefore quite stable. At the same time, the presence of a user interface in the API responses is also a viable option for improving interactions with end users.
API versioning is a best practice that lets clients and consumers upgrade. There is also one very useful property of the HTTP protocol: a request can be redirected, so that users are kept informed at the very least, or their calls translated to newer versions in full-service solutions. Versioning is probably the only way to provide an upgrade path to users. If we don't want to take on the onus of backward compatibility, the choice to offer both old and new becomes clearer. It is also true that not all customers or consumers can move quickly to adopt new APIs, and the web service then takes on the onus of maintaining the earlier behavior in some fashion. As web services mature and become increasingly large and unwieldy, they have no other way to maintain all the earlier behavior without at least relieving the older code base of newer features and providing those in a new offering.
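As a sketch of the redirect idea, assuming a Flask service with hypothetical /v1 and /v2 account routes, the old version can answer with a permanent redirect to the new one:

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/v1/accounts/<account_id>")
def v1_account(account_id):
    # The retired version answers with a permanent redirect so callers
    # learn the new location; 308 preserves the HTTP method.
    return redirect(f"/v2/accounts/{account_id}", code=308)

@app.route("/v2/accounts/<account_id>")
def v2_account(account_id):
    # Flask (1.1+) serializes a returned dict as JSON.
    return {"id": account_id, "version": "v2"}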
As services morph from old to new, they may also change their dependencies, both in version and in kind. While a user interface may be delivered as part of the response body, it is often better to separate the UI as a layer from the responses, so that the service behind the API can be consolidated while the UI is customized on different clients. This keeps customization logic out of the services while letting clients vary across plugins, browsers, applications and devices. Some responses may continue to carry their UI because they accept no other form of data transfer, but the XMLHttpRequest that moves data from the browser to the server is still a client-side technology and does not need to be part of the server's response body.
Another reason servers include user-facing forms and controls in their responses is to enforce same-origin policy and strict client registrations and redirections. By requiring some of their APIs to be internal, they also restrict others from making similar calls. APIs have natural address, binding and contract properties that allow their endpoints to be secured; client technologies, on the other hand, do not require such hard and fast rules. Moreover, with services relaying data via calls, strict origin registration and control remain feasible.

Sunday, December 10, 2017

We reflected on a few ideas of improvement for software related to identity management here. We continue with a few more ideas. As software matures, many engineers choose to be cautious about the improvements they make, often requiring very little change to existing data paths or processes. They do this for several reasons. First, code reuse and the original design decisions work in their favor. Second, altering existing code paths runs the risk of regression. Third, the cost of churn generally rises higher than the benefits. Also, much of the change usually comes in the form of technical debt, a term for the corrections needed after features are released by taking shortcuts that create instant debt. The assumption is that this debt is paid down as customers pay for the usage of the new features. Moreover, it is easier to add workflows to existing software by reading data and transforming it for analysis and reporting, given that online processing needs to be as efficient and streamlined as possible so it remains available to customers. And finally, the organization may find it easier to make value propositions with vertical silos instead of horizontal modifications. While databases used to form the shared data infrastructure and services were differentiated for the value propositions, today organizations prefer to move from the infrastructure view to a microservice model, introducing new services and retiring old ones.
Regardless of the intention, scale, scope and impact of the changes to the code base, most improvements suffer from the malaise that there are not enough tests, and that it is too expensive, even prohibitive, to achieve adequate surface coverage for acceptance. Tests are best viewed as investments toward feature launches as well as customer satisfaction; they are just as much currency as any other artifact. A well-written test can enforce expectations of the software as early as design time and all the way through the software development cycle. In this regard, one improvement we could consider is to increase API-based testing for the service. Generally, an n-tier service has tests at all levels: database unit tests at the back end, API-level tests at the service layer, and front-end tests for the front end. Of these, the design of the APIs and their tests offers the best tradeoff between the value of the tests and the technical debt. At the same time, all non-functional aspects such as security and performance can still be covered. Yet APIs are hard to keep tidy in the face of ever-increasing improvements and business changes to the software; like most customer-facing software interactions, they suffer the same liabilities over time. Fortunately, versioning and retirement policies can be enforced for all clients, with the added advantage that more and more clients accept HTTPS-based APIs over other forms of interface. As devices, platforms and ecosystems change, APIs present an evergreen investment that stands the test of time better than most, and they lend themselves more naturally to automation, diagnostics and monitoring than most other software.
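To make this concrete, here is a minimal sketch of API-level tests written for pytest using requests against a hypothetical service; the endpoint, fields and versioning policy asserted here are assumptions, not a real contract.

import requests

BASE = "https://api.example.com"  # hypothetical service under test

def test_get_account_contract():
    # API-level test: pins the status code and response shape the service promises.
    resp = requests.get(f"{BASE}/v2/accounts/12345")
    assert resp.status_code == 200
    body = resp.json()
    assert "id" in body  # assumed field in the contract

def test_old_version_redirects():
    # The versioning and retirement policy is itself testable behavior.
    resp = requests.get(f"{BASE}/v1/accounts/12345", allow_redirects=False)
    assert resp.status_code in (301, 308)
    assert "/v2/" in resp.headers["Location"]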
API versioning and organization have industry-wide recognized best practices and are therefore quite stable. At the same time, the presence of a user interface in the API responses is also a viable option for improving interactions with end users.

Saturday, December 9, 2017

Tools that can help with Identity Management:
Almost any web application requires a user to sign in so that the application can know and keep track of the activities taken by the user. When the users number in the hundreds of millions, identity becomes an important piece of software in itself. As software engineers maintain the code base and the processes associated with releasing improvements to the software, they will frequently rely on tools and tests. The following are some of the tools that can help with this tremendous responsibility.
First, a web application such as this relies heavily on backend services that may work exclusively for specific purposes such as phone contact information lookup, address information lookup, authentication and auditing, and utilities such as issuing tokens for logged-in users. Each of these services may also be hosted in different environments used to prepare and test the code prior to its release to production. Consequently, these services can be counted as dependencies of the software. To keep track of the dependencies and to troubleshoot issues with the software, we could build command-line tools that make Application Programming Interface (API) calls to a single service. These tools let us use the service in isolation while also providing a command-line convenience for getting information on individual resources such as an account or a number lookup. While curl, a popular command-line tool, can call services via APIs hosted over HTTPS, most enterprise services are secured, and some of these APIs require a preamble where one API is called before another; purpose-built tools help here (a sketch of one follows the next point). They also come in handy for triage, working in isolation from the tests and code. Note that these tools are for the convenience of getting data or setting state, not for operations or monitoring, which generally have their own infrastructure based on impact radius.
Second, the entire software consuming these dependencies, regardless of its organization, may itself be considered a service with its own APIs. Writing command-line convenience tools to drive this software with an API or a sequence of APIs can be very useful for diagnostics and automation. Consider the possibility of writing automations with scripts rather than being restricted to writing and testing code. Automation has its own benefits, but not having to resort to code widens the audience. Along the lines of automation we mentioned convenience, but we could also cite security and policy enforcement. Since the tools are designed to run independently and with very little encumbrance, they can participate in workflows previously unimagined with the existing codebase and processes.
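As a rough sketch of both points, the following hypothetical tool performs the preamble of issuing a token and then drives a lookup API; every URL and field name here is made up for illustration.

import requests

def issue_token(base_url, user, password):
    # The preamble: secured APIs often require a token call before the real call.
    resp = requests.post(f"{base_url}/auth/token",
                         json={"user": user, "password": password})
    resp.raise_for_status()
    return resp.json()["access_token"]  # assumed field name

def lookup_number(base_url, token, phone_number):
    # The call we actually wanted to make, authorized by the preamble's token.
    resp = requests.get(f"{base_url}/lookup/phone/{phone_number}",
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()

# Usage: token = issue_token("https://identity.example.com", "triage-user", "secret")
#        print(lookup_number("https://identity.example.com", token, "+15550100"))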
Conclusion: While testing and coding are the primary activities of software development in any organization, they can be improved by empowering the individuals engaged in them. These tools help fill that gap.

#codingexercise
We were discussing finding the closest numbers among the elements of three sorted integer arrays.
Another use of finding the closest numbers in these arrays to a given input value is to discover the overlapping range of the sorted numbers. To do this on the number line, we could take max(start1, start2, start3) and min(end1, end2, end3) from the (start, end) pair of every array. By finding the closest of these min and max values, we can determine the projection of the overlapping range onto any one of the three arrays. Then we can choose the projection with the smallest number of elements.
Determining the overlapping region projections helps shrink the search space for operations involving all three arrays. It may not be useful for sum, average, mode or median, but it is helpful for partitioning the number line to determine the presence of a number in any array. If the number is not in the overlapping range, then we can exclude one of the arrays and binary search for it outside the projections in the other two arrays.

Separation of ranges helps not only for looking up a number but also for performing similar operations on the overlapping projections in each array. For example, finding the average of the projections becomes much easier and more accurate, and if the numbers are all positive, their product also becomes easier to approximate. A sketch of the projection idea follows.
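Here is a minimal sketch of that idea in Python, using the standard bisect module; the arrays in the comment are illustrative.

from bisect import bisect_left, bisect_right

def overlap_projection(arrays):
    # Overlapping range on the number line: latest start, earliest end.
    start = max(a[0] for a in arrays)
    end = min(a[-1] for a in arrays)
    if start > end:
        return None  # the three ranges do not overlap
    # Project [start, end] onto each array and keep the smallest slice.
    projections = [a[bisect_left(a, start):bisect_right(a, end)] for a in arrays]
    return min(projections, key=len)

# Example: overlap_projection([[1, 5, 9], [4, 6, 10], [2, 5, 7]])
# overlap is [4, 7]; projections are [5], [4, 6], [5, 7]; returns [5]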

Friday, December 8, 2017

We continue discussing the personal coder. The personal coder can also become the personal recommender. The recommender is a personal advisor that can use intelligent correlation algorithms. For example, it can perform collaborative filtering using the owner's friends as data points. In this technique, the algorithm finds people with tastes similar to the owner's and evaluates a list to determine the collective rank of items. While standard collaborative filtering uses viewpoints from a distributed data set, the recommender may adapt it to include only the owner's friends.
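As a rough sketch of friend-restricted collaborative filtering, the similarity measure below is a simple agreement score chosen for brevity; a real recommender might use cosine or Pearson correlation, and all names here are illustrative.

def recommend(owner_ratings, friends_ratings, top_n=5):
    # Weight each friend by rating agreement with the owner, then rank
    # items the owner hasn't rated by the friends' weighted ratings.
    def similarity(a, b):
        shared = set(a) & set(b)
        if not shared:
            return 0.0
        return sum(1.0 / (1.0 + abs(a[i] - b[i])) for i in shared) / len(shared)

    scores = {}
    for friend in friends_ratings:
        w = similarity(owner_ratings, friend)
        for item, rating in friend.items():
            if item not in owner_ratings:
                scores[item] = scores.get(item, 0.0) + w * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Usage: owner = {"cafe": 5, "museum": 3}
#        friends = [{"cafe": 4, "park": 5}, {"museum": 2, "cinema": 4}]
#        print(recommend(owner, friends))  # -> ["park", "cinema"]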
The recommender might get the geographical location of the user, the time of day and search terms from the owner. These help predict the activity the owner may take. The recommender does not need to rebuild the activity log for the owner, but it can perform correlations for the window it operates on. If it helps to build the activity log for the year to date, then the correlation becomes easier, translating into queries and data mining over the activity log. The recommender does not have to take any actions; whether the owner chooses to act on the recommendations or publish them on Facebook is entirely her choice. This feedback loop may be appealing to her friends and their social engineering applications, and it is the friends' choice to participate.
The recommender is also enabled to query trends and plot charts to analyze data in both spatial and temporal dimensions. It could list not only the top 5 in any category but also the items that remained in the top 5 week after week. These and other aggregation and analysis techniques indicate tremendous opportunity for the recommender to draw the most appealing and intuitive information layer over any map. A recommender can query many data sources over the web. For example, it can query credit card statements to find patterns in spending and use them to highlight related attractions in a new territory. Wouldn't it be helpful to have our login pull up restaurants that match our eating habits instead of cluttering up the map with all those in the area? Similarly, wouldn't it be helpful to use our social graph to annotate venues that our friends visited? The recommender is uniquely positioned to tie in a breadth of resources from public and private data sources. Many web applications such as del.icio.us and Kayak make data available for browsing and searching.
#codingexercise

We were discussing finding the closest numbers among the elements of three sorted integer arrays.
Another use of finding the closest numbers in these arrays to a given input value is to discover the overlapping range of the sorted numbers. To do this on the number line, we could take max(start1, start2, start3) and min(end1, end2, end3) from the (start, end) pair of every array. By finding the closest of these min and max values, we can determine the projection of the overlapping range onto any one of the three arrays. Then we can choose the projection with the smallest number of elements.
Determining the overlapping region projections helps shrink the search space for operations involving all three arrays. It may not be useful for sum, average, mode or median, but it is helpful for partitioning the number line to determine the presence of a number in any array. If the number is not in the overlapping range, then we can exclude one of the arrays and binary search for it outside the projections in the other two arrays.

Thursday, December 7, 2017

We continue with our discussion of the personal coder. We browse our email inbox from any computer or kiosk. Interacting with our own personal assistant on any speech recognition medium may be just as easy to bring up. With standardization of the protocol between speech-recognizing hardware and the personal assistant, the two can be separated and allowed to vary independently, giving rise to the notion of a personal assistant conjured up on the kiosk.
The personal assistant can also improve human interactions with the help of emotions and feelings. Personal robots such as the retail-available Jibo already demonstrate this, but we have cited a deeper and broader set of skills in our write-up so far. If the form and the functionality as described here can be combined, it will lead to pervasive use of the personal assistant.
The personal assistant can be not only interactive, humorous and informative; it can also help with visualization and drawing via verbal commands, using a projector lamp and a wall. This capability is often touted in movies but not facilitated on the personal assistant device.
The personal assistant might also be paired with a virtual reality headset so that the assistant and the owner can follow the same canvas. This lets the owner share with the assistant for replay on the projector mentioned above. Both the projector and the canvas are specialized functions of the personal assistant, not necessarily something that needs to be taken out of a notebook or desktop. Moreover, enabling a tracking device with the assistant opens up many more possibilities beyond the owner's commands.
There are two advantages of having the personal assistant maintain an activity log: 
First, the personal assistant is uniquely positioned to be a trusted, confidential, passive associate.
Second, the collection of activities helps reinforce different traits and habits of the owner, leading to positive lifestyles.
Together, the personal assistant and the owner might achieve a kind of singularity as man and machine, increasing the impact that the individual makes.
#codingexercise
We were discussing finding the closest numbers among the elements of three sorted integer arrays. We don't have to sweep all three arrays if we need to determine the relative range of each array. We take the sentinel values of a given array and find their closest numbers in the other arrays. Then we compare the sentinels of the other arrays with the closest numbers found in that array. This tells us how wide or narrow the other arrays are compared to one of them. It gives us an idea of the relative positioning of the arrays within each other, whereas with the sentinels alone we could only tell their positions on the number line.
Another use of finding the closest numbers in these arrays to a given input value is to discover the overlapping range of the sorted numbers. To do this on the number line, we could take max(start1, start2, start3) and min(end1, end2, end3) from the (start, end) pair of every array. By finding the closest of these min and max values, we can determine the projection of the overlapping range onto any one of the three arrays.
Finally, we can choose the projection with the smallest number of elements. A sketch of the closest-number helper this relies on follows.
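A minimal sketch of the closest-number helper in Python, using binary search via the standard bisect module; the arrays in the comments are illustrative.

from bisect import bisect_left

def closest(arr, x):
    # Binary search for the element of a sorted array nearest to x: O(log n).
    i = bisect_left(arr, x)
    if i == 0:
        return arr[0]
    if i == len(arr):
        return arr[-1]
    before, after = arr[i - 1], arr[i]
    return before if x - before <= after - x else after

# Sentinel comparison sketch: take the min and max of one array and find
# their closest values in the others to gauge relative positioning.
# a, b = [1, 5, 9], [4, 6, 10]
# closest(b, a[0])  -> 4
# closest(b, a[-1]) -> 10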

Wednesday, December 6, 2017

We continue with our discussion of the personal coder. Let us now consider the first differentiation. If the assistant is mobile, then it is likely to be classified as a robot. The movement of the device is mostly with respect to an object. When we introduce motors and an ability to govern them by way of coordinate space, we are working with robotics. The assistant targets only the owner; it does not have to move around to translate commands into tasks. This is why we separate robotics from the capabilities an assistant has.
Second, the assistant is not an appliance but software over an appliance. When an appliance collects and sends data over the internet, it can be considered an internet-of-things device. In the case of an assistant, the data pertaining to the owner is maintained by the assistant and actively indexed and worked with. Moreover, the assistant need not be globally unique or have its own IPv6 address to do its job. This is where we differentiate it from the lower-level sensors and devices that are otherwise also connected to the internet.
There are two different kinds of pressure exerted on this middle layer of the "personal assistant". The pressure from the layer of IoT devices is that they can proliferate to include dedicated assistant responsibilities, so the owner may no longer need to talk to one assistant only. This avoids having to relay commands and keeps the assistant from being a single point of failure. At the same time, it makes it natural for the owner to turn to the function-specific device and command it in just the same way as they would an assistant working with her data locally. It is also more intuitive for the user to know a device by its function. Moreover, elaborate setup of applications and devices with the assistant is now avoided. In other words, the personal assistant has to remain smarter and differentiate itself from the device-specific functions those devices specialize in. One way to do this is to provide more intelligent services over a unification platform for seamless experience, relay and consistency. If the assistant could treat these devices as plugins, its ecosystem could grow all the better.
Similar arguments come from the top layer that includes mobile robots, where the assistant does not enter. The robot may assume the responsibilities of the "personal assistant" given the hardware and support it gets; since it is mobile, it is the equivalent of butler services for the customer. However, the digital assistant can continue to grow its presence by being more affordable and more comprehensive in its support, including taking commands from a robot as well. These kinds of improvements are easy to build into the personal assistant while enjoying the power of cloud computing available on a voice-activated command.
Another area of improvement in building the jargon for the personal assistant is to create associations with proper or common names for devices that are not necessarily on the internet. For example, it should be easy for the assistant to pair with a headphone over Bluetooth and let the user listen to music on it. The personal assistant merely has to recognize that the word headphone refers to a specific device it can scan the Bluetooth network for and connect to. Most desktop computers already have a hardware profile and can scan for hardware changes; they are also able to automatically download the necessary drivers to let the device function properly. In a way, the personal assistant can become a headless desktop, so long as the essence of communication remains voice-based. There is also a demarcation of concerns between the portable laptop and the portable personal assistant: one is general-purpose computing and the other is merely customer-facing. In that sense, the personal assistant would benefit immensely from communicating with a desktop for all computing tasks.
It is sometimes harder to ask whether desktop-based computing will be demarcated from the personal assistant. The personal assistant is not the same as the desktop computer, because it derives its value from being closest to the owner. It can be considered a frontend while the desktop acts as the backend. This connection between the desktop and the personal assistant is not just an extension of desktop computing but a protocol that spans different hardware, platforms and computing.
In its virtual form, a personal assistant may also be given any avatar. If we were to visualize the assistant, it could be a photo frame on the desk that shows different personas as suited to the end user. By making the speech recognition software as thin as possible on the front end and backing it up with heavy-duty cloud services, we can make the assistant appear differently at different times or places.
#codingexercise
We were discussing finding the closest numbers among the elements of three sorted integer arrays. We don't have to sweep all three arrays if we need to determine the relative range of each array. We take the sentinel values of a given array and find their closest numbers in the other arrays. Then we compare the sentinels of the other arrays with the closest numbers found in that array. This tells us how wide or narrow the other arrays are compared to one of them. It gives us an idea of the relative positioning of the arrays within each other, whereas with the sentinels alone we could only tell their positions on the number line.