Monday, October 2, 2023

 

Sample customized queries for dashboard visualizations from the Overwatch schema:

1. SELECT sku, isActive, any_value(contract_price) * COUNT(*) AS cost
   FROM overwatch.`dbucostdetails`
   WHERE isActive = true
   GROUP BY sku, isActive;

 

sku          isActive  cost
jobsLight    True      0.30000000000000004
interactive  True      1.6500000000000001
sqlCompute   True      0.66
automated    True      0.30000000000000004
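The shape of this first query — a representative unit price per group multiplied by the row count — can be sketched against a small in-memory SQLite table. The table and prices below are made-up stand-ins for the real Overwatch data, and `MIN` substitutes for Spark's `any_value`, which SQLite lacks (the price is constant within a group here, so either works):

```python
import sqlite3

# Illustrative stand-in for overwatch.dbucostdetails; values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbucostdetails (sku TEXT, isActive INTEGER, contract_price REAL)")
conn.executemany(
    "INSERT INTO dbucostdetails VALUES (?, ?, ?)",
    [("jobsLight", 1, 0.10), ("jobsLight", 1, 0.10), ("jobsLight", 1, 0.10),
     ("sqlCompute", 1, 0.22), ("sqlCompute", 1, 0.22), ("sqlCompute", 1, 0.22),
     ("interactive", 0, 0.55)],
)

# Same pattern as the dashboard query: filter active rows first, then
# multiply one contract price per group by the number of rows in the group.
rows = conn.execute(
    """
    SELECT sku, MIN(contract_price) * COUNT(*) AS cost
    FROM dbucostdetails
    WHERE isActive = 1
    GROUP BY sku
    ORDER BY sku
    """
).fetchall()
print(rows)
```

The long decimal tails in the dashboard output (0.30000000000000004 and the like) come from the same floating-point multiplication shown here.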

 

2. SELECT created_by, COUNT(*)
   FROM (SELECT DISTINCT cluster_id, created_by FROM overwatch.`cluster`)
   GROUP BY created_by
   ORDER BY COUNT(*) DESC
   LIMIT 1000;

 

created_by   count(1)
JobsService  20051
User1        13
User2        13
User3        6
User4        3
User5        2
User6        1

 

3. SELECT cluster_id, SUM(uptime_in_state_S) AS uptime
   FROM overwatch.clusterstatefact
   GROUP BY cluster_id
   ORDER BY uptime DESC
   LIMIT 1000;

 

cluster_id            uptime
0822-134022-ssn7p7zy  2656586.3910000008
0909-211040-g7gw6ze   2655716.523000001
0914-142202-nx0u3s1a  2634530.8240000005
0907-170325-qf4ypd19  2611126.8639999996
0109-204324-dba1c5o   2602285.5589999994
0831-160354-2gds4r56  2601205.147000001
0728-171334-wqfvw8lm  2599745.636
1220-150950-1xfqwfeq  2533890.514
0828-204151-rqw3um2a  1986805.3609999998
0302-190420-h8rv9prn  1983515.9470000002
0803-144506-g98h4fl2  1975430.0520000001
0908-095703-w31xe9fb  1842740.3310000005
0917-185549-g4n3dqjl  1052153.248
0918-031805-t3zdjacw  1002694.213

 

4. SELECT created_by, SUM(total_dbu_cost) AS sum_dbu_cost
   FROM (SELECT DISTINCT cluster_id, job_id, created_by, terminal_state, total_dbu_cost
         FROM overwatch.jobruncostpotentialfact
         WHERE terminal_state = "Succeeded")
   GROUP BY created_by
   HAVING created_by != 'null'
   ORDER BY sum_dbu_cost DESC
   LIMIT 1000;

 

 

created_by  sum_dbu_cost
User1       253.60490000000007
User2       83.07065199999978
User3       80.84025400000019
User4       58.004314
User5       56.34171099999961
User6       49.40466399999997
User7       12.238729
User8       2.528845
User9       1.4531079999999597
User10      0.4258950000000001
User11      0.30644
User12      0.17414799999999972
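The deduplicate-then-aggregate pattern shared by queries 2 and 4 — a `DISTINCT` subquery so that repeated fact rows are counted or summed only once — can be sketched with an in-memory SQLite table. The table and rows below are invented, not real Overwatch data:

```python
import sqlite3

# Illustrative stand-in for overwatch.jobruncostpotentialfact; the duplicate
# row mimics the repeated (cluster_id, job_id) facts the DISTINCT removes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobruncostpotentialfact (
    cluster_id TEXT, job_id INTEGER, created_by TEXT,
    terminal_state TEXT, total_dbu_cost REAL)""")
conn.executemany("INSERT INTO jobruncostpotentialfact VALUES (?,?,?,?,?)", [
    ("c1", 1, "User1", "Succeeded", 10.0),
    ("c1", 1, "User1", "Succeeded", 10.0),   # duplicate fact row
    ("c2", 2, "User1", "Succeeded", 5.0),
    ("c3", 3, "User2", "Failed",    7.0),    # excluded by terminal_state
])

rows = conn.execute("""
    SELECT created_by, SUM(total_dbu_cost) AS sum_dbu_cost
    FROM (SELECT DISTINCT cluster_id, job_id, created_by, terminal_state, total_dbu_cost
          FROM jobruncostpotentialfact
          WHERE terminal_state = 'Succeeded')
    GROUP BY created_by
    ORDER BY sum_dbu_cost DESC
""").fetchall()
print(rows)
```

Without the inner `DISTINCT`, User1's duplicated 10.0 row would be summed twice and inflate the DBU cost.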

 

 

Sunday, October 1, 2023

Network for applications

 

Azure Application Gateway and App Services are created for access from the public internet. When organizations want to take these resources private, they often struggle to maintain business continuity: their own network structures and rules collide with the limitations and errors encountered when attempting to wire the resources together. This article explains how these resources can be made private with little or no disruption.

Both these resources are complicated, with many feasible features and configurations; even the networking section offers many choices under the incoming and outgoing sections, and the commonly encountered and dreaded errors are HTTP 403 and 502. Code hosted in an app service might find that it can connect to a store or an event hub if the app has VNet integration, and the team might also want a private, dedicated connection to another resource or network; yet these options have requirements that differ from one another. For example, to create a private endpoint, the private endpoint network policies must be disabled, and the subnet must have no delegation and must have available IP addresses. The setting for disabling private endpoint network policies can be hard to find in the management portal user interface. When the endpoints are created, they must be associated with the privatelink.azurewebsites.net DNS zone for them to be reachable from other resources. Certain subnets cannot be used simply because a conflicting resource is already placed there, and the private endpoint and the VNet integration must not share the same network.

Consequently, taking a resource private requires the organization to pre-create subnets, and even a DNS zone specifically for 'privatelink.azurewebsites.net'. Then the other resources must be connected to the app service. In the case of the application gateway, a DNS zone group must be created so that the gateway can resolve the app services by their names; this step is often overlooked after the endpoints are created on the app services. Similarly, private virtual network links must be created.

It is in the interest of the deployment to create a single unified virtual network on which all the resources and their networks are placed. Distinct virtual networks, aka VNets, often result from independent initiatives, and they then require peering or links to be established. The same is true of creating too many subnets, because they exhaust IP address ranges that are often underutilized. Devices connected to a subnet take their IP addresses from the subnet's CIDR, and this information comes in handy for knowing which subnets are unused and can be repurposed. Once the subnet and VNet are created, the options to add network security groups and gateways can be decided. Traffic from the virtual networks and subnets is hard to visualize, but by enumerating the resources and their default route to the internet, it is possible to place the gateways appropriately. Otherwise, those resources might not have outbound internet connectivity.

Finally, for the application gateway to be allowed to access resources and networks as its backend pool members, its address must be allowed in the access restrictions of all those resources and networks.
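The wiring described above can be outlined with the Azure CLI. The resource, network, and endpoint names below are placeholders, and flag spellings vary between CLI versions (newer versions use `--private-endpoint-network-policies Disabled`), so treat this as a sketch of the sequence rather than a ready-to-run script:

```shell
# Placeholder names: myRG, myVnet, pe-subnet, myApp are illustrative only.

# 1. Private endpoints require network policies disabled on their subnet.
az network vnet subnet update \
  --resource-group myRG --vnet-name myVnet --name pe-subnet \
  --disable-private-endpoint-network-policies true

# 2. Create the private endpoint for the app service (group id "sites").
appid=$(az webapp show -g myRG -n myApp --query id -o tsv)
az network private-endpoint create \
  --resource-group myRG --vnet-name myVnet --subnet pe-subnet \
  --name myApp-pe --private-connection-resource-id "$appid" \
  --group-id sites --connection-name myApp-pe-conn

# 3. Create the privatelink.azurewebsites.net zone, link it to the VNet,
#    and add a DNS zone group so the endpoint registers its A record and
#    the application gateway can resolve the app service by name.
az network private-dns zone create -g myRG -n privatelink.azurewebsites.net
az network private-dns link vnet create -g myRG \
  --zone-name privatelink.azurewebsites.net \
  --name myVnet-link --virtual-network myVnet --registration-enabled false
az network private-endpoint dns-zone-group create -g myRG \
  --endpoint-name myApp-pe --name default \
  --private-dns-zone privatelink.azurewebsites.net --zone-name sites
```

The DNS zone group in step 3 is the piece the article calls out as often overlooked; without it, the endpoint exists but names do not resolve to the private address.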

A working example of this description is available here: network4apps.zip

Saturday, September 30, 2023

 

This is a continuation of a previous article on the use of Artificial Intelligence in product development. This article discusses the bias in AI as outlined in reputable journals.

In summary, some of the bias in AI comes from inaccurate information produced by generative AI; the rest comes from bias served up by the AI tools themselves. Both can be mitigated with a wider range of datasets. AI4ALL, for instance, works to feed AI a broad range of content so that it is more inclusive of the world. Another concern has been over-reliance on AI, and a straightforward way to resolve it is to balance the use of AI with tasks requiring skilled supervision.

The methodical approach to managing bias involves three steps: first, data and design must be decided; second, outputs must be checked; and third, problems must be monitored.

Complete fairness is impossible, in part because decision-making committees are not adequately diverse, and because choosing an acceptable threshold for fairness and determining whom to prioritize are challenging. This makes a blueprint for fairness in AI that works across companies and situations daunting. An algorithm can check for adequate representation or apply a weighted threshold, and both checks are in common use, but unless equal numbers of each class are included in the input data, these selection methods are mutually exclusive. The choice of approach is therefore critical. Along with choosing the groups to protect, a company must determine the most important issue to mitigate: differences could stem from the sizes of the groups or from the accuracy rates between the groups. The choices might result in a decision tree, where the decisions must align with company policy.

Missteps remain common. Voice recognition, for example, can leverage AI to reroute sales calls but might be prone to failures with regional accents. In this case, fairness could be checked by creating a more diverse test group. The final algorithm and its fairness tests need to consider the whole population and not just those who made it past the early hurdles. Model designers must accept that data is imperfect.

The second step, checking outputs, involves checking fairness by way of intersections and overlaps in data types. Even when companies have good intentions, there's a danger that an ill-considered approach can do more harm than good: an algorithm that is deemed neutral can still have a disparate impact on different groups. One effective strategy is a two-model solution, such as the generative adversarial networks approach. This balances the original model against a second model that checks for fairness to individuals; the two converge to produce a more appropriate and fair solution.

The third step is to create a feedback loop. Frequently examining the output and looking for suspicious patterns on an ongoing basis is important, especially where the input evolves over time. Since bias usually goes unnoticed, this can catch it. A fully diverse outcome can look surprising, so people may inadvertently reinforce bias when developing AI; this is evident with rare events, where people may object when one occurs yet not object when it fails to happen. A set of metrics such as precision and recall can be helpful, since predictive factors and error rates are both affected. Ongoing monitoring can be rewarding: demand forecasting, for example, can show improved accuracy by adapting to changes in data and correcting historical bias.
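The kind of per-group check that the metrics step describes can be sketched in a few lines. The groups, labels, and predictions below are synthetic, and recall stands in for whichever fairness metric a team actually monitors:

```python
# Compare recall across two groups to surface the kind of gap that
# ongoing monitoring is meant to catch. All data here is made up.
def recall(y_true, y_pred):
    """Fraction of actual positives the model correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else 0.0

# group -> (actual labels, model predictions)
groups = {
    "A": ([1, 1, 1, 0, 0], [1, 1, 1, 0, 0]),   # all positives caught
    "B": ([1, 1, 1, 0, 0], [1, 0, 0, 0, 0]),   # two positives missed
}
recalls = {g: recall(t, p) for g, (t, p) in groups.items()}
gap = max(recalls.values()) - min(recalls.values())
print(recalls, gap)
```

Run on a schedule against fresh output, a widening gap between groups is exactly the "suspicious pattern" the feedback loop is meant to flag.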

A conclusion is that bias may not be eliminated but it can be managed.

 

Friday, September 29, 2023

This is a summary of the book titled “The Power of Not Thinking” by Simon Roberts, a business anthropologist, which describes embodied knowledge that has not been inculcated into Artificial Intelligence. Embodied knowledge derives from the body through movement, muscle memory, sight, hearing, taste, smell, and touch. It includes experiences that evoke deep sensory memories and pattern recognition, allowing us to take actions without thought. These embedded memories enable us to feel, rather than merely reason, our way through many decisions. He makes a case for companies to pair data with experiential learning.

A long tradition has created a dichotomy between mind and body in which thinking belongs to the brain, but we learn in ways different from computers. He takes the example of driving: we take the wheel, feel the road, engage both body and brain along with our common sense, and master the skill over time until we can drive on autopilot. AI, on the other hand, depends on sensors and pattern recognition, processing inputs in milliseconds and responding immediately. Neither can cope with every driving situation, but more experienced drivers can handle most of them automatically.

The idea that mind and body are different, also called Cartesian dualism, regards the body as a thing that the mind operates. By dismissing senses and emotions as unreliable inputs, this worldview initiated the scientific method, experimentation, and evidence-based thinking. Yet human intellect is not merely a product of the brain; the body’s engagement with its surroundings also forges comprehension of the world. Both the body and the brain gain knowledge, and experience and routine help us create embodied knowledge.

Embodied knowledge is acquired through the following five methods:

Observation – observation is a whole-body experience: watching a tennis player, for example, we feel the grip and hear the racket hitting the ball, triggering the same reactions in the brain and the body as when we actually play.

Practice – we may begin by observing others, but acquiring new skills like riding a bike, skiing, or sailing demands experience, practice, observation, and instruction. With more experience and practice, we can do the activity without thinking.

Improvisation – AI is still governed by supervised learning and big data; for humans, on the other hand, judgement based on incomplete information proves crucial. For example, firefighters learn to sense how structures will collapse because they can feel it.

Empathy – understanding how another person uses a tool or navigates the world requires going beyond reading about it or talking to them.

Retention – when we taste or smell, memories flood the mind, demonstrating that recollection resides in the body as well as the brain.

Firms spend a lot to collect and crunch data but through experience, decision makers can better utilize the data. When leaders at Duracell wanted to understand their market for outdoor adventures, they pitched tents in the dark, cooked in the rain, and slept in a range of temperatures. This helped them pair their insights with the data analysis and the resulting campaign was one of the most successful. The author asserts that statistics can tell a relevant story, but they have limited ability to tell a nuanced human story. Policymakers just like business leaders can also benefit from this dual approach and the author provides examples for that as well.

Software developers are improving AI and robots by introducing state read from sequences, and they have found that AI that learns through trial and error can outperform humans at some of the most complex games. At the same time, it is our embodiment that makes our intelligence hard to reproduce.

 


Wednesday, September 27, 2023

 

This is a continuation of a previous article on AI for product development.  Since marketing is one of the core influences on product development, this article reviews how AI is changing marketing and driving rapid business growth.

Marketers use AI to create product descriptions. Typically, these involve words and phrases that come from research on the target audience, but when marketers use the same ones over and over, the copy becomes repetitive. AI rephrasing tools can help teams find new ways of describing the most prominent features of their products.

Content marketers are often caught up in the task of creating more content, but it is equally important to optimize the content already on the site. As content ages, it becomes dated and less useful, which brings down its SERP ranking. Given a particular URL, AI can identify the keywords that URL is ranking for and which keywords need a boost. This helps marketers go further.

AI is most used in data analytics. Analyzing the performance of various content types, campaigns, and initiatives used to be time consuming simply because the data had to be sourced from various origins and the tools varied widely. Now teams can quickly get and analyze the data they are interested in. Business Intelligence teams continue to tackle complex data, but it is easier than ever for most users to get started with data analytics.

AI can also help optimize marketing activities by providing insights into customer behavior and preferences, identifying trends and patterns, and automating processes such as content creation, customer segmentation and more. AI initiatives achieve better results and help the marketing strategy better connect with the customers.

Website building, personalized targeting, content optimization, or even chatbot assistance for customer support are some well-known areas for AI based enhancements. AI content generation can help accelerate content creation. Fact-checking information in the articles and ensuring that messaging and tone are aligned with the brand voice continue to require supervision.

The right tool for the right job adage holds truer than ever in the case of AI applications. Technology and infrastructure can evolve with business as it grows, and long-term investments certainly help with the establishment of practice. Text to Text and Text-to-Image generators are popularized by tools like ChatGPT and DALL-E 2. These make use of large language models, natural language processing, and artificial neural networks. The caveat here is that different tools are trained on different models. It is also possible to mix and match, for example using ChatGPT to create a prompt and then use the prompt with DALL-E 2 or Midjourney. Social media platforms like Facebook and Instagram offer ad targeting and audience insights. Email marketing platforms like Mailchimp provide AI powered recommendations for subject lines and send times.

Some of the bias in AI comes from inaccurate information produced by generative AI; the rest comes from bias served up by the AI tools themselves. Both can be mitigated with a wider range of datasets. AI4ALL, for instance, works to feed AI a broad range of content so that it is more inclusive of the world. Another concern has been over-reliance on AI, and a straightforward way to resolve it is to balance the use of AI with tasks requiring skilled supervision.

 

Tuesday, September 26, 2023

 Continued from previous post...

Third, AI can change how customer feedback is collected. A minimum viable product is nothing more than a good start, and a feedback loop with the target audience is essential to taking it to completion. Until recently, product analytics was largely restricted to structured or numerical data. Notable AI experts argue that this is merely 20% of the data, and that companies hold the remainder as unstructured data in the form of documents, emails, and social media chatter. AI is incredibly good at analyzing large amounts of data and even benefits from being tuned with more training data. Compare this with focus groups, which are not always accurate representations of customer sentiment and leave the product team vulnerable to creating a product that does not serve its customers well. These same experts also make a case for generative AI to help convert customer feedback into data for the business.
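The idea of converting unstructured feedback into data can be illustrated with a deliberately tiny sketch. A real pipeline would use an NLP model or a generative AI service; the feedback strings and keyword lists below are invented stand-ins that only show the structured output such a pipeline produces:

```python
# Toy illustration: turn free-text feedback into structured sentiment counts.
feedback = [
    "Love the new dashboard, setup was easy",
    "App crashes on login, very frustrating",
    "Easy to use but crashes sometimes",
]
positive = {"love", "easy", "great"}      # invented keyword lists standing
negative = {"crashes", "frustrating", "bug"}  # in for a trained model

counts = {"positive": 0, "negative": 0}
for text in feedback:
    words = set(text.lower().replace(",", "").split())
    counts["positive"] += len(words & positive)
    counts["negative"] += len(words & negative)
print(counts)
```

However crude, the output is the point: free-form chatter becomes numbers a product team can track release over release.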

Fourth, AI can help redefine the ways teams develop products, including how engineers and product managers interact with software. In the past, professionals were trained in the use of a software product suite to the point where they became designated experts who understood how each piece worked and imparted the same to others via training. With AI, new team members can be onboarded rapidly by letting the AI generate the necessary boilerplate or prefabricated units, or by providing a more interactive way of getting help on software and hardware tools. What used to be wire diagrams and prototyping can now be replaced with design examples generated from constraints provided to chatbots. The interface is as human as a chat interface, so those wishing to use it need to know nothing about the internals of machine learning.

Finally, AI helps with creativity as well. Machine learning algorithms are already used to learn patterns that transform inputs to outputs and then apply those patterns to unseen data. The new generative models take this a step further by encoding state across a constant stream of inputs, which not only helps with understanding such things as sentiment but also with generating suitable output without necessarily understanding or interpreting each input unit of information. This is at the core of capturing how a software engineer creates software, a designer creates a design, or an artist creates art.

By participating in the thinking behind the creation, AI is poised to extend the abilities of humans past their current restrictions. Terms like co-pilot are beginning to be used to describe this cooperative behavior that comes to the aid of product managers, software engineers, and designers.

The ways in which AI and humans can improve each other in the development of a product form a horizon filled with possibilities, and some trends are already being embraced in the industry. Customer experience is shifting in favor of self-service with near-human experiences via interactive chats, and industrial applications that leveraged machine learning models are actively replacing their v1.0 models with generative v2.0 models. More interactive and engaging experiences, whether recommendations or experiences spanning content, products, and frameworks, are certainly being envisioned. By virtue of both the data and the analysis models, AI can not only improve but redefine the product development process.

Experimentation at various scopes and levels is one way to increase our understanding of the role AI can play, and it is getting much easier to get started. It is even possible to delegate the knowledge of machine learning to tools that work across programmatic interfaces regardless of the purpose or domain of the application. Just as prioritizing use cases was a way to improve the return on investment for a product, AI initiatives must also be deliberated to determine the high-value engagements. In similar fashion, leadership and stakeholder buy-in is necessary to articulate the value added in the bigger picture, as well as to take questions and dispel concerns such as privacy and data leakage. When making the case to leadership for investment, it helps to limit the role of AI to that of a trusted co-pilot. Lastly, the risks of not investing in AI could also be called out.