One aspect that is not often called out is that these app services must be protected by a web application firewall (WAF) that enforces the OWASP core rule set. This is typically addressed with an Azure Application Gateway or Azure Front Door. With slight differences between the two, both can be used to switch traffic to an alternate deployment, but only one of them is the preferred choice for switching to a different region. Front Door can register a unique domain per backend pool member so that the application receives all traffic addressed to that domain at the root “/” path as if it were sent to the application directly, and it can shift traffic between regions, such as between Central US and East US 2. Application Gateway, on the other hand, is a regional resource with one instance per region. Both can be confined to a region by directing all traffic between their frontend and backends through the same virtual network. Networking infrastructure is probably the biggest up-front investment for BCDR planning because each virtual network is specific to a region. Having the network in the secondary region already up and running allows the rest of the deployment to be created on demand when a failover is required. As such, an Azure Application Gateway or Front Door must be considered part of the workload along with the other app services and planned for migration.
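To make the point about pre-provisioning networking concrete, the following is a minimal sketch, using the Azure SDK for Python (azure-identity and azure-mgmt-network), of creating the secondary-region virtual network ahead of time so that an Application Gateway, compute, and private endpoints can be stood up on demand during a failover. The subscription ID, resource group, region, and address ranges are illustrative placeholders, not values from any specific deployment.

```python
# Minimal sketch: pre-provision the secondary-region virtual network for BCDR.
# Assumes azure-identity and azure-mgmt-network are installed, the resource
# group already exists, and the caller has rights on the subscription.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "bcdr-secondary-rg"    # hypothetical resource group

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, SUBSCRIPTION_ID)

# Create the virtual network in the secondary region so that the rest of
# the deployment can be created on demand during a regional failover.
vnet = network_client.virtual_networks.begin_create_or_update(
    RESOURCE_GROUP,
    "bcdr-vnet-eastus2",
    {
        "location": "eastus2",
        "address_space": {"address_prefixes": ["10.1.0.0/16"]},
        "subnets": [
            # Application Gateway requires a dedicated subnet.
            {"name": "appgw-subnet", "address_prefix": "10.1.1.0/24"},
            # Subnet reserved for private endpoints to regional dependencies.
            {"name": "private-endpoints", "address_prefix": "10.1.2.0/24"},
        ],
    },
).result()

print(f"Provisioned {vnet.name} in {vnet.location}")
```

With the address space and subnets reserved up front, the remaining resources can be expressed in IaC and applied against this network only when the secondary region actually needs to be activated.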
Workload #3: Analytical workspaces: As with most data science efforts, some workloads require interactive use while others can be scheduled to run non-interactively. Examples of these workspaces include Azure Databricks and Azure Machine Learning. The defining characteristic of this kind of workload is that each workspace is a veritable ecosystem in itself, one that relies heavily on compute and externalizes storage. Many workspaces come with external storage accounts, databases, and Snowflake warehouses. Another characteristic is that these resources often require both public and private plane connectivity, so a workspace created in another region must re-establish connectivity to all of its dependencies, including but not limited to private and public source depots, container image repositories, and external databases and warehouses. Because these dependencies sit in different virtual networks, new private endpoints from those virtual networks become necessary. Just as with AKS, the previous workload discussed above, manifesting all the dependencies accrued over time can be difficult when they are not captured in IaC. More importantly, it is the diverse set of artifacts the workspace accumulates, such as experiments, models, jobs, and pipelines, that poses the real challenge: these may live as objects in the workspace catalog, but exporting and importing them into another workspace does not pan out as cleanly as redeploying IaC. With a diverse and distinct set of notebooks from different users and their associated dependencies, even listing these artifacts can be hard, much less migrating them to a new region. Users can only be encouraged to use Unity Catalog and to keep all artifacts under version control external to the workspace, but those practices lack the rigor of databases. That said, spinning up a new workspace and re-connecting the different data stores gives users a way to be selective about what they bring to the new workspace.
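As an illustration of the inventory problem, the following is a minimal sketch that walks an existing Databricks workspace with the Workspace REST API (the /api/2.0/workspace/list and /api/2.0/workspace/export endpoints) to list notebooks and export them in source form, so that users can decide selectively what to import into the workspace in the new region. The workspace URL and personal access token are placeholders, and error handling is kept to a minimum.

```python
# Minimal sketch: inventory and export notebooks from a Databricks workspace
# so they can be selectively imported into a workspace in another region.
# The host URL and token below are placeholders for your own workspace.
import base64
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                             # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def list_notebooks(path="/"):
    """Recursively walk the workspace tree and yield notebook paths."""
    resp = requests.get(
        f"{HOST}/api/2.0/workspace/list",
        headers=HEADERS,
        params={"path": path},
    )
    resp.raise_for_status()
    for obj in resp.json().get("objects", []):
        if obj["object_type"] == "DIRECTORY":
            yield from list_notebooks(obj["path"])
        elif obj["object_type"] == "NOTEBOOK":
            yield obj["path"]


def export_notebook(path):
    """Export a single notebook in SOURCE format and return its text."""
    resp = requests.get(
        f"{HOST}/api/2.0/workspace/export",
        headers=HEADERS,
        params={"path": path, "format": "SOURCE"},
    )
    resp.raise_for_status()
    return base64.b64decode(resp.json()["content"]).decode("utf-8")


if __name__ == "__main__":
    # Inventory first; export and import selectively afterwards.
    for nb_path in list_notebooks("/Users"):
        print(nb_path)
```

Jobs, experiments, and registered models have their own APIs in Databricks and Azure Machine Learning, so a similar inventory-then-select approach applies to them as well.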