Building automated Virtual Machine scalability with VMSS in Azure

Are you planning to lift and shift VMs into the Cloud? Or have you already migrated and are now looking for a way to scale them automatically?

Well, this article can be right for you!

When it comes to lifting and shifting applications and/or services hosted on Virtual Machines (VMs) from an on-premises environment to the Cloud, designing a strategy for some form of automated VM scaling can be a challenging task.

In general, some type of scalability can be achieved with Virtual Machines as they are, but it is going to be very cumbersome.

That’s why it is very important to work on the cloud-based infrastructure design prior to the lift-and-shift process itself. And remember, VM scaling needs to happen automatically.

The approach I like to teach others is to automate everything that follows repetitive cycles.

But hang on, what if I scale up vertically in the Cloud by adding extra hardware resources (RAM, CPU) to the machine hosting my VM? .. Yes, this may work, but only with some scripting done first, and that scripting is unlikely to be repeatable with the same set of input properties…

But what if full scaling automation could be accomplished with better running-cost efficiency and as little configuration work as possible?

Yes, that is all possible these days, and I am going to share how to use one of the options on the market.

The option I chose for one of my projects comes from the Microsoft Azure resource stash.

Why is that?

It’s no secret that I’ve worked with Azure since its early days, so I have built up long experience with the platform. I also have to admit that Azure’s software engineers have done a great job building the platform APIs and the web UI (the Azure Portal) to make this process as seamless and easy to use as possible. More on my deciding factors later…

Let’s get started

The Azure resources I mentioned in the prologue are:

  • VMSS in the Azure portal
  • Azure Compute Galleries in the Azure portal
  • Azure Load Balancers in the Azure portal

My reasons for choosing Azure

Every project has different needs and challenges coming from the business domain requirements. More importantly, the economic justification of the project’s complexity is usually what drives the project’s technological path in the design stage.

For this project, I was lucky because the customer I designed this solution for already had part of their business applications and services in Azure. Also, the customer’s ambitious plans to migrate everything else from their on-premises data center to the Cloud in the near future sealed my decision, and therefore the Cloud in Azure was the way to go.

Infrastructure diagram

Let’s get a better understanding of the designed system infrastructure from the simplified infrastructure diagram below.

Take it with a grain of salt, as its main purpose is to highlight the main components used in the project, which are discussed in this post.

VMSS simplified infrastructure diagram

What I like most about the selected Azure stack

  • VM redundancy across multiple data centers globally
  • the ability to multiply VM instances as needed, with an option to resize the instance computing power (RAM, CPU, etc. => vertical scaling)
  • high service availability and resilience (subject to infrastructure design – in my case, I provisioned a total of two VMSSs, each in a geographically different data center)
  • the flexibility of building my own rules in VMSS that decide whether the number of VM instances goes up or down
  • an Azure Load Balancer can be linked to a VMSS easily
  • a VMSS can provision up to 600 VM instances (and that is a lot!)
  • the Azure Compute Gallery (ACG) service can replicate images globally, supports image versioning and auto-deployment of the latest image version to running VM instances (and that was a hot feature for me)

Steps to Provision Services in Azure

In a nutshell, follow these steps to provision Azure services and build the cloud infrastructure from the ground up:

  1. Lift and shift the VM into Azure (I can recommend using the Azure Migrate service to start this process)
  2. Create a new Azure resource: Azure Compute Gallery
  3. Go to the running instance of the VM and capture and generalize an image of the migrated VM
Capturing the VM state into an image, Azure portal
Selecting the option to generalize the captured VM state into the image
  4. Create two replicated images (one for each data center) – see the sketch after this list
Two replicated images setting
  5. Save the image into the Azure Compute Gallery created in step 2
  6. Create two new Azure resources: Virtual Machine Scale Sets (in geographically different data centers, as per the ‘Target regions’ settings in step 4, for scale set redundancy)
  7. Create scale-out/in rules in the VMSS
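Since I went through these steps in the Azure portal, the screenshots above tell most of the story, but here is a rough sketch of how steps 4 and 5 could be scripted instead. It is a minimal sketch assuming the @azure/arm-compute and @azure/identity packages; the resource group, gallery and image names, the regions and the source image ID are placeholders, not values from the project.

```typescript
// Sketch: publish a generalized VM image into an Azure Compute Gallery with
// two target regions, so each regional VMSS can consume a local replica.
// All names and IDs below are placeholders used for illustration only.
import { DefaultAzureCredential } from "@azure/identity";
import { ComputeManagementClient } from "@azure/arm-compute";

const subscriptionId = "<subscription-id>";
const compute = new ComputeManagementClient(new DefaultAzureCredential(), subscriptionId);

async function publishImageVersion(): Promise<void> {
  await compute.galleryImageVersions.beginCreateOrUpdateAndWait(
    "rg-vmss-demo",      // resource group
    "acgDemoGallery",    // Azure Compute Gallery (step 2)
    "app-vm-image",      // gallery image definition
    "1.0.0",             // image version
    {
      location: "westeurope",
      publishingProfile: {
        replicaCount: 1,
        // One replica per data center that hosts a VMSS (step 4, ‘Target regions’).
        targetRegions: [
          { name: "westeurope", regionalReplicaCount: 1 },
          { name: "northeurope", regionalReplicaCount: 1 },
        ],
      },
      storageProfile: {
        // The captured, generalized image from step 3.
        source: {
          id: "/subscriptions/<subscription-id>/resourceGroups/rg-vmss-demo/providers/Microsoft.Compute/images/app-vm-image-capture",
        },
      },
    }
  );
}

publishImageVersion().catch(console.error);
```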

Scale-out/in VMSS rules example

The screenshot below shows an example of setting up the scaling rules for one of the VMSS instances.

VMSS scaling rules example

As you can see from the default profile in the picture above, this VMSS does not run any VM instances by default (Minimum = 0). Instead, it spins some up (scales out) based on these criteria:

  1. The average CPU usage of the main VMSS instance hosted in data center A increases, or
  2. The load balancer availability drops below 70% in a given timeframe

Very similar rules are used in the reverse process, aka scaling in.
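For reference, here is roughly what such a profile looks like when defined through the Azure Monitor autoscale API rather than the portal. This is a minimal sketch assuming the @azure/arm-monitor package; the resource names, thresholds and time windows are illustrative only, and the load balancer availability rule from criterion 2 would be added to the same rules array with the corresponding load balancer metric.

```typescript
// Sketch: CPU-based scale-out/in rules for a VMSS via Azure Monitor autoscale.
// Resource names, IDs and thresholds are placeholders for illustration.
import { DefaultAzureCredential } from "@azure/identity";
import { MonitorClient } from "@azure/arm-monitor";

const subscriptionId = "<subscription-id>";
const vmssId =
  "/subscriptions/<subscription-id>/resourceGroups/rg-vmss-demo/providers/Microsoft.Compute/virtualMachineScaleSets/vmss-dc-a";

const monitor = new MonitorClient(new DefaultAzureCredential(), subscriptionId);

async function applyScalingRules(): Promise<void> {
  await monitor.autoscaleSettings.createOrUpdate("rg-vmss-demo", "vmss-dc-a-autoscale", {
    location: "westeurope",
    targetResourceUri: vmssId,
    enabled: true,
    profiles: [
      {
        name: "Default",
        // No instances running by default; autoscale spins them up on demand.
        capacity: { minimum: "0", maximum: "5", default: "0" },
        rules: [
          {
            // Scale out: average CPU above 70% over 5 minutes -> add 1 instance.
            metricTrigger: {
              metricName: "Percentage CPU",
              metricResourceUri: vmssId,
              timeGrain: "PT1M",
              statistic: "Average",
              timeWindow: "PT5M",
              timeAggregation: "Average",
              operator: "GreaterThan",
              threshold: 70,
            },
            scaleAction: { direction: "Increase", type: "ChangeCount", value: "1", cooldown: "PT5M" },
          },
          {
            // Scale in: average CPU below 30% over 10 minutes -> remove 1 instance.
            metricTrigger: {
              metricName: "Percentage CPU",
              metricResourceUri: vmssId,
              timeGrain: "PT1M",
              statistic: "Average",
              timeWindow: "PT10M",
              timeAggregation: "Average",
              operator: "LessThan",
              threshold: 30,
            },
            scaleAction: { direction: "Decrease", type: "ChangeCount", value: "1", cooldown: "PT10M" },
          },
        ],
      },
    ],
  });
}

applyScalingRules().catch(console.error);
```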

If you’re planning to use a similar concept in your solution, account for the VM operating system boot time in your metrics if high availability and responsiveness of the VM-hosted service are important to you.

Microsoft Azure recently introduced a new feature called Predictive autoscale with Pre-launch setup (at the time of writing this article, in preview only), which should solve the VM boot time issue for most use cases. It works on cyclical workload patterns determined by machine learning and predicts the need for a scale-out action in advance.

I have to say that using machine learning capabilities in this sort of behaviour analysis is a very smart move forward by Microsoft.

I think VMSS has a lot to offer to businesses starting their journey to the Cloud.

The process of setting the infrastructure up is not complicated and can be done through the Azure portal UI in no time. The VMSS scaling rules offer a lot of options to choose from, and the level of integration with other Azure resource types is very mature, too.

Thanks for staying, subscribe to my blog and leave me a comment below.

cheers

Why is a CDN very important for your static HTTP content?

Almost everyone has heard about CDNs, but what actually is one?

Explained: A CDN (Content Delivery Network) is a set of geographically distributed servers (proxy servers, if you like) that cache static content on physical hardware close to consumers, which speeds up the actual download to its destination.

The global CDN network (example)

Now, let me explain why a CDN is such a big player in a solution’s infrastructure and why no solution developer/architect should overlook it.

But before we go any further, let me mention another term: response latency.

Explained: In other words, the time needed to download the website content entirely to the consumer’s (end-user’s) device.

And as you can imagine, this is another very important factor to keep an eye on if you want to keep your audience engaged with your content for as long as possible.

Low latency means better responsiveness and a better user experience with the website (web service).

The question is, how do you achieve the lowest latency possible? … there are two ways to do it:

  • to use a very fast network for content delivery, or
  • to cache the content geographically as close to your audience as possible

… and the combination of both is the ultimate state the global network is moving towards (near real-time responses).
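How much of your content a CDN can actually serve from the edge is largely driven by the caching headers your origin sends. As a minimal sketch (a node.js/Express origin is assumed here, and the paths and lifetimes are only illustrative), long-lived headers on static assets let the CDN keep them close to the audience while dynamic responses opt out:

```typescript
// Sketch: an Express origin that marks static assets as cacheable for a CDN
// edge (e.g. Cloudflare) while keeping dynamic API responses uncached.
import express from "express";

const app = express();

// Fingerprinted static assets can be cached aggressively at the edge.
app.use(
  "/static",
  express.static("public", {
    maxAge: "30d",    // Cache-Control: max-age=2592000
    immutable: true,  // file names change whenever the content changes
  })
);

// Dynamic responses explicitly opt out of edge caching.
app.get("/api/health", (_req, res) => {
  res.set("Cache-Control", "no-store");
  res.json({ status: "ok" });
});

app.listen(3000, () => console.log("origin listening on :3000"));
```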

And as all of you probably understand by now, to get the best ROI on the time you put into your content, it is very important to have your infrastructure in the best shape possible. Keeping your visitors happy by serving them content as fast as possible helps build better website awareness and grow your audience.

What CDN service provider do I use?

Among all of the CDN providers I have come across, Cloudflare is the one I was attracted to most.

.. for many reasons:

The main one is that the service offers reasonably good DDoS protection and well-distributed, fast CDN server nodes.

Cloudflare account dashboard

To me, it is almost unbelievable that you get all of that for as little as $0! Yes, all of that can be yours for FREE! A very sweet deal, don’t you think? (By the way, I am not participating in any affiliate program!)

Setting all of that up is a really straightforward and well-documented process.

If you want to know more, visit this guide on how to set it all up.

The entire configuration process becomes even easier if the domain name is purchased separately from the web hosting (it is easier to maintain the DNS configuration through the domain name provider’s portal – which every solid domain name provider has).

Another feature Cloudflare provides is Argo, a smart route finder across the Cloudflare network that helps decrease loading time and reduce bandwidth costs.

I have been using this service for one of my clients who provides address lookup and address validation services over REST APIs hosted in the Cloud across multiple geographically different data centers, and I must say that the customer experience has been very positive since.

In numbers, I was able to reduce HTTP response latency from 1.4 s down to 0.5 s! That is a very good performance improvement for a business where time is of the essence.
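Your numbers will of course differ, but measuring is cheap. Here is a quick sketch of how such a comparison can be taken (Node 18+ with the global fetch API is assumed, and the URL is a placeholder); run it once against the origin directly and once against the CDN-fronted hostname:

```typescript
// Rough end-to-end latency check: average response time over a few samples.
const url = "https://api.example.com/address/lookup?q=test"; // placeholder endpoint

async function averageLatencyMs(samples = 10): Promise<number> {
  let total = 0;
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    const res = await fetch(url);
    await res.arrayBuffer(); // make sure the whole body is downloaded
    total += performance.now() - start;
  }
  return total / samples;
}

averageLatencyMs().then((ms) => console.log(`average latency: ${ms.toFixed(0)} ms`));
```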

I am leaving this link here if you are interested in knowing more about this.

Anyway, thank you again for visiting this post. I hope you enjoyed reading it, and let me know which CDN provider you’re using!

cheers

Distributed System Architecture: Modern Three-Tier

The most used infrastructure architecture for SMEs and data-oriented small businesses operating on a global scale.

When it comes to the technology stack, and if we are talking about web applications, then the most common setup I have seen is React.js used for an SPA or PWA frontend as the client/presentation layer, node.js as the REST API server/business layer, and Cassandra (open source) as the distributed (optionally cloud-based), fault-tolerant, performant, durable, elastic, …, well-supported (don’t forget decent support from the community!), decentralized and scalable database/persistence layer.

Your database does not have to tick all of the boxes above (apart from being distributed), but if you’re going to put all that effort into building this type of infrastructure, you want to make sure the database meets as many of these features of a modern, long-lasting solution as possible (think about your development ROI, devs!).

The way it works is that the client application (fetched from a store or an application server) handles user tasks by itself on the client device, with data supplied over the API (node.js). In the event of the API server running out of breath, a new instance of the application API server is provisioned (a new node gets created, horizontal scaling -> scaling out/in).

The database, as it stands in this model, does not have this scaling capability, but it can scale up or down instead as needed (the service is given more or fewer system resources, vertical scaling -> scaling up/down).
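To make the middle tier a bit more concrete, here is a minimal sketch of such a stateless node.js (Express) REST API backed by Cassandra through the DataStax driver; the keyspace, table, column and host names are made up for illustration. Because an API node like this keeps no state of its own, more identical nodes can be added behind the load balancer whenever it runs out of breath:

```typescript
// Sketch: a stateless REST API node (business layer) reading from Cassandra
// (persistence layer). Hosts, keyspace and table names are placeholders.
import express from "express";
import { Client } from "cassandra-driver";

const db = new Client({
  contactPoints: ["cassandra-node-1", "cassandra-node-2"], // placeholder hosts
  localDataCenter: "datacenter1",
  keyspace: "app",
});

const app = express();

app.get("/api/customers/:id", async (req, res) => {
  try {
    const result = await db.execute(
      "SELECT id, name, email FROM customers WHERE id = ?",
      [req.params.id],
      { prepare: true }
    );
    const row = result.first();
    if (row) {
      res.json(row);
    } else {
      res.status(404).json({ error: "not found" });
    }
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: "database error" });
  }
});

db.connect().then(() => app.listen(3000, () => console.log("API node listening on :3000")));
```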

An illustration of how it all gets wired up together

1.1 Modern 3-Tier Distributed System Architecture

Summary

Pros

  • great logical separation and isolation with a lot of room for cybersecurity policy integration
  • uncomplicated architecture when it comes to problem investigation and troubleshooting
  • easy to medium complexity to get the infrastructure up and ready for development and maintenance (less DevOps, yay!)
  • an easy option to replicate the infrastructure on localhost for development purposes (it just makes everything easier during branch development)
  • infrastructure running cost is relatively small

Cons

  • decommissioning provisioned nodes can be tricky (depending on the technology used)
  • data synchronization and access need orchestration (subject to database type)
  • shipping new features requires a full application server deployment (downtime)

Thanks for staying, subscribe to my blog and leave me a comment below.

cheers