Skills needed to become the ultimate Frontend Software Developer

If you are seriously thinking about starting your career as a Frontend Software Developer, you can’t go wrong with any of the technical skills on the list below.

Especially for those who want to be geographically independent – aka, you’ll be able to find a job anywhere you go and maintain your cash flow…

From my working experience, these are the skills that resonate most on the market right now (2021), ordered from the most wanted down:

Bonus skills:

There are plenty of training materials online to start your journey.

But … I strongly recommend starting with the basics and principles first before jumping into core development. This can save you a lot of time investigating faulty code and prevents unnecessary initial frustration (learning curve).

Btw, I am more of a person who learns from visual sources, so I can give you a few tips on what I use to get you started:

  1. Youtube.com
  2. Technology homepage and community forums (for example homepage for Node.js, and community forum dev.to)
  3. docs.microsoft.com/
  4. channel9.msdn.com/

I hope you enjoyed the read today.

Flick me an email and let me know whether you found this article useful.

/cheers

5 software development skills to learn for rapid development

The world is changing and technology with it.

Mainly because product deployments are becoming more frequent, and software houses and service-oriented companies are pushing hard on the T2M (Time to Market) selling factor to stay visible in the market.

Although the non-functional requirements of upcoming projects stay (mostly) the same:

  • the infrastructure design leverages Service-Oriented Architecture
  • the solution must be scalable and automated to provision
  • the solution must be capable of being hosted in the Cloud as well as on hybrid network infrastructure
  • the solution is ISO 9126 compliant
  • first release completion time of 6 months

the number of functional requirements needed for the first release keeps growing and in most cases does not help to achieve delivery in the given time.

And that is pretty bad.

Therefore, a “smart” selection of the frameworks and tools used to build whatever solution the investor wants is an absolute must.

But you won’t be able to succeed without the technical knowledge and experience of the production team! (this is where things get serious)

To get familiar with what skill sets to seek out while building a team capable of producing business value early from project initiation, I have created a list of suggested frameworks and platforms to use.

Hope it helps you battle the constant market competition and investor pressure, and elevate progress in the initial phase of solution development as much as possible.

  1. Outsystems PaaS

    You are maybe already familiar with the term “low code“ – the term that makes many conceptual developers roll their eyes. But hold on – if all that investors want is to get the product out of the door as soon as possible and for a cost related to headcount x time spent on the project (which would probably be around 50% less compared to the traditional way of coding in this case), just give it to them!

    Every solid developer should be familiar with this kind of PaaS these days – if not with Outsystems, then with some other alternative such as PowerApps, for example.

  2. LoopBack

    Hey you – all Node.js devs are raising their eyebrows. Yes, a very powerful framework, indeed. Using the LoopBack CLI could not be easier thanks to the documentation built by the many contributors to this open-source project.

    Simply put, this framework allows you to build your complete backend infrastructure with a speed that elevates your project progress exponentially. You can choose from REST, SOAP, GraphQL, and RPC servers/services and manage all of these nodes with the PM2 process management system.

  3. Mocha

    Don’t forget about a testing framework. This option works well with the ones mentioned above, and you cannot go wrong with learning this framework right away. JavaScript is rocking all over the globe right now, and it would be silly to intentionally ignore this programming language.

    And so why not leverage the JavaScript syntax in every SDLC phase? Sounds logical, hm?

  4. Amplify CLI

    This CLI utility from AWS is becoming more and more popular among developers from generation Y. Nobody likes to deal with building the infrastructure at the DevOps level unless it’s ABSOLUTELY necessary. And to be fair, a lot of the service provisioning commands can be easily automated.

    Therefore, a utility that scaffolds everything you need for hosting your system is a necessary skill these days.

  5. Terraform

    If you are not going to use one of the “low code” platforms mentioned above to build your solution, then a solid provisioning automation system (“engine”) and paradigm – not only for automated infrastructure provisioning but also for keeping track of infrastructure changes in source code – is a must. You cannot go wrong with Azure DevOps/ARM Templates or Terraform. Both offer a lot of capabilities and automation to follow the IaC (Infrastructure as Code) paradigm.

    To me, Terraform is the better option for those thinking of incorporating platforms of different technologies into the solution.

This is all for today – hope you enjoyed the read, and leave me your thoughts down below in the comments!

Cheers…

PS: Technology alone cannot set a project up for success if architecture, design, and test automation get compromised. Without the right team, implemented processes, adherence to best practices, and good progress momentum on the project, your entire ship can turn in the opposite direction and end up in catastrophic failure.

Hey devs, is the calculated GST on your Xero API-generated invoice one cent off?

Is your system working with price items more than 2 decimal places long? Are you using rounding as part of the calculation formula? Have you generated an invoice from the Xero API and later found out that the actual total on it is a few cents off?

If the answer to all of these questions is yes, you are in the right place!

Well, you know this story … you have done all that hard work building the Xero API integration, happy to finish the project on time and with masterpiece-level source code, and in the first integration test run you discover that your calculated invoice summary data differs from what Xero has generated on the invoice (ouch!):

Invoice line items total calculation with one cent off

And yes, something is not looking right, and scratching your head does not seem to be helping much …

Well, the truth is that every system calculates the invoice subtotal, total, and GST differently. The same applies to the Xero backend service (API), and therefore these two ways are good options to get you out of trouble.

One way of doing it is to add an adjustment line as part of the Xero API request payload and put the variation value into it to keep the source data in alignment with Xero. Personally, I don’t really like this approach. The reason is that you end up with a more complex solution for not much added business value as opposed to the time spent building it.

Another way is to follow the Xero calculation formula. Yep, you heard me right…

And that is the way I would suggest you go. You may be asking why I would do that?

So let me explain my view on this.

Let’s consider that Xero as a business has been on the market for a long time now. You may be getting some sense of the overall knowledge Xero as a company must have gained over such a long history of providing comprehensive financial services to customers.

I also know that Xero has gone through several business validation iterations and internal system refactoring processes to build the most accurate tax calculation business logic on the API backend possible. All these company journeys, supported by customer feedback and domain knowledge accumulating over time, helped Xero build a great service reputation on the current market world-wide.

And the question is, why wouldn’t I use this knowledge to my advantage? And just btw – I am not participating in any affiliate programs run by Xero!

Do you have another thought about it? – leave me a comment below 😉

Ok, let’s go ahead and talk about the calculation formula… you can start calculating GST from prices either GST-inclusive or GST-exclusive. These are the types of line items on the request payload.

Types of line items to be used in the invoice request payload

1. Line item price with GST exclusive

  1. Round line-item-price to 2 DP (decimal places)
    Round(line-item-price) => Round2DP(10.5456)
  2. Calculate the line-item GST from the rounded line-item-price, line-item-quantity, and GST rate, and round the result to 2 DP for each line-item
    Round(line-item-price * [GST rate] * line-item-quantity) => Round2DP(10.55 * 0.15 * 5)
  3. Sum up the rounded line-item-price(s) as the Subtotal
    (line-item-price * line-item-quantity)+…N(row)…+(line-item-price * line-item-quantity)
  4. Sum up the line-item calculated GST (step 2) as the GST Total
    (step2)+…N(row)…+(step2)
  5. Add the Subtotal and GST Total as the invoice Total
    (step3)+(step4)
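
If you prefer to see the steps as code, here is a minimal sketch of the GST-exclusive formula above. This is not the NuGet package source – it assumes a 15% GST rate and 2 DP rounding away from zero, so double-check Xero’s rounding documentation before relying on it:

using System;

// A minimal sketch of the GST-exclusive steps (illustrative assumptions:
// 15% GST, 2 DP rounding away from zero).
class GstExclusiveSketch
{
    static decimal Round2(decimal value) =>
        Math.Round(value, 2, MidpointRounding.AwayFromZero);

    static void Main()
    {
        var lineItems = new[] { (Price: 10.5456m, Quantity: 5m) };
        const decimal gstRate = 0.15m;

        decimal subtotal = 0m, gstTotal = 0m;
        foreach (var (price, quantity) in lineItems)
        {
            var rounded = Round2(price);                       // step 1
            gstTotal += Round2(rounded * gstRate * quantity);  // steps 2 and 4
            subtotal += rounded * quantity;                    // step 3
        }
        var total = subtotal + gstTotal;                       // step 5

        Console.WriteLine($"Subtotal: {subtotal}, GST: {gstTotal}, Total: {total}");
    }
}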

Feels difficult? That is ok. For simplicity and quick integration reasons, I have created the NuGet package XeroGSTTaxCalculation (.NET 5) for you, free to use.

A short demonstration of how to use the XeroGSTTaxCalculation NuGet package:

using System;

// IXeroTaxCalculationService, XeroTaxCalculationService and LineItem come from
// the XeroGSTTaxCalculation NuGet package.
class Program
{
    static void Main(string[] args)
    {
        IXeroTaxCalculationService service = new XeroTaxCalculationService();

        var data = new[]
        {
            new LineItem { Code = "code_1", Price = 12m, Quantity = 10 },
            new LineItem { Code = "code_2", Price = 8.7998m, Quantity = 8 }
        };

        // the second argument is the GST rate
        var invoiceDetails = service.CalculateGSTFromPriceGSTExclusive(data, 0.25);

        Console.WriteLine(invoiceDetails);
        Console.ReadLine();
    }
}

2. Line item price with GST inclusive

  1. Add 1 to the GST rate
    1+[GST rate] => 1 + 0.15
  2. Calculate and round to 2 DP the line-item-price as line-item-price-total
    Round(line-item-price*line-item-quantity) => Round2DP(10.5456*5)
  3. Divide the rounded line-item-price-total by the value from step 1 (for each line-item) and round to 2 DP as line-item-price-lessTax
    Round(line-item-price-total/(1+[GST rate])) => Round2DP(52.73/1.15)
  4. Subtract line-item-price-lessTax from line-item-price-total as line-item-gst
    (line-item-price-total) – (line-item-price-lessTax)
  5. Sum up the line-item calculated GST (line-item-gst) as the invoice GST Total
    (step4)+…N(row)…+(step4)
  6. Sum up line-item-price-total as the invoice Total
    (step2)+…N(row)…+(step2)
  7. Subtract the GST Total from the Total to get the invoice Subtotal
    (step6) – (step5)
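
And the same idea for the GST-inclusive steps, again as a minimal sketch (the 15% GST rate and away-from-zero rounding are my assumptions, not the package’s guaranteed behavior):

using System;

// A minimal sketch of the GST-inclusive steps above.
class GstInclusiveSketch
{
    static decimal Round2(decimal value) =>
        Math.Round(value, 2, MidpointRounding.AwayFromZero);

    static void Main()
    {
        var lineItems = new[] { (Price: 10.5456m, Quantity: 5m), (Price: 8.7998m, Quantity: 8m) };
        const decimal gstRate = 0.15m;                        // step 1: divisor is 1 + GST rate

        decimal total = 0m, gstTotal = 0m;
        foreach (var (price, quantity) in lineItems)
        {
            var lineTotal = Round2(price * quantity);         // step 2: line-item-price-total
            var lessTax = Round2(lineTotal / (1 + gstRate));  // step 3: line-item-price-lessTax
            gstTotal += lineTotal - lessTax;                  // steps 4-5: line GST, summed
            total += lineTotal;                               // step 6: invoice Total
        }
        var subtotal = total - gstTotal;                      // step 7: invoice Subtotal

        Console.WriteLine($"Subtotal: {subtotal}, GST: {gstTotal}, Total: {total}");
    }
}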

Feel free to use the NuGet package XeroGSTTaxCalculation for this as shown in the code example above. I bet you’re gonna need the saved time to spend on beer sessions with your mates instead .). Cheers!

For more information about rounding, visit Xero documentation site for developers.

Happy reading and leave me a comment below!

Distributed System Architecture: Modern Three-Tier

Modern Three-Tier

The most used infrastructure architecture for SMEs and data-oriented small businesses operating on a global scale.

When it comes to the technology stack – and if we’re talking about Web Applications – the most common setup I have seen all over the place is React.js used for an SPA or PWA frontend as the client/presentation layer, Node.js as a REST API server/business layer, and Cassandra (open source) as a distributed (optionally cloud-based), fault-tolerant, well-performing, durable, elastic, …, well-supported (don’t forget about decent community support!), decentralized, and scalable database/persistence layer.

Your database does not have to tick all of the boxes above (apart from being distributed), but if you’re going to put in all the effort to build this type of infrastructure, you want to make sure the database meets as many of the features expected of a modern and long-lasting solution as possible (think about your development ROI, devs!!).

The way it works is that the client application (fetched from a store or application server) is capable of handling user tasks by itself on the client device, with the data supplied over the API (Node.js), and in the event of the API server running out of breath, a new instance of the application API server is provisioned (a new node gets created: horizontal scaling -> scaling out/in).

The database, as it stands in this model, does not have this scaling capability but can scale up or down instead as needed (the service is given more system resources: vertical scaling -> scaling up/down).

An illustration of how it all gets wired up together

1.1 Modern 3-Tier Distributed System Architecture

Summary

Pros

  • great logical separation and isolation with a lot of room for cybersecurity policy integration
  • uncomplicated architecture when it comes to problem investigation and troubleshooting
  • easy-to-medium complexity to get the infrastructure up and ready for development and maintenance (less DevOps, yaay!)
  • an easy option to replicate the infrastructure on a user’s localhost for development purposes (just makes it all easier during branch development)
  • infrastructure running costs are relatively small

Cons

  • decommissioning provisioned nodes can be tricky (depending on the technology used)
  • data synchronization and access need orchestration (subject to database type)
  • shipping new features out needs an entire Application server deployment (downtime)

Software development principles and practices for solid Software Engineers

Although today’s way of software development is rapidly changing, a good understanding of these principles and good practices can only help you become better at software development. Personally, I would recommend every solid Software Engineer get familiar with these practices, if they haven’t already.

Coding practices

YAGNI

This principle came from Extreme Programming and stands for a very simple thing: don’t overthink the problem solution at the execution stage. Just write enough code to make things work!

DRY

This principle follows on and states: Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

Basically, don’t replicate functionality in the system and do make your code reusable.

SOLID

This principle has its own space in OOP. The SOLID mnemonic acronym represents these five design principles:

  1. Single-responsibility
    Design your classes along the structural business entity/domain hierarchy, so that each class encapsulates only the logic related to it.
  2. Open-closed
    Entities should be open for extension but closed for modification.
    In the development world, any class/API with publicly exposed methods or properties should not be modified in its current state but extended with other features as needed.
  3. Liskov substitution
    This principle defines how to design classes when it comes to inheritance in OOP.
    The simplified base definition says that if class B is a subtype of (super) class A, then objects of A may be replaced with objects of type B without altering any of the desirable properties of the program.
    In other words, if you have a (super) class of type Vehicle and a subclass of type Car, you should be able to replace any objects of Vehicle with objects of Car in your application without breaking application behavior or its runtime (see the sketch right after this list).
  4. Interface segregation
    In OOP, it is recommended to use interfaces as an abstracted segregation level between the producer/consumer modules. This creates an ideal barrier preventing coupling dependencies and exposes just enough functionality to the consumer as needed.
  5. Dependency inversion
    The principle describes the need for an abstraction layer incorporated between modules in a top-to-bottom hierarchy. In brief, a high-level module should depend on an abstraction layer (interface), and a lower-level module depending on that abstraction layer should inherit/implement it.
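
Here is a minimal C# sketch of the Liskov substitution point, using the Vehicle/Car example from above (the class names are illustrative only):

using System;
using System.Collections.Generic;

public class Vehicle
{
    public virtual string Describe() => "A vehicle";
}

public class Car : Vehicle
{
    // Overrides without strengthening preconditions or weakening guarantees,
    // so any caller written against Vehicle stays correct.
    public override string Describe() => "A car";
}

public static class LiskovDemo
{
    public static void Main()
    {
        // Code written against Vehicle keeps working when handed a Car.
        var fleet = new List<Vehicle> { new Vehicle(), new Car() };
        foreach (var vehicle in fleet)
            Console.WriteLine(vehicle.Describe());
    }
}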

KISS

Acronym for Keep it simple, stupid – and my favorite over the last years!

The principle has a very long history, but from my professional experience it gets forgotten by many devs many times. Avoiding unnecessary complexity should be in every solid Software Engineer’s DNA. This keeps additional development costs down for further software maintenance, new human resources onboarding, and the application’s/system’s further organic growth.

BDD

Behavior-Driven Development is becoming a more and more desirable practice to follow in Agile-oriented business environments. The core of these principles comes from FDD. BDD applies a similar process at the level of features (usually a set of features). The tests one builds for the application/system return the investment in the form of automated QA testing over its lifetime, and therefore this way of working is very economically efficient, in my opinion.

The fundamental idea is to engage QAs (and BAs) in the development process right from the beginning.

This is a great presentation of the principle from the beginning to the end of the release lifecycle: Youtube

TDD

This software development process gained its popularity over time with test automation. The basics come from the concept of starting with the test first and following with the code until the test runs successfully.

Leveraging unit test frameworks for this, such as xUnit or NUnit (or similar) if you are a .NET developer, helps to build a code coverage report very easily (in MS Visual Studio Enterprise edition, for example), which builds QA confidence in the code that lasts over many code releases.
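
A minimal sketch of the test-first idea with xUnit (the Calculator class is illustrative – in TDD you would write the test below first, watch it fail, and only then write Add to make it pass):

using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;   // written only after the test failed
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calculator = new Calculator();
        Assert.Equal(5, calculator.Add(2, 3));
    }
}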

FDD

A well-known approach to delivering small blocks (features) in an Agile environment. In other words, if you have a load of work to deliver, it is better to slice it down into individual blocks (features) which can be developed, tested, and delivered independently.

The whole FDD methodology has 5 stages:

  1. Develop a model of what needs to be built
  2. Slice this model into small, testable blocks (features)
  3. Plan by feature (development plan – who is going to take ownership)
  4. Design by feature (select the set of features the team can deliver within the given time frame)
  5. Build by feature (build, test, commit to the main branch, deploy)

The beauty of this development methodology approach is that deployment features such as feature toggling can be integrated with relatively minimal complexity overhead (see the sketch below). With this integration in place, the production team can move forward on one main branch only, regardless of any unfinished feature development state. An enterprise-level production team will appreciate this advantage, no doubt about it.
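
A minimal sketch of a feature toggle (the toggle store and feature name are illustrative – real teams usually pull these from configuration or a toggle service):

using System;
using System.Collections.Generic;

class FeatureToggleDemo
{
    // Merged but unreleased features stay dark behind a toggle.
    static readonly Dictionary<string, bool> Toggles = new()
    {
        ["new-checkout"] = false
    };

    static void Main()
    {
        if (Toggles.TryGetValue("new-checkout", out var enabled) && enabled)
            Console.WriteLine("Running the new checkout flow");
        else
            Console.WriteLine("Running the old checkout flow");
    }
}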

Summary

By following these principles and practices, a production team will produce maintainable code with high test coverage and high utilization of human resources over the SDLC (ROI).

TLS handshake between Client and Server explained

Not every developer these days has a clear picture of how Client/Server HTTPS/TLS encryption works. To be fair, I sometimes have to look at my notes to recall the process, as it’s confusing and easy to forget.

Especially those devs working on the front end and using publicly available 3rd-party middleware ready to be used in their solution – so, why bother?

But anyway … this is a good piece of information to keep in mind, and if you forget, this handy post can remind you how the entire process flow works.

TLS handshake (negotiation) process flow

Example algorithms used from here on: ECDH/RSA

  1. Client – [Sends](Hello: These are my supported cipher suites) -> Server
  2. [Server chooses the cipher from the supplied cipher suites]
  3. Server – [Sends](Hello: This is my certificate with the Public key) -> Client
  4. [Client validates the Certificate]
  5. Server – [Sends](Hello done) -> Client
  6. [Client generates the Pre-Master secret and encrypts it with the Server’s Public key]
  7. [Client generates (calculates) the Symmetric key (Master secret) based on the Pre-Master secret and random numbers]
  8. Client – [Sends: Pre-Master secret exchange](Key exchange: encrypted Pre-Master secret) -> Server
  9. [Server receives and decrypts the Pre-Master secret]
  10. [Server generates (calculates) the Symmetric key (Master secret) based on the received Pre-Master secret and random numbers]
  11. Client – [Sends](Change Cipher Spec) -> Server, which means that from now on, any other message from the Client will be encrypted with the Master secret
  12. Client – [Sends: Encrypted](Finished) -> Server, and the Server tries to decrypt the finished message
  13. Server – [Sends](Change Cipher Spec) -> Client, which means that from now on, any other message from the Server will be encrypted with the Master secret
  14. Server – [Sends: Encrypted](Finished) -> Client, and the Client tries to decrypt the message

-- handshake is completed --
-- the communication encryption changes from asymmetric to symmetric --

Example algorithm used from here on: AES

15. Symmetric bulk encryption is switched on; the Client and Server have established TLS communication

// Legend

   [] -> action
   () -> message
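
In .NET, for example, the whole negotiation above is wrapped in a single call on SslStream – a minimal client-side sketch (example.com is just a placeholder host):

using System;
using System.Net.Security;
using System.Net.Sockets;

class TlsHandshakeDemo
{
    static void Main()
    {
        using var tcp = new TcpClient("example.com", 443);
        using var tls = new SslStream(tcp.GetStream());

        // Steps 1-14 above happen inside this call.
        tls.AuthenticateAsClient("example.com");

        // Inspect what was negotiated once the symmetric session is established.
        Console.WriteLine($"Protocol:     {tls.SslProtocol}");
        Console.WriteLine($"Cipher:       {tls.CipherAlgorithm}");
        Console.WriteLine($"Key exchange: {tls.KeyExchangeAlgorithm}");
    }
}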

Some other facts to be aware of

  • Anything encrypted with the public key can be decrypted with the private key only
  • More details about TLS
  • What is ECDH, RSA, and AES
  • What is asymmetric and symmetric cryptography

Immutable data types after .NET 5 release

Just a couple of weeks ago, Microsoft released the RC of .NET 5, which is (unfortunately) not going to be an LTS (Long Term Support) release but, on the other hand, comes with some great features in it (yep yep).

One of them comes as part of the new release of C# 9.0 (part of the .NET 5 release): immutable objects and properties (records and init-only properties). Quite a smart concept in my opinion …

Recap on immutable data type

An immutable data type is basically a data type where the value of a variable cannot be changed after creation.

How does it look in reality?

Well, once an immutable data typed object is created, the only way to change its value is to create a new one with a copied value of the previous instance.
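
A tiny illustration with String, which is already immutable today – “changing” it always produces a new instance:

using System;

class ImmutableStringDemo
{
    static void Main()
    {
        var original = "hello";
        var upper = original.ToUpper();   // returns a new string instance

        Console.WriteLine(original);      // "hello" – the original is unchanged
        Console.WriteLine(upper);         // "HELLO"
    }
}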

What are the current immutable (and most used) data types in the .NET CLR?

Primitive types

  • Byte and SByte
  • Int16 and UInt16
  • Int32 and UInt32
  • Int64 and UInt64
  • IntPtr
  • Single
  • Double
  • Decimal

Others

  • All enumeration types (enum, Enum)
  • All delegate types
  • DateTime, TimeSpan and DateTimeOffset
  • DBNull
  • Guid
  • Nullable
  • String
  • Tuple<T>
  • Uri
  • Version
  • Void
  • Lookup<TKey, TElement>

As you can see, we have quite a few to choose from already. How is this list going to look after the full .NET 5 release in November 2020?

Well, it’s going to be a revolutionary change – that’s my 2 cents.

Principally, any object using the .NET 5 runtime (and C# 9.0) can be immutable and also implement its own immutable state – and that is a HOT feature.

The syntax of immutable properties looks like this example:

public class ObjectName
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}

On the other hand, the syntax of an immutable object (called a record) looks like this:

public record ObjectName
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}

As you can see, the syntax is very clear and intuitive to use.
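
A short usage sketch, assuming the record version of ObjectName from above: init-only setters allow assignment only during initialization, and records give you non-destructive mutation via with expressions plus value-based equality:

using System;

class RecordDemo
{
    static void Main()
    {
        var first = new ObjectName { FirstProperty = "a", SecondProperty = "b" };

        // first.FirstProperty = "c";   // compile error – the property is init-only

        // 'with' copies the record, changing only the listed properties.
        var second = first with { SecondProperty = "c" };

        Console.WriteLine(first == second);   // False – records compare by value
    }
}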

More details about new C# 9.0 features can be found here https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9#record-types.

How to provision Azure Function in Azure by using Terraform CLI

Choosing the right way to keep infrastructure versioned and well maintained in source code is becoming quite a big issue these days. There are several options on the market to choose from currently, and it’s easy to get trapped in a never-ending research cycle. For those working with Azure services only, ARM Templates are a more than obvious answer to this, but what if you want more flexibility to go beyond the Azure boundaries?

You may be wondering: why should I use anything else but ARM templates?

The answer is simple. ARM templates may become a bottleneck for IT solutions using different cloud providers (multicloud solutions). In that case, managing infrastructure as code may become quite a tricky (and ugly) thing to do over time. The thing is that every tool used for infrastructure management has its “own ways” of working, and that comes with a necessary knowledge base every production team must build beforehand. As an implication, choosing the right tool for your infrastructure management (including deployment) is very important.

Terraform is a great way to face this challenge. Just as proof of its simplicity, I would like to show you a short demonstration of how easy it is to provision an Azure Function in the Azure cloud by using the HashiCorp Configuration Language (HCL) and the TF (Terraform) CLI utility in PowerShell.

You might be wondering: what features does TF have over ARM templates? Well, these infrastructure management tools have “the same” set of functionality, but TF has other perks on top of that which make it a more secure and convenient tool for DevOps (besides multicloud support).

Key features are:

  1. HCL (HashiCorp Configuration Language) – high-level configuration syntax, a well-structured and intuitive language (TF also supports configuration using JSON for the JS geeks)
  2. Execution Plans – shows you exactly what is going to happen to the infrastructure before the change gets executed
  3. Resource Graph – a visual understanding of the infrastructure, and in my opinion Terraform has done a very good job on this feature (don’t forget that it’s OpenSource!)
  4. Change Automation – yes, every change needed on the infrastructure can be automated -> that means less human interaction -> less room for human errors, YAY!

If you’re new to Terraform and want to get a feel for what it is, have a look at this introductory video with Co-Founder and CTO Armon Dadgar.

Prerequisites (before we start)

  • Terraform utility downloaded and configured in the local environment (a guide on how to do it … here)
    for Win10 users, in case of an issue with WSL2, I recommend following this article to get over it
  • Azure CLI installed and ready to roll (a guide on how to do it … here)

Steps to follow

For those following this guide exactly step by step, make sure that any resource name starting with ‘ms‘ is unique. I recommend using some other characters as a prefix just to be sure the exercise goes smoothly on your side. You won’t get far with the copy&paste technique here – oops!

  1. Log in to Azure by using Azure CLI (Azure Command Prompt) or PowerShell
az login

2. Skip this step if you have only one subscription. Otherwise, list them all out by running the command below and choose the one you want to use (subscription_id)

az account list
Subscription details after login

3. Find out the latest supported AzureRM provider version here (2.29.0 at the time of writing this post). This step is not mandatory, but I would highly recommend doing it this way, as the AzureRM API might change in the future, so it’s better to have the provider version pinned in the code batch file.

4. Create a folder and within it the file main.tf (mine is located at c:/Temp/terraform-test/)

5. Add this code snippet at the beginning of the file. It will configure Azure CLI authentication in Terraform

provider "azurerm" {
  version = "=2.29.0"
  subscription_id = "<your Azure subscription id from the step 1 or 2>"
  features {}
}

6. Append the rest of the script below to the file. For this exercise, the data centre in Australia Central is going to be used (change it if you like); the new Azure Function will use a consumption service plan and run on Windows OS (this is the default option anyway – change it to Linux if you wish)

resource "azurerm_resource_group" "example" {
  name     = "azure-functions-cptest-rg"
  location = "australiacentral"
}

resource "azurerm_storage_account" "example" {
  name                     = "msfunctionsapptestsa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "example" {
  name                = "azure-functions-test-service-plan"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  kind                = "FunctionApp"

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

resource "azurerm_function_app" "example" {
  name                       = "mstest-azure-functions"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  app_service_plan_id        = azurerm_app_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key
}

7. Navigate to the folder with the created main.tf file, open Command Prompt or PowerShell, and type the command below

terraform init

This action will create the selections.json file at .terraform\plugins\ and download the AzureRM plugin into the .terraform\plugins\registry.terraform.io\hashicorp\azurerm\2.29.0\windows_amd64 directory. The CLI utility lives alongside the batch files from now on – a perfect isolation approach from the running environment (although the CLI utility itself may be quite hungry for disk space!).

The selections.json file content

8. You can skip this step if in a hurry, but continue reading if you want to know more about how to generate the infrastructure change plan … Open the Command Prompt or PowerShell and run the command below to see the infrastructure plan before the change is executed. I strongly recommend using an advanced IDE like VS Code for working with TF (because the text editor and the command console are integrated into one app), as opposed to switching from the text editor back to the command console – that can be annoying…

terraform plan

The plan should look similar to the screenshot below. For those using VS Code, I would recommend downloading the HashiCorp Terraform extension to accelerate your further IaC development – I found it very useful for time efficiency!

Terraform infrastructure change plan

9. Let’s get ready for D-day. Type this command to apply and execute the changes in Azure

terraform apply

This command generates the infrastructure change plan and prompts the user with a confirmation message – I am happy with the planned changes, so I type yes.

The Terraform confirmation message

10. …and if everything finishes successfully, you should see this message at the end

Resources successfully created
Resource group with all resources created in Azure portal

Entire main.tf file content

provider "azurerm" {  
  version = "=2.29.0"
  subscription_id = "834b29c3-9626-408d-88e0-12e92793d1f5"
  features {}
}

# Azure functions using a Consumption service plan on Windows OS (default option)
resource "azurerm_resource_group" "example" {
  name     = "azure-functions-cptest-rg"
  location = "australiacentral"
}

resource "azurerm_storage_account" "example" {
  name                     = "msfunctionsapptestsa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_app_service_plan" "example" {
  name                = "azure-functions-test-service-plan"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  kind                = "FunctionApp"

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

resource "azurerm_function_app" "example" {
  name                       = "mstest-azure-functions"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  app_service_plan_id        = azurerm_app_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key
}

Also available on GitHub https://github.com/stenly311/Terraform-AzureFunction-InAzure

Overall Terraform CLI rating

  • Cloud provider portability
  • Fewer lines needed to achieve the same infrastructure configuration compared to Azure ARM Templates
  • Intuitive and fast to learn
  • OpenSource with a wide collection of “get-started” production-like examples
5/5 Rambo rating

How to find out PowerShell version quickly

Every developer/DevOps has looked for this command at least once a year (including myself). The truth is that we all like PowerShell and sometimes tend to forget to put a running-engine pre-conditional check inside a batch script (whatever script you’re producing, always make sure that it’s transferable onto another environment) while referencing some function that is not in the (default) installed version.

Anyway… take this post as a reminder reference guide.

3 ways to quickly find out which version of PowerShell you have installed

1. $PSVersionTable.PSVersion

This is my preferred way over the others. Why? Because it works on a local as well as on a remote station.

2. (Get-Host).Version

3. $host.Version

Summing this up

That’s it. Hope you like this short reminder, and let me know your preferred way of doing this. Cheers!