Not every developer these days has a clear picture of how client/server HTTPS/TLS encryption works. To be fair, I sometimes have to look at my notes to recall this process, as it's confusing and easy to forget.
Especially those devs working on the front end and using publicly available third-party middleware, ready to be dropped into a solution – so, why bother?
But anyway… this is a good piece of information to keep in mind, and if you forget, this handy post can remind you how the entire process flow works.
TLS handshake (negotiation) process flow
Example of the algorithms commonly used today: ECDH/RSA
Client – [Sends](Hello: These are my supported cipher suites) -> Server
[Server chooses the cipher from the supplied cipher suites]
Server – [Sends](Hello: This is my certificate with Public key) -> Client
[Client validates the Certificate]
Server – [Sends](Hello done) -> Client
[Client generates a Pre-Master secret and encrypts it with the Server's Public key]
[Client generates (calculates) the Symmetric key (Master secret) based on the Pre-Master secret and random numbers]
Client – [Sends](Encrypted Pre-Master secret) -> Server
[Server decrypts the Pre-Master secret with its Private key and calculates the same Master secret]
[Both sides exchange Finished messages and switch to symmetric encryption using the Master secret]
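To see this negotiation from the code point of view, here is a minimal C# sketch (a client-side illustration only, assuming network access to example.com) showing how .NET's SslStream drives the whole handshake with a single call:

using System;
using System.Net.Security;
using System.Net.Sockets;

class TlsHandshakeDemo
{
    static void Main()
    {
        // TLS runs on top of an established TCP connection
        using var tcp = new TcpClient("example.com", 443);
        using var ssl = new SslStream(tcp.GetStream());

        // This one call performs the whole flow above: ClientHello, certificate
        // validation, key exchange, and the switch to symmetric encryption
        ssl.AuthenticateAsClient("example.com");

        Console.WriteLine($"Protocol:     {ssl.SslProtocol}");
        Console.WriteLine($"Cipher:       {ssl.CipherAlgorithm} ({ssl.CipherStrength}-bit)");
        Console.WriteLine($"Key exchange: {ssl.KeyExchangeAlgorithm}");
    }
}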
Just a couple of weeks ago, Microsoft released the RC of .NET 5, which (unfortunately) is not going to be an LTS (Long Term Support) release but, on the other hand, comes with some great features (yep).
One of them arrives as part of C# 9.0 (shipped with the .NET 5 release): immutable objects and properties (records and init-only properties). Quite a smart concept, in my opinion…
Recap on immutable data type
An immutable data type is basically a type whose value cannot be changed after creation.
How does it look in reality?
Well, once an object of an immutable data type is created, the only way to change its value is to create a new instance carrying a copy of the previous value.
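A quick illustration with String, probably the best-known immutable type (the variable names are mine):

var original = "hello";
var upper = original.ToUpper(); // does not touch 'original'; returns a brand-new string

Console.WriteLine(original); // still "hello"
Console.WriteLine(upper);    // "HELLO"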
What are the current immutable (and mostly used) data types from .NET CLR?
Primitive types
Byte and SByte
Int16 and UInt16
Int32 and UInt32
Int64 and UInt64
IntPtr
Single
Double
Decimal
Others
All enumeration types (enum, Enum)
All delegate types
DateTime, TimeSpan and DateTimeOffset
DBNull
Guid
Nullable
String
Tuple<T>
Uri
Version
Void
Lookup<TKey, TElement>
As you can see, we have quite a few to choose from already. How is this list going to look after the full .NET 5 release in November 2020?
Well, my 2 cents: it's going to be a revolutionary change.
Principally, any object running on the .NET 5 runtime (and C# 9.0) can be immutable and also implement its own immutable state – and that is a HOT feature.
The syntax of the immutable (init-only) properties looks like this:
public class ObjectName
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}
On the other hand, the syntax of the immutable object (called a record) looks like this:
public record ObjectName
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}
As you can see, the syntax is very clear and intuitive to use.
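To give you a feel for how this behaves in practice, here is a short hedged sketch using the ObjectName record from above (the values are made up):

var first = new ObjectName { FirstProperty = "A", SecondProperty = "B" }; // init-time assignment is allowed

// first.FirstProperty = "C"; // compile-time error: init-only properties cannot be changed after creation

// With records, "changing" a value means non-destructive mutation via a with-expression,
// which creates a new copy instead of modifying the original:
var second = first with { SecondProperty = "C" };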
Choosing the right way to keep infrastructure versioned and well maintained in source code is becoming quite a big issue these days.
There are several options to choose from on the market currently, and it’s easy to get trapped in a never-ending research cycle.
For those working with Azure services only, ARM templates are the obvious answer to this, but what if you want more flexibility and to go beyond the Azure boundaries?
You may be wondering: why should I use anything else but ARM templates?
The answer is simple.
ARM templates may become a bottleneck for IT solutions using different cloud providers (multi-cloud solutions). In that case, managing infrastructure as code may become quite a tricky (and ugly) thing to do over time. The thing is that every tool used for infrastructure management has its "own ways" of working, and that comes with the necessary knowledge base every production team must build beforehand.
And as an implication of this, choosing the right tool for your infrastructure management (including deployment) is very important.
Terraform is a great way to face this challenge. As proof of its simplicity, I would like to show you a short demonstration of how easy it is to provision an Azure Function in the Azure cloud by using the HashiCorp Configuration Language (HCL) and the Terraform (TF) CLI utility in PowerShell.
You might be wondering: what features does TF have over ARM templates?
Well, these infrastructure management tools share "the same" core set of functionality, but TF has other perks on top of that, which make it a more secure and convenient tool for DevOps (besides multi-cloud support).
Key features are:
HCL (HashiCorp Configuration Language) – a high-level configuration syntax; a well-structured and intuitive language (TF also supports configuration using JSON for the JSON geeks)
Execution Plans – show you exactly what is going to happen to the infrastructure before the change gets executed
Resource Graph – a visual understanding of the infrastructure, and in my opinion, Terraform has done a very good job on this feature (don't forget that it's open source!)
Change Automation – yes, every change needed on the infrastructure can be automated -> which means less human interaction -> and less room for human error, YAY!
If you're new to Terraform and want to get a feel for what it is, have a look at this introduction video with Co-Founder and CTO Armon Dadgar.
Prerequisites (before we start)
The Terraform utility downloaded and configured on the local environment (a guide on how to do it … here); for Win10 users, in case of an issue with WSL2, I recommend following this article to get over it
The Azure CLI installed and ready to roll (a guide on how to do it … here)
Steps to follow
For those following this guide exactly step by step: make sure that any resource name starting with 'ms' is unique.
I recommend using some other characters as a prefix just to be sure that this exercise goes smoothly on your side. You won't get far with the copy-and-paste technique here – oops!
1. Log in to Azure by using the Azure CLI (Azure Command Prompt) or PowerShell
az login
2. Skip this step if you have only one subscription. Otherwise, list them all by running the command below and choose the one you want to use (subscription_id):
az account list
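As an optional extra (not required by this guide, since we pin the subscription in the provider block later), you can also make the chosen subscription the CLI default:

az account set --subscription "<subscription_id>"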
3. Find out the latest supported AzureRM provider version here (at the time of writing this post, 2.29.0). This step is not mandatory, but I would highly recommend doing it this way: the AzureRM API might change in the future, so it's better to have the provider version pinned in the configuration file.
4. Create a folder and a main.tf file within it (mine is located at c:/Temp/terraform-test/).
5. Add this code snippet at the beginning of the file. It will configure Azure CLI authentication in Terraform:
provider "azurerm" {
version = "=2.29.0"
subscription_id = "<your Azure subscription id from the step 1 or 2>"
features {}
}
6. Append the rest of the script from below to the file.
For this exercise, the data center in Australia Central is going to be used (change it if you like); the new Azure Function is going to use the Consumption service plan and run on Windows OS (this is the default option anyway – change it to Linux if you wish).
7. Navigate to the folder with the main.tf file you created, open a Command Prompt or PowerShell, and type the command below:
terraform init
This action will create the selections.json file at .terraform\plugins\ and download the AzureRM plugin into the .terraform\plugins\registry.terraform.io\hashicorp\azurerm\2.29.0\windows_amd64 directory.
From now on, the CLI utility works against these locally downloaded plugins – a perfect isolation approach from the running environment (although the plugins themselves may be quite hungry for disk space!).
8. You can skip this step if in a hurry, but continue reading if you want to know more about how to generate the infrastructure change plan…
Open a Command Prompt or PowerShell and run the command below to see the infrastructure plan before executing the change.
I strongly recommend using an advanced IDE like VS Code for working with TF (because the text editor and the command console are integrated into one app) as opposed to switching from the text editor back to the command console – that can be annoying…
terraform plan
The plan should look similar to the screenshot below. For those using VS Code, I would recommend downloading the HashiCorp Terraform extension to accelerate your further IaC development – I found it a great time saver!
9. Let's get ready for D-day. Type this command to apply and execute the changes in Azure:
terraform apply
This command generates the infrastructure change plan and prompts the user for confirmation – I am happy with the planned changes, so I type yes.
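A quick side note for automation scenarios: terraform apply can skip the interactive confirmation with the -auto-approve flag (handy in CI pipelines, but use it with care):

terraform apply -auto-approve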
10. …and if everything finishes successfully, you should see this message at the end.
Entire main.tf file content
provider "azurerm" {
version = "=2.29.0"
subscription_id = "834b29c3-9626-408d-88e0-12e92793d1f5"
features {}
}
# Azure functions using a Consumption service plan on Windows OS (default option)
resource "azurerm_resource_group" "example" {
name = "azure-functions-cptest-rg"
location = "australiacentral"
}
resource "azurerm_storage_account" "example" {
name = "msfunctionsapptestsa"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "example" {
name = "azure-functions-test-service-plan"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
kind = "FunctionApp"
sku {
tier = "Dynamic"
size = "Y1"
}
}
resource "azurerm_function_app" "example" {
name = "mstest-azure-functions"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
app_service_plan_id = azurerm_app_service_plan.example.id
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
}
Every developer/DevOps engineer has looked for this command at least once a year (myself included). The truth is that we all like PowerShell and sometimes tend to forget to put a running-engine pre-condition check inside a script (whatever script you're producing, always make sure it's transferable onto another environment) and end up referencing some function that is not in the (default) installed version.
Anyway… take this post as a reminder reference guide.
3 ways to quickly find out what version of PowerShell you have installed
1. $PSVersionTable.PSVersion
This is my preferred way over the others. Why? Because it works on local as well as on remote stations.
2. (Get-Host).Version
3. $host.Version
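And since I mentioned the pre-condition check at the beginning, here is a minimal sketch of such a guard (assuming, for illustration, that your script needs at least PowerShell 5.1):

# The built-in way: put this directive at the very top of the script file
#Requires -Version 5.1

# Or an explicit runtime check
if ($PSVersionTable.PSVersion.Major -lt 5) {
    Write-Error "This script requires PowerShell 5 or later."
    exit 1
}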
Summing this up
That’s it. Hope you like this short reminder and let me know your preferred way of doing this.
Thanks for staying, subscribe to my blog, and leave me a comment below.
This Azure service has been around for a while now, but it lately got a few improvements that make integrating and using it even easier and more seamless than before.
Just before going any further: if you haven't read anything about it, I recommend starting with this article first, https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search, as I am not going to dive too deep into the details today. This post is about my personal experience of getting a Cognitive Search service provisioned with a bunch of data (from one source) connected to it.
Personally, I like to look at the problems to solve through my business lens. What that means is focusing on building content (business value) rather than building the search engine (feature). I am not saying that compromising on (non-business-related) system features that help users enhance their user experience is a good thing to do. All I am saying is: stop re-inventing the wheel!
Hey devs, don't give me that wiggle face saying things like: "c'mon, it's not that hard to do it by yourself!". Actually, yes, it is hard from the time-and-effort point of view… Building a great search engine with features your audience is going to like would take many weeks of man-hours. These features include auto-completion, geospatial search, filtering and faceting capabilities for a rich UX, OCR (ideally backed by AI), key phrase extraction, highlighting of text found in images, and all of that with the ability to scale the service as needed and add as many (different) data sources as needed.
Can you see my point now? Did one of your eyebrows just lift up? :)) Anyway… let's jump into it and see how long this is going to take me to build in the Azure portal.
Steps to build your first Cognitive Search as a Service
1) Go to the Azure portal, search for Cognitive Services, and add a new one called "Azure Cognitive Search"
2) As for all services in the Azure space, you need to fill in the Subscription and Resource Group this service will belong to and, as the next steps, the preferred URL, the geographic location of the data center, and the pricing tier. I am choosing the Free tier (which should be enough for this exercise) and the location closest to NZ. The next step is to click on Validate, and on the Create button afterward.
3) The first step in the wizard is the "Connect to your data" tab, meaning that on this page you can connect to multiple data sources. As you can see from the picture below, quite a few options are available to choose from (most likely covering all the usual use-case scenarios). For this exercise, I am going to take "Samples" and the SQL database. You can add as many data sources as you want (with respect to the limitations of the selected service tier).
4) On the "Add cognitive skills" tab, I decided to add a bunch of additional text cognitive skills, even though this step is optional. My reasons are purely investigative: I would like to see how the @search.score field in the returned result sets is going to look when searching my documents by any of these fields from the enriched data set.
5) In the next step, "Customize target index" (sometimes referred to as the "pull model"), I am going to leave all the pre-populated settings as they are, as I am happy with them for now. In this step, you can configure things like the level of data exposure, data field types, filtering, sorting, etc.
To give you a better understanding of what the search index is in this context: think of a search index as the equivalent of a table in a relational database. We also have documents, which are the items of the index; think of them as roughly equivalent to the rows in a table.
Also, remember to keep the Key field of the Edm.String data type. This is a mandatory prerequisite.
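For illustration, this is roughly what such a key field looks like in the index definition JSON (the field name here is just an example):

{
  "name": "listingId",
  "type": "Edm.String",
  "key": true,
  "searchable": false
}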
6) On the "Create an Indexer" tab (the way to index data on a schedule), I am not allowed to configure how often the mapping table (index) should be rebuilt. The reason is that the sample SQL database I am using in this exercise does not use any change tracking policy (for example, the SQL Integrated Change Tracking Policy). Why is this needed? Well, basically, Cognitive Search needs to know when a data deletion happened so it can address it. You can read more about it here.
For now, I am going to submit this form and move on.
The service starts provisioning itself (this should not take long to finish), and after a couple of minutes, I should have everything ready for testing.
Testing the Search Service
Now, let's have a look at "Search explorer" in the service's main top menu and craft some data queries. My first query was the word "Bachelor-Wohnung", which nicely got populated into the query URL as the value of the &search parameter by itself…
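Under the hood, Search explorer issues a plain REST call. A hedged sketch of the equivalent request (the service name, index name, and API version below are placeholders; check the portal for yours):

GET https://<service-name>.search.windows.net/indexes/<index-name>/docs?api-version=2020-06-30&search=Bachelor-Wohnung
api-key: <your query key>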
I have to say that building this service took me about 20 minutes (as someone with some prior experience) from having nothing to an easy-to-configure-and-scale search engine. After reading this post, anyone should be able to build their first Cognitive Search service in a similar time.
If you have any questions or want to know more about this service, visit the site built by Microsoft at https://docs.microsoft.com/en-us/azure/search/. These people did a really great job documenting all of it, and the material should help you elevate your skills to a more advanced level.
Azure Cognitive Search tier pricing
Tier | Storage | Max indexes per service | Scale out limits
FREE | 50 MB | 3 | N/A
BASIC | 2 GB | 15 | Up to 3 units per service (max 1 partition; max 3 replicas)
STANDARD S1 | 25 GB (max 300 GB per service) | 50 | Up to 36 units per service (max 12 partitions; max 12 replicas)
STANDARD S2 | 100 GB (max 1 TB per service) | 200 | Up to 36 units per service (max 12 partitions; max 12 replicas)
STANDARD S3 | 200 GB (max 2 TB per service) | 200, or 1000 per partition in high-density mode | Up to 36 units per service (max 12 partitions; max 12 replicas); up to 12 replicas in high-density mode
STORAGE OPTIMIZED L1 | 1 TB (max 12 TB per service) | 10 | Up to 36 units per service (max 12 partitions; max 12 replicas)
STORAGE OPTIMIZED L2 | 2 TB (max 24 TB per service) | 10 | Up to 36 units per service (max 12 partitions; max 12 replicas); up to 12 replicas in high-density mode
It has been a while since I was interviewed by Google and had to answer a lot of technical questions. The essence of being successful is to be prepared! Especially now, in these difficult Covid-19 times, when getting a job is even harder than before for young developers with no professional network or work experience (hey, YOU are not alone in this!)
And so, I am writing this post for you, the NEXT DEV GENERATION! Just to be clear, this post is not about leaking hiring questions to the public. It is about giving YOU an idea of what sort of coding challenges you may get along the way.
The most common (and tricky) questions are about solving problems efficiently with algorithms. You as a dev must show an understanding of what time complexity is, how to work with data structures, and how to write readable (and less) code, all of that while the people on the other side of the conference call are WATCHING! (Feel the stress, but stay CALM, stay COOL.)
Remember that this task was given to me a couple of years ago, so don't rely on getting exactly the same coding task on your D-day. The assignment will be different, but the complexity level of the solution is most likely going to be the same.
Coding challenge
Task assignment
You have a collection of numeric item values, among which are zeros ('0'). Build an algorithm which shifts all zeros to the end of the array with the best time complexity possible. You are not allowed to use any additional data structures in the solution. Also, keep the remaining items in the same order as they are.
Design
Always do design first!
Normally, it is good practice to ask as many questions as possible to clarify all the requirements at the beginning (these all earn you positive points). Some of them could be (ones not explicitly written in the task description):
should my solution be structured for production use?
do you want me to write a unit test, too?
can I use Google? – NOPE, don't ask this one. All good companies usually structure the technical questions in such a way that any (capable) candidate should be able to answer them. Don't take it personally if you fail; you are just not there yet.
Coding
The solution I used was based on swapping items within the array:
using System;

namespace TestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var array = new int[] { 1, 0, 4, 5, 0, 4, 5, 3, 0 };
            var iterations = 0;
            int j = 0; // the "pivot" index: where the next non-zero value goes

            // First pass: compact all non-zero values to the front, preserving order
            for (int i = 0; i < array.Length; i++)
            {
                ++iterations;
                if (array[i] != 0)
                {
                    array[j++] = array[i];
                }
            }

            // Second pass: fill the remaining positions with zeros
            while (j < array.Length)
            {
                ++iterations;
                array[j++] = 0;
            }

            Console.WriteLine($"Array: '{string.Join(",", array)}'");
            Console.WriteLine($"Time complexity: O({iterations})");
            Console.ReadLine();
        }
    }
}
Let’s examine the code.
As you can see, I have two loops. In the first one (for), I try to find a non-zero value in each iteration, copy the value from the current index to the pivot index, and increment the pivot at the end of the cycle. If a zero is found, the pivot index stays where it is and the loop goes on to the next item. When a non-zero value is found again, the value at the current index gets copied over to the pivot index (a zero value) and the pivot index gets incremented by 1.
The second loop (while) adds zero values at all indexes between the pivot index and the last array index (that many zeros have to be placed back into the array).
What is the time complexity of this solution? Let’s do an analysis of it.
The first loop (for) goes over 9 items within the array. The array has 3 zero values (handled by the while loop). The total number of iterations is: 9 + 3 = 12 => O(12) => O(n)
Linear time complexity? THAT IS PRETTY GOOD TO ME! But did I want to gain an extra point (and I WANTED to) by building a slightly different approach with fewer loop iterations?
So, I asked the Google technical hiring manager whether I could compromise on the last requirement and reorder the non-zero values in the array a little bit. He agreed…
Why am I doing this?! The answer is optimization… As you can see, the while loop no longer has to be part of the solution if we go through the array in reverse and swap each zero-value item with the one sitting at the last (examined) array index:
using System;

namespace TestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var array = new int[] { 1, 0, 4, 5, 0, 4, 5, 3, 0 };
            var iterations = 0;
            var end = array.Length - 1;
            var index = end; // where the next found zero should end up

            // Single reverse pass: swap every zero with the item at 'index'
            for (int i = end; i >= 0; i--)
            {
                ++iterations;
                if (array[i] == 0)
                {
                    var left = array[index];
                    var right = array[i];
                    array[i] = left;
                    array[index--] = right;
                }
            }

            Console.WriteLine($"Array: '{string.Join(",", array)}'");
            Console.WriteLine($"Time complexity: O({iterations})");
            Console.ReadLine();
        }
    }
}
What is the time complexity now?
The algorithm uses one loop (for) and goes over the 9 items within the array => O(9) => O(n)
Not a bad approach and another plus point going towards my credit bank (Yep, Yep!).
Conclusion
The second approach might not sound like a huge performance achievement (and it is NOT for such a small dataset), but it shows the technical hiring manager your way of thinking! Remember, a good impression may be all that stands between you being chosen over the tens/hundreds of other candidates applying for the same role.
Wishing you good luck, and let me know in the comments below how your technical interview went!
We are living in a very fast and dynamic world now. The days when software developers could get by with a narrow skill set are gone, and in order to "do good" on the market, everyone must adapt.
It does not have to be a radical adaptation process (phew!), but having reasonably good knowledge of certain development languages, patterns, frameworks, and "ways of doing stuff" à la trends is a must.
That is why YOU as a software developer should know the following languages at least at an intermediate level, enough to code some basics without googling.
Alright, enough of the initial word sauce; let's get into these three languages according to the 2020 Stack Overflow Developer Survey.
Must-learn languages
1. Believe it or not, the best option for you is Python. I am not going to describe this language in detail, but in brief, this interpreted, high-level, general-purpose language has become integrated into almost any type of solution you can think of (cross-platform). Well, that is not surprising to me, as it has been with us for almost 3 decades now (since 1991). What is more interesting is the actual philosophy it stands on (the Zen of Python).
2. Honestly, I am surprised that JavaScript made it to second place (and not the top). I personally think that this multi-paradigm language has a lot of potential for the future, and so every developer should learn it.
3. And probably my favorite of the three is Go (Golang). Not because of my experience (I just started to learn it) but because of what I am capable of doing with it in a very short time (hey, I am a C# dev, I know what I am talking about!). It would not surprise me if Go makes its way to the top of the ladder in the next 3 years.
Just remember that this data was collected from the active community contributing to Stack Overflow. That means these results do not EXACTLY reflect the market situation globally, nor in your region. Always do your homework and look at different data sources related to the place you live (and are going to be living for the next 5 years).
New Zealanders, this does not apply to you. You cannot go wrong with these three. Just for reference, the NZ-based company Rocket Lab is constantly hiring Software Engineers with Golang experience: https://www.rocketlabusa.com/careers/positions/.
Overall Stack Overflow survey rating
it's great to have actual IT pros attending this survey
Stack Overflow holds a big audience
in my opinion, the data was collected more from the younger generation than the older one, so the segregated datasets might not be in the required balance for reports
Thanks for staying, subscribe to my blog, and leave me a comment below.
Well, believe it or not, this Azure service has no free tier. The "cheapest" option is about $0.252/day with a total of 10 GiB of storage and 2 webhooks – unfortunately, with no support for geo-replication.
Feature | Basic | Standard | Premium
Total webhooks | 2 | 10 | 500
Geo-replication | Not supported | Not supported | Supported ($2.520 per replicated region)

The Premium tier also offers enhanced throughput for Docker pulls across multiple, concurrent nodes.
Azure Container Registry pricing
Do I like ACR?
Yes and no …
For projects big in size, where the biggest proportion of the solution's services is provisioned in Azure – yes, definitely. The level of convenience of having "everything" (source code, tool set, hosting environment, …) in one place plays a big role here. The assumption is that if devs/DevOps are happy with the tool set within one platform, the overall progress on the project should be faster, as there is no need for extra system-integration work or for shaping diametrically different skill sets (a theory, but it works in many cases).
And for projects hungry for disk space and tight on budget – no. There are cheaper alternatives on the market, for example, Docker.com (with one private repository in the Free plan – whoop, whoop!). Pricing starts as low as USD $5/month (with an annual plan), which is insanely CHEAP! So if Azure is not where your solution's dimes go, Docker.com would be my choice.
If you made it this far, then you must know something about Docker (containerization). That is great, because this post is not about what Docker really is but about how to work with image revisions in conjunction with the Azure Container Registry (aka the repo).
I am assuming you have your own registry in Azure created already and know the basic commands for spinning up a container (or leave a comment below).
Also, that Docker Desktop is installed on your PC and you have a docker image ready to be used for this exercise.
User story
You as a developer want to create a starting (base) image out of a running container on your localhost (the image type is irrelevant for this exercise) for your co-workers. The image is going to be parked in ACR for easy access. The initial version is going to have the tag 'v1'.
Steps to follow
1. Download the MS Azure Command Prompt (the latest version can be found here, or just use a Google search)
2. Validate that the installation has been successful by starting the MS Azure Prompt and running
az --version
3. Log in to Azure by using the command below (you should be redirected to the browser with portal.azure.com as the URL). Now use your user credentials and wait for a callback redirect back to the terminal (MS ACP).
az acr login --name <your ACR name>
Example:
az acr login --name webcommerce22
4. Commit the latest changes on top of the running docker container (Docker Desktop) with the tag v1 (this operation creates a new image). Remember that only these characters are allowed in image names: 'a-z0-9-_.'
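To sketch what that looks like end to end (the container and repository names below are examples of my own; only the webcommerce22 registry name comes from step 3):

docker commit <running-container-id> webcommerce22.azurecr.io/myapp:v1
docker push webcommerce22.azurecr.io/myapp:v1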