Software development principles and practices for solid Software Engineers

Although the way we develop software today is changing rapidly, a good understanding of these principles and practices can only help you become better at software development. Personally, I would recommend that every solid Software Engineer get familiar with these practices, if they are not already.

Coding practices

YAGNI

This principle comes from Extreme Programming and stands for something very simple: don’t overthink the solution at the execution stage. Just write enough code to make things work!
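As a hedged illustration (the requirement and names below are invented), compare a speculative abstraction with the minimal code the requirement actually asks for:

using System.Collections.Generic;
using System.Linq;

// Hypothetical requirement: sum the prices of the items in an order.

// YAGNI violation: a speculative abstraction nobody has asked for yet.
public interface IPriceAggregationStrategy
{
    decimal Aggregate(IEnumerable<decimal> prices);
}

// YAGNI-friendly: just enough code to make things work.
public static class OrderTotals
{
    public static decimal Sum(IEnumerable<decimal> prices) => prices.Sum();
}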

DRY

This principle follows and states: Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

Basically, don’t replicate functionality in the system and do make your code reusable.
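A minimal sketch of the idea (the validation rule and names are invented for illustration): give each piece of knowledge one authoritative home and reuse it, instead of copy-pasting it around:

public static class EmailRules
{
    // The single, authoritative representation of the "valid email" knowledge.
    public static bool IsValid(string email) =>
        !string.IsNullOrWhiteSpace(email) && email.Contains("@");
}

// Both the sign-up flow and the profile-update flow call EmailRules.IsValid,
// so a change to the rule happens in exactly one place.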

SOLID

These principles have their own space in OOP. The SOLID mnemonic acronym represents these five design principles:

  1. Single-responsibility
    Design your classes around your business entity/domain hierarchy, so that each class encapsulates only the logic related to its single responsibility.
  2. Open-closed
    Entities should be open for extension but closed for modification.
    In the development world, any class/API with publicly exposed methods or properties should not be modified in its current state but extended by other features as needed.
  3. Liskov substitution
    This principle defines the way how to design classes when it comes to inheritance in OOP.
    The simplified base definition says that if class B is a subtype of (super) class A, then objects of type A may be replaced with objects of type B without altering any of the desirable properties of the program.
    In other words, if you have a (super) class of type Vehicle and a subclass of type Car, you should be able to replace any objects of Vehicle with objects of Car in your application without breaking the application behavior or its runtime.
  4. Interface segregation
    In OOP, it is recommended to use interfaces as an abstracted segregation level between the producer/consumer modules. This creates an ideal barrier preventing coupling dependencies and exposing just enough functionality to the consumer as needed.
  5. Dependency inversion
    The principle describes the need to incorporate an abstraction layer between modules in a top-to-bottom hierarchy. In brief, a high-level module should depend on an abstraction (interface), and a lower-level module depending on that abstraction should implement it (see the sketch after this list).
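As a hedged sketch of the last two principles (all names below are invented for illustration), a high-level module depends on a small, role-specific interface, and the low-level module implements it:

using System;

// The abstraction both modules depend on (Dependency inversion),
// kept small and role-specific (Interface segregation).
public interface IMessageSender
{
    void Send(string message);
}

// Low-level module: implements the abstraction.
public class SmtpMessageSender : IMessageSender
{
    public void Send(string message) => Console.WriteLine($"SMTP: {message}");
}

// High-level module: depends only on the interface, not on SMTP details.
public class OrderNotifier
{
    private readonly IMessageSender _sender;

    public OrderNotifier(IMessageSender sender) => _sender = sender;

    public void NotifyShipped(int orderId) => _sender.Send($"Order {orderId} shipped.");
}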

KISS

Acronym for Keep it simple, stupid – and my favorite over the last few years!

The principle has a very long history, but from my professional experience it gets forgotten by many Devs many times. Avoiding unnecessary complexity should be in every solid Software Engineer’s DNA. It keeps the additional development cost down for further software maintenance, onboarding of new people, and the organic growth of the application/system.

BDD

Behavior-Driven Development is becoming a more and more desirable practice to follow in Agile-oriented business environments. The core of these principles comes from FDD. BDD applies a similar process at the level of features (usually a set of features). The tests one builds alongside the application/system return their investment in the form of automated QA testing for its lifetime, and therefore this way of working is very economically efficient in my opinion.

The fundamental idea of this is to engage QAs (BAs) in the development process right from the beginning.

This is a great presentation of the principle from the beginning to the end of the release lifecycle: Youtube
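A minimal, hedged sketch of how a BDD-style scenario can read when expressed as an executable test (the Cart class and the scenario are invented; teams commonly bind plain-language Given/When/Then scenarios to code with tools such as SpecFlow or Cucumber):

using System.Collections.Generic;
using Xunit;

public class Cart
{
    public List<string> Items { get; } = new List<string>();

    // Checkout succeeds only for non-empty carts.
    public bool Checkout() => Items.Count > 0;
}

public class CheckoutBehavior
{
    [Fact]
    public void Customer_with_an_empty_cart_cannot_check_out()
    {
        // Given a customer with an empty cart
        var cart = new Cart();

        // When the customer attempts to check out
        var succeeded = cart.Checkout();

        // Then the checkout is rejected
        Assert.False(succeeded);
    }
}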

TDD

This software development process gained its popularity over time through test automation. The basic idea is to start with the test first and follow with the code until the test runs successfully.

Leveraging unit test frameworks for this, such as xUnit or NUnit (or similar) if you are a .NET developer, helps to build a code coverage report very easily, for example in MS Visual Studio (Enterprise edition). This helps to build QA confidence over the code that lasts a long time across code releases.
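A minimal sketch of the red-green cycle with xUnit (the Calculator class is invented for illustration): the test is written first and fails, then just enough code is added to make it pass:

using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Add_returns_the_sum_of_two_numbers()
    {
        // Red: this test is written before Calculator.Add exists.
        var calculator = new Calculator();

        Assert.Equal(5, calculator.Add(2, 3));
    }
}

// Green: the simplest implementation that makes the test pass.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}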

FDD

A well-known approach to delivering small blocks (features) in an Agile environment. In other words, if you have a load of work to deliver, it is better to slice it down into individual blocks (features) which can be developed, tested and delivered independently.

The whole FDD methodology has 5 stages:

  1. Develop a model of what needs to be built
  2. Slice this model into small, testable blocks (features)
  3. Plan by feature (development plan – who is going to take ownership)
  4. Design by feature (select the set of features the team can deliver within the given time frame)
  5. Build by feature (build, test, commit to the main branch, deploy)

The beauty of this development methodology is that deployment features such as feature toggling can be integrated with relatively minimal complexity overhead. With this integration in place, the production team can move forward on one main branch only, regardless of unfinished feature development state. An enterprise-level production team will appreciate this advantage, no doubt about it.
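A hedged sketch of what a feature toggle can look like (all names are invented, and real teams usually read toggles from configuration or a dedicated toggle service):

using System.Collections.Generic;

public class FeatureToggles
{
    // In practice this would come from configuration or a toggle service.
    private readonly Dictionary<string, bool> _toggles = new Dictionary<string, bool>
    {
        ["NewCheckoutFlow"] = false // unfinished feature stays dark on the main branch
    };

    public bool IsEnabled(string feature) =>
        _toggles.TryGetValue(feature, out var enabled) && enabled;
}

public class CheckoutService
{
    private readonly FeatureToggles _toggles;

    public CheckoutService(FeatureToggles toggles) => _toggles = toggles;

    public void Checkout()
    {
        if (_toggles.IsEnabled("NewCheckoutFlow"))
        {
            // new, still-in-development path
        }
        else
        {
            // stable path shipped to production
        }
    }
}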

Summary

By following these principles and practices, a production team will produce maintainable code with high test coverage and high utilization of human resources over the SDLC (ROI).

TLS handshake between Client and Server explained

Not every developer these days has a clear picture of how Client/Server HTTPS/TLS encryption works. To be fair, I sometimes have to look at my notes to recall this process, as it’s confusing and easy to forget.

This is especially true for Devs working on the front end and using publicly available 3rd-party middleware, ready to be used in your solution – so, why bother?

But anyway … this is a good piece of information to keep in mind, and if you forget it, this handy post can remind you how the entire process workflow works again.

TLS handshake (negotiation) process flow

Example algorithms used from this point on: ECDH/RSA

  1. Client – [Sends](Hello: These are my supported cipher suites) -> Server
  2. [Server chooses the cipher from the supplied cipher suites]
  3. Server – [Sends](Hello: This is my certificate with Public key) -> Client
  4. [Client validates the Certificate]
  5. Server – [Sends](Hello done) -> Client
  6. [Client generates a Pre-Master secret and encrypts it with the Server’s Public key]
  7. [Client generates (calculates) the Symmetric key (Master secret) based on the Pre-Master secret and random numbers]
  8. Client – [Sends: Client Key Exchange](Pre-Master secret encrypted with the Server’s Public key) -> Server
  9. [Server receives and decrypts the Pre-Master secret]
  10. [Server generates (calculates) the Symmetric key (Master secret) based on the received Pre-Master secret and random numbers]
  11. Client – [Sends](Change Cipher Spec) -> Server, which means that from now on, any other message from the Client will be encrypted by the Master secret
  12. Client – [Sends: Encrypted Finished message] -> Server, and the Server tries to decrypt the Finished message
  13. Server – [Sends](Change Cipher Spec) -> Client, which means that from now on, any other message from the Server will be encrypted by the Master secret
  14. Server – [Sends: Encrypted Finished message] -> Client, and the Client tries to decrypt the message

-- handshake is completed --
-- the communication encryption is changing from asymmetric to symmetric --

Example algorithm used from this point on: AES

15. Symmetric bulk encryption is switched on; the Client and Server have established TLS communication

// Legend

   [] -> action
   () -> message
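For .NET developers, the whole negotiation above happens under the hood when an SslStream is authenticated; here is a minimal, hedged sketch (example.com is just a placeholder host):

using System;
using System.Net.Security;
using System.Net.Sockets;

class TlsClientExample
{
    static void Main()
    {
        const string host = "example.com"; // placeholder host

        using var tcp = new TcpClient(host, 443);
        using var tls = new SslStream(tcp.GetStream());

        // Performs the full handshake shown above: cipher suite negotiation,
        // certificate validation, key exchange and the switch to symmetric encryption.
        tls.AuthenticateAsClient(host);

        Console.WriteLine($"Protocol: {tls.SslProtocol}, Cipher: {tls.CipherAlgorithm}");
    }
}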

Some other facts to be aware of

  • Anything encrypted with the public key can be decrypted with the private key only (see the sketch below)
  • More details about TLS
  • What is ECDH, RSA, and AES
  • What is asymmetric and symmetric cryptography
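To make the first bullet concrete, a small hedged sketch using .NET’s RSA API (the message content is invented):

using System;
using System.Security.Cryptography;
using System.Text;

class RsaExample
{
    static void Main()
    {
        using var rsa = RSA.Create(2048); // key pair: public + private

        // Anyone holding the public key can encrypt...
        byte[] encrypted = rsa.Encrypt(Encoding.UTF8.GetBytes("Pre-Master secret"),
                                       RSAEncryptionPadding.OaepSHA256);

        // ...but only the private key holder can decrypt.
        byte[] decrypted = rsa.Decrypt(encrypted, RSAEncryptionPadding.OaepSHA256);

        Console.WriteLine(Encoding.UTF8.GetString(decrypted));
    }
}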

Immutable data types after .NET 5 release

Just a couple of weeks ago, Microsoft released the RC of .NET 5, which is (unfortunately) not going to be an LTS (Long Term Support) release, but on the other hand, it comes with some great features (yep yep).

One of them comes as part of the new release of C# 9.0 (part of the .NET 5 release): immutable objects and properties (records and init-only properties). Quite a smart concept in my opinion …

Recap on immutable data type

An immutable data type is basically a data type whose value cannot be changed after creation.

How does it look in reality?

Well, once an immutable-data-typed object is created, the only way to change its value is to create a new one with a copied value of the previous instance.
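String is the classic example of this behavior; a tiny sketch:

using System;

class StringImmutability
{
    static void Main()
    {
        var original = "immutable";

        // String methods never modify the existing instance;
        // they return a new string carrying the copied/changed value.
        var upper = original.ToUpper();

        Console.WriteLine(original); // immutable – the original is unchanged
        Console.WriteLine(upper);    // IMMUTABLE – a brand new instance
    }
}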

What are the current immutable (and mostly used) data types from .NET CLR?

Primitive types

  • Byte and SByte
  • Int16 and UInt16
  • Int32 and UInt32
  • Int64 and UInt64
  • IntPtr
  • Single
  • Double
  • Decimal

Others

  • All enumeration types (enum, Enum)
  • All delegate types
  • DateTime, TimeSpan and DateTimeOffset
  • DBNull
  • Guid
  • Nullable
  • String
  • Tuple<T>
  • Uri
  • Version
  • Void
  • Lookup<TKey, TElement>

As you can see, we have quite a few to choose from already. How is this list going to look after the full .NET 5 release in November 2020?

Well, it’s going to be a revolutionary change – my two cents.

Principally, any object using the .NET 5 runtime (and C# 9.0) can be immutable and also implement its own immutable state – and that is a HOT feature.

The syntax of the immutable properties looks like this:

public class ObjectName
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}

On the other hand, the syntax of an immutable object (called a record) looks like this:

public record ObjectName
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}

As you can see, the syntax is very clear and intuitive to use.
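And a hedged usage sketch (the record name and values are invented): init-only properties can be set only during object initialization, and records support non-destructive mutation via with-expressions:

using System;

public record Person
{
    public string FirstProperty { get; init; }
    public string SecondProperty { get; init; }
}

class Program
{
    static void Main()
    {
        var person = new Person { FirstProperty = "Jane", SecondProperty = "Doe" };

        // person.FirstProperty = "John"; // compile error: init-only property

        // `with` creates a copy carrying the changed value; the original stays intact.
        var changed = person with { FirstProperty = "John" };

        Console.WriteLine(person);  // Person { FirstProperty = Jane, SecondProperty = Doe }
        Console.WriteLine(changed); // Person { FirstProperty = John, SecondProperty = Doe }
    }
}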

More details about new C# 9.0 features can be found here https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9#record-types.

What type of coding challenge to expect on technical interview with Google

It has been a while since I was interviewed by Google and got to answer a lot of technical questions. The essence of being successful is to be prepared! Especially now, in these difficult Covid-19 times, when getting a job is even harder than before for young developers with no professional network or working experience (hey, YOU are not alone in this!)

And so, I am writing this post for you, the NEXT DEV GENERATION! But just to be clear, this post is not about leaking the hiring questions to the public. It is about giving YOU an idea of what sort of coding challenges you may get along the way.

The most frequently given questions (and tricky ones) are about how to efficiently solve problems with an algorithm. You as a Dev must show understanding of what time complexity is, how to work with data structures, and how to write readable (and concise) code – all of that while people on the other side of the conference meeting are WATCHING! (feel the stress but stay CALM, stay COOL)

Remember that this task assignment was given to me a couple of years ago, so don’t rely on getting exactly the same coding task on your D-Day. The assignment will be different, but the complexity level of the solution is most likely going to be the same.

Coding challenge

Task assignment

You have a collection of numeric item values, among which are ‘0’ values. Build an algorithm which shifts all zeros to the end of the array with the best time complexity possible. You are not allowed to use any additional data structures in the solution. Also, keep the remaining items in the same order as they are.

Design

Always do design first!

Normally, it is good practice to ask as many questions as possible to clarify all the requirements at the beginning (these all score positive points). Some of them can be (ones not explicitly written in the task description):

  • should my solution be structured as for production use?
  • do you want me to write a unit test, too?
  • can I use Google? – NOPE, don’t ask this one. All good companies usually structure the technical questions in a way that any (capable) candidate should be able to answer them. Don’t take it personally if you fail; you are just not there yet.

Coding

The solution I have used was based on swapping the items within the array:

using System;

namespace TestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var array = new int[] { 1, 0, 4, 5, 0, 4, 5, 3, 0 };

            var iterations = 0;
           
            // Pivot index: the position where the next non-zero value belongs.
            int j = 0;
            for (int i = 0; i < array.Length; i++)
            {
                ++iterations;
                if (array[i] != 0)
                {
                    // Copy the non-zero value to the pivot and advance the pivot.
                    array[j++] = array[i];
                }
            }

            // Fill every position from the pivot to the end with zeros.
            while (j < array.Length)
            {
                ++iterations;
                array[j++] = 0;
            }

            Console.WriteLine($"Array: '{string.Join(",", array)}'");
            Console.WriteLine($"Time complexity: O({iterations})"); 
            Console.ReadLine();
        }
    }
}

Let’s examine the code.

As you can see, I have two loops. In the first one (for), I try to find a non-zero value in each iteration, copy the value from the current index to the Pivot index, and increment the Pivot at the end of the cycle. If a zero is found, the Pivot index remains unchanged and the loop goes on to the next item. If a non-zero value is found again, the value at the current index gets copied over to the Pivot index (a zero value) and the Pivot index gets incremented by 1.

The second loop (while) adds zero values at all indexes between the Pivot index and the last array index (that many zeros have to be placed back into the array).

What is the time complexity of this solution? Let’s do an analysis of it.

The first loop (for) goes over 9 items within the array. The array has 3 zero values (handled by the while loop). The total number of iterations is: 9 + 3 = 12 => O(12) => O(n)

Linear time complexity? THAT IS PRETTY GOOD TO ME! But could I gain an extra point (and I WANTED to) by building a slightly different approach with fewer loop iterations?

So, I asked the Google Hiring Technical Manager whether I could compromise on the last requirement and reorder the non-zero values in the array a little bit. He agreed…

Why am I doing this?! The answer is optimization… As you can see, the while loop does not have to be part of the solution (now) if we go through the array in reverse and swap the zero-value items with the ones sitting at the last (examined) array index:

using System;

namespace TestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var array = new int[] { 1, 0, 4, 5, 0, 4, 5, 3, 0 };

            var iterations = 0;

            var end = array.Length - 1;
            // Swap target: the position where the next found zero should go.
            var index = end;
            for (int i = end; i >= 0; i--)
            {
                ++iterations;
                if (array[i] == 0)
                {
                    // Swap the zero at index i with the value at the swap target,
                    // then move the swap target one step towards the front.
                    var left = array[index];
                    var right = array[i];

                    array[i] = left;
                    array[index--] = right;
                }
            }

            Console.WriteLine($"Array: '{string.Join(",", array)}'");
            Console.WriteLine($"Time complexity: O({iterations})");
            Console.ReadLine();
        }
    }
}

What is the time complexity now?

The algorithm uses one loop (for) and goes over the 9 items within the array => O(9) => O(n)

Not a bad approach and another plus point going towards my credit bank (Yep, Yep!).

Conclusion

The second approach might not sound like a huge performance achievement (and it is NOT for such a small dataset), but it shows the Technical Recruiter Manager your way of thinking! Remember, it can be just a good impression that stands between choosing you over the other tens/hundreds of candidates applying for the same role.

Wishing you good luck, and let me know in the comments below how your technical interview went!

You can download the code from my GitHub repo here: https://github.com/stenly311/Moving-Zero-Value-Items-To-The-End-Of-Array

A few tips on what to look at before going to a technical interview