Rust: The Language Redefining Efficiency and Safety in Software Development

When I first delved into Rust, it felt like stepping into a world where safety, performance, and developer experience converge in harmony. As someone who’s often wrestled with the complexities of memory management, runtime bugs, and performance bottlenecks in other languages, Rust has been a game-changer. It has not only helped me reduce running costs for applications but also significantly improved their security and stability.

Let me take you on a journey through what makes Rust truly stand out, chapter by chapter, highlighting its most powerful features.

Zero-Cost Abstractions: Performance Without Sacrifice

One of Rust’s most impressive features is its zero-cost abstractions. High-level constructs like iterators and smart pointers provide clarity and elegance to your code but with zero performance overhead. This means you get the power of abstraction without sacrificing efficiency—a dream come true for anyone managing resource-heavy applications.

For me, this has translated into writing expressive code that’s both readable and as efficient as handcrafted low-level implementations. Rust makes me feel like I’m coding with superpowers, optimizing applications without even breaking a sweat.

[RUST]

fn main() {
    let numbers = vec![1, 2, 3, 4, 5];
    let doubled: Vec<i32> = numbers.iter().map(|x| x * 2).collect();

    println!("{:?}", doubled); // Output: [2, 4, 6, 8, 10]
}

This example demonstrates how Rust’s iterators provide high-level abstraction for traversing and transforming collections without performance overhead.

Ownership Model: Memory Safety Reinvented

Rust’s ownership model was a revelation for me. It ensures every value has a single owner, and once that owner goes out of scope, the memory is freed—no garbage collector, no fuss.

This elegant approach eliminates bugs like dangling pointers, double frees, and memory leaks. For anyone who’s spent late nights debugging memory issues (like I have), Rust feels like the safety net you didn’t know you needed. It’s not just about reducing bugs; it’s about coding with peace of mind.

[RUST]

fn main() {
    let s = String::from("hello"); // s owns the memory
    let s1 = s;                    // Ownership is transferred to s1
    // println!("{}", s);          // Uncommenting this line causes a compile error

    println!("{}", s1); // Correct, as s1 now owns the memory
}

This shows how Rust ensures memory safety by transferring ownership, preventing use-after-free errors.

Borrowing Rules: Sharing Done Right

Building on the ownership model, Rust introduces borrowing rules, allowing you to share data through references without taking ownership. But there’s a catch (a good one): Rust enforces strict rules around lifetimes and mutability, ensuring your references never outlive the data they point to.

This might sound restrictive at first, but trust me—it’s liberating. Rust helps you avoid data races and other concurrency nightmares, making multi-threaded programming surprisingly smooth.

[RUST]

fn main() {
    let mut x = 10;

    {
        let r1 = &x; // Immutable borrow
        println!("r1: {}", r1);

        // let r2 = &mut x; // Uncommenting this line causes a compile error
    }

    let r3 = &mut x; // Mutable borrow after immutable borrow scope ends
    *r3 += 1;
    println!("x: {}", x);
}

This code highlights borrowing rules, showcasing how Rust prevents data races by enforcing strict borrowing lifetimes.

Algebraic Data Types (ADTs): Expressive and Error-Free

Rust’s Algebraic Data Types (ADTs) are like a Swiss Army knife for designing robust data models. Whether it’s enums, structs, or tuples, Rust lets you express relationships concisely.

The Option and Result types are my personal favorites. They force you to handle errors explicitly, reducing those frustrating runtime surprises. Combined with pattern matching, Rust makes handling edge cases a breeze. ADTs in Rust don’t just make code cleaner—they make it safer.

[RUST]

fn divide(a: i32, b: i32) -> Option<i32> {
    if b == 0 {
        None
    } else {
        Some(a / b)
    }
}

fn main() {
    match divide(10, 2) {
        Some(result) => println!("Result: {}", result),
        None => println!("Cannot divide by zero!"),
    }
}

Rust’s Option type ensures that edge cases like division by zero are handled explicitly, reducing runtime errors.
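The example above covers Option; here is a minimal companion sketch using Result, where the error carries information about what went wrong (the function name here is mine, for illustration):

```rust
use std::num::ParseIntError;

// With Result, failure becomes part of the function's signature,
// and the `?` operator propagates errors to the caller automatically.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // returns early with Err on bad input
    Ok(n * 2)
}

fn main() {
    match parse_and_double("21") {
        Ok(v) => println!("Result: {}", v), // prints: Result: 42
        Err(e) => println!("Invalid input: {}", e),
    }
}
```

The compiler will not let you ignore the Err case, which is exactly the explicitness this section is about.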

Polymorphism: Traits Over Classes

Rust’s approach to polymorphism is refreshingly different. Forget bloated inheritance hierarchies—Rust uses traits to define shared behavior across types. When combined with generics, trait calls are resolved at compile time (static dispatch), which ensures zero runtime cost.

When I need flexibility, Rust also offers dynamic dispatch via dyn Trait. It’s the best of both worlds—flexibility where needed and performance everywhere else.

[RUST]

trait Area {
    fn area(&self) -> f64;
}

struct Circle {
    radius: f64,
}

struct Rectangle {
    width: f64,
    height: f64,
}

impl Area for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

impl Area for Rectangle {
    fn area(&self) -> f64 {
        self.width * self.height
    }
}

fn print_area<T: Area>(shape: T) {
    println!("Area: {}", shape.area());
}

fn main() {
    let circle = Circle { radius: 5.0 };
    let rectangle = Rectangle { width: 4.0, height: 3.0 };

    print_area(circle);
    print_area(rectangle);
}

This demonstrates Rust’s trait-based polymorphism, allowing shared behavior across different types.
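And when flexibility matters more than monomorphized speed, the same trait works with dynamic dispatch. Here is a sketch using the shapes from above, boxed behind dyn Area so one collection can hold different concrete types:

```rust
use std::f64::consts::PI;

trait Area {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Rectangle { width: f64, height: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { PI * self.radius * self.radius }
}

impl Area for Rectangle {
    fn area(&self) -> f64 { self.width * self.height }
}

// `dyn Area` erases the concrete type: each call to `area` is
// resolved at runtime through a vtable (dynamic dispatch).
fn total_area(shapes: &[Box<dyn Area>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Area>> = vec![
        Box::new(Circle { radius: 1.0 }),
        Box::new(Rectangle { width: 2.0, height: 3.0 }),
    ];
    println!("Total area: {:.2}", total_area(&shapes)); // prints: Total area: 9.14
}
```

The trade-off is one pointer indirection per call, which you pay only where you opt into dyn.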

Async Programming: Concurrency with Confidence

Rust’s async/await model is a masterpiece of design. Writing asynchronous code that feels synchronous is a joy, especially when you know Rust’s ownership rules are keeping your data safe.

For my high-throughput projects, Rust’s lightweight async tasks have been a game-changer. I’ve seen noticeable improvements in scalability and responsiveness, proving Rust isn’t just about safety—it’s about speed, too.

[RUST]

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let task1 = async_task(1);
    let task2 = async_task(2);

    tokio::join!(task1, task2);
}

async fn async_task(id: u32) {
    println!("Task {} started", id);
    sleep(Duration::from_secs(2)).await;
    println!("Task {} completed", id);
}

Using the Tokio runtime, Rust enables asynchronous programming with minimal overhead, ideal for scalable applications.

Meta Programming: Automate the Boring Stuff

Rust’s support for meta-programming is a lifesaver. With procedural macros and attributes, Rust automates repetitive tasks elegantly.

One standout for me has been the serde library, which uses macros to simplify serialization and deserialization. It’s like having an extra pair of hands, ensuring you can focus on logic rather than boilerplate.
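serde’s derive macros are procedural and live in external crates, so as a self-contained stand-in, here is a toy declarative macro in the same spirit: it generates a struct together with boilerplate you would otherwise write by hand. The macro name and the generated method are invented for illustration:

```rust
// A toy stand-in for what derive macros like serde's do: take a
// declaration and generate boilerplate code from it at compile time.
macro_rules! model {
    ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        #[derive(Debug)]
        struct $name { $($field: $ty),* }

        impl $name {
            // Generated helper: the struct's name as a string.
            fn type_name() -> &'static str { stringify!($name) }
        }
    };
}

// One invocation produces the struct, the Debug impl, and the helper.
model!(User { id: u32, email: String });

fn main() {
    let u = User { id: 1, email: String::from("a@example.com") };
    println!("{} => {:?}", User::type_name(), u);
}
```

Real derive macros parse the full item with syn and emit far richer impls, but the principle is the same: declare once, generate the rest.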

Macros: Power and Simplicity

Rust’s macros are not just about simple text replacement; they’re a gateway to reusable, efficient patterns. Whether it’s creating custom DSLs or avoiding code duplication, macros have saved me countless hours.

What I love most is how declarative macros make the complex simple, ensuring my codebase remains DRY (Don’t Repeat Yourself) without becoming cryptic.

[RUST]

macro_rules! say_hello {
    () => {
        println!("Hello, Rustaceans!");
    };
}

fn main() {
    say_hello!(); // Expands to: println!("Hello, Rustaceans!");
}

Rust’s macros simplify repetitive code, boosting productivity without runtime costs.

Cargo: Your New Best Friend

Managing dependencies and builds can often feel like a chore, but Rust’s Cargo turns it into a seamless experience. From managing dependencies to building projects and running tests, Cargo is the all-in-one tool I never knew I needed.

I particularly appreciate its integration with crates.io, Rust’s package registry. Finding and using libraries is intuitive and hassle-free, leaving me more time to focus on building features.

[TOML]

[dependencies]
tokio = { version = "1.0", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }

With Cargo, adding dependencies is as simple as updating the Cargo.toml file. Here, tokio is used for asynchronous programming and serde for data serialization.

Why I’m Sticking with Rust

Rust has redefined what I expect from a programming language. Its combination of safety, performance, and developer-friendly tools has allowed me to build applications that are not only faster and more secure but also more cost-efficient.

If you’re still unsure whether Rust is worth the learning curve, let me leave you with this: Rust isn’t just a language—it’s a way to write code you can be proud of. It’s about building software that stands the test of time, with fewer bugs and better resource efficiency.

Here’s to building the future—one safe, efficient, and high-performance Rust program at a time. 🚀

Highlighted Features: A Wrap-Up

  • Zero-Cost Abstractions: Write expressive code without runtime overhead.
  • Ownership Model: Memory safety without garbage collection.
  • Borrowing Rules: No data races; safe and efficient memory sharing.
  • Algebraic Data Types (ADTs): Cleaner, safer error handling.
  • Polymorphism: Trait-based, with no runtime cost.
  • Async Programming: Lightweight, high-performance concurrency.
  • Meta Programming: Automate repetitive tasks effortlessly.
  • Macros: Powerful and reusable patterns.
  • Cargo: The all-in-one tool for dependency and project management.

If this resonates with your development goals or sparks curiosity, give Rust a try—you won’t regret it!

Thanks again for returning to my blog!


Sources used for this article:
https://www.rust-lang.org/
https://doc.rust-lang.org/stable/

Building Virtual Machine automated vertical scalability with VMSS in Azure

Are you planning to lift and shift VMs into the Cloud? Or have you done migration and now looking for a way how to scale them automatically?

Well, this article can be right for you!

When it comes to lifting and shifting the application and/or services hosted on the Virtual Machines (VM) from an on-premise environment to the Cloud, building a strategy and design of how to achieve some sort of automatization in VM scaling can be a challenging task.

In general, some degree of scalability can be achieved with Virtual Machines as they are, but it is going to be cumbersome and inefficient.

That’s why it is very important to work on cloud-based infrastructure design prior to the lifting-and-shifting process itself. And remember, VM scaling needs to happen automatically.

The approach I like to teach others is to automate everything that follows repetitive cycles.

But hang on, what if I scale up vertically in the Cloud by adding extra HW resources (RAM, CPU) to the machine hosting my VM? .. Yes, this may work, but only with some scripting done first, and that scripting is unlikely to be repeatable with the same set of input properties…

But what if full scaling automation could be accomplished with higher running-cost efficiency and as little configuration work as possible?

Yes, that is all possible these days and I am going to share how to use one of the options from the market.

The option I chose to pursue for one project of mine comes from the Microsoft Azure resource catalog.

Why is that?

It’s no secret that I’ve worked with Azure since its early days, so I have built up long experience with the platform. I also have to admit that Azure’s software engineers have done a great job building the platform APIs and the web UX/UI (the Azure Portal) to make this process seamless and as easy to use as possible. More on my deciding factors later…

Let’s get started

The Azure resources I have been mentioning here in the prologue are:

  • Virtual Machine Scale Sets (VMSS) in the Azure portal
  • Azure Compute Galleries in the Azure portal
  • Azure Load Balancers in the Azure portal

My reasons for choosing Azure

Every project has different needs and challenges coming from its business domain requirements. More importantly, the economic justification of the project’s complexity is usually what drives its technological path in the design stage.

For this project, I was lucky: the customer I designed this solution for already had part of their business applications and services in Azure. The customer’s ambitious plans to migrate everything else from the on-premise data center to the Cloud in the near future sealed my decision, and Azure was the way to go.

Infrastructure diagram

Let’s get a better understanding of the designed system infrastructure from the simplified infrastructure diagram below.

Take it with a grain of salt as the main purpose of it is to highlight the main components used in the project and discuss these in this post.

VMSS simplified infrastructure diagram

What I like most about the selected Azure stack

  • VM redundancy across multiple data centers globally
  • the ability to multiply VM instances as needed, with an option to resize instance computing power (RAM, CPU, etc. => vertical scaling)
  • high service availability and resilience (subject to infrastructure design – in my case, I provisioned two VMSSs, each in a geographically different data center)
  • the flexibility of building my own VMSS rules that decide whether the number of VM instances goes up or down
  • an Azure load balancer can be linked to a VMSS easily
  • a VMSS can provision up to 600 VM instances (and that is a lot!)
  • the Azure Compute Gallery (ACG) service can replicate images globally, supports image versioning, and can auto-deploy the latest image version to running VM instances (and that was a hot feature for me)

Steps to Provision Services in Azure

In a nutshell, follow these steps to provision Azure services and build the cloud infrastructure from the ground up:

  1. Lift and shift the VM into Azure (I can recommend the Azure Migrate service to start this process)
  2. Create a new Azure resource: Azure Compute Gallery
  3. Go to the running VM instance, then capture and generalize an image of the migrated VM
Capturing VM state into the image, Azure portal
Selecting the option to generalize the captured VM state into the image
  4. Create two replicated images (one for each datacenter)
Two replicated images setting
  5. Save the image into the Azure Compute Gallery created in step 2
  6. Create two new Azure resources: Virtual Machine Scale Sets (in geographically different data centers, as per the ‘Target regions’ settings in step 4, for scale set redundancy)
  7. Create scale-out/in rules in the VMSS
Scale-out/in VMSS rules example

The screenshot image below shows the example of setting up the Scaling rules for one of the VMSS instances.

VMSS scaling rules example

As you can see from the default profile in the picture above, this VMSS instance runs no VM instances by default (Minimum = 0). Instead, it spins some up (scaling out) when either of these criteria is met:

  1. The average CPU usage of the main VMSS instance hosted in datacenter A increases, or
  2. The load balancer availability drops below 70% in a given timeframe

Very similar rules are used in the reverse process, aka scaling in.
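To make the rule logic concrete, here is a toy sketch of a scale-out/in decision in code. Only the 70% load-balancer availability figure comes from the rules above; the CPU thresholds are my illustrative assumptions, and none of this is Azure’s actual implementation:

```rust
// Toy model of VMSS-style autoscaling rules. The 70% availability
// threshold mirrors the rule above; the CPU thresholds are invented
// for illustration.
fn desired_instances(current: u32, avg_cpu_pct: f64, lb_availability_pct: f64) -> u32 {
    if avg_cpu_pct > 75.0 || lb_availability_pct < 70.0 {
        current + 1 // scale out: add an instance
    } else if current > 0 && avg_cpu_pct < 25.0 && lb_availability_pct >= 70.0 {
        current - 1 // scale in: remove an instance
    } else {
        current // steady state
    }
}

fn main() {
    // Starts at the default Minimum = 0 and scales out under load.
    println!("{}", desired_instances(0, 80.0, 95.0)); // prints 1
    println!("{}", desired_instances(3, 10.0, 99.0)); // prints 2
}
```

In the real service, these evaluations run against aggregated metrics over a time window, not instantaneous samples.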

If you’re planning to use a similar concept in your solution, factor the VM operating system boot time into your metrics if high availability and responsiveness of the VM-hosted services are important to you.

Microsoft Azure recently introduced a feature called Predictive autoscale with pre-launch (in preview only at the time of writing this article), which should solve the VM boot-time issue for most use-case scenarios. It works on cyclical workload patterns determined by machine learning and predicts the need to scale out in advance.

Using machine-learning capabilities for this sort of behavior analysis is, I like to say, a very smart move forward from Microsoft.

I think VMSS has a lot to offer to businesses starting their journey to the Cloud.

The process of setting up the infrastructure is not complicated and can be done through the Azure portal UI in no time. The VMSS scaling rules offer a lot of options to choose from, and the level of integration with other types of Azure resources is very mature, too.

Thanks for staying, subscribe to my blog and leave me a comment below.

cheers

Why is CDN very important for your static HTTP content?

Almost everyone has heard of CDNs, but what actually is one?

Explained: A CDN (Content Delivery Network) is a set of geographically distributed servers (proxy servers, if you like) that cache static content on physical hardware close to users, which speeds up the actual download to its destination.

The global CDN network

Now, let me explain why the CDN network is such a big player in the Solution infrastructure and why no Solution developer/architect should overlook this.

But before we go any further, let me mention another term: response latency.

Explained: In other words, the time needed to download the website content entirely to the consumer (end-user) device.

And as you can imagine, this is another very important factor to keep an eye on if you want to keep your audience engaged with your content for as long as possible.

Low latency means better user responsiveness and experience with the website (web service).

The question is, how do you achieve the lowest latency possible? … there are two ways to do it:

  • to use a very fast network for content delivery, or
  • to cache the content as closely as possible geographically to your audience

… the combination of both of these is the ultimate state towards which the global network is moving (near real-time response).

And as all of you probably understand by now, to get the best ROI on the time you put into content, it is very important to have your infrastructure in the best shape possible. Keeping your visitors happy by serving them content as fast as possible helps build better website awareness and audience growth.

What CDN service provider do I use?

Among all of the CDN providers I have come across, Cloudflare is the one that attracted me most.

… for many reasons:

The main one is that the service offers reasonably good DDoS protection shielding along with well-distributed and fast CDN server nodes.

Cloudflare account dashboard

To me, it is almost unbelievable that all of that costs as much as $0! Yes, all of that can be yours for FREE! A very sweet deal, don’t you think? (By the way, I am not participating in any affiliate program!)

Setting all of that up is a really straightforward and well-documented process.

If you want to know more visit this guide on how to set it all up.

The entire configuration process becomes even easier if you have your domain name purchased separately from the web hosting (it is easier to maintain the DNS server configuration through the domain name provider’s portal, which every solid provider has).

Another feature Cloudflare provides is Argo, a fast route finder across the Cloudflare network, which helps decrease loading time and reduce bandwidth costs.

I have been using this service for one of my clients, who provides address lookup and address validation services over REST API web services hosted in the Cloud in multiple geographically different data centers, and I must say that the customer experience has been very positive since.

In numbers: I was able to reduce HTTP response latency from 1.4s down to 0.5s! That is a very good performance improvement for a business where time is of the essence.

I am leaving this link here if you would like to know more about this.

Anyway, thank you again for visiting this post. I hope you have enjoyed reading, and let me know which CDN provider you’re using!

cheers

Distributed System Architecture: Modern Three-Tier

Modern Three-Tier

The most used infrastructure architecture for SMEs and data-oriented small businesses operating on a global scale.

When it comes to the technology stack for web applications, the most common setup I have seen all over the place is React.js used for an SPA or PWA frontend as the client/presentation layer, Node.js as a REST API server/business layer, and Cassandra (open source) as the database/persistence layer: distributed (optionally cloud-based), fault-tolerant, performant, durable, elastic, decentralized, and scalable, with decent community support (don’t forget that!).

Your database does not have to tick all of the boxes above (apart from being distributed), but if you’re going to put all that effort into building this type of infrastructure, you want to make sure the database meets as many of these features as possible so it remains a modern and long-lasting part of the solution (think about your development ROI, devs!).

The way it works is that the client application (fetched from a store or application server) handles user tasks by itself on the client device, with data supplied over the API (Node.js). When the API server runs out of breath, a new instance of the application API server is provisioned (a new node gets created; horizontal scaling -> scaling out/in).

The database, as it stands in this model, does not have this scaling capability, but it can scale up or down instead as needed (the service is given more system resources; vertical scaling -> scaling up/down).

An illustration of how it all gets wired up together

1.1 Modern 3-Tier Distributed System Architecture

Summary

Pros

  • great logical separation and isolation, with a lot of room for cybersecurity policy integration
  • an uncomplicated architecture when it comes to problem investigation and troubleshooting
  • easy-to-medium complexity to get the infrastructure up and ready for development and maintenance (less DevOps, yay!)
  • an easy option to replicate the infrastructure on a developer’s localhost for development purposes (which makes branch development easier)
  • relatively small infrastructure running cost

Cons

  • decommissioning provisioned nodes can be tricky (depending on the technology used)
  • data synchronization and access need orchestration (subject to the database type)
  • shipping new features requires a full application server deployment (downtime)

Thanks for staying, subscribe to my blog and leave me a comment below.

cheers