Memory Management in Embedded Systems: A Comparative Analysis of C and Rust

This report explores how Rust, with its focus on compile-time safety, is poised to redefine reliability and security in the resource-constrained world of embedded devices, contrasting its approach with C's traditional manual memory management.

Embedded Market Share (2024)

C remains dominant, but Rust's adoption is accelerating, driven by the demand for safer software.

The Memory Safety Crisis

Roughly 70% of critical security vulnerabilities at major tech companies are caused by memory safety bugs—the exact class of errors Rust is designed to prevent at compile time.[5, 1, 9, 2, 10, 11, 7]

Summary

Memory management is paramount in resource-constrained embedded systems, where efficiency and reliability are critical for system operation and safety. For decades, C has dominated this domain due to its low-level control and minimal runtime. However, this power comes with significant responsibility, as C's manual memory management is a notorious source of vulnerabilities such as buffer overflows, null pointer dereferences, and use-after-free errors. These issues frequently lead to system crashes, data corruption, and severe security risks, with studies indicating that a substantial percentage of software vulnerabilities stem from memory safety flaws.

Rust presents a compelling alternative, fundamentally shifting memory safety guarantees from runtime to compile time through its unique ownership and borrowing system. This approach eliminates entire classes of memory-related bugs without the overhead of a garbage collector, making it highly suitable for embedded environments. While C relies on programmer vigilance to prevent errors, Rust's compiler proactively enforces memory safety, leading to more robust and secure code. This report delves into the distinct memory management paradigms of C and Rust, illustrating their practical differences through code examples and discussing their implications for embedded development, including performance, binary size, and developer productivity.

1. Introduction to Memory Management in Embedded Systems

1.1. The Criticality of Memory in Resource-Constrained Environments

Embedded systems operate under strict limitations, with memory, both volatile RAM and non-volatile Flash, being a particularly constrained resource. In these environments, efficient and safe memory management is not merely a beneficial practice but a fundamental requirement for ensuring stable operation, preventing system failures, and meeting real-time performance targets. Even minor memory-related defects can escalate into severe and often catastrophic consequences, ranging from device malfunction and data corruption to critical safety hazards in applications such as medical devices or automotive control systems.[1, 2]

The Rust Embedded Working Group, for instance, explicitly targets "resource-constrained environments and non-traditional platforms"[3, 4], underscoring the inherent challenges of memory management in this specialized field. The memory footprint and CPU utilization are directly influenced by the chosen memory management strategy.[5] Furthermore, the prevalence of microcontrollers (MCUs) equipped with limited flash storage, such as those with only 64KB[6], makes the binary size of compiled software a critical factor in design and language selection.

The core challenge in embedded memory management extends beyond merely preventing errors; it necessitates doing so with minimal to zero runtime overhead. Rust's design philosophy is specifically engineered to achieve memory safety without relying on a garbage collector or introducing significant runtime memory management overhead.[7, 8] This is a crucial distinction for resource-constrained embedded environments. The language's underlying memory model directly impacts its practical viability for deeply embedded systems, where every byte of memory and every clock cycle is accounted for. This design choice in Rust, which avoids garbage collection, directly addresses the critical need for safety without sacrificing efficiency, a key factor in its growing adoption in domains traditionally dominated by C.

Beyond operational stability, memory-related bugs are consistently identified as primary vectors for security vulnerabilities. They are not simply sources of instability but can be exploited to compromise system integrity. Statistics from prominent technology companies like Microsoft and Google indicate that approximately 70% of their security vulnerabilities are memory safety issues that Rust's design would inherently prevent.[5, 1, 9, 2, 10, 11, 7] This highlights that a programming language's inherent approach to memory management directly determines the fundamental security posture of the resulting embedded firmware. Consequently, selecting a language with robust memory safety guarantees becomes a proactive security measure, elevating it beyond a mere bug-prevention strategy to a foundational element of system security and functional safety.

1.2. Overview of C's Traditional Approach to Memory Management

For over five decades, C has maintained its position as the dominant programming language for embedded systems development.[5, 2] Its enduring prevalence is primarily attributed to its direct access to hardware, minimal runtime requirements, and granular control over system resources. This unparalleled level of control has historically been indispensable for optimizing performance and resource utilization in highly constrained environments. However, this immense power is coupled with a significant responsibility: the programmer bears the entire burden of manual memory management.

C provides powerful yet manual memory management tools, primarily through functions like malloc() for allocation and free() for deallocation.[12] Despite the emergence of newer languages, C continues to hold a substantial market share, accounting for approximately 70% of embedded applications according to a 2024 Embedded.com Developer Survey.[5] Conversely, C is widely recognized for its susceptibility to memory-related bugs, including buffer overflows, null pointer dereferences, and data races.[2] These issues can lead to severe system malfunctions and security vulnerabilities. The reliance on C-based firmware in most embedded devices means that developers must possess extensive knowledge of the language's edge cases to mitigate potential issues [13], placing a high burden on individual programmers to ensure correctness.

C's memory management paradigm fundamentally relies on the programmer's vigilance, expertise, and strict adherence to best practices to prevent errors.[9, 13, 10] This dependence on human diligence is simultaneously the source of C's immense power, enabling highly optimized and custom memory layouts, and its major weakness, as it is prone to human error. Such errors can lead to unpredictable and often exploitable undefined behavior. This contrasts sharply with languages that offer compile-time enforced safety mechanisms.

The long-standing dominance of C, with its 70% market share [5], is not solely a testament to its technical advantages in all contexts. It is also significantly influenced by the vast accumulated legacy codebases, deep developer familiarity, and well-established toolchains.[5, 2, 10, 14] This creates a substantial barrier to the adoption of newer languages, even when they offer demonstrably superior safety features. The decision to transition away from C often involves overcoming considerable organizational and cultural inertia, extending beyond a purely technical evaluation of language capabilities.

2. The Challenge with C: Manual Memory Management

C provides unparalleled low-level control, but this power comes at a cost. The programmer is solely responsible for managing memory, a complex task where simple oversights can lead to critical, hard-to-debug vulnerabilities. This section details the mechanisms C provides for memory management and illustrates common pitfalls with specific code examples.

2.1. Heap Allocation and Deallocation (malloc, free)

Dynamic memory allocation in C, primarily facilitated by the malloc() and free() functions from stdlib.h, allows programs to request and release memory at runtime. This flexibility is powerful, enabling the management of data structures whose sizes are unknown or vary during execution. However, this power comes with the complete burden of memory lifecycle management placed squarely on the programmer.[15]

When malloc() is called, it reserves a block of memory on the heap and returns a pointer to its beginning. The programmer is then responsible for using this memory appropriately. Subsequently, free() must be called with the same pointer to deallocate the memory, making it available for reuse by the system.[15]

The use of malloc and free in C establishes an implicit, unenforced contract: the programmer is solely responsible for both allocating memory and correctly deallocating it exactly once. Failure to uphold this contract—such as forgetting to call free (leading to a memory leak), calling free multiple times on the same pointer (resulting in a double-free error), or attempting to access or modify memory after it has been freed (known as a use-after-free vulnerability)—leads directly to undefined behavior and potential system vulnerabilities. The C language provides the tools for memory management but offers no compile-time guarantees for their correct usage, relying entirely on programmer discipline.
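
The following minimal C sketch makes this contract concrete; the allocation size is arbitrary, and the commented-out lines mark the violations described above:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Allocate space for 10 integers on the heap; malloc may fail and return NULL.
    int *data = malloc(10 * sizeof *data);
    if (data == NULL) {
        return 1; // Allocation failed; there is nothing to free.
    }

    data[0] = 42;            // Use the memory while it is still valid.
    printf("%d\n", data[0]);

    free(data);              // Fulfil the contract: free exactly once.
    data = NULL;             // Defensive: a NULL pointer cannot be dereferenced by accident.

    // Common violations of the contract (all commented out):
    //   free(data);   // double-free, if 'data' still held the old address
    //   data[0] = 7;  // use-after-free through a dangling pointer
    //   (forgetting to call free at all would be a memory leak)
    return 0;
}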

2.2. Stack Allocation and Scope

The stack is a region of memory automatically managed by the compiler and hardware. It is primarily used for local variables, function parameters, and return addresses. Memory allocation on the stack follows a Last-In, First-Out (LIFO) principle, with data being pushed onto the stack when a function is called and popped off when the function returns.[16] This automatic management makes stack allocation generally safer and faster than heap allocation.

All data stored on the stack must have a known, fixed size at compile time.[16] When a function completes execution, its corresponding stack frame, containing its local variables and parameters, is automatically deallocated.[17] This deterministic and automatic cleanup prevents many common memory errors associated with manual management.

While memory management on the stack in C is largely automatic and inherently safe within its defined scope, the majority of memory safety issues and vulnerabilities in C programs originate from the heap. This is due to the manual control required for dynamic memory and the non-deterministic lifetimes of heap-allocated data. The contrast between the compile-time determined lifetimes of stack variables and the runtime-determined lifetimes of heap-allocated data represents a critical shift in complexity, making the heap the primary source of C's memory safety challenges.
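
The following short sketch illustrates the distinction: stack variables are reclaimed automatically when their function returns, which is safe only as long as no pointer to them outlives that stack frame:

#include <stdio.h>

// BAD: returns the address of a local variable whose stack frame
// is destroyed as soon as the function returns.
int *dangling_pointer(void) {
    int local = 123;   // Lives in this function's stack frame.
    return &local;     // Most compilers warn: address of local variable returned.
}

int main(void) {
    int x = 10;        // Stack-allocated; reclaimed automatically when main returns.
    printf("x = %d\n", x);

    int *p = dangling_pointer();
    // Dereferencing 'p' here is undefined behavior: the memory it points to
    // belonged to a stack frame that no longer exists.
    // printf("%d\n", *p);
    (void)p;
    return 0;
}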

2.3. Illustrative Code: Buffer Overflow Vulnerability

A buffer overflow is a classic and highly dangerous memory vulnerability that occurs when a program attempts to write more data into a fixed-size buffer than it can hold.[18] This excess data then overflows into adjacent memory locations, potentially corrupting legitimate data, leading to program crashes, or, in severe cases, allowing attackers to execute arbitrary code.

C standard library functions such as gets() and strcpy() are particularly notorious for enabling buffer overflows because they perform no bounds checking.[18, 19] If the source string provided to strcpy() is larger than the destination buffer, the function will simply continue writing past the buffer's boundary, leading to undefined behavior. Memory safety issues, including buffer overflows, account for a significant percentage of security vulnerabilities in software. For example, reports from major technology companies like Microsoft and Google indicate that approximately 70% of their security vulnerabilities are memory safety issues that Rust's design would prevent.[5, 7]

The following C code demonstrates a buffer overflow vulnerability:

#include <stdio.h> // Includes standard input/output functions (like printf)
#include <string.h> // Includes string manipulation functions (like strcpy)

// Function demonstrating Buffer Overflow
void handle_input_c(char *input_str) {
    // This is our "box" (buffer) that can hold 15 characters plus one special character (null terminator)
    // for a total of 16 bytes. If we put more than 15 characters, it will overflow.
    char buffer[16]; 
    printf("\n--- C: Buffer Overflow Example ---\n");
    printf("Input string length: %zu\n", strlen(input_str)); // Shows how long the input string is
    printf("Buffer size: %zu\n", sizeof(buffer)); // Shows how big our "box" is

    // *** VULNERABLE LINE ***
    // strcpy is dangerous! It copies the entire 'input_str' into 'buffer'
    // WITHOUT checking if 'buffer' is large enough.
    // If 'input_str' is too long, it will write past the end of 'buffer'.
    strcpy(buffer, input_str); 
    printf("Buffer content: %s\n", buffer); // Prints what's inside the buffer
    printf("Warning: Adjacent memory might be corrupted if input is too long.\n");
    // In a real program, this "spill-over" could overwrite important program data,
    // leading to crashes, security holes, or unpredictable behavior.
}

int main() {
    // A short string that fits safely into the buffer.
    char short_str[] = "Hello World"; 
    // A very long string designed to cause an overflow. It's much bigger than 15 characters.
    char long_str[] = "This is a very long string that will definitely overflow the buffer and cause problems.";

    // First, we call the function with a safe, short string. This works fine.
    handle_input_c(short_str);

    // Next, we call the function with the dangerously long string.
    // When this line runs, the 'strcpy' inside 'handle_input_c' will write too much data,
    // causing a buffer overflow. The program might crash immediately, or behave strangely later.
    handle_input_c(long_str);

    // This line might not be reached if the program crashes earlier due to the overflow.
    // If it does reach here, it means the program continued, but potentially with corrupted memory.
    printf("Program continued (potentially corrupted memory or crashed earlier).\n");
    return 0; // Indicates the program finished (if it didn't crash)
}

In this example, strcpy is used to copy an input string into a fixed-size buffer. When handle_input_c is invoked with long_str, which is 87 characters long, it significantly exceeds the buffer's capacity of 15 characters plus a null terminator. strcpy proceeds to write past the allocated memory boundary for buffer.[18, 19] This action results in undefined behavior. The consequences can vary widely, from a program crash (e.g., a segmentation fault) to silent corruption of adjacent data on the stack, or even the manipulation of control flow by overwriting critical data like return addresses. Such vulnerabilities are prime targets for security exploits.

C's fundamental lack of inherent bounds checking means that buffer overflows often lead to silent memory corruption before any noticeable crash or error.[2, 4, 18, 19, 20] This "silent" aspect makes these vulnerabilities exceptionally difficult to diagnose and debug in complex systems. More critically, it creates a fertile ground for security exploits where attackers can precisely control the overwritten memory without immediate detection. The resulting "undefined behavior" is a critical concept in C, as it signifies that the program's subsequent actions are entirely unpredictable and outside the language specification. This absence of explicit failure mechanisms is a profound difference when contrasted with Rust's approach, which prioritizes explicit error handling and prevents such issues at compile time.

2.4. Illustrative Code: Null Pointer Dereference

A null pointer dereference occurs when a program attempts to access the memory location pointed to by a pointer that holds a NULL value.[21] Since NULL signifies an invalid or non-existent memory address, this operation typically results in a program crash (e.g., a segmentation fault on Unix-like systems or an access violation on Windows) or other forms of undefined behavior.[1, 21] The C standard explicitly states that dereferencing a null pointer leads to undefined behavior, meaning the outcome is not specified and can vary unpredictably.[22] Such memory safety issues, including null pointer dereferences, are a major source of security vulnerabilities, with industry reports indicating that over 70% of common vulnerabilities and exposures at Microsoft stem from memory safety issues.[7]

The following C code demonstrates a null pointer dereference:

#include <stdio.h> // Includes standard input/output functions
#include <stdlib.h> // Includes standard library functions (like NULL)

// Function demonstrating Null Pointer Dereference
void process_data_c(int *data_ptr) {
    printf("\n--- C: Null Pointer Dereference Example ---\n");
    // In real-world C programming, it's CRUCIAL to check if a pointer is NULL before using it.
    // If this check is forgotten, the program will likely crash.
    // Example of a good check (commented out):
    // if (data_ptr == NULL) {
    //     printf("Error: Pointer is NULL. Cannot process data.\n");
    //     return; // Stop the function if the pointer is invalid
    // }
    
    // *** VULNERABLE LINE ***
    // This line attempts to store the value 10 into the memory location that 'data_ptr' points to.
    // If 'data_ptr' is NULL (pointing to nothing), this operation is invalid and will cause a crash
    // (often a "segmentation fault" or "access violation").
    *data_ptr = 10; 
    printf("Value assigned: %d\n", *data_ptr); // This line is usually not reached if a crash occurs
}

int main() {
    int valid_val = 50; // A normal integer variable with a value
    int *valid_ptr = &valid_val; // A pointer that correctly points to 'valid_val' (a valid memory location)
    int *null_ptr = NULL;        // A pointer that explicitly points to NOTHING (NULL)

    printf("Calling process_data_c with a valid pointer:\n");
    process_data_c(valid_ptr); // This call is safe because 'valid_ptr' points to valid memory.

    printf("\nCalling process_data_c with a NULL pointer (will likely crash):\n");
    // *** DANGEROUS CALL ***
    // If you uncomment the line below, the program will try to use 'null_ptr' inside 'process_data_c'.
    // Since 'null_ptr' points to nothing, this will cause a runtime crash.
    // process_data_c(null_ptr); 

    printf("Program continued (if it didn't crash).\n"); // This line might not be seen if the program crashes.
    return 0; // Indicates the program finished (if it didn't crash).
}

In this C example, null_ptr is explicitly initialized to NULL. If this NULL pointer is passed to process_data_c and the crucial if (data_ptr == NULL) check is omitted, a common oversight that causes real-world bugs, the line *data_ptr = 10; will attempt to write to an invalid memory address.[22] On most modern operating systems, this triggers a segmentation fault and an immediate program crash. Because the behavior is undefined, a crash is only the most common outcome; other, less predictable outcomes are technically possible. This scenario underscores C's reactive safety model, where errors are detected at runtime, often leading to abrupt program termination.

C's design implicitly trusts the programmer to ensure that pointers are valid before they are dereferenced. When this trust is violated, the failure manifests as a runtime crash, such as a segmentation fault. Such crashes are disruptive, difficult to recover from gracefully in embedded systems, and can be exploited by malicious actors.[1, 21] This highlights a reactive, rather than proactive, safety model, where the system responds to an error after it has occurred, rather than preventing it from happening in the first place.

Figure: Buffer Overflow. When more data is written to a fixed-size buffer (char buffer[16];) than it can hold, it spills into adjacent memory, corrupting data or enabling exploits.

3. Rust's Solution: Compile-Time Safety

Rust enforces a set of rules at compile time that guarantee memory safety without a garbage collector. This is achieved through its innovative Ownership, Borrowing, and Lifetimes system, which prevents errors before the program ever runs.

3.1. Ownership, Borrowing, and Lifetimes

Rust's core innovation for memory safety is its ownership model.[23, 16] This system is enforced at compile time, meaning it incurs no runtime overhead. Each value in Rust has a single owner, and when that owner goes out of scope, the value is automatically "dropped" (cleaned up).[16] This mechanism prevents common memory errors like double-frees and memory leaks without relying on a garbage collector or manual free calls.

The ownership model is particularly crucial for managing data on the heap, where sizes might be dynamic or unknown at compile time. For instance, the String type in Rust, which represents mutable, growable text, allocates its content on the heap. When a String variable is assigned to another, Rust performs a "move" operation by default, transferring ownership and invalidating the original variable.[23, 16] This prevents two variables from simultaneously owning and attempting to free the same heap memory, thus eliminating use-after-free and double-free vulnerabilities.[24] If a deep copy is desired, the clone() method must be explicitly called, which incurs a performance cost but clearly indicates the intent to duplicate data.[16] For fixed-size types like integers or booleans, Rust uses the Copy trait, which performs a simple bitwise copy, leaving the original variable valid.[16]

Beyond ownership, Rust introduces the concept of "borrowing" through references. A reference allows a piece of code to use data without taking ownership of it. Rust distinguishes between immutable references (&T), which allow multiple readers of data, and mutable references (&mut T), which grant exclusive read-write access.[23] The "borrow checker," a component of the Rust compiler, enforces strict rules:

  1. There can be multiple immutable references to a value.
  2. There can be only one mutable reference to a value at any given time.
  3. An immutable reference and a mutable reference cannot coexist for the same data.[23]

These rules prevent data races and ensure that data is not modified unexpectedly while being read. Lifetimes are another compile-time mechanism that ensures references remain valid for as long as they are used, preventing dangling references—pointers that outlive the data they refer to.[23]
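
A minimal sketch of these rules in action is shown below; the commented-out lines, if enabled, would be rejected by the borrow checker at compile time rather than failing at runtime:

fn main() {
    let mut data = vec![1, 2, 3];

    let r1 = &data; // First immutable borrow (rule 1: many readers allowed).
    let r2 = &data; // Second immutable borrow, also fine.
    println!("first: {}, second: {}", r1[0], r2[1]);

    // Uncommenting BOTH lines below is rejected at compile time with
    // error[E0502]: cannot borrow `data` as mutable because it is also borrowed as immutable
    // data.push(4);
    // println!("{}", r1[0]);

    // After the immutable borrows are no longer used, a single mutable borrow is allowed (rule 2).
    data.push(4);
    println!("{:?}", data);
}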

Rust's approach to memory safety represents a proactive safety model, where errors are caught at compile time before they can manifest at runtime.[9, 2, 25, 10, 11, 7] This contrasts sharply with C's reactive model, where memory errors typically lead to runtime crashes or undefined behavior. The compile-time enforcement of memory safety not only enhances the reliability of software but also significantly improves developer productivity. By guaranteeing memory safety, Rust enables "fearless concurrency" and "fearless refactoring".[26, 27, 2, 11] Developers can modify and restructure code with confidence, knowing that the compiler will flag any memory safety violations, rather than spending extensive time debugging subtle runtime bugs. This fundamental shift in error detection allows for faster development cycles and reduced long-term maintenance costs.

Ownership at a glance:

  1. Single Owner: each value in Rust has one, and only one, owner variable (for example, `s1` owning a heap-allocated String).
  2. Ownership Moves: when the value is assigned to another variable, ownership is moved and `s1` becomes invalid, preventing its accidental use.

This simple rule, enforced by the compiler, eliminates use-after-free and double-free errors entirely in safe Rust.

3.2. Illustrative Code: Preventing Buffer Overflows

Rust provides built-in protections against buffer overflows through its robust type system and runtime checks. Unlike C, where out-of-bounds access can lead to silent memory corruption, Rust ensures that all array and slice accesses are within their defined boundaries.[2, 13]

If a program attempts to access an element beyond a slice's valid range, Rust's runtime will cause the program to panic!.[2, 13, 28, 29] A panic! is Rust's way of handling unrecoverable errors, leading to a controlled program termination rather than unpredictable undefined behavior. This behavior ensures that unintended memory access is explicitly caught and handled, preventing potential security issues and data corruption.

The following Rust code demonstrates how buffer overflows are prevented:

fn main() { // The main function, where the program starts
    println!("--- Rust: Buffer Overflow Prevention Example ---"); // Prints a message to the console
    
    // This creates a fixed-size array named 'array' that can hold 5 numbers (u32 means unsigned 32-bit integer).
    // All elements are initialized to 0.
    let array: [u32; 5] = [0; 5]; 
    println!("Array length: {}", array.len()); // Prints the number of items in the array (which is 5)

    // This loop tries to go through the array.
    // It starts at index 0 and goes up to (but not including) 'array.len() + 1', which is 6.
    // So, it will try to access indices 0, 1, 2, 3, 4, and then 5.
    for index in 0..array.len() + 1 {
        // *** RUST'S SAFETY CHECK HERE ***
        // When 'index' becomes 5, Rust's built-in bounds check will detect that 'array[5]' does not exist.
        // Instead of silently overwriting memory (like C's strcpy), Rust will immediately stop the program
        // and tell you exactly what went wrong ("index out of bounds"). This is called a "panic!".
        println!("Index {}: {}", index, array[index]); 
    }

    // This line will NOT be reached if the program stops due to a panic.
    // This shows that Rust prevents the program from continuing with potentially corrupted memory.
    println!("This line will not be reached if a panic occurs.");
}

When this Rust code is compiled and executed, the loop for index in 0..array.len() + 1 attempts to iterate from index 0 up to (but not including) 6. When index reaches 5, the line println!("Index {}: {}", index, array[index]) attempts to access array[5]. Since array is defined with a length of 5, its valid indices are 0 through 4. Rust's bounds checking mechanism detects this out-of-bounds access and immediately triggers a panic!, halting the program with a clear error message indicating "index out of bounds".[2, 13]

This behavior highlights Rust's deterministic error handling. Unlike C, where a buffer overflow might lead to silent memory corruption and unpredictable undefined behavior, Rust provides a predictable and controlled response through panicking.[2, 13, 30] This proactive detection and explicit error handling during runtime significantly reduces the risk of exploitable vulnerabilities and makes debugging much more straightforward, as the program fails loudly and predictably at the point of error.
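
Where halting the program on a bad index is undesirable, for example when the index comes from external input, the same bounds information is available without panicking through the slice's checked accessor, which returns an Option; a brief sketch:

fn main() {
    let array: [u32; 5] = [0; 5];

    // `get` performs the same bounds check but returns Option<&u32> instead of panicking.
    match array.get(5) {
        Some(value) => println!("Value: {}", value),
        None => println!("Index 5 is out of bounds; handled without a panic."),
    }
}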

3.3. Illustrative Code: Preventing Null Pointer Dereferences

Rust eliminates the concept of null pointers, a common source of bugs and vulnerabilities in C, by using the Option enum.[1, 25, 31, 32] The Option<T> enum has two variants: Some(T), which indicates the presence of a value of type T, and None, which indicates the absence of a value.[32] This design forces developers to explicitly handle both the presence and absence of a value at compile time, preventing the implicit trust in pointer validity that often leads to null pointer dereferences in C.

When a function or data structure might logically have an "empty" or "missing" value, it uses Option<T> instead of a nullable pointer. This means that before a value can be used, it must be explicitly unwrapped or pattern-matched to determine if it is Some or None.[32]

The following Rust code demonstrates how null pointer dereferences are prevented:

// This function simulates looking for a value.
// It returns 'Option<i32>', meaning it might return an integer (i32) or nothing.
fn find_value(key: &str) -> Option<i32> {
    if key == "exists" {
        Some(42) // If the key is "exists", we put the number 42 into the 'Some' box.
    } else {
        None // Otherwise, the box is 'None' (empty).
    }
}

// This function tries to use the value returned by 'find_value'.
// It takes an 'Option<i32>' as input.
fn process_optional_data(data: Option<i32>) {
    println!("\n--- Rust: Null Pointer Prevention Example ---"); // Prints a message

    // The 'match' statement is like a smart switch that checks what's inside the 'data' box.
    match data {
        Some(value) => { // If the box contains a value (it's 'Some(value)')...
            // This code runs ONLY if a value is present. 'value' is guaranteed to be valid.
            println!("Value found: {}", value); // We can safely print and use the value.
        }
        None => { // If the box is empty (it's 'None')...
            // This code runs ONLY if no value is found.
            println!("No value found. Cannot process data."); // We handle the absence gracefully.
            // There's no attempt to use a non-existent value, so no crash occurs.
        }
    }
}

fn main() { // The main function where the program starts
    // First scenario: We look for a key that "exists".
    let result1 = find_value("exists"); 
    process_optional_data(result1); // This will print "Value found: 42" because 'result1' contains a value.

    // Second scenario: We look for a key that "non_existent".
    let result2 = find_value("non_existent"); 
    process_optional_data(result2); // This will print "No value found. Cannot process data." because 'result2' is empty.

    // *** RUST'S SAFETY CHECK HERE (commented out for demonstration) ***
    // If you try to directly use the value without checking if the 'Option' box is empty,
    // Rust will force you to "unwrap" it. If the box is 'None' (empty) and you try to unwrap,
    // the program will stop with a "panic!" error, telling you exactly what went wrong.
    // This prevents the silent crashes that happen in C.
    // let val = find_value("non_existent").unwrap(); 
    // println!("Directly unwrapped value: {}", val);
}

In this Rust example, the find_value function returns an Option<i32>, clearly indicating that it might or might not return an integer. The process_optional_data function then uses a match expression to explicitly handle both Some(value) and None cases. This forces the programmer to consider the possibility of a missing value, making it impossible to accidentally dereference a non-existent value.[32] If one were to attempt to directly unwrap() a None value (as shown in the commented-out line in main), the program would panic! in a controlled manner, rather than leading to undefined behavior or a segmentation fault.

Rust's Option enum effectively shifts null-related errors from runtime to compile time.[1, 25, 31, 32] By encoding the possibility of absence directly into the type system, the compiler ensures that developers explicitly handle all potential scenarios where a value might be missing. This proactive enforcement prevents the common pitfalls of null pointer dereferences, enhancing both the safety and predictability of embedded software.

3.4. Illustrative Code: Preventing Use-After-Free

Use-after-free vulnerabilities occur when a program attempts to access memory that has already been deallocated. In C, this typically happens when a pointer is used after free() has been called on the memory it points to, leading to unpredictable behavior, data corruption, or security exploits.[15, 33] Rust's ownership and borrowing system is specifically designed to prevent these types of errors at compile time, ensuring that memory is freed exactly once and is not accessible thereafter by invalid pointers.[23, 16, 25, 24]

The core principle is single ownership: each piece of data in Rust has one owner, and when that owner goes out of scope, the data is automatically deallocated via its drop function.[16] When ownership is transferred (e.g., by assigning a variable to another for heap-allocated types like String), the original variable becomes invalid, preventing any subsequent access to the now-moved data.

The following Rust code demonstrates how use-after-free vulnerabilities are prevented:

fn main() { // The main function where the program starts
    println!("--- Rust: Use-After-Free Prevention Example ---"); // Prints a message

    // 1. Create a String: 's1' now "owns" the text "hello, world" in memory.
    // 'String' typically stores its text on the heap (dynamic memory).
    let s1 = String::from("hello, world"); 
    println!("s1 created: {}", s1); // Prints the content of s1

    // 2. Move ownership from 's1' to 's2'.
    // This is NOT a copy. The ownership of the text "hello, world" is transferred to 's2'.
    // After this line, 's1' is no longer valid and cannot be used.
    let s2 = s1; 
    println!("s2 created (ownership moved): {}", s2); // Prints the content of s2

    // 3. Attempt to use 's1' after it has been moved (this is a "use-after-move" attempt).
    // *** RUST'S SAFETY CHECK HERE ***
    // If you uncomment the line below, the Rust compiler will immediately give you an error!
    // It will say: "error[E0382]: borrow of moved value: `s1`".
    // This means Rust caught the potential error BEFORE the program even runs, preventing a crash.
    // println!("Attempting to use s1 after move: {}", s1); 

    // To explicitly copy the data (if you need both 's2' and a new variable to have the text),
    // you must use the '.clone()' method. This creates a completely new copy in memory.
    // Note: 'clone()' can sometimes be slower as it duplicates data.
    let s3 = s2.clone(); // 's3' now owns a separate copy of the data. 's2' is still valid.
    println!("s3 cloned from s2: {}", s3); // Prints the content of s3

    // 's2' is still valid here because 's3' is a clone (a new copy), not a move.
    println!("s2 still valid after clone: {}", s2); // Prints the content of s2

    // When 's2' and 's3' go out of scope (e.g., at the end of the 'main' function),
    // their respective heap allocations will be automatically "dropped" (freed) by Rust,
    // ensuring memory is cleaned up exactly once, safely and automatically.
}

In this Rust example, s1 is initialized as a String, which allocates data on the heap. When let s2 = s1; is executed, Rust performs a "move" operation. This means that the ownership of the heap-allocated string data is transferred from s1 to s2. Crucially, s1 is then invalidated by the compiler.[24] If a programmer attempts to use s1 after this move (as shown in the commented-out println! line), the Rust compiler will produce an error: error[E0382]: borrow of moved value: `s1`. This compile-time error prevents the use-after-free bug from ever reaching runtime.

Rust's ownership system, through the concept of "moving" data and invalidating the original owner, ensures that there is always exactly one owner for any piece of heap data. This single ownership guarantees that memory is freed precisely once when its owner goes out of scope, thereby eliminating use-after-free vulnerabilities.[23, 16, 25, 24] This compile-time enforcement of resource management is a fundamental aspect of Rust's safety guarantees, preventing these critical memory errors before the program even executes.

4. Memory Management for Embedded Systems: C vs. Rust in Practice

The theoretical differences in memory management between C and Rust translate into significant practical implications for embedded systems development, affecting everything from application architecture to performance and binary size.

4.1. Static vs. Dynamic Allocation in no_std Environments

Embedded systems often operate in no_std environments, meaning they do not have access to the Rust standard library, which includes the global heap allocator.[34, 35] In such contexts, dynamic memory allocation (heap) is typically absent by default, and developers primarily rely on static allocation, where memory is reserved at compile time.

In C, static allocation is straightforward using global or static variables. For dynamic-like behavior without a heap, C developers might employ custom memory pools or arenas, manually managing blocks of memory from a pre-allocated static buffer. This requires meticulous manual tracking to prevent fragmentation and ensure correct deallocation.
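
A heavily simplified sketch of such a pool is shown below: a bump allocator carved out of a static buffer, with no support for freeing or reusing individual blocks. Real pools add free lists, alignment handling, and error reporting, all of which must be maintained by hand:

#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 1024

static uint8_t pool[POOL_SIZE];   // Reserved at link time; no heap involved.
static size_t  pool_offset = 0;

// Hand out the next 'size' bytes from the pool, or NULL if the pool is exhausted.
// Alignment is ignored here for brevity; production code must handle it.
static void *pool_alloc(size_t size) {
    if (size > POOL_SIZE - pool_offset) {
        return NULL;              // Out of statically reserved memory.
    }
    void *ptr = &pool[pool_offset];
    pool_offset += size;
    return ptr;
}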

Rust, in a no_std context, also defaults to static allocation. However, it offers flexibility: dynamic memory allocation is optional.[3] Developers can choose to completely omit the heap and statically allocate everything, ensuring maximum predictability and minimal runtime overhead. If dynamic data structures are required, a global allocator can be explicitly enabled and provided, often through a third-party crate designed for embedded use. This optionality and explicit control over heap usage allow Rust developers to tailor memory management precisely to the constraints and requirements of their embedded target. This ability to entirely avoid the heap for maximum predictability is a significant advantage, particularly for ultra-constrained or safety-critical applications where any runtime unpredictability is unacceptable.
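
One common pattern in no_std Rust is to use fixed-capacity collections that live on the stack or in static memory instead of the heap. The sketch below assumes the third-party heapless crate, whose Vec returns an error instead of allocating when its compile-time capacity is exceeded:

use heapless::Vec; // Fixed-capacity vector; no global allocator required.

fn collect_samples() -> Vec<u16, 8> {
    // The capacity (8) is part of the type and reserved up front; no heap is involved.
    let mut samples: Vec<u16, 8> = Vec::new();
    for i in 0..10u16 {
        // push returns Err(value) once capacity is reached, forcing the caller
        // to handle the "out of memory" case explicitly.
        if samples.push(i).is_err() {
            break;
        }
    }
    samples
}

fn main() {
    let samples = collect_samples();
    assert_eq!(samples.len(), 8); // The last two pushes were rejected, not silently dropped.
}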

4.2. Concurrency and Data Races

Concurrency in embedded systems, involving interrupt handlers, multiple threads, or multi-core processors, introduces complex challenges, particularly concerning shared mutable data and race conditions.[36, 37] In C, managing shared resources requires careful manual synchronization using mechanisms like mutexes, semaphores, or disabling interrupts. Failure to implement these correctly can lead to subtle, hard-to-debug data races and undefined behavior.

Rust's "fearless concurrency" is a cornerstone of its design, leveraging its ownership and type systems to prevent data races at compile time.[36, 26, 37, 2, 3] The borrow checker ensures that shared mutable data can only be accessed safely, either through immutable references (allowing multiple readers) or a single mutable reference (exclusive write access). This compile-time guarantee eliminates an entire class of concurrency bugs that plague C development. Rust's Send and Sync traits further extend these guarantees to user-defined types, ensuring that data is safe to transfer or share between threads.[26, 37]

For embedded contexts, Rust provides specialized mechanisms. Critical sections, implemented via cortex_m::interrupt::free, temporarily disable interrupts to ensure exclusive access to shared data on single-core systems.[37] On platforms supporting them, atomic instructions offer more efficient, non-blocking synchronization.[37] Higher-level frameworks like RTIC (Real-Time Interrupt-driven Concurrency) and Embassy provide structured approaches to concurrency, simplifying the management of tasks and interrupts while maintaining Rust's safety guarantees.[37, 38, 34] RTIC, for example, statically enforces priorities and tracks resource access to prevent deadlocks and race conditions with minimal overhead.[37, 38] Embassy, an asynchronous framework, enables efficient, non-blocking concurrency using Rust's async/await syntax, simplifying I/O-bound applications.[38, 34]
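
The following sketch shows the critical-section pattern described above, assuming a Cortex-M target and the cortex-m crate; the shared counter and function name are illustrative:

use core::cell::RefCell;
use cortex_m::interrupt::{self, Mutex};

// Shared between the main loop and an interrupt handler.
// The Mutex here is interrupt-based: its contents are only reachable inside
// a critical section, so no data race is possible.
static COUNTER: Mutex<RefCell<u32>> = Mutex::new(RefCell::new(0));

fn increment_from_main() {
    // interrupt::free disables interrupts for the duration of the closure,
    // yielding a CriticalSection token 'cs' that unlocks the Mutex.
    interrupt::free(|cs| {
        *COUNTER.borrow(cs).borrow_mut() += 1;
    });
}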

Rust's type system proactively prevents data races at compile time, leading to significantly fewer concurrency-related bugs compared to languages with less safety enforcement.[36, 26, 37, 2, 11, 39, 7] This guaranteed thread safety translates directly into faster development cycles and reduced long-term maintenance costs, as developers spend less time debugging elusive race conditions. This is a profound advantage in embedded systems where concurrency is often a necessity.

4.3. Hardware Interaction: Memory-Mapped Registers and HALs

Both C and Rust interact with low-level hardware, particularly memory-mapped registers, which are essentially mutable global state.[40] In C, direct access to these registers is achieved through volatile pointers and manual address manipulation. This provides maximum control but is highly error-prone, requiring the programmer to carefully manage read/write operations and ensure correct bit manipulation.
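
A typical C idiom for such register access is shown below; the register address and bit position are made up purely for illustration:

#include <stdint.h>

// Hypothetical memory-mapped output data register of a GPIO peripheral.
// 'volatile' tells the compiler that every access really must touch the hardware.
#define GPIO_ODR (*(volatile uint32_t *)0x40020014u)

static void led_on(void) {
    GPIO_ODR |= (1u << 5);   // Set bit 5. Nothing stops us from writing a
                             // reserved bit or the wrong address entirely.
}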

Rust provides a more structured and safer approach to hardware interaction. Tools like svd2rust generate Rust register maps (structs) directly from vendor-provided SVD (System View Description) files.[2, 4] These generated structs provide type-safe, high-level abstractions for interacting with peripherals, preventing common errors such as writing to reserved bits or accessing non-existent registers.[41, 42]

The embedded-hal (Hardware Abstraction Layer) project is a crucial component of the Rust embedded ecosystem.[5, 2, 43, 4] It defines a set of traits that standardize interfaces for common embedded peripherals (e.g., GPIO, I2C, SPI, Timers).[43] This allows hardware-agnostic drivers to be written, promoting code reusability across different microcontrollers and reducing complexity. HAL implementations for specific chips then implement these traits, providing the low-level interaction.[43]
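
The brief sketch below illustrates how embedded-hal traits enable hardware-agnostic drivers; it assumes the embedded-hal 1.0 OutputPin trait and compiles against any HAL pin type that implements it:

use embedded_hal::digital::OutputPin;

// A driver written only against the trait: it works with any MCU whose HAL
// provides an OutputPin implementation, even though GPIO types differ per chip.
fn set_error_led<P: OutputPin>(led: &mut P, error: bool) -> Result<(), P::Error> {
    if error {
        led.set_high() // Both calls return Result; the HAL decides what can fail.
    } else {
        led.set_low()
    }
}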

While Rust aims for safety, direct hardware interaction sometimes necessitates the use of unsafe blocks to perform raw memory writes or reads.[25, 35, 44] However, a core principle in embedded Rust is to minimize and encapsulate unsafe code within safe abstractions (like peripheral access crates and HALs).[25, 35, 44] This means that while the underlying implementation might use unsafe, the end-user code interacting with the HAL remains "safe Rust," benefiting from the compiler's guarantees. This structured approach to low-level access ensures that the majority of application code can be written without fear of memory errors, even when dealing with complex hardware.

4.4. Binary Size and Performance Considerations

When comparing Rust and C in embedded systems, binary size and runtime performance are critical metrics. While C has historically been favored for its minimal binary footprint, Rust's capabilities in this area are rapidly maturing.

A common observation is that a simple Rust "hello world" or "blinky" program can initially result in a larger binary size compared to its C equivalent, especially when using higher-level abstractions like async/await frameworks (e.g., Embassy) or certain Hardware Abstraction Layer (HAL) features.[6] For instance, an Embassy blinky example in Rust might be approximately 10 times larger in .text section size than a comparable C blinky example.[6] This can be a significant challenge for microcontrollers with very small flash storage, such as those with only 64KB, where even a basic application might struggle to fit.[6] This size difference is often attributed to Rust's default optimizations favoring execution speed and debuggability, the monomorphization of generics (where the compiler generates specialized code for each type a generic function is used with), and the inclusion of panic unwinding information.[13, 45, 46]

However, Rust provides extensive mechanisms to optimize binary size for embedded systems:

  • Release Mode and Optimization Levels: Building in release mode (cargo build --release) significantly reduces binary size compared to debug builds, often by 30% or more.[46] Further, setting opt-level = "z" in Cargo.toml explicitly instructs the compiler to prioritize size optimization.[45, 46]
  • Stripping Symbols: Setting strip = true in the release profile tells Cargo to strip debugging information and other metadata from the binary, which can drastically reduce its size, especially for smaller programs.[47, 45, 46]
  • Link Time Optimization (LTO): Enabling lto = true allows the linker to perform whole-program optimizations, such as dead code elimination, leading to smaller binaries.[45, 46]
  • Code Generation Units: Reducing codegen-units to 1 in release profiles allows for more aggressive inter-crate optimizations, further reducing size at the cost of compile time.[46]
  • Panic Behavior: Changing panic = "abort" in Cargo.toml prevents stack unwinding on panic, removing associated code and reducing binary size.[46]
  • Dependency Management: Tools like cargo-bloat can analyze binary contributions from dependencies, helping developers identify and replace heavy crates or disable unnecessary features.[45]
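
Combined, these settings might appear in a project's Cargo.toml as the following release profile (a sketch assembling the options listed above):

# Cargo.toml
[profile.release]
opt-level = "z"     # Optimize for size rather than speed.
lto = true          # Whole-program link-time optimization.
codegen-units = 1   # Fewer codegen units enable more aggressive cross-crate optimization.
strip = true        # Strip symbols and debug info from the final binary.
panic = "abort"     # Drop unwinding machinery; panics terminate immediately.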

When well-optimized, the runtime performance characteristics of Rust and C/C++ in embedded systems are generally comparable in terms of CPU utilization and memory footprint.[5] Both languages compile to native code without requiring a runtime environment, making them suitable for resource-constrained devices. Rust's "zero-cost abstractions" ensure that high-level language features do not incur performance penalties at runtime.[5, 2] While C++ might have slight advantages in highly specialized, hand-optimized code, Rust's compile-time guarantees can sometimes enable more aggressive compiler optimizations.[5]

While Rust can have larger binaries by default, it offers extensive tools and configuration options to optimize for size, allowing it to achieve a footprint comparable to C for many embedded applications.[5, 47, 6, 45, 46] The choice between the languages often becomes a trade-off between the initial effort in size optimization for Rust and the ongoing vigilance required for memory safety in C.

Binary Size: A "Blinky" App Comparison

While a default Rust binary can be larger, aggressive optimization brings it into a competitive range for resource-constrained devices, trading a small size increase for massive safety gains.

5. Ecosystem, Tooling, and Adoption Landscape

Rust's power is amplified by a modern, integrated toolchain and a vibrant community building foundational libraries and frameworks for embedded development.

5.1. Maturity of the Embedded Rust Ecosystem

The embedded Rust ecosystem is experiencing rapid growth, though it is still less comprehensive and mature compared to the long-established C/C++ ecosystem.[2, 9] Key components and initiatives are actively being developed to support embedded development:

  • Hardware Abstraction Layers (HALs): The embedded-hal project provides a set of traits that define standardized interfaces for interacting with microcontroller peripherals, promoting portability and reusability of drivers.[2, 28, 34]
  • Peripheral Access Crate (PAC) Generation: Tools like svd2rust automatically generate Rust register maps (structs) directly from vendor-provided System View Description (SVD) files, simplifying low-level hardware interaction.[9, 34, 35, 36]
  • Concurrency Frameworks: RTIC (Real-Time Interrupt-driven Concurrency) and Embassy are prominent frameworks that simplify real-time and asynchronous application development, respectively.[9, 10, 12]
  • Community-Driven Development: The Rust on Embedded Devices Working Group is an official part of the Rust language project, dedicated to improving the end-to-end experience for embedded Rust developers.[34] They maintain curated lists of resources like awesome-embedded-rust and provide extensive documentation through books like "The Embedded Rust Book" and "The Discovery Book".[20, 34]

Despite this growth, traditional microcontroller vendors such as STMicroelectronics, NXP, and Microchip have not yet fully prioritized official Rust support and libraries, which can be a barrier to broader adoption.[37] However, several companies, including Ferrous Systems and Tweede Golf, are now offering commercial support for embedded Rust development, indicating increasing industry interest and a growing professional ecosystem.[2]

Rust is increasingly being adopted for production use cases where its strengths in safety and reliability are paramount. These include:

  • Automotive and Robotics: Companies are using Rust for new safety-critical components and for developing highly efficient, real-time control systems.[2, 7] Examples include EVA ICS, which built an industrial automation platform entirely in Rust, and Arculus, which integrates Rust for robot control.[7] Volvo Cars also sees potential for higher quality code and reduced warranty costs with Rust.[7]
  • IoT Devices: Manufacturers value Rust's security properties for networked devices, especially those exposed to internet input.[2, 13]
  • Aerospace and Defense: Rust is being evaluated for systems where memory safety is critical.[2]
  • Specific Implementations: Android's Bluetooth stack was rewritten in Rust, eliminating all memory safety bugs.[2] AWS reports zero memory safety bugs in production Rust services.[2] Sensirion used Rust for a fast and robust embedded demonstrator, benefiting from easy cross-compilation and high-quality crates.[20] A production system for connecting battery storage to cloud services for monitoring and remote control was successfully implemented in Rust, addressing reliability issues present in a previous C implementation.[19]

The current state of the embedded Rust ecosystem reveals a notable gap between the rapid, community-driven development of tools and frameworks and the slower pace of adoption and official support from traditional hardware vendors. While the Rust Embedded Working Group and numerous open-source projects are actively building a robust foundation, the lack of widespread, official Rust support from major chip manufacturers can limit the immediate accessibility and ease of use for developers accustomed to vendor-provided C SDKs. This disparity means that broad industry adoption may take longer, as it depends on these vendors recognizing and responding to the growing demand for Rust. Nevertheless, the increasing number of companies offering commercial support for embedded Rust signals a maturing market and a growing professional ecosystem.

Despite the current ecosystem limitations, Rust's inherent strengths position it as a strategic choice for early adopters in specific, high-stakes domains. Its unparalleled memory safety and robust concurrency guarantees make it particularly attractive for safety-critical applications (e.g., automotive, medical devices) and networked IoT devices where security vulnerabilities can have severe consequences.[2, 4, 9] The ability to prevent a large percentage of security bugs at compile time translates into significant long-term cost savings and enhanced reliability, even if the initial development might require more effort due to less mature vendor-specific libraries. This suggests that Rust is not aiming to replace C overnight across all embedded domains but is carving out a crucial niche where its unique value proposition directly addresses the most pressing industry challenges.

5.2. Development Workflow and Tooling

Rust offers a modern and integrated development workflow that provides significant advantages over traditional C/C++ embedded development, particularly in terms of build management, debugging, and hardware interaction.

  • Build System (Cargo): Cargo, Rust's official package manager and build system, greatly simplifies dependency management, a notable improvement over conventional C++ build systems.[2, 9, 38] It streamlines project creation, compilation, and dependency resolution, fostering a more organized and reusable codebase.[7]
  • Debugging and Flashing Tools: The probe-rs project provides a modern, open-source debugging toolkit written in Rust. It offers direct interaction with debug probes (supporting ARM and RISC-V targets) and eliminates the need for an intermediate GDB layer.[10, 11] probe-rs includes various tools like cargo-flash for downloading compiled programs to target devices and cargo-embed for an extended debugging experience with GDB and RTT (Real-Time Transfer) support.[11] Modern IDEs like VS Code and IntelliJ CLion also offer robust Rust support, including debugging capabilities via the Microsoft Debug Adapter Protocol (DAP).[11, 20, 39, 40]
  • Hardware Interaction Tools: svd2rust is a crucial tool that generates Rust register maps (structs) from SVD files, providing a safe and idiomatic Rust interface for memory-mapped registers.[9, 34, 35, 36] This allows developers to interact with hardware peripherals using high-level Rust types rather than raw memory addresses.
  • Interoperability with C/C++: Rust provides robust Foreign Function Interface (FFI) capabilities, allowing seamless integration with existing C codebases. Developers can call C functions from Rust and vice versa.[9, 38] For C++ interoperability, the cxx library offers a safe mechanism for calling C++ code from Rust and Rust code from C++.[38] This feature is vital for incremental adoption, enabling teams to introduce Rust into existing projects without a complete rewrite, leveraging established C/C++ libraries.[9, 38]
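
As an illustration of this incremental-adoption path, the minimal FFI sketch below declares and safely wraps a C routine; the function name and signature are hypothetical stand-ins for an existing C driver:

// Declare an existing C routine so Rust can call it (hypothetical name and signature).
extern "C" {
    fn legacy_uart_init(baud_rate: u32) -> i32;
}

// Wrap the unsafe FFI call in a safe, idiomatic Rust API.
pub fn uart_init(baud_rate: u32) -> Result<(), i32> {
    // Calling across the FFI boundary is unsafe: the compiler cannot verify
    // that the C side upholds Rust's guarantees.
    let rc = unsafe { legacy_uart_init(baud_rate) };
    if rc == 0 { Ok(()) } else { Err(rc) }
}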

Rust's integrated toolchain, exemplified by Cargo and probe-rs, offers a significantly more streamlined and productive development experience compared to the often fragmented and complex workflows in traditional C/C++ embedded development. The ability to manage dependencies effortlessly, cross-compile with ease, and debug effectively from a unified environment reduces setup time and cognitive load for developers. This modern approach to tooling enhances overall developer productivity and allows teams to focus more on application logic rather than build system complexities.

The robust FFI in Rust is a critical enabler for its adoption in the embedded space. It allows organizations to incrementally introduce Rust into existing C/C++ codebases, rather than requiring a costly and risky full rewrite. This interoperability means that new modules or safety-critical components can be developed in Rust, while still leveraging a vast existing ecosystem of C/C++ libraries and drivers. This pragmatic approach mitigates the "chicken-and-egg" problem of ecosystem maturity, allowing Rust to gain traction by coexisting with established codebases, ultimately accelerating its real-world deployment and demonstrating its benefits in a controlled manner.

5.3. Learning Curve and Community Adoption

The adoption of Rust in embedded systems, while growing, faces certain challenges, particularly concerning the learning curve for developers accustomed to C/C++ and the overall community adoption rate.

  • Learning Curve: Rust's unique ownership and borrowing model can be initially daunting for developers deeply familiar with C or C++.[9, 41, 42] The compiler's strictness, while ultimately beneficial for safety, requires a shift in mindset and can lead to initial frustration as developers learn to satisfy the borrow checker. Developers need to understand "what's Rust doing that you do not see" compared to C's more direct memory manipulation.[42]
  • Community and Resources: Despite the learning curve, Rust boasts a strong and highly active open-source community. The Rust on Embedded Devices Working Group is an official entity dedicated to improving the embedded experience.[34] Comprehensive documentation is available, including "The Embedded Rust Book" (for those familiar with embedded development), "The Discovery Book" (for beginners), and "The Embedonomicon" (for deeper insights into foundational libraries).[9, 20, 34] This rich set of resources and a supportive community are crucial for helping new developers overcome the initial learning hurdles.[20]
  • Adoption Challenges: While Rust's adoption is increasing, it is not yet universal. Many developers still prefer C due to familiarity and the vast existing codebases.[9, 42] The availability of Rust developers is also currently lower compared to C/C++.[9]

The learning curve for C developers transitioning to Rust is largely a conceptual one, involving a shift from C's manual control and implicit assumptions about memory to Rust's compiler-enforced safety and explicit ownership model. This requires developers to internalize new paradigms like borrowing and lifetimes, which can feel restrictive at first but ultimately lead to more robust and predictable code. The initial investment in learning these concepts is a trade-off for the long-term benefits of reduced debugging time and enhanced reliability.

The vibrant and highly supportive Rust community, coupled with its extensive and well-maintained documentation, acts as a significant force multiplier in overcoming the steep learning curve and driving broader adoption. The availability of official working groups, comprehensive learning materials, and active forums provides a strong support system for developers. This collaborative environment fosters knowledge sharing and accelerates the development of new tools and libraries, making Rust increasingly accessible and practical for embedded systems programming. The community's commitment to improving the developer experience is a key factor in Rust's promising trajectory in this domain.

6. Conclusions

Rust offers a path to building more secure, reliable, and maintainable embedded systems.

The comparative analysis of memory management in C and Rust for embedded systems reveals distinct paradigms with significant implications for reliability, security, and development practices. C, with its long-standing dominance, offers unparalleled low-level control and minimal runtime, which has been crucial for resource-constrained environments. However, this power comes at the cost of manual memory management, placing the entire burden of correctness on the programmer. This reliance on human vigilance is the primary source of C's notorious memory-related vulnerabilities, such as buffer overflows, null pointer dereferences, and use-after-free errors. These issues frequently lead to unpredictable undefined behavior, system crashes, and are a major vector for security exploits, as evidenced by industry statistics attributing a large percentage of vulnerabilities to memory safety flaws.

Rust fundamentally redefines memory management by shifting safety guarantees from runtime to compile time through its unique ownership and borrowing system. This proactive approach eliminates entire classes of memory bugs, including buffer overflows and use-after-free errors, before the code is even executed. The Option enum replaces nullable pointers, forcing explicit handling of absent values and preventing null pointer dereferences. While C's failures are often reactive runtime crashes, Rust provides deterministic error handling through controlled panics. This compile-time enforcement not only enhances software reliability and security but also fosters "fearless concurrency" and "fearless refactoring," significantly boosting developer productivity.

In practical embedded development, both languages can achieve comparable runtime performance when code is well-optimized. While Rust binaries can initially be larger due to default optimizations and monomorphization, the language offers extensive tools and configuration options to aggressively optimize for size, often achieving footprints competitive with C. Rust's no_std environment provides flexibility to avoid dynamic allocation entirely, and its embedded-hal and svd2rust tools offer structured, type-safe abstractions for hardware interaction, encapsulating necessary unsafe operations.

In conclusion, C remains a viable choice for legacy systems and highly specialized, hand-optimized code where absolute minimal size and direct control are paramount, and where developer expertise can mitigate its inherent memory safety risks. However, for modern embedded applications, particularly those with real-time operating systems (RTOS) where safety, security, and concurrency are critical, Rust presents a compelling and increasingly mature alternative. Its compile-time memory safety guarantees, robust concurrency model, and growing ecosystem position it as a transformative language that can lead to safer, more reliable, and more maintainable embedded software. The trade-off involves a steeper initial learning curve for C developers adapting to Rust's ownership model, but the long-term benefits in reduced debugging, enhanced security, and improved code quality often justify this investment.

References

  1. https://krishnag.ceo/blog/understanding-cwe-476-null-pointer-dereference/
  2. https://cppcat.com/rust-vs-c-for-embedded-systems/
  3. https://www.rust-lang.org/what/embedded
  4. https://github.com/rust-embedded
  5. https://technology.nirmauni.ac.in/rust-the-embedded-language-thats-outperforming-c/
  6. https://www.reddit.com/r/learnrust/comments/1l69r0x/the_mystery_of_the_rust_embedded_binary_size/
  7. https://www.thoughtworks.com/en-us/insights/blog/programming-languages/rust-automotive-software
  8. https://os.phil-opp.com/async-await/
  9. https://www.embedded.com/memory-safety-in-rust/
  10. https://www.trust-in-soft.com/resources/blogs/rusts-hidden-dangers-unsafe-embedded-and-ffi-risks
  11. https://runsafesecurity.com/blog/convert-c-to-rust/
  12. https://owasp.org/www-community/vulnerabilities/Using_freed_memory
  13. https://www.nccgroup.com/us/research-blog/rust-for-security-and-correctness-in-the-embedded-world/
  14. https://www.ko2.co.uk/rust-vs-c-plus-plus/
  15. https://cqr.company/web-vulnerabilities/use-after-free-vulnerability/
  16. https://dev.to/francescoxx/ownership-in-rust-57j2
  17. https://stackoverflow.com/questions/4007268/what-exactly-is-meant-by-de-referencing-a-null-pointer
  18. https://www.jsums.edu/nmeghanathan/files/2015/05/CSC437-Fall2013-Module-5-Buffer-Overflow-Attacks.pdf
  19. https://github.com/npapernot/buffer-overflow-attack/blob/master/stack.c
  20. https://informaengage.sirv.com/DesignNews_Digikey_Archives/2023/Feb/Session_1-CEC-Session.pdf
  21. https://learn.snyk.io/lesson/null-dereference/
  22. https://stackoverflow.com/questions/4007268/what-exactly-is-meant-by-de-referencing-a-null-pointer
  23. https://doc.rust-lang.org/book/ch04-01-what-is-ownership.html
  24. https://doc.rust-lang.org/book/ch04-01-what-is-ownership.html
  25. https://blog.adacore.com/memory-safety-in-rust
  26. https://doc.rust-lang.org/book/ch16-00-concurrency.html
  27. https://stackoverflow.com/questions/49023664/could-software-written-only-in-rust-fully-avoid-race-conditions
  28. https://informaengage.sirv.com/DesignNews_Digikey_Archives/2023/Feb/Session_1-CEC-Session.pdf
  29. https://doc.rust-lang.org/std/vec/struct.Vec.html
  30. https://www.reddit.com/r/rust/comments/18ha2bq/the_nsa_advises_move_to_memorysafe_languages/
  31. https://users.rust-lang.org/t/documentation-of-null-pointer-optimization/58038
  32. https://learning-rust.github.io/docs/option-and-result/
  33. https://cqr.company/web-vulnerabilities/use-after-free-vulnerability/
  34. https://docs.rust-embedded.org/book/collections/
  35. https://www.reddit.com/r/rust/comments/1ae126x/dereferencing_an_arbitrary_raw_pointer_to_should/
  36. https://docs.rust-embedded.org/book/concurrency/
  37. https://docs.rust-embedded.org/book/concurrency/
  38. https://www.ashwinnarayan.com/post/embedded-rust-blinking-led/
  39. https://www.reddit.com/r/rust/comments/1g9u0uj/rust_borrow_checker_should_be_capable_of_flow/
  40. https://doc.rust-lang.org/beta/embedded-book/peripherals/borrowck.html
  41. https://docs.rust-embedded.org/book/start/registers.html
  42. https://docs.rs/embedded-registers
  43. https://docs.rust-embedded.org/book/portability/
  44. https://nora.codes/post/what-is-rusts-unsafe/
  45. https://rustprojectprimer.com/building/size.html
  46. https://github.com/johnthagen/min-sized-rust
  47. https://github.com/rust-embedded/embedded-dma