A Re-Introduction to C# References


Reviewing what we need to know pre- and post- C# 7 features about the type system and references in particular, while correcting common misconceptions along the way.

Warm-up Exercise

What would the following code output? Hint: An array is a reference type.
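
Something along these lines – a reconstruction consistent with the answer at the end of the article, rather than the exact original:

static void Main()
{
    int[] intArray = { 10, 20 };

    One(intArray);                                       // reference passed by value (the default)
    Console.WriteLine($"{intArray[0]}, {intArray[1]}");  // ?

    Two(ref intArray);                                   // reference passed by reference
    Console.WriteLine($"{intArray[0]}, {intArray[1]}");  // ?
}

private static void One(int[] intArray)
{
    intArray = new int[] { 30, 20 };                     // re-points the local copy of the reference only
    Console.WriteLine($"{intArray[0]}, {intArray[1]}");  // ?
}

private static void Two(ref int[] intArray)
{
    intArray = new int[] { 60, 70 };                     // re-points the caller's variable too
}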

C# Types: Reference, Value and Primitives

Put simply, a type is something that describes what a variable can hold.

Misconception: The new keyword means we are creating a reference type. Wrong! Perhaps this comes from the syntax provided by the primitive type aliases (and also many developers aren’t using structs regularly, so won’t be exposed to using new with them).

Primitive types are those that the compiler supports directly, along with a range of operators and casting between them. They map to framework types e.g.

int  maps to System.Int32

float  maps to System.Single

The C# compiler will emit identical IL for the following lines.
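
For example, something like:

int viaNew = new int();   // explicit use of the new keyword with a value type
int viaAlias = 0;         // the more familiar form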

The latter is the alias provided by the primitive, which masks the use of the new keyword for those value types.

Below is a quick sketch of the types in C#. The Framework Class Library (FCL) obviously has a lot more that I won’t try to squeeze in.

Value Types

Think of making a copy of a file: if you edit the copy, the changes do not affect the original file.

This is how value types are passed around in C# – as copies of the data. Given:

int originalInt = 0;

The value of originalInt is 0, which is the data we are intending to store in this variable.

When we pass originalInt as a value to a method (more on this later), or assign it to a new variable, we are making a copy. Any changes to the value in the copy do not change the original e.g.
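
// a sketch – the copy's variable name is illustrative
int originalInt = 0;
int copyOfInt = originalInt;      // a copy of the value is made

copyOfInt += 500;

Console.WriteLine(originalInt);   // 0
Console.WriteLine(copyOfInt);     // 500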

500 was only added to the copy. originalInt  is still 0.

A note on inheritance

Just to confuse matters, all types, including System.ValueType, inherit from System.Object. Don’t get ideas – this is so we can have them behave like reference types through boxing. We developers cannot actually inherit from value types in our code.

In summary:

  • The ‘value’ stored in a value type is the actual data.
  • The default behaviour when passing it around is that we are making copies of this value.
  • They support interfaces but there is no inheritance.

Reference Types

We’ll start this one with the analogy of a link to a file: instead of giving Tom and Kate each a copy of a file, I give each of them a copy of a link to the same file.

The ‘link’ from this analogy is a reference in C#.

A reference type still has a value – it’s just that the ‘value’ is a memory location where the actual data is.

By default, we are not directly working with that value. Whenever we access the variable, we are fetching the data stored in the memory location referenced by that value (Mads Torgersen offers an example for those from the pointer world – think of it as automatically dereferencing).

So, when you pass one of these around in code, it is making copies of this reference, not the data. Consider the following code:
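
A sketch, assuming a SimpleObject class with a single Number property:

class SimpleObject
{
    public int Number { get; set; }
}

var original = new SimpleObject { Number = 1 };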

We have created a new SimpleObject in memory and stored its memory address / location in the value of original .

When we make a copy of it, we are still copying the value as we do with value types:
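
var copied = original;   // continuing the sketch: copies the value – which is the reference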

But the value being copied is this memory location.

So now copied and original  both reference the same memory location. When we modify the data referenced by copied (the property inside it, Number ), we are also changing original .
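
copied.Number = 2;                    // continuing the sketch: modify via the copy

Console.WriteLine(original.Number);   // 2 – both variables reference the same object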

Now it gets interesting, and gets us a step closer to understanding the behaviour of the code in the warm-up exercise.

Remember – the ‘value’ stored in a reference type is the reference to the object’s memory address. Now we create another SimpleObject, after making a copy, and the new operator returns its memory address, which we store in original.
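
original = new SimpleObject { Number = 3 };   // continuing the sketch: original now points at a brand new object

Console.WriteLine(copied.Number);             // 2 – copied still points at the first object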

copied  still points to the object that original used to point to.  Confusing? Let’s return to our analogy:

Tom has changed the copy of the link he has, which doesn’t affect Kate’s copy. So now their links point to different files.

In summary / need to know:

  • The ‘value’ in a reference type is a memory location where the actual data is
  • Whenever we access a reference type variable, we are fetching the data stored in the memory location it has as its value
  • The default behaviour when we pass it around is that we are copying just this reference value. Changes to the data inside the object are visible to both the original and any copies. Changes to the actual memory location value are not shared between copies of that value.

Passing to a Method by Value

The default behaviour is passing by value; without extra keywords, it looks like so:

private static void One(int[] intArray)

You’re probably doing this 99% of the time without a thought.

Nothing new to learn, so no need for any code samples. This will exhibit all the behaviour already covered above:

  • a value type will pass a copy of its value and changes to that copy won’t be seen by the caller
  • a reference type will pass a reference to the object as its value and changes to the object will be seen by the caller; but if it is assigned to a brand-new object inside the method, the caller will not see this change

Passing to a Method by Reference

We have the ref, out and, with C# 7.2, in keywords for passing by reference.

Let’s just look at ref while we get our heads round what passing by reference means.

Note that the compiler will insist that the keyword appears in the call and the method, so our intention is clear both ends:
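
For example (a sketch; the method name is mine):

private static void AddFiveHundred(ref int number)
{
    number += 500;
}

int originalInt = 0;
AddFiveHundred(ref originalInt);   // ref is required at the call site too
Console.WriteLine(originalInt);    // 500 – the caller sees the change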

Behaviour with Value types

If you pass a value type by reference, there is no copying and the called method is able to make changes to the data, which will be visible to the caller.

Misconception: passing a value type by reference causes boxing to occur. Wrong! Boxing occurs when you convert a value type to a reference type. Passing a value type by reference simply creates an alias, meaning we’d have two variables representing the same memory location.

Behaviour with Reference types

I was cheeky in the warm-up test – I passed a reference type, by reference, which is not a common thing to do.

Misconception: passing a reference type is the same as passing by reference. Wrong! This is easier to explain by trying to do both at the same time, and to observe how it differs from passing a reference type as a value.

Back to the file link analogy to look at what happens when we pass a reference type by reference to a method:

Instead of passing Tom and Kate copies of my link, I gave them access to the link itself. So as before, they both see changes to the file; but now also, if one of them changes the link to a new file, they both see that too.

So, using the ref keyword is kind of telling the compiler not to dereference / fetch the data from the memory location, but instead, pass the address itself, analogous to creating an alias.

We can see in the IL emitted for the code above that the opcode stind is used to store the result back in the address of the int32 passed by address (note the &).

In summary / need to know:

  • The ref modifier allows a value to be passed in and modified – the caller sees the changes.
  • The ‘value’ when used with reference types is the actual memory location, hence it can change where the caller’s variable points in memory.

When Reference Types Meet Ref Locals

In C# 7 we got ref locals. They were introduced alongside ref returns to support the push for more efficient, safe code.

I want to use them with reference types to give us a second chance to appreciate what happens when we pass a reference type around by reference.

A complete code example:
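
A sketch, reusing the SimpleObject shape from earlier:

var original = new SimpleObject { Number = 10 };

ref SimpleObject copied = ref original;       // a ref local – an alias for the variable itself

copied = new SimpleObject { Number = 999 };   // re-point the alias at a new object

Console.WriteLine(original.Number);           // 999 – the original was replaced too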

Notice how the original is replaced by the copy now. In the IL we can see that ref locals utilise ldloca (for value and reference types) – we are copying the actual address where the value is (remember that the value in a reference type is a memory address where the object is).

By using ref , we are essentially making an alias to this value containing the address – any changes to either, including pointing the reference to a new object, will affect both.

Ref returns

Just imagine I have an array of large structs and not the int I have used below.

I can now return a reference directly to an element in an int  array without any copying.
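
A sketch of what that can look like (names are mine):

public static ref int GetByIndex(int[] array, int index)
{
    return ref array[index];                   // a reference to the element itself – no copy
}

int[] numbers = { 10, 20, 30 };
ref int second = ref GetByIndex(numbers, 1);   // the array lives at the call site
second = 500;
Console.WriteLine(numbers[1]);                 // 500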

The gotcha with return ref  is scope. Glance ahead and you’ll see I briefly cover the stack and stack frames, if you struggle with this bit. Ultimately, when a method returns you’ll lose anything on the stack and lose references to anything on the heap (and GC will claim it). With this in mind you can only ref return something visible in the scope of the caller. You can see above I am returning a reference to an index in the array held at the call site.

Ref locals & returns – useful for reference types?

The real value is in avoiding copying around large value types – they complement the existing ability to pass by reference, giving value types the reference-like behaviour we already get with reference types.

We could start using ref returns and ref locals, but expect limited use cases if you work higher up the stack. Many libraries we use have already been, or will be, utilising these and the new Span<T> work, so it is useful to understand how they play.

For reference types, as with passing to method by ref, you’re giving a caller access to the actual memory location and letting them change it. If anyone has come across some real-world scenarios please share so I can add it here.

Where do the Stack, Heap and Registers fit in all this?

Misconception: value types are always allocated on the stack. Wrong! If we’re going to get into discussions about where allocations occur, then it would be more correct to state that the intention is more like:

  • short-lived objects to be allocated in registers or on the stack (which is going to be any time they are declared inside a method)
  • and long-lived objects to be allocated on the heap.

EDIT: Eric Lippert suggests we should be thinking in terms of a ‘short term allocation pool and long term allocation pool … regardless of whether that variable contains an int or a reference to an object’.


Mostly, we shouldn’t be concerning ourselves with how any particular JIT allocates and we should make sure we know the differences in how the two types are passed around. That said, the .NET roadmap has been focused on ‘inefficiencies … directly tied to your monthly billing’, and delivered Span<T>  and ref struct , which are stack-only.

For interest, here’s a few scenarios where we can expect a value type to be heap allocated:

  • Declared in a field of a class
  • In an array
  • Boxed
  • Static
  • Local in a yield return block
  • Captured by a lambda / anonymous method

What does it even mean to allocate on the stack (or the heap)?

This stack thing… it is actually that same call stack, made up of frames, which is responsible for the execution of your code. I’m not going to teach you about what a stack data structure is now.

A stack frame represents a method call, which includes any local variables. Those variables store the values for value or reference types we have already thoroughly discussed above.

A frame only exists during the lifetime of a method; so any variables in the frame also only exist until the method returns.

A big difference between stack and heap is that an object on the heap can live on after we exit the function, if there is a reference to it from elsewhere. So, given that passing references to objects around can potentially keep them alive forever, we can safely say that all reference types can be considered long-term and the objects/data will be allocated on the heap.

Misconception: The integers in an array of integers int[]  will be allocated to the stack. Wrong. Value types are embedded in their container so they would be stored with the array on the heap.

Enforcing Immutability, Now That We’re Passing More References

Out and ref produce almost identical IL, with the only difference being that the compiler enforces who is responsible for initialising the object being referred to:

  • Out – the caller does not have to initialise the value; if they do, it is ignored by the called method, which must write to it before returning.
  • Ref – the caller must initialise the value.

Great for avoiding copying value types, but how do we prevent the method being called from making unwanted modifications? C# 7.2 introduced the in modifier. It got the name by being the opposite of out (it makes the reference (alias) read-only; and the caller does have to initialise the value).
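
A minimal sketch, assuming a large-ish struct (BigStruct is an invented name):

public readonly struct BigStruct
{
    public readonly long A, B, C, D;
    public BigStruct(long a, long b, long c, long d) { A = a; B = b; C = c; D = d; }
}

private static long Sum(in BigStruct value)    // passed by reference, but as a read-only alias
{
    // value = new BigStruct(1, 2, 3, 4);      // would not compile – the alias is read-only
    return value.A + value.B + value.C + value.D;
}

long total = Sum(new BigStruct(1, 2, 3, 4));   // the caller may, but need not, add 'in' at the call site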

The equivalent for the other direction i.e. return ref , is the new modifier: ref readonly .

Here’s the immutable array example from the readonly ref proposal:
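
The proposal’s listing isn’t reproduced verbatim; a sketch of its shape, with assumed names:

public readonly struct ImmutableIntArray
{
    private readonly int[] _array;

    public ImmutableIntArray(int[] array) => _array = array;

    // a readonly reference to the element: no copying, and no write access for the caller
    public ref readonly int ItemRef(int index) => ref _array[index];
}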

Now we can still get a reference to an array element without copying, but without the dangers of full access to the location:
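
// using the sketch above
var immutable = new ImmutableIntArray(new[] { 10, 20, 30 });

ref readonly int first = ref immutable.ItemRef(0);
// first = 99;   // would not compile – the reference is read-only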

Briefly on Boxing

You can convert from value to reference type and back again. It can be implicit or explicit and is commonly seen when passing a value type to a method that takes object types:
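
int number = 42;

object boxed = number;                    // implicit boxing – a copy of the value goes on the heap
object boxedExplicitly = (object)number;  // explicit boxing

Console.WriteLine("{0}", number);         // also boxes: number is passed to an object parameter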

And unboxing:
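
int unboxed = (int)boxed;                 // continuing the sketch: the value is copied back out of the box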

An interesting case of implicit boxing is when working with structs that implement interfaces. Remember, an interface is a reference type.
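
For example (an invented interface and struct):

interface IHaveNumber
{
    int Number { get; }
}

struct SimpleValue : IHaveNumber
{
    public int Number { get; set; }
}

SimpleValue simpleValue = new SimpleValue { Number = 10 };
IHaveNumber viaInterface = simpleValue;   // the struct is copied into a box on the heap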

This will cause boxing to occur.

Misconception: when a value type is boxed, changes to the boxed reference affect the value type itself. Wrong! You’d be thinking of when we create an alias with ref local or passing by reference. Changes to the boxed copy on the heap have no effect on the value type instance and vice versa.

When the C# compiler spots any implicit or explicit boxing it will emit specific IL:

IL_007c: box

When the JIT compiler sees this instruction, it will allocate heap storage and wrap the value type contents up in a ‘box’, which points to the object on the heap.

If you are careful, boxing is not going to hurt performance. Problems arise when they are occurring within iterations over large data sets. There is both additional CPU time for the boxing itself, followed by the additional pressure on the garbage collector.

Misconception: in the warm-up exercise, the array goes on the heap and so do the int objects in it. Therefore, the int objects have to be boxed. Wrong!

Remember we rebuffed the misconception that ALL value types go on the stack. With that in mind, it doesn’t mean int objects ending up on the heap are boxed. Take the code:
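
int[] intArray = { 10, 20 };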

If this were inside a method, a new array object would be allocated on the heap, with a reference to it stored on the stack. The int objects 10 and 20 would also be allocated on the heap, embedded within the array.

Warm-up answer

30, 20
10, 20
60, 70

Summary

  • The ‘value’ in a value type is the actual data.
  • The default behaviour when we pass a value type around is that we are copying the actual value.
  • The ‘value’ held in a reference type, is the reference to a location in memory where the data is.
  • Whenever we access a reference type variable, we are fetching the data stored in the memory location it has as its value
  • The default behaviour when we pass a reference type around is that we are copying just this reference value. Changes to the data inside the object are visible by both the original and any copies. Changes to the actual memory location value are not shared between copies of that value.
  • The ref modifier allows a value to be passed in and modified – the caller sees the changes. The ‘value’ when used with reference types is the actual memory location, hence it can change where the caller’s variable points in memory.
  • Amongst other things beyond this article, C# 7 introduced a way to return by ref. C# 7.2 also gave us ref readonly returns and the in modifier to help enforce immutability.

Some homework because I ran out of space:

  • Doing reference and value type equality right
  • When to use structs vs classes
  • How string differs
  • Extension method refs
  • Readonly structs
  • Nullable value types and look forward to nullable reference types

Sources

Who knows? I play with the internals a lot and read a great deal, so can’t be sure where it all comes from. It’s just in my head now. Probably:

  • Any of the Mads or Skeet talks I’ve watched
  • The writings of Eric Lippert
  • Writing High Performance .NET Code by Ben Watson
  • CLR Via C# by Jeffrey Richter
  • Pro .NET Performance by Sasha Goldshtein
  • Probably loads from MS blogs and MS repositories at github.com

.NET Performance Tip – Benchmarking


Micro-Benchmarking

Micro-optimising has a bad reputation, although I’d argue that knowledge of such things can help you write better code. We should also make the distinction clear between micro-optimising and micro-benchmarking; the latter is a little risky, but a lot safer if you know how to do it right.

Now, if you’re making that mental link with premature optimization right now, off the back of a 44-year-old (mis)quote from Donald Knuth, there are lots of good, more recent and balanced arguments on the subject out there; and plenty of tools that help you do it better. I think some of the confusion comes from a misunderstanding of where it sits within a much broader range of knowledge and tools for performance. Remember, he did say that ‘we should not pass up our opportunities in that critical 3%’, which sounds about right in my experience.

Common Pitfalls

There’s loads of pitfalls going old-school with System.Diagnostics.Stopwatch such as:

  • not enough iterations of operations that take a negligible amount of time
  • forgetting to disable compiler optimizations
  • inadvertently measuring code not connected to what you are testing
  • failing to ‘warm up’ to account for JIT costs and processor caching
  • forgetting to separate setup code from code under test
  • and so on…

Enter Adam Sitnik’s BenchmarkDotNet. This deals with all the problems above and more.

BenchmarkDotNet

It’s available as a NuGet package:

And has some excellent documentation:

You have a choice of configuration methods via objects, attributes or fluent. Things you can configure include:

  • Compare RyuJIT (default since .NET46 for x64 and since Core 2.0 for x86) and Legacy JIT
  • Compare x86 with x64
  • Compare Core with full framework (aka Clr)
  • JIT inlining (and tail calls, which can be confusingly similar to inlining in 64-bit applications in my experience)
  • You can even test the difference between Server GC and Workstation GC from my last tip

A Very Simple Example

For many scenarios, it is fine to just fire it up with the standard settings. Here’s an example of where I used it to get some comparisons between DateTime , DateTimeOffset  and NodaTime.
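
The original class isn’t reproduced here; a sketch of its shape (the method bodies and the NodaTime call are my assumptions, and [ClrJob, CoreJob] come from the BenchmarkDotNet version of the time – newer releases use [SimpleJob] instead):

using System;
using BenchmarkDotNet.Attributes;
using NodaTime;

[ClrJob, CoreJob]
public class TimeBenchmarks
{
    [Benchmark]
    public DateTime DateTimeNow() => DateTime.Now;

    [Benchmark]
    public DateTime DateTimeUtcNow() => DateTime.UtcNow;

    [Benchmark]
    public DateTimeOffset DateTimeOffsetUtcNow() => DateTimeOffset.UtcNow;

    [Benchmark]
    public Instant NodaTimeInstant() => SystemClock.Instance.GetCurrentInstant();
}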

  • [ClrJob, CoreJob]  – I used the attribute approach to configuration, decorating the class to make BenchmarkDotNet run the tests on .NET full framework and also Core.
  • [Benchmark]  – used to decorate each method I wanted to benchmark

A call to get things started:
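
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // runs every [Benchmark] method in the class and prints a summary table
        var summary = BenchmarkRunner.Run<TimeBenchmarks>();
    }
}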

Note if you want to try this code, you’ll need to install the NuGet packages for BenchmarkDotNet and NodaTime.

Output:

Obviously, this is not a substitute for understanding the underlying implementation details of the DateTime class in the Base Class Library (BCL); but it is a quick and easy way to initially identify problem areas. In fact, this was just a small part of a larger explanation I gave to a colleague around ISO 8601, time zones, daylight saving and the pitfalls of DateTime.Now.

One Thing That Caught Me Out

One gotcha: if you are testing Core and full framework, make sure you create a new Core console application and edit your csproj file, switching out <TargetFramework> for e.g.
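
Something like the following – the exact target monikers depend on your project; the key change is swapping the singular element for the plural TargetFrameworks:

<TargetFrameworks>net47;netcoreapp2.0</TargetFrameworks>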

.NET Performance Tip – Know Your Garbage Collection Options


Introduction

An under-utilised setting that can offer substantial performance gains.

Workstation GC – is what you’ll be getting by default with a .NET application, and you might be unaware that there is another option. It uses smaller segments, which means more frequent collections, which in turn are short, thus minimising application thread suspensions. When used with concurrent GC, it is best suited for desktop / GUI applications. With concurrent disabled (all threads will suspend for GC), it uses a little less memory and is best suited for lightweight services on single-core machines, processing intermittently (appropriate use cases are few and far between).

There Is Another Option!

Server GC – the one you should try – if you have multiple processors dedicated to just your application, this can really speed up GC, and often allocations too. GCs happen in parallel on dedicated threads (one for each processor/core), facilitated by a heap per processor. Segments are larger, favouring throughput and resulting in less frequent, but longer GCs. This does mean higher memory consumption.

I mentioned a concurrent GC setting above (since .NET4, this is called background GC). From .NET4.5, it is enabled by default in both Server and Workstation GC. I don’t expect you’ll ever change it but good to know what it brings to the table – with it enabled, the GC marks (finds unreachable objects) concurrently using a background thread. It does mean an additional thread for every logical processor in Server GC mode but they are lower priority and won’t compete with foreground threads.

Add this to your app.config for Server GC:
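
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>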

If you have good reason, you can disable background GC:
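
<configuration>
  <runtime>
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>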

Disclaimer: As with all performance work, measure impact before and after to confirm it’s the right choice for your application!

A Super-Simplified Explanation of .NET Garbage Collection


Super happy to have won First Prize @ Codeproject for best article.

Garbage collection is so often at the root (excuse the pun) of many performance problems, very often because of misunderstanding, so please do set aside time to deepen your understanding after reading this.

This article is a super-simplified, look at .NET garbage collection, with loads of intentional technical omissions. It aims to provide a baseline level of understanding that a typical C# developer realistically needs for their day-to-day work. I certainly won’t add complexity by mentioning the stack, value types, boxing etc.

Disclaimer: The .NET garbage collector is a dynamic beast that adapts to your application and its implementation is often changed.

What Developers Know To Say At Interview

The garbage collector automatically looks for objects that are no longer used and frees up that memory. This helps avoid memory leaks created through programmer error.

That’s fine to get the job but is not enough to engineer good, performant C#, trust me.

Managed Memory (The Brief Version)

The .NET CLR (Common Language Runtime) reserves a chunk of memory available to your application where it will manage any objects allocated by your application. When your application is finished with these objects, they are deallocated. This part is handled by the Garbage Collector (GC).

The GC can and does expand segment sizes when needed, but its preference is to reclaim space through its generational garbage collection…

The Generational .NET Garbage Collector

New, small objects go into generation 0. When a collection occurs, any objects that are no longer in use (no references to them) have their memory freed up (deallocated). Any objects still in use will survive and be promoted to the next generation.

Live Long Short (or Forever) and Prosper

Ask any .NET expert and they will tell you the same thing – an object should be short-lived or else live forever. I won’t be going into detail about performance – this is the one and only rule I want you to take away.

To appreciate it, we need to answer: Why generational?

In a well-engineered C# application, typical objects will live and die without ever being promoted out of gen 0. I’m thinking of operations like:

  • Variables local to a short running method
  • Objects instantiated for the lifetime of a request to a web API call

Gen 1 is the ‘generation in between’, which will catch any wannabe-short-lived objects that escape gen 0, with what is still a relatively quick collection.

A check on which objects are unused consumes resources and suspends application threads. GC gets increasingly expensive up the generations, as a collection of a particular generation also has to collect all those preceding it e.g. if gen 2 is collected, then so must gen 1 and gen 0 (see above diagram). This is why we often refer to gen 2 as the full GC. Also, objects that live longer tend to be more complicated to clean up!

Don’t worry though – if we know which objects are likely to live longer, we can just check them less frequently. And with this in mind:

.NET GC runs the most on gen 0, less on gen 1 and even less often on gen 2.

If an object makes it to gen 2 it needs to be for a good reason – like being a permanent, reusable object. If objects make it there unintentionally, they’ll stick around longer, using up memory and resulting in more of those bigger gen 2 full collections!

But Generations Are All Just A Front!

The biggest gotcha when exploring your application through profilers and debuggers, looking at GC for the first time, is the Large Object Heap (LOH) also being referred to as generation 2.

Physically, your objects will end up in managed heap segments (in the memory allocated to the CLR, mentioned earlier).

Objects will be added onto gen 0 of the Small Object Heap (SOH) consecutively, avoiding having to look for free space. To reduce fragmentation when objects die, the heap may be compacted.

See the following simplified look at a gen 0 collection, followed by a gen 1 collection, with a new object allocation in between (my first go at such an animation):

 

Large objects go on the Large Object Heap, which does not compact, but will try to reuse space.

As of .NET 4.5.1 you can tell the GC to compact it on the next collection. But prefer other options for dealing with LOH fragmentation, such as pooling reusable objects instead.

How Big Is a Large Object?

It is well established that an object >= 85KB is a large object (as is an array of 1000 or more doubles). But we need to know a bit more than that…

You might be thinking of that large Bitmap image you’ve been working with – actually that object uses 24 Bytes, and the bitmap itself is in unmanaged memory. It’s really rare to see an object that is really large. More typically, a large object is going to be an array.

In the following example, the object from LargeObjectHeapExample is actually 16 Bytes because it is just made up of general class info and pointers to the string and byte array.
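
A sketch of the shape being described (the field contents are my stand-ins):

class LargeObjectHeapExample
{
    public string Text = new string('a', 100);   // a small string – small object heap
    public byte[] Data = new byte[100000];       // >= 85,000 bytes – large object heap
}

var example = new LargeObjectHeapExample();
Console.WriteLine(GC.GetGeneration(example));        // 0 – allocated to the small object heap
Console.WriteLine(GC.GetGeneration(example.Data));   // 2 – the LOH reports as generation 2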

By instantiating the LargeObjectHeapExample object we are actually allocating 3 objects on the heap: 2 of them on the small object heap; and the byte array on the large object heap.

Remember what I said earlier about stuff in the Large Object Heap – notice how the byte array reports as being in generation 2! One reason for the LOH being within gen 2 logically, is that large objects typically have longer lifetimes (think back to what I said earlier about objects that live longer in generational GC). The other reason is the expense of copying large objects while performing the compacting that occurs in earlier generations.

What Triggers a Collection?

  • An attempt to allocate exceeds the threshold for a generation or the large object heap
  • A call to GC.Collect (I’ll save this for another article)
  • The OS signals low system memory

Remember gen 2 and LOH are logically the same thing, so hitting the threshold on either, will trigger a full (gen 2) collection on both heaps. Something to consider re performance (beyond this article).

Summary

  • A collection of a particular generation also collects all those below it i.e. collecting 2 also collects 1 and 0.
  • The GC promotes objects that survive collection (because they are still in use) to the next generation. Although see previous point – don’t expect an object in gen 1 to move to gen 2 when a gen 0 collection occurs.
  • GC runs the most on gen 0, less on gen 1 and even less often on gen 2. With this in mind, objects should be short-lived (die in gen 0 or gen 1 at worst) or live forever (intentionally of course) in gen 2.

C# Debug vs. Release builds and debugging in Visual Studio – from novice to expert in one blog article


Super happy to have won First Prize @ Codeproject for this article.

Repository for my PowerShell script to inspect the DebuggableAttribute of assemblies.

Introduction

‘Out of the box’ the C# build configurations are Debug and Release.

I planned to write an introductory article, but as I delved deeper into the internals I started exploring actual behaviour with Roslyn vs. previous commentary / what the documentation states. So, while I do start with the basics, I hope there is something for more experienced C# developers too.

Disclaimer: Details will vary slightly for .NET languages other than C#.

A reminder of C# compilation

C# source code passes through 2 compilation steps to become CPU instructions that can be executed.

Diagram showing the 2 steps of compilation in the C# .NET ecosystem

As part of your continuous integration, step 1 would take place on the build server and then step 2 would happen later, whenever the application is being run. When working locally in Visual Studio, both steps, for your convenience, fire off the back of starting the application from the Debug menu.

Compilation step 1: The application is built by the C# compiler. Your code is turned into Common Intermediate Language (CIL), which can be executed in any environment that supports CIL (which from now on I will refer to as IL). Note that the assembly produced is not readable IL text but actually metadata and byte code as binary data (tools are available to view the IL in a text format).

Some code optimisation will be carried out (more on this further on).

Compilation  step 2:  The Just-in-time (JIT) compiler will convert the IL into instructions that the CPU on your machine can execute. This won’t all happen upfront though – in the normal mode of operation, methods are compiled at the time of calling, then cached for later use.

The JIT compiler is just one of a whole bunch of services that make up the Common Language Runtime (CLR), enabling it to execute .NET code.

The bulk of code optimisation will be carried out here (more on this further on).

What is compiler optimisation (in one sentence)?

It is the process of improving factors such as execution speed, size of the code, power usage and, in the case of .NET, the time it takes to JIT compile the code – all without altering the functionality, aka the original intent of the programmer.

Why are we concerned with optimisation in this article?

I’ve stated that compilers at both steps will optimise your code. One of the key differences between the Debug and Release build configurations is whether the optimisations are disabled or not, so you do need to understand the implications of optimisation.

C# compiler optimisation

The C# compiler does not do a lot of optimisation. It relies ‘…upon the jitter to do the heavy lifting of optimizations when it generates the real machine code. ‘  (Eric Lippert). It will nonetheless still degrade the debugging experience.  You don’t need in-depth knowledge of C# optimisations to follow this article, but I’ll look at one to illustrate the effect on debugging:

The IL nop instruction (no operation)

The nop instruction has a number of uses in low-level programming, such as introducing small, predictable delays or overwriting instructions you wish to remove. In IL, it is used to help breakpoints set in your source code behave predictably when debugging.

If we look at the IL generated for a build with optimisations disabled:

nop instruction

This nop instruction directly maps to a curly bracket and allows us to add a breakpoint on it:

curly bracket associated with nop instruction

This would be optimised out of IL generated by the C# compiler if optimisations were enabled, with clear implications for your debugging experience.

For a more detailed discussion on C# compiler optimisations see Eric Lippert’s article: What does the optimize switch do?. There is also a good commentary of IL before and after being optimised here.

The JIT compiler optimisations

Despite having to perform its job swiftly at runtime, the JIT compiler performs a lot of optimisations. There’s not much info on its internals and it is a non-deterministic beast (like Forrest Gump’s box of chocolates) – varying in the native code it produces depending on many factors. Even while your application is running, it is profiling and possibly re-compiling code to improve performance. For a good set of examples of optimisations made by the JIT compiler, check out Sasha Goldshtein’s article.

I will just look at one example to illustrate the effect of optimisation on your debugging experience:

Method inlining

To show the real-life optimisation made by the JIT compiler, I’d be showing you assembly instructions. This is just a mock-up in C# to give you the general idea:

Suppose I have:
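
private static int Add(int a, int b)
{
    return a + b;
}

private static int Calculate()
{
    return Add(1, 2) + 40;
}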

The JIT compiler would likely perform an inline expansion on this, replacing the call to Add()   with the body of Add()  :
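
private static int Calculate()
{
    return 1 + 2 + 40;   // the call to Add() has been replaced with its body
}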

Clearly, trying to step through lines of code that have been moved is going to be difficult and you’ll also have a diminished stack trace.

The default build configurations

So now that you’ve refreshed your understanding of .NET compilation and the two ‘layers’ of optimisation, let’s take a look at the 2 build configurations available ‘out of the box’:

Visual Studio release and debug configurations

Pretty straightforward – Release is fully optimised, Debug is not at all, which, as you are now aware, is fundamental to how easy it is to debug your code. But this is just a superficial view of the possibilities with the debug and optimize arguments.

The optimize and debug arguments in depth

I’ve attempted to diagram these from the Roslyn and mscorlib code, including: CSharpCommandLineParser.cs, CodeGenerator.cs, ILEmitStyle.cs, debuggerattributes.cs, Optimizer.cs and OptimizationLevel.cs. Blue parallelograms represent command line arguments and the greens are the resulting values in the codebase.

Diagram of optimize and debug command line arguments and their related settings in code

The OptimizationLevel enumeration

OptimizationLevel.Debug disables all optimizations by the C# compiler and disables JIT optimisations via DebuggableAttribute.DebuggingModes  , which with the help of ildasm, we can see is:

Manifest debuggable attribute

Given this is Little Endian byte order, it reads as 0x107, which is 263, equating to: Default, DisableOptimizations, IgnoreSymbolStoreSequencePoints and EnableEditAndContinue (see debuggerattributes.cs).

OptimizationLevel.Release enables all optimizations by the C# compiler and enables JIT optimizations via DebuggableAttribute.DebuggingModes = ( 01 00 02 00 00 00 00 00 ) , which is just DebuggingModes.IgnoreSymbolStoreSequencePoints .

With this level of optimization, ‘sequence points may be optimized away. As a result it might not be possible to place or hit a breakpoint.’ Also, ‘user-defined locals might be optimized away. They might not be available while debugging.’ (OptimizationLevel.cs).

IL type explained

The type of IL is defined by the following enumeration from ILEmitStyle.cs.
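
Its members (reproduced from memory of the Roslyn source, so treat the exact declaration as indicative) are:

internal enum ILEmitStyle
{
    Release,
    Debug,
    DebugFriendlyRelease
}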

As in the diagram above, the type of IL produced by the C# compiler is determined by the OptimizationLevel ; the debug argument won’t change this, with the exception of debug+ when the OptimizationLevel is Release i.e. in all but the case of debug+, optimize is the only argument that has any impact on optimisation – a departure from pre-Roslyn*.

* In Jeffrey Richter’s CLR Via C# (2014), he states that optimize- with debug- results in the C# compiler not optimising IL and the JIT compiler optimising to native.

ILEmitStyle.Debug – no optimization of IL in addition to adding nop instructions in order to map sequence points to IL

ILEmitStyle.Release – do all optimizations

ILEmitStyle.DebugFriendlyRelease – only perform optimizations on the IL that do not degrade debugging. This is the interesting one. It comes off the back of a debug+ and only has an effect on optimized builds i.e. those with OptimizationLevel.Release. For optimize- builds debug+ behaves as debug.

The logic in (CodeGenerator.cs) describes it more clearly than I can:
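
Not the verbatim Roslyn source, but a paraphrase of the decision being made (variable names are mine):

if (optimizations == OptimizationLevel.Debug)
{
    ilEmitStyle = ILEmitStyle.Debug;
}
else
{
    // OptimizationLevel.Release: only emit debug-friendly IL when debug+ was requested
    ilEmitStyle = isDebugPlus ? ILEmitStyle.DebugFriendlyRelease : ILEmitStyle.Release;
}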

The comment in the source file Optimizer.cs states that, they do not omit any user defined locals and do not carry values on the stack between statements. I’m glad I read this, as I was a bit disappointed with my own experiments in ildasm with debug+, as all I had been seeing was the retention of local variables and a lot more pushing and popping to and from the stack!

There is no intentional ‘deoptimizing’ such as adding nop instructions.

There’s no obvious, direct way to choose this debug flag from within Visual Studio for C# projects. Is anyone making use of this in their production builds?

No difference between debug, debug:full and debug:pdbonly?

Correct – despite the current documentation and the help stating otherwise:

csc command line help

They all achieve the same result – a .pdb file is created. A peek at CSharpCommandLineParser.cs  can confirm this. And for good measure I did check I could attach and debug with WinDbg for both the pdbonly and full values.

They have no impact on code optimisation.

On the plus side, the documentation on Github is more accurate, although I’d say, still not very clear on the special behaviour of debug+.

I’m new.. what’s a .pdb? Put simply, a .pdb file stores debugging information about your DLL or EXE, which will help a debugger map the IL instructions to the original C# code.

What about debug+?

debug+ is its own thing and cannot be suffixed by either full or pdbonly. Some commentators suggest it is the same thing as debug:full, which is not exactly true as stated above – used with optimize- it is indeed the same, but when used with optimize+ it has its own unique behaviour, discussed above under DebugFriendlyRelease .

And debug- or no debug argument at all?

The defaults in CSharpCommandLineParser.cs are:

The values for debug- are:

So we can confidently say debug- and no debug argument result in the same  single effect – no .pdb file is created.

They have no impact on code optimisation.

Suppress JIT optimizations on module load

A checkbox under Options->Debugging->General; this is an option on the debugger in Visual Studio and is not going to affect the assemblies you build.

You should now appreciate that the JIT compiler does most of the significant optimisations and is the bigger hurdle to mapping back to the original source code for debugging. With this enabled, the debugger will request that DisableOptimizations  is ignored by the JIT compiler.

Until circa 2015 the default was enabled. I earlier cited CLR via C#, in that pre-Roslyn we could supply optimize- and debug- arguments to csc.exe and get unoptimised IL that was then optimised by the JIT compiler – so there would have been some use for suppressing the JIT optimisations in the Visual Studio debugger. However, now that anything being JIT optimised has already had its debugging experience degraded by C# compiler optimisations, Microsoft decided to default to disabled on the assumption that if you are running the Release build inside Visual Studio, you probably wish to see the behaviour of an optimised build at the expense of debugging.

Typically you only need to switch it on if you need to debug into DLLs from external sources such as NuGet packages.

If you’re trying to attach from Visual Studio to a Release build running in production (with a .pdb or other source for symbols), then an alternative way to instruct the JIT compiler not to optimise is to add a .ini file, with the same name as your executable, alongside it with the following:
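
[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0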

Just My Code.. What?

By default, Options->Debugging->Enable Just My Code is enabled and the debugger considers optimised code to be non-user. The debugger is never even going to attempt non-user code with this enabled.

You could uncheck this option, and then theoretically you can hit breakpoints. But now you are debugging code optimised by both the C# and JIT compilers, which barely matches your original source code, with a super-degraded experience – stepping through code will be unpredictable and you will probably not be able to obtain the values of local variables.

You should only really be changing this option if working with DLLs from others where you have the .pdb file.

A closer look at DebuggableAttribute

Above, I mentioned using ildasm to examine the manifest of assemblies to examine DebuggableAttribute . I’ve also written a little PowerShell script to produce a friendlier result (available via download link at the start of the article).

Debug build:

Release build:

You can ignore IsJITTrackingEnabled, as it has been ignored by the JIT compiler since .NET 2.0. The JIT compiler will always generate tracking information during debugging to match up IL with its machine code and track where local variables and function arguments are stored (Source).

IsJITOptimizerDisabled simply checks DebuggingFlags for DebuggingModes.DisableOptimizations. This is the flag that determines whether the JIT compiler will optimise.

DebuggingModes.IgnoreSymbolStoreSequencePoints tells the debugger to work out the sequence points from the IL instead of loading the .pdb file, which would have performance implications. Sequence points are used to map locations in the IL code to locations in your C# source code. The JIT compiler will not compile any 2 sequence points into a single native instruction. With this flag, the JIT will not load the .pdb file. I’m not sure why this flag is being added to optimised builds by the C# compiler – any thoughts?

Key points

  • debug- (or no debug argument at all) now means: do not create a .pdb file.
  • debug, debug:full and debug:pdbonly all now cause a .pdb file to be output. debug+ will also do the same thing if used alongside optimize-.
  • debug+ is special when used alongside optimize+, creating IL that is easier to debug.
  • each ‘layer’ of optimisation (C# compiler, then JIT) further degrades your debugging experience. You will now get both ‘layers’ for optimize+ and neither of them for optimize-.
  • since .NET 2.0 the JIT compiler will always generate tracking information regardless of the attribute IsJITTrackingEnabled
  • whether building via VS or csc.exe, the DebuggableAttribute is now always present
  • the JIT can be told to ignore IsJITOptimizerDisabled during Visual Studio debugging via the general debugging option, Suppress JIT optimizations on module load. It can also be instructed to do so via a .ini file
  • optimize+ will create binaries that the debugger considers non-user code. You can disable the option Just My Code, but expect a severely degraded debugging experience.

You have a choice of:

  • Debug: debug|debug:full|debug:pdbonly optimize+
  • Release: debug-|no debug argument optimize+
  • DebugFriendlyRelease: debug+ optimize+

However, DebugFriendlyRelease is only possible by calling Roslyn csc.exe directly. I would be interested to hear from anyone that has been using this.

Addressing a simple yet common C# Async/Await misconception



Super happy to have won First Prize @ Codeproject for this article.

Git repository with example code discussed in this article.

Async/await has been part of C# since C# 5.0 yet many developers have yet to explore how it works under the covers, which I would recommend for any syntactic sugar in C#. I won’t be going into that level of detail now, nor will I explore the subtleties of IO and CPU bound operations.

The common misconception

That awaited code executes in parallel to the code that follows.

i.e. in the following code, LongOperation() is called and awaited, and while this is executing, and before it has completed, the code ‘doing other things’ will start being executed.
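
The shape being described (method names follow the article; LongOperation() is assumed to return a Task):

private static async Task DemoAsyncAwait()
{
    await WithAwaitAtCallAsync();
}

private static async Task WithAwaitAtCallAsync()
{
    await LongOperation();          // awaited at the call

    // 'doing other things'
    Console.WriteLine("Doing other things");
}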

This is not how it behaves.

In the above code, what actually happens is that the await operator causes WithAwaitAtCallAsync() to suspend at that line and returns control back to DemoAsyncAwait() until the awaited task, LongOperation(), is complete.

When LongOperation() completes, then ‘do other things’ will be executed.

And if we don’t await when we call?

Then you do get that behaviour some developers innocently expect from awaiting the call, where LongOperation() is left to complete in the background while continuing on with  WithoutAwaitAtCallAsync() in parallel, ‘doing other things’:
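
private static async Task WithoutAwaitAtCallAsync()
{
    Task longOperation = LongOperation();     // not awaited here – it carries on in the background

    // 'doing other things' runs in parallel with LongOperation()
    Console.WriteLine("Doing other things");

    await longOperation;                      // if not yet complete, control yields back to the caller here

    // 'more things to do' – only runs once the awaited task has completed
    Console.WriteLine("More things to do");
}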

However, if LongOperation() is not complete when we reach the awaited Task it returned, then it yields control back to DemoAsyncAwait(), as above. It does not continue to complete ‘more things to do’ – not until the awaited task is complete.

Complete Console Application Example

Some notes about this code:

  • Always use await over Task.Wait() to retrieve the result of a background task (outside of this demo) to avoid blocking. I’ve used Task.Wait() in my demonstrations to force blocking and prevent the two separate demo results overlapping in time.
  • I have intentionally not used Task.Run() as I don’t want to confuse things with new threads. Let’s just assume LongOperation() is IO-bound.
  • I used Task.Delay() to simulate the long operation. Thread.Sleep() would block the thread.

This is what happens when the code is executed (with colouring):

 

Conclusion

If you use the await keyword when calling an async method from inside an async method, execution of the calling method is suspended to avoid blocking the thread, and control is passed (or yielded) back up the method chain. If, on its journey up the chain, it reaches a call that was not awaited, then code in that method is able to continue in parallel to the remaining processing in the chain of awaited methods, until it runs out of work to do and needs to await the result, which is inside the Task object returned by LongOperation().

.NET String Interning to Improve String Comparison Performance (C# examples)


Introduction

String comparisons must be one of the most common things we do in C#; and string comparisons can be really expensive! So it’s worthwhile knowing the what, when and why of improving string comparison performance.

In this article I will explore one way – string interning.

What is string interning?

String interning refers to having a single copy of each unique string in a string intern pool, which is implemented as a hash table in the .NET common language runtime (CLR), where the key is a hash of the string and the value is a reference to the actual String object.

So if I have the same string occurring 100 times, interning will ensure only one occurrence of that string is actually allocated any memory. Also, when I wish to compare strings, if they are interned, then I just need to do a reference comparison.

String interning mechanics

In this example, I explicitly intern string literals just for demonstration purposes.
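
Something like:

string s1 = string.Intern("stringy");   // line 1
string s2 = string.Intern("stringy");   // line 2
string s3 = string.Intern("stringy");   // line 3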

Line 1:

  • This new “stringy” is hashed and the hash is looked up in our pool (of course it’s not there yet)
  • A copy of the “stringy” object would be made
  • A new entry would be made to the interning hash table, with the key being the hash of “stringy” and the value being a reference to the copy of “stringy”
  • Assuming the application no longer references the original “stringy”, the GC can reclaim that memory

Line 2: This new “stringy” is hashed and the hash is looked up in our pool (where we just put it). The reference to the “stringy” copy is returned
Line 3: Same as line 2

Interning depends on string immutability

Take a classic example of string immutability:
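
string s1 = "stringy";                  // line 1
s1 = s1 + " new string";                // line 2 – s1 now references a brand new String object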

We know that when line 2 is executed, the “stringy” object is de-referenced and left for garbage collection; and s1 then points to a new String object “stringy new string”.

String interning works because the CLR knows, categorically, that the String object referenced cannot change. Here I’ve added a fourth line to the earlier example:
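
string s1 = string.Intern("stringy");   // line 1
string s2 = string.Intern("stringy");   // line 2
string s3 = string.Intern("stringy");   // line 3
s1 = s1 + " new string";                // line 4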

Line 4: the original object doesn’t change because it is immutable; s1 now points to a new String object “stringy new string”.
s2 and s3 will still safely reference the copy that was made at line 1

Using Interning in the .NET CLR

You’ve already seen a bit in the examples above. .NET offers two static string methods:

Intern(String str)

It hashes string str and checks the intern pool hash table and either:

  • returns a reference to the (already) interned string, if interned; or
  • a reference to str is added to the intern pool and this reference is returned

IsInterned(String str)

It hashes string str and checks the intern pool hash table. Rather than a standard bool, it returns either:

  • null, if not interned
  • a reference to the (already) interned string, if interned

String literals (not generated in code) are automatically interned, although CLR versions have varied in this behaviour, so if you expect strings interned, it is best to always do it explicitly in your code.
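
A quick sketch of both methods in use (assuming “stringy” hasn’t already been interned elsewhere in the application):

string generated = new StringBuilder().Append("string").Append('y').ToString();

Console.WriteLine(string.IsInterned(generated) != null);                    // False – not in the intern pool yet

string pooled = string.Intern(generated);                                   // adds it and returns the pooled reference

Console.WriteLine(ReferenceEquals(pooled, string.IsInterned(generated)));   // True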

My simple test: Setup

I’ve run some timed tests on my aging i5 Windows 10 PC at home to provide some numbers to help explore the potential gains from string interning. I used the following test setup:

  • Strings randomly generated
  • All string comparisons are ordinal
  • Strings are all the same length of 512 characters, as I want the CLR to compare every character to force the worst case (the CLR checks string length first for ordinal comparisons)
  • The string added last (so to the end of the List<T>) is also stored as a ‘known’ search term. This is because I am only exploring the worst-case approach
  • For the List<T> interned, every string added to the list, and also the search term, were wrapped in the string.Intern(String str) method

I timed how long populating each collection took and then how long searching for the known search term took also, to the nearest millisecond.

The collections/approaches used for my tests:

  • List<T> with no interning used, searched via a foreach loop and string.Equals on each element
  • List<T> with no interning used, searched via the Contains method
  • List<T> with interning used, searched via a foreach loop and object.ReferenceEquals
  • HashSet<T>, searched via the Contains method

The main objective is to observe string search performance gains from using string interning with List<T>; HashSet<T> is just included as a baseline for known fast searches.
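
A sketch of the interned List<T> approach (randomStrings stands in for the generated test data):

var internedList = new List<string>();
foreach (string s in randomStrings)
{
    internedList.Add(string.Intern(s));
}

// the last string added is the known search term, also interned
string searchTerm = string.Intern(randomStrings[randomStrings.Count - 1]);

bool found = false;
foreach (string s in internedList)
{
    if (object.ReferenceEquals(s, searchTerm))   // a reference comparison – no character-by-character work
    {
        found = true;
        break;
    }
}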

My simple test: Results

In Figure 1 below, I have plotted the size of collections, in number of strings added, against the time taken to add that number of randomly generated strings. Clearly there is no significant difference in this operation when using a HashSet<T> compared to a List<T> without interning. Perhaps if I had run to larger sets the gap would have widened further, based on the trend?

There is slightly more overhead when populating the List<T> with string interning, which is initially of no consequence but is growing faster than the other options.

Figure 1: Populating List<T> and HashSet<T> collections with random strings

Figure 2 shows the times for searching for the known string. All the times are pretty small for these collection sizes, but the growth rates are clear.

Figure 2: Times taken searching for a known string, which was added last

As expected, HashSet is O(1) and the others are O(N). The searches not utilising interning are clearly growing in time taken at a greater rate.

Conclusion

HashSet<T> is present in this article only as a baseline for fast searching and should obviously be your choice for speed where there are no constraints requiring a List<T>.

In scenarios where you must use a List<T> such as where you still wish to enumerate quickly through a collection, there are some performance gains to be had from using string interning, with benefits increasing as the size of the collection grows. The drawback is the slightly increased populating overhead (although I think it is fair to suggest that most real-world use cases would not involve populating the entire collection in one go).

The utility and behaviour of string interning, reminds me of database indexes – it takes a bit longer to add a new item but that item will be faster to find. So perhaps the same considerations for database indexes are true for string interning?

There is also the added bonus that string interning will prevent any duplicate strings being stored, which in some scenarios could mean substantial memory savings.

Potential benefits:

  • Faster searching via object references
  • Reduced memory usage because duplicate interned strings will only be stored once

Potential performance hit:

  • Memory referenced by the intern hash table is unlikely to be released until the CLR terminates
  • You still need to create the string to be interned, which will be allocated memory (granted, this will then be garbage collected)

Sources

  • https://msdn.microsoft.com/en-us/library/system.string.intern.aspx

Erratic Behaviour from .NET MemoryCache Expiration Demystified


On a recent project I experienced first-hand, how the .NET MemoryCache class, when used with either absolute or sliding expiration, can produce some unpredictable and undocumented results.

Sometimes cache items expire exactly when expected… yay. But mostly, they expire an arbitrary period of time late.

For example, a cache item with an absolute expiry of 5 seconds might expire after 5 seconds but could just as likely take up to a further 20 seconds to expire.

This might only be significant where precision, down to a few seconds, is required (such as where I have used it to buffer / throttle FileSystemWatcher events), but I thought it would be worthwhile decompiling System.Runtime.Caching.dll and then clearly documenting the behaviour we can expect.

When does a cache item actually expire?

There are 2 ways your expired item can leave the cache:

  • Every 20 seconds, on a Timer, it will pass through all items and flush out anything past its expiry
  • Whenever an item is accessed, its expiry is checked and that item will be removed if expired

This goes for both absolute and sliding expiration. The timer is enabled as soon as anything is added to the cache, whether or not it has an expiration set.

Note that this is all about observable behaviour, as witnessed by the bemused debugger, because once an item has passed its expiry, you can no longer access it anyway – see point 2 above, where accessing an item forces a flush.

Just as weird with Sliding Expiration…

Sliding expiration is where an expiration time is set, the same as absolute, but if it is accessed the timer is reset back to the configured expiration length again.

  • If the new expiry is not at least 1 second longer than the current (remaining) expiry, it will not be updated/reset on access

Essentially, this means that while you can add to the cache with a sliding expiration of <= 1 second, there is no chance of any access causing the expiration to reset.

Note that if you ever feel the urge to avoid triggering a reset on sliding expiration, you could do this by boxing up values and getting/setting via the reference to the object instead.

Conclusion / What’s so bewildering?

In short, it is undocumented behaviour and a little unexpected.

Consider the 20 second timer and the 5 second absolute expiry example. When it is actually removed from the cache will depend on where we are in the 20 second Timer cycle; it could be any time period, up to an additional 20 seconds, before it fires, giving a potential total of ~25 seconds between being added and actually expiring.

Add to this, the additional confusion you’ll come across while debugging, caused by items past their expiry time being flushed whenever they are accessed, it has even troubled the great Troy Hunt: https://twitter.com/troyhunt/status/766940358307516416. Granted he was using ASP.NET caching but the core is pretty much the same, as System.Runtime.Caching was just modified for general .NET usage.

Decompiling System.Runtime.Caching.dll

Some snippets from the .NET FCL for those wanting a peek at the inner workings themselves.

CacheExpires.cs

FlushExpiredItems is called from the TimerCallback (on the 20 seconds) and can also be triggered manually via the MemoryCache method, Trim. There must be an interval of >= 1 second between flushes.

Love the goto – so retro. EDIT: Eli points out that it might just be my decompiler!

MemoryCacheEntry.cs

UpdateSlidingExp updates/resets sliding expiration. Note the limit MIN_UPDATE_DELTA of 1 sec.

MemoryCacheStore.cs

See how code accessing a cached item will trigger a check on its expiration and if expired, remove it from the cache.