Archive for Quick tip

A few random ASP.NET Core and .NET Core tips

I’ve been working with .NET Core recently and I’d like to post some random observations on the subject for future reference.

  1. It is possible to create a NuGet package upon build. This option is actually also available from the VS2017 project properties GUI. Add this to the csproj:

     
      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
        <PackageOutputPath>$(UserProfile)</PackageOutputPath>
        <Version>0.0.1</Version>
      </PropertyGroup>
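     If everything is set up as above, a plain build should produce the .nupkg without a separate pack step, and the package should land directly in the profile folder:

      dotnet build
      dir %UserProfile%\*.nupkg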
    
  2. It is possible to add a local folder as a NuGet feed. The folder can also be the current user’s profile. This one is actually not Core-specific. Nuget.config should look more or less like this (the feed key names are arbitrary):

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <!-- the regular public feed -->
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
        <!-- a local folder feed; here the current user's profile -->
        <add key="LocalProfile" value="%UserProfile%" />
      </packageSources>
    </configuration>
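    With the profile folder registered as a feed, a package built as in tip 1 can then be consumed like any other; MyLib here is a hypothetical package ID:

    dotnet add package MyLib --version 0.0.1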
  3. You can compile for multiple targets in a .NET Core-compatible csproj. Please note the trailing s in the TargetFrameworks tag name. You can also conditionally include items in the csproj. Use the following snippets:

      <PropertyGroup>
        <TargetFrameworks>netstandard2.0;net45;net40</TargetFrameworks>
      </PropertyGroup>

    and:

      <ItemGroup Condition="'$(TargetFramework)' == 'net45'">
        <!-- example item, included only when building the net45 target -->
        <Reference Include="System.Net.Http" />
      </ItemGroup>

    There is reference documentation for the available targets here.

  4. The listening port in Kestrel can be configured in multiple ways: it can be read from an environment variable or passed as a command line argument. An asterisk is required to bind to physical interfaces, which is needed e.g. when you want to open the application on a mobile phone while it is served from the development machine. The following are equivalent:

    set ASPNETCORE_URLS=http://*:11399
    --urls http://*:11399
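
    For completeness, the binding can also be hardcoded in the bootstrap code; a minimal sketch in the ASP.NET Core 2.x style (Startup is the usual hypothetical startup class), although the environment variable and the command line keep it configurable:

    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://*:11399")   // same binding as the env var above
        .Build()
        .Run();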
    
  5. The preferred way to pass hosting parameters to Kestrel is the launchSettings.json file located in the Properties folder of the project. You can select a profile defined there when running:

    dotnet run --launch-profile "Dev"
    

    dotnet run is used to build and run from the directory where the csproj resides. It is not a good idea to run the app’s dll directly: the settings file can be missing from the bin folder and/or the launch profile may not be present there.
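
    For reference, a minimal sketch of such a file with a “Dev” profile; the property names follow the standard launchSettings.json schema, while the concrete values here are made up:

    {
      "profiles": {
        "Dev": {
          "commandName": "Project",
          "applicationUrl": "http://*:11399",
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        }
      }
    }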

How to run Tmux in GIT Bash on Windows

Tmux running under Git Bash default terminal with two shell processes

I know everyone uses Cmder, but it didn’t work for me. It hung a few times, it has way too many options, and it had issues sending signals to kill a process. I gave up on it. I work with a carefully configured default Windows console and, believe it or not, it serves the purpose. I also know you can use Windows Subsystem for Linux under Windows 10, which is truly amazing, but here I am talking about cases where you need a standard Git for Windows installation.

When I worked with Unix I liked GNU Screen, which is a terminal multiplexer. It gives you a bunch of keyboard shortcuts to create separate shell processes under the same terminal window. The problem is, it is not available under Git Bash. But it turns out its alternative, Tmux, is.

I did a little research and found that Git Bash uses a MinGW compilation of GNU tools, but ships only selected ones. You can install the whole distribution of the tools from https://www.msys2.org/, run a command to install Tmux, and then copy a few files into the installation folder of Git. This is what you do:

  1. Install the aforementioned msys2 package and run its bash shell
  2. Install tmux using the following command: pacman -S tmux
  3. Go to the msys2 binaries directory, in my case it is C:\msys64\usr\bin
  4. Copy tmux.exe and msys-event-2-1-4.dll to your Git for Windows directory, mine is C:\Program Files\Git\usr\bin (see the sketch below)
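
For convenience, steps 3 and 4 can also be done from the msys2 bash shell itself; a minimal sketch, assuming the default 64-bit installation paths mentioned above:

    # writing into Program Files may require an elevated shell
    cp /usr/bin/tmux.exe "/c/Program Files/Git/usr/bin/"
    cp /usr/bin/msys-event-2-1-4.dll "/c/Program Files/Git/usr/bin/"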

And you are ready to go. Please note that I do this on 64-bit installations of Git and MSYS. Now when you run Git Bash, enter tmux. My most frequently used commands are:

  • CTRL+B, (release and then) C — create new shell within existing terminal window
  • CTRL+B, N — switch between shells
  • CTRL+B, a digit — switch to chosen shell by corresponding number
  • CTRL+B, " — split current window horizontally into panels (panels are inside windows)
  • CTRL+B, o — switch between panels in current window
  • CTRL+B, x — close panel

This is everything you need to know to start using it. Simple. There are many other options which you can explore yourself, for example at http://hyperpolyglot.org/multiplexers.

Update 1: Users in the comments are reporting that the method does not always work. If you have any experience with this method, please feel free to comment, so that we can figure out the circumstances under which it works.

Update 2: I managed to run this on Windows 7, Windows 2012 R2 and Windows 10. My Git installation is set up to use the MinTTY console, and tmux works only when run from this console, not from the default Windows command line console. I still haven’t figured out the precise requirements for this trick.

Beware LINQ Group By custom class

Actually this one is pretty obvious, but if you are focused on implementing some complex logic, it is easy to forget about this requirement. Let’s assume there is the following piece of code:

var processedData = 
    from q in rawDataFromDb
    group q by new ProductCodeCentreNumPair 
        { ProductCode = q.Code, CentreNumber = q.CentreNumberIdentifier } into g
    select g
    ;

public class ProductCodeCentreNumPair
{
    public string ProductCode;
    public string CentreNumber;
}

This will compile and run; however, it will not distinguish ProductCodeCentreNumPair instances. That is, it will not do the grouping, but produce a sequence of IGroupings, one for each source item. The reason is self-evident if we think about it for a while: this custom class does not implement custom equality comparison logic. The default logic is based on ReferenceEquals, so, as each separate object resides at a different memory address, they will all be recognized as not equal to each other, even if they contain the same values in their fields (strings behave like value types here, although they are reference types). I used the following set of overridden methods to provide my custom equality comparison logic and solve the problem. It is important to note that GetHashCode is also needed for the grouping to work, because grouping buckets keys by hash code before calling Equals.

public override bool Equals(object other)
{
    if (ReferenceEquals(null, other)) return false;
    if (ReferenceEquals(this, other)) return true;
    if (this.GetType() != other.GetType())
        return false;
    return Equals((ProductCodeCentreNumPair)other);
}

public override int GetHashCode()
{
    return ProductCode.GetHashCode() + CentreNumber.GetHashCode();
}

public bool Equals(ProductCodeCentreNumPair other)
{
    // guard against null for direct calls that bypass the object overload
    return other != null &&
           other.CentreNumber == CentreNumber &&
           other.ProductCode == ProductCode;
}

Alternatively, you can use an anonymous type, I mean:

var processedData = 
    from q in rawDataFromDb
    group q by new { ProductCode = q.Code, CentreNumber = q.CentreNumberIdentifier } into g
    select g
    ;

will just work. This is because instances of anonymous classes get automatically generated equality comparison logic based on the values of their fields, contrary to the ReferenceEquals-based implementation used for typical named classes. Anonymous types are most frequently used in the context of comparisons, so this behavior seems reasonable.

One more alternative is to use a structure instead of a class. But structures should only be used if their fields are value types, because only then can you benefit from the built-in field-by-field comparison of their values. And even using structs instead of classes requires implementing a custom GetHashCode: without it, there is a risk that 1) the auto-generated implementation will use reflection, or 2) its results will not be well distributed across int, leading to performance problems when adding to a HashSet.
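
To see the pitfall in isolation, here is a minimal, self-contained sketch (hypothetical data; NoEquality is a stand-in for a named class without equality members) contrasting the two grouping behaviors:

using System;
using System.Linq;

class NoEquality
{
    public string ProductCode;
    public string CentreNumber;
}

static class GroupByDemo
{
    static void Main()
    {
        var raw = new[]
        {
            new { Code = "A", Centre = "1" },
            new { Code = "A", Centre = "1" },   // same key twice on purpose
            new { Code = "B", Centre = "2" },
        };

        // Named class without Equals/GetHashCode: reference equality makes
        // every key distinct, so this yields 3 groups instead of 2.
        var groupsByClass = raw
            .GroupBy(q => new NoEquality { ProductCode = q.Code, CentreNumber = q.Centre })
            .Count();

        // Anonymous type: compiler-generated value equality yields 2 groups.
        var groupsByAnon = raw
            .GroupBy(q => new { q.Code, q.Centre })
            .Count();

        Console.WriteLine("named class: {0}, anonymous type: {1}", groupsByClass, groupsByAnon); // 3, 2
    }
}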

When profiling a loop, look at entire execution time of that loop

Today’s piece of advice will not be backed by concrete examples of code; it will just be loose observations from a profiling session and the related conclusions. Let’s assume there is C# code doing some intense manipulations over in-memory data. The operations are done in a loop: the more data there is, the more iterations of the loop. The aim of the operations is to generate a kind of summary of data previously retrieved from a database. It had been proved to work well in test environments. Suddenly, in production, my team noticed this piece of code takes about 20 minutes to execute. It turned out there were about 16 thousand iterations of the loop. It was impossible to test such high load manually; the testing only confirmed the correctness of the logic itself.

After an investigation and some experiments it turned out that a bad data structure was to blame. The loop did some lookup-heavy operations over a List<T>, which are obviously O(n), as the list is implemented over an array. Substituting HashSet<T> for List<T> caused a dramatic reduction of the execution time, to a dozen or so seconds. This is not as surprising as it may seem, because HashSet<T> is implemented over a hashtable and has O(1) lookup time. These are data structure fundamentals taught in every decent computer science lecture. The operation in the loop looked innocent and the author did not bother to anticipate the future load on the logic. A programmer should always keep in mind an estimate of the amount of data to be processed by their implementation and think of the appropriate data structure. By appropriate data structure I mean a choice between fundamental structures like an array (vector), a linked list, a hashtable or a binary tree. This could be the first important conclusion, but I encouraged my colleagues to perform profiling. The results were not so obvious.

Although measuring the timing of the entire loop with 16 thousand iterations showed clearly that the hashtable-based implementation performs orders of magnitude better, when we stepped over individual loop instructions there was almost no difference in their timings. The loop consisted of several .Where calls over the entire collection. These calls took around 10 milliseconds in both the List<T> and the HashSet<T> implementation. If we had not bothered to measure the entire loop execution, it would have been pretty easy to conclude there is no difference! Even performing such step measurement during development can be misleading, because at first sight, does 10 milliseconds look suspicious? Of course not. Not only does it look unsuspicious, but it also works well. At least in a test environment with test data.

As I understand it, the timings might have been similar because we measured only some of the initial iterations of the loop. If the data we are searching for happens to be at the beginning of the array, the lookup can obviously be fast; for some edge cases even faster than hashing the key and scanning the corresponding bucket of a hashtable.
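
To make the effect tangible, here is a minimal standalone sketch (hypothetical data, not the production code; Contains stands in for the original .Where-based lookups) that contrasts the same lookup-heavy loop over List<T> and HashSet<T>:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static class LookupDemo
{
    static void Main()
    {
        // 16 thousand keys, mirroring the load described above (made-up data).
        var codes = Enumerable.Range(0, 16000).Select(i => "P" + i).ToList();
        var list = new List<string>(codes);
        var set = new HashSet<string>(codes);

        var sw = Stopwatch.StartNew();
        var hits = 0;
        foreach (var c in codes)
            if (list.Contains(c)) hits++;    // O(n) scan per lookup, O(n^2) overall
        Console.WriteLine("List<T>:    {0} ms, {1} hits", sw.ElapsedMilliseconds, hits);

        sw.Restart();
        hits = 0;
        foreach (var c in codes)
            if (set.Contains(c)) hits++;     // O(1) hash lookup, O(n) overall
        Console.WriteLine("HashSet<T>: {0} ms, {1} hits", sw.ElapsedMilliseconds, hits);
    }
}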

For me there are two important lessons learned here:

  • Always think of data structures and amount of data to be processed.
  • When doing performance profiling, measure everything. Concentrate on amortized timings of entire scope of code in question. Do not try to reason about individual subtotals.

Entity object in EF is partially silently read-only

What this post is all about is the following program, written in C# using Entity Framework 6.1.3, throwing at line 25 and not at line 23 of the listing below.

We can see the simplest usage of Entity Framework here. There is a Test class and a TestChild class which contains a reference to an instance of Test named Parent. This reference is marked as virtual, so Entity Framework is instructed to load the instance in a lazy manner, i.e. upon first usage of that reference. In the database schema, the TestId column is obviously a foreign key to the Test table.

I create an entity object, save it to the database and then retrieve it at line 21. Because the class uses virtual properties, Entity Framework dynamically creates a custom proxy type in order to implement the lazy behavior underneath.

Now let’s suppose I need to modify something in an object retrieved from the database. It turns out I can modify the Value property, which is a plain string. What is more, I can also assign to the Parent property, but… the modification is not preserved! The program throws at line 25 because the assignment at line 24 is silently ignored by the framework.

I actually was trapped by this when I needed to modify a collection in a complicated object graph. I am deeply disappointed that Entity Framework on the one hand allows modification of non-virtual properties, but on the other hand silently ignores writes to virtual ones. This can get a developer into trouble.

Of course I am aware it is not good practice to work directly on objects of data model classes. I recovered from this situation with AutoMapper. But this is a quirk, and even a skilled developer will hesitate before modifying anything returned by Entity Framework.
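
For completeness, a minimal sketch of one possible way around this, assuming the standard EF6 configuration flags: with dynamic proxies disabled, First() materializes a plain TestChild whose Parent setter behaves like any other property (lazy loading is lost, so the parent has to be loaded eagerly):

using (var db = new TestDBContext())
{
    db.Configuration.ProxyCreationEnabled = false;  // no DynamicProxies subclass
    db.Configuration.LazyLoadingEnabled = false;    // assumption: eager loading is acceptable here

    var c = db.TestChildren.Include("Parent").First();
    c.Parent = null;                                // plain setter, the assignment sticks
    // c.Parent is really null now; the corresponding check would not throw
}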

using System;
using System.Data.Entity;
using System.Linq;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            using (var db = new TestDBContext())
            {
                var t = new Test { Value = "Hello" };
                var c = new TestChild { Value = "Hello from child", Parent = t };
                db.TestChildren.Add(c);
                db.SaveChanges();
            }
            using (var db = new TestDBContext())
            {
                // The type of c is System.Data.Entity.DynamicProxies.TestChild_47042601AE8E209C11CC25521C2746A2B9D93EC625A6F20BA3D60926278A3D21
                var c = db.TestChildren.First();
                c.Value = string.Empty;
                if (c.Value != string.Empty) throw new Exception();
                c.Parent = null;
                if (c.Parent != null) throw new Exception();
            }
        }
    }

    public class TestDBContext : DbContext
    {
        public TestDBContext() : base("Name=default")
        {
            Database.SetInitializer<TestDBContext>(new CreateDatabaseIfNotExists<TestDBContext>());
        }
        public DbSet<Test> Tests { get; set; }
        public DbSet<TestChild> TestChildren { get; set; }
    }

    public class Test
    {
        public int Id { get; set; }
        public string Value { get; set; }
    }

    public class TestChild
    {
        public int Id { get; set; }
        public int TestId { get; set; }
        public string Value { get; set; }
        public virtual Test Parent { get; set; }
    }

}

Kdiff3 as Git mergetool and --auto pitfall

In Git, when merging, you can sometimes observe a conflict being somehow auto-magically resolved without any user interaction at all and without the resolution tool window being displayed. I have experienced this with the commonly used Kdiff3 tool set up as the default mergetool. It turns out that Git hardcodes the --auto option when invoking Kdiff3. This option instructs Kdiff3 to perform the merge automatically and silently, displaying the window only if it is not able to figure out the conflict resolution itself. My understanding is that it is intended solely for trivial cases, and most of the time the window is displayed anyway. However, I am writing this post obviously because this “feature” once got me into trouble: the tool resolved a conflict it should not have touched and, of course, resolved it wrong. In my opinion it is undoubtedly better to always make the decision on your own instead of relying on the tool’s not-so-well-known logic.

The solution is to define a custom tool in .gitconfig that invokes the Kdiff3 executable with custom command line parameters. Here is the configuration I use on Windows:

[merge]
    tool = kdiff3NoAuto
    conflictstyle = diff3

[mergetool "kdiff3NoAuto"]
    cmd = C:/Program\\ Files\\ \\(x86\\)/KDiff3/kdiff3.exe --L1 \"$MERGED (Base)\" --L2 \"$MERGED (Local)\" --L3 \"$MERGED (Remote)\" -o \"$MERGED\" \"$BASE\" \"$LOCAL\" \"$REMOTE\"
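
With this configuration in place, when a merge stops on conflicts, the resolution is started with the command below; since --auto is no longer passed, the KDiff3 window is always displayed:

git mergetool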