Archive for General programming

‘Differs only in casing’ class of problems

Error message from TypeScript compiler

This one bit me yesterday at the end of the day. It is a TypeScript project with some instrumentation done with gulp. The command shown attempts to launch the TypeScript compiler; the console is the standard Git Bash for Windows. I didn’t realize what was going on at first sight. The messages were confusing and mysterious. I made sure I didn’t have any changes in the repository compared to HEAD. I opened the folder in Visual Studio Code and everything was fine. Only the compiler saw the problems.

The next day I spotted a small detail in my path. Just take a look at the casing of the directory names:

TypeScript compiler ran successfully

After changing to the directory using names with proper casing, everything was fine. What actually happened was that I had somehow opened Git Bash with the current directory name spelled in lower case. It is not that difficult to do. This is, I think, a flaw in the MinGW implementation of bash: it lets you change the directory using its name written in whatever case. That is not the problem itself, because cmd.exe allows you to do so as well. The problem is that it then stores that spelling of the path (probably in the PWD environment variable). Some cross-platform tools, which are case sensitive on other platforms, when executed from Git Bash with mismatched working directory casing, may treat such a path as a separate one, different from the original. This especially affects tools which process files using paths relative to the current directory, like the TypeScript compiler.
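The core of the failure mode can be reduced to a plain string comparison. A case-sensitive tool keys its file lists by the exact path text, so two spellings of the same Windows directory count as two different locations. A minimal sketch (the paths below are made up for illustration):

```javascript
// Case-sensitive tools compare paths as plain strings, so a lower-cased
// working directory and the real on-disk spelling look like two locations.
const pathFromBash = "C:/users/john/project/src/app.ts"; // what PWD reported
const pathOnDisk = "C:/Users/John/Project/src/app.ts";   // actual casing

const sameBytes = pathFromBash === pathOnDisk;
const sameIgnoringCase =
  pathFromBash.toLowerCase() === pathOnDisk.toLowerCase();

console.log(sameBytes);        // false: distinct keys to a case-sensitive tool
console.log(sameIgnoringCase); // true: the same file to Windows itself
```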

This may well be a wider class of problems, and I guess there are other development tools which behave like this when launched from Git Bash under the conditions described above.

In C# interface implementations are not inherited

It may be obvious to some readers, but I was a little surprised when I discovered it. I actually realized this while looking at a non-trivial class hierarchy in a real-world application. One could easily think that discussion of inheritance is a theoretical area which mostly comes up during job interviews, but that is not true. I will explain the real use case and the real reasoning behind this hierarchy later in this post; for now, please take a look at the following program. The point is that 1) we have to use a reference of an interface type, and we want more than one specialized implementation of the interface; 2) we need class B to inherit from class A. Without the second requirement it would be obvious: it would be sufficient to write two separate implementations of IActivity and we would be done.

using static System.Console;

namespace ConsoleApplication2
{
    public interface IActivity
    {
        void DoActivity();
    }

    public class A : IActivity
    {
        public void DoActivity()
        {
            WriteLine("A does activity");
        }
    }

    public class B : A
    {
        public new void DoActivity()
        {
            WriteLine("B does activity");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            IActivity ia = new B();
            ia.DoActivity();
            ReadKey();
        }
    }
}

It prints A does activity despite the ia variable storing a reference to an object of type B, and despite the explicit declaration that the derived method hides the base one. It was not clear to me why. The type B obviously has its own implementation, so why is it not called? The reason is that the interface mapping for IActivity is established when class A declares it: the interface's DoActivity slot is bound to A.DoActivity, and since that method is neither virtual nor re-mapped by B, a call through the interface never reaches B's method. To overcome this I initially created a base class declared as abstract:

using static System.Console;

namespace ConsoleApplication2
{
    public abstract class Base
    {
        public abstract void DoActivity();
    }

    public interface IActivity
    {
        void DoActivity();
    }

    public class A : Base, IActivity
    {
        public override void DoActivity()
        {
            WriteLine("A does activity");
        }
    }

    public class B : A
    {
        public override void DoActivity()
        {
            WriteLine("B does activity");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            IActivity ia = new B();
            ia.DoActivity();
            ReadKey();
        }
    }
}

It prints B does activity, but it is also overcomplicated. Then I came up with a simpler solution. It turns out we have to explicitly mark class B as implementing IActivity, which forces the compiler to re-map the interface onto B's members:

using static System.Console;

namespace ConsoleApplication2
{
    public interface IActivity
    {
        void DoActivity();
    }

    public class A : IActivity
    {
        public void DoActivity()
        {
            WriteLine("A does activity");
        }
    }

    public class B : A, IActivity
    {
        public new void DoActivity()
        {
            WriteLine("B does activity");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            IActivity ia = new B();
            ia.DoActivity();
            ReadKey();
        }
    }
}

It prints B does activity, but it is not perfect: method hiding is not a good practice. Finally I ended up with a more elegant (and, I guess, the simplest) solution:

using static System.Console;

namespace ConsoleApplication2
{
    public interface IActivity
    {
        void DoActivity();
    }

    public class A : IActivity
    {
        public virtual void DoActivity()
        {
            WriteLine("A does activity");
        }
    }

    public class B : A, IActivity
    {
        public override void DoActivity()
        {
            WriteLine("B does activity");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            IActivity ia = new B();
            ia.DoActivity();
            ReadKey();
        }
    }
}

Here we use the virtual and override modifiers to clearly express the intention of specializing the method. It is better than the previous version because, just by looking at class A, we are already informed that further specialization may occur. In fact, once DoActivity is virtual, re-declaring IActivity on class B is no longer even required: the interface maps to A.DoActivity, and ordinary virtual dispatch takes care of the rest.

The real usage scenario is an interface representing an Entity Framework Core context. We have two distinct implementations: one uses the real database and the other uses an in-memory database for tests. The latter inherits from the former because inheritance is, at its core, about reusing code: we want the in-memory implementation to have the same methods as the regular one, with some slight modifications, e.g. in methods executing raw SQL. We also have to refer to these objects through an interface, because this is how dependency injection containers work.

As you can see, what might seem purely theoretical is actually used in a real-world line-of-business application. I had been pretty confident that I understood the principles of inheritance in object-oriented programming ever since my undergraduate computer science lectures, yet, as I mentioned, I was surprised to discover that we have to explicitly put : IActivity on class B when the implementation was already there. This example encourages me to always be prepared to verify the assumptions I make.

When profiling a loop, look at entire execution time of that loop

Today’s piece of advice will not be backed by concrete code examples; it will just be loose observations from a profiling session and the related conclusions. Let’s assume there is C# code doing some intense manipulation of in-memory data. The operations are done in a loop: the more data there is, the more iterations run. The aim of the operations is to generate a kind of summary of data previously retrieved from a database. It had been proven to work well in test environments. Suddenly, in production, my team noticed this piece of code took about 20 minutes to execute. It turned out there were about 16 thousand iterations of the loop. It was impossible to exercise such load with manual testing; testing only confirmed the correctness of the logic itself.

After an investigation and some experiments it turned out that a bad data structure was to blame. The loop did some lookup-heavy operations over a List<T>, which are obviously O(n), as the list is implemented over an array. Replacing List<T> with HashSet<T> caused a dramatic reduction of execution time, down to a dozen or so seconds. This is not as surprising as it may seem, because HashSet<T> is implemented over a hashtable and has O(1) average lookup time. These are data structure fundamentals taught in every decent computer science lecture. The operation in the loop looked innocent, and the author did not bother to anticipate the future load on the logic. A programmer should always keep in mind an estimate of the amount of data to be processed by their implementation and think of an appropriate data structure, by which I mean a choice between fundamental structures like an array (vector), a linked list, a hashtable, or a binary tree. This could be the first important conclusion, but I encouraged my colleagues to perform profiling, and the results were not so obvious.
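The effect is easy to reproduce in any language that offers both structures. The original case was C#; below is a sketch of the same contrast using a JavaScript array versus a hash-based Set (the names and data are made up for illustration):

```javascript
// Membership tests: scanning an array is O(n) per lookup, while a hash-based
// Set answers in O(1) on average.
const n = 16000;
const items = Array.from({ length: n }, (_, i) => i);
const itemSet = new Set(items);

// O(n) per key: Array.prototype.includes walks the backing array.
function countHitsInArray(keys) {
  return keys.filter(k => items.includes(k)).length;
}

// O(1) average per key: Set.prototype.has hashes straight to a bucket.
function countHitsInSet(keys) {
  return keys.filter(k => itemSet.has(k)).length;
}

const keys = [0, 7999, 15999, 20000]; // 20000 is out of range on purpose
console.log(countHitsInArray(keys)); // 3
console.log(countHitsInSet(keys));   // 3, but without any scanning
```

With 16 thousand lookups against 16 thousand items, the array version performs on the order of n²/2 comparisons in total, which is exactly the kind of blow-up observed in production.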

The measurement of the entire loop with 16 thousand iterations showed clearly that the hashtable-based implementation performs orders of magnitude better, but when we stepped over individual loop instructions there was almost no difference in their timings. The loop consisted of several .Where calls over the entire collection. These calls took around 10 milliseconds in both the List<T> and the HashSet<T> implementations. If we had not bothered to measure the entire loop execution, it would have been pretty easy to conclude there was no difference! Even performing such step-by-step measurement during development can be misleading because, at first sight, does 10 milliseconds look suspicious? Of course not. Not only does it look unsuspicious, it also works well. At least in a test environment with test data.

As I understand it, the timings might have been similar because we measured only the first few iterations of the loop. If the data we are searching for is near the beginning of the array, the lookup can obviously be fast; in some edge cases even faster than hashing and looking up the corresponding bucket in a hashtable.

For me there are two important lessons learned here:

  • Always think of data structures and amount of data to be processed.
  • When doing performance profiling, measure everything. Concentrate on amortized timings of entire scope of code in question. Do not try to reason about individual subtotals.

My Visual Studio workflow when working on web applications

I use IIS Express for the majority of my ASP.NET development. Generally I prefer not to restart IIS Express, because loading a big application again may take some time. Unfortunately, Visual Studio is eager to close IIS Express after you finish debugging. In the past there were tips on how to prevent this behavior, for instance disabling Edit and Continue, after which IIS Express stayed up once debugging finished. I observed that since Visual Studio 2015 Update 2 this no longer works. But there is yet another way to preserve IIS Express: we must not stop debugging, but detach the debugger instead. I prefer creating a custom toolbar button:

Right click on toolbar → Customize → Commands tab → Toolbar: Debug → Add Command → Debug → Detach All

An important caveat is also to use Multiple startup projects configured in the solution properties. A web application typically consists of many projects, so it is convenient to run them all at once.

With IIS Express continuously running in the background, let me show how to quickly start debugging. Here I also prefer a custom toolbar button:

Right click on toolbar → Customize → Commands tab → Toolbar: Standard → Add Command → Debug → Attach to Process

Or even better, keyboard binding:

Tools → Options → Environment → Keyboard → Debug.AttachtoProcess

And now we face the most difficult part: choosing the correct iisexpress.exe process; there will likely be a few of them. We can search for the PID manually by right-clicking the IIS Express tray icon → Show All Applications, where the PID is displayed. But this may drive you crazy, as you have to do it many times. I recommend a simple cmd script which invokes a PowerShell command to query WMI:

@echo off
powershell.exe -Command "Get-WmiObject Win32_Process -Filter """CommandLine LIKE '%%webapi%%' AND Name = 'iisexpress.exe'""" | Select ProcessId | ForEach { $_.ProcessId }"

Here I am searching for the process whose command line contains the string “webapi”. I have to use triple quotes and doubled percent signs because that is how cmd’s escaping works. The final pipe to ForEach is not necessary; it serves the purpose of formatting the output as a raw number rather than a fragment of a table. I always have a cmd window running, so I put this little script into a directory on my PATH and can view the desired PID instantaneously. By the way, Windows Management Instrumentation is a tremendously powerful interface for obtaining information about almost anything in the operating system.

Knowing the PID, in the Attach to Process dialog you can jump down the process list by simply pressing the letter i, and then visually pick the instance with the relevant PID.

AngularJS custom validator, an excellent example of duck typing

A StackOverflow answer is not usually worth a blog post, but this time I am sharing something which 1) is an invaluable idea for developers using AngularJS, and 2) may serve as a starting point for deeper theoretical considerations regarding types in programming languages.

The link to before-mentioned SO:
http://stackoverflow.com/questions/18900308/angularjs-dynamic-ng-pattern-validation

I have created a codepen of this case:

See the Pen AngularJS ng-pattern — an excellent example of duck typing by Przemyslaw S. (@przemsen) on CodePen.

In the codepen you can see a JavaScript object assigned to the ng-pattern directive, which would typically be a regular expression. The point is that the AngularJS check for a regular expression is simply a call to its .test() function. So we can attach such a test function to any object and implement whatever custom validation logic we need. Here we see the beauty of duck typing, which allows us to freely customize the behavior of the framework.
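A minimal sketch of the idea (the validation rule below, even-length strings, is made up for illustration; the only requirement the framework imposes is that the object expose a test() method):

```javascript
// Any object with a test(value) method can stand in for a RegExp wherever
// the consumer's only requirement is "something I can call .test() on".
const evenLengthPattern = {
  test: function (value) {
    return typeof value === "string" && value.length % 2 === 0;
  }
};

console.log(evenLengthPattern.test("abcd")); // true: length 4 is even
console.log(evenLengthPattern.test("abc"));  // false: length 3 is odd
```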

I said this could be the beginning of a discussion on how to actually understand duck typing. It is not that obvious, and programmers tend to understand the term intuitively rather than precisely, as it has no formal definition. I recommend Eric Lippert’s blog post on the subject: https://ericlippert.com/2014/01/02/what-is-duck-typing/.

The horror of JavaScript Date

I had heard about the difficulties of using the JavaScript date APIs, but it was not until recently that I experienced them myself. I am going to describe one particular phenomenon that can lead to wrong date values being sent from a client’s browser to a server. When analysing these examples, please keep in mind they were executed on a machine in the UTC+01:00 time zone, unless I explicitly state that an example refers to a different time zone.

Let’s try to parse a JavaScript Date object:

var c = new Date("2015-03-01");
c.toString();
> Sun Mar 01 2015 01:00:00 GMT+0100

What draws my attention is the time value. It is 01:00, which may look strange, but it is not if we correlate this value with the time zone information stored along with the JavaScript object. The time zone information is an inherent part of the object and comes from the browser, which in turn derives it from the operating system’s settings. It turns out these two pieces of information are essential when making AJAX calls, because then the .toJSON() method is called. I am making this assumption based on the behavior of the ngResource library, but other frameworks and libraries probably do the same, because they must somehow convert a JavaScript Date object to a universal text format to be able to send it via HTTP. By the way, .toJSON() returns the same result as .toISOString().

c.toJSON();
> 2015-03-01T00:00:00.000Z

What we have got here is a UTC-normalized date and time value. The actual time value, combined with the time zone offset, allows the date to be normalized when sending it to a server. The most important thing here is that the values stored in the Date object are expressed in the local time zone, i.e. the browser’s one. This implies some strange consequences, like the one of being in a UTC-xx:xx time zone. Let’s try the same example after setting the time zone to UTC-01:00.

var c = new Date("2015-03-01");
c.toString();
> Sat Feb 28 2015 23:00:00 GMT-0100

The problem here is that we have ended up with parsed values which differ from their original textual representation, i.e. March 1st versus February 28th. But it is still OK, provided that our date processing logic relies on the normalized values:

c.toJSON();
> 2015-03-01T00:00:00.000Z

However, it can be misleading when we try to get individual date components. Here we try to get the day component:

c.getDate()
> 28

But in general the object can still serve its purpose if we rely on normalized values and call the appropriate methods.

The problem is that not all Date constructors behave the same way. Let’s try this one:

var b = new Date('03/01/2015');
b.toString();
> Sun Mar 01 2015 00:00:00 GMT+0100

Now the parsed date still contains time zone information derived from the operating system, but it does not contain a time value shifted by the corresponding offset. In the first example we had a 01:00 time, which corresponds to GMT+01:00; here we have just a 00:00 time and, of course, we still have the GMT+01:00 time zone information. This time zone information without a correctly shifted time value is actually catastrophic. Look what happens when .toJSON() is called:

b.toJSON()
> 2015-02-28T23:00:00.000Z
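The two constructors can be put side by side. The checks below hold in every time zone, which is precisely the asymmetry described above (a sketch; run in a browser console or Node):

```javascript
// ISO date-only strings are parsed as UTC midnight; the slash format is
// parsed as LOCAL midnight. Their serialized forms therefore differ in any
// non-UTC zone.
const iso = new Date("2015-03-01");   // UTC midnight
const slash = new Date("03/01/2015"); // local midnight

console.log(iso.toJSON());     // "2015-03-01T00:00:00.000Z" everywhere
console.log(slash.getHours()); // 0, i.e. midnight in the local zone
```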

The result is a wrong date sent to the server. And this is not the end of the observations. The same phenomenon can also happen the other way round, i.e. when we are transferring date values from a server to a client. Now let’s assume the server sent the following date and we are parsing it. Please keep in mind that the actual parsing may happen implicitly in some framework’s code, for instance when specifying a data source for a Kendo grid; the one who parses it for us can be the framework.

var d = new Date("2015-03-01T00:00:00.000Z");
d.toString();
> Sun Mar 01 2015 01:00:00 GMT+0100

As we can see, this constructor results in a shifted time value, just like the Date("2015-03-01") one. But when it comes to displaying these retrieved values, we inevitably have to answer the question of whether we aim at showing the local time or the server time. We have to remember that if the client’s browser is in a GMT-xx:xx time zone and we show a parsed component value (as in the c.getDate() example) rather than the normalized one, this may result in a wrong date displayed in front of the user. I say may, because this can really be a desired behavior, depending on the requirements. For example, in Angular we can enforce displaying the normalized value by providing the optional time zone parameter to $filter('date').

var c = new Date("2015-03-01");
$filter('date')(c, "yyyy-MM-dd", "+0000");
> 2015-03-01

Here we do not worry about the internal component values of object c, whose prototype is Date. It may internally store February 28th, but it does not matter: $filter is told to output values for UTC time. It is also worth mentioning that this Date constructor assumes its argument is specified in UTC time. So we populate the Date object with a UTC value and read back a UTC value, without worrying about the internal representation, which is local. This approach results in the output date and time being equal to the intended input.
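Outside AngularJS the same normalization can be achieved in plain JavaScript, since .toJSON() already emits the UTC form; a minimal sketch:

```javascript
// Take the date part of the UTC serialization; the local internal
// representation (possibly February 28th) never surfaces.
const c = new Date("2015-03-01");
const utcDateOnly = c.toJSON().slice(0, 10);
console.log(utcDateOnly); // "2015-03-01" regardless of the local time zone
```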

As a conclusion I should write some recommendations on how to use the Date object and what to avoid, but honestly I cannot say I have gained enough knowledge in this area to make any kind of guidelines. I can only afford some general statements. Pay attention to what your library components, for instance a date picker control, operate on. Is their output a Date object or a string representation of the date? What do you do with this output? Is their input a Date object, or a string representation they parse on their own? Examine carefully and do not blindly trust your framework. I personally do not accept a situation where something works and I do not know why. I tend to dig into the details and find the nitty-gritty. I have always believed that deep research (even when it takes much time) and understanding of the underlying technology is worthwhile, and I often recall the post by Scott Hanselman, who also appears to follow this principle.