My Visual Studio workflow when working on web applications

I use IIS Express for the majority of my ASP.NET development. I generally prefer not to restart IIS Express, because reloading a big application can take a while. Unfortunately, Visual Studio is eager to close IIS Express after you finish debugging. In the past there were tips on how to prevent this behavior — for instance, disabling Edit and Continue used to keep IIS Express alive after debugging finished. I observed that since Visual Studio 2015 Update 2 this no longer works. But there is yet another way to preserve IIS Express: instead of stopping debugging, detach the debugger. I prefer creating a custom toolbar button:

Right click on toolbar → Customize → Commands tab → Toolbar: Debug → Add command → Debug → Detach All

An important caveat: use Multiple startup projects, configured in the solution properties. A web application typically consists of many projects, so it is convenient to run them all at once.

With IIS Express continuously running in the background, I would like to show how to quickly start debugging. Here, too, I prefer a custom toolbar button:

Right click on toolbar → Customize → Commands tab → Toolbar: Standard → Add command → Debug → Attach to Process

Or, even better, a keyboard binding:

Tools → Options → Environment → Keyboard → Debug.AttachtoProcess

And now we face the most difficult part: choosing the correct iisexpress.exe process. There will likely be a few of them. We can find the PID manually by right-clicking the IIS Express tray icon → Show All Applications, which displays the PID. But this may drive you crazy, as you have to do it many times. I recommend a simple cmd script which invokes a PowerShell command to query WMI:

@echo off
powershell.exe -Command "Get-WmiObject Win32_Process -Filter """CommandLine LIKE '%%webapi%%' AND Name = 'iisexpress.exe'""" | Select ProcessId | ForEach { $_.ProcessId }"

Here I am searching for the process whose command line contains the string “webapi”. I have to use triple quotes and doubled percent signs because that is how cmd’s escaping works. The final pipe to ForEach is not necessary; it formats the output as a raw number instead of a fragment of a table. I always have a cmd window running, so I put this little script in a directory on my PATH and can view the desired PID instantly. By the way, Windows Management Instrumentation is a tremendously powerful interface for obtaining information about almost anything in the operating system.

Knowing the PID, you can jump down the process list in the Attach to Process dialog by simply pressing the letter i, and then visually pick the instance with the relevant PID.

Entity object in EF is partially silently read-only

This post is about the following program, written in C# using Entity Framework 6.1.3, which throws at the second if check (on c.Parent) and not at the first one (on c.Value).

Here we see the simplest possible usage of Entity Framework. There is a Test class and a TestChild class which contains a reference to an instance of Test named Parent. This reference is marked virtual, so Entity Framework is instructed to load the instance lazily, i.e. upon first use of the reference. In the database model, the TestId column is obviously a foreign key to the Test table.

I create an entity object, save it to the database, and then retrieve it with db.TestChildren.First(). Because the class uses virtual properties, Entity Framework dynamically creates a custom proxy type in order to implement the lazy behavior underneath.

Now let’s suppose I need to modify something in an object retrieved from the database. It turns out I can modify the Value property, which is a plain string. I can also seemingly modify the Parent property, but… the modification is not preserved! The program throws at the c.Parent != null check because the preceding c.Parent = null assignment is silently ignored by the framework.

I was actually trapped by this when I needed to modify a collection in a complicated object graph. I am deeply disappointed that Entity Framework allows modification of non-virtual properties on the one hand, but silently ignores modifications of virtual ones on the other. This can easily get a developer into trouble.

Of course, I am aware it is not good practice to work directly on data-model objects. I recovered from this situation with AutoMapper. But this is a quirk, and even a skilled developer will hesitate to modify anything returned by Entity Framework.

using System;
using System.Data.Entity;
using System.Linq;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            using (var db = new TestDBContext())
            {
                var t = new Test { Value = "Hello" };
                var c = new TestChild { Value = "Hello from child", Parent = t };
                db.TestChildren.Add(c);
                db.SaveChanges();
            }
            using (var db = new TestDBContext())
            {
                // Type of c is System.Data.Entity.DynamicProxies.TestChild_47042601AE8E209C11CC25521C2746A2B9D93EC625A6F20BA3D60926278A3D21
                var c = db.TestChildren.First();
                c.Value = string.Empty;
                if (c.Value != string.Empty) throw new Exception();
                c.Parent = null;
                if (c.Parent != null) throw new Exception();
            }
        }
    }

    public class TestDBContext : DbContext
    {
        public TestDBContext() : base("Name=default")
        {
            Database.SetInitializer<TestDBContext>(new CreateDatabaseIfNotExists<TestDBContext>());
        }
        public DbSet<Test> Tests { get; set; }
        public DbSet<TestChild> TestChildren { get; set; }
    }

    public class Test
    {
        public int Id { get; set; }
        public string Value { get; set; }
    }

    public class TestChild
    {
        public int Id { get; set; }
        public int TestId { get; set; }
        public string Value { get; set; }
        public virtual Test Parent { get; set; }
    }

}

AngularJS custom validator, an excellent example of duck typing

A StackOverflow answer is usually not worth a blog post, but this time I am sharing something which 1) is an invaluable idea for developers using AngularJS, and 2) may serve as a starting point for deeper theoretical considerations about types in programming languages.

The link to the aforementioned SO answer:
http://stackoverflow.com/questions/18900308/angularjs-dynamic-ng-pattern-validation

I have created a codepen of this case:

See the Pen AngularJS ng-pattern — an excellent example of duck typing by Przemyslaw S. (@przemsen) on CodePen.

In the codepen you can see a JavaScript object assigned to the ng-pattern directive, which would typically be a regular expression. The point is that AngularJS’s check for “being a regular expression” is simply a call to a .test() function. So we can attach such a test function to any object and implement whatever custom validation logic we need. Here we see the beauty of duck typing, which lets us freely customize the framework’s behavior.

I said this could be the beginning of a discussion on how to actually understand duck typing. It is not that obvious, and programmers tend to understand it intuitively rather than precisely, as it is not a term with any kind of formal definition. I recommend Eric Lippert’s blog post on the subject: https://ericlippert.com/2014/01/02/what-is-duck-typing/.

The horror of JavaScript Date

I had heard about the difficulties of JavaScript date APIs, but it was not until recently that I experienced them myself. I am going to describe one particular phenomenon that can lead to wrong date values being sent from the client’s browser to a server. When analysing these examples, please keep in mind they were executed on a machine in the UTC+01:00 time zone, unless I explicitly say an example refers to a different one.

Let’s try to parse a JavaScript Date object:

var c = new Date("2015-03-01");
c.toString();
> Sun Mar 01 2015 01:00:00 GMT+0100

What draws my attention is the time value. It is 01:00, which may look strange. But it is not, if we correlate it with the time zone information stored along with the JavaScript object. The time zone information is an inherent part of the object; it comes from the browser, which obviously derives it from the operating system’s settings. It turns out these two pieces of information are essential when making AJAX calls, because then the .toJSON() method is called. I am making this assumption based on the observed behaviour of the ngResource library, but other frameworks and libraries probably do the same, because they must somehow convert a JavaScript Date object to a universal text format in order to send it over HTTP. By the way, .toJSON() returns the same result as .toISOString().

c.toJSON();
> 2015-03-01T00:00:00.000Z

What we have got here is a UTC-normalized date and time value. The local time value combined with the time zone offset allows the date to be normalized when sending it to a server. The most important thing is that the values stored in the Date object are expressed in the local time zone, i.e. the browser’s one. This has some strange consequences, for example in UTC-xx:xx time zones. Let’s try the same example after setting the time zone to UTC-01:00.

var c = new Date("2015-03-01");
c.toString();
> Sat Feb 28 2015 23:00:00 GMT-0100

The problem here is that we have ended up with parsed component values which differ from their original textual representation, i.e. March 1st versus February 28th. But it is still OK, provided that our date processing logic relies on normalized values:

c.toJSON();
> 2015-03-01T00:00:00.000Z

However, it can be misleading when we try to get individual date components. Here we try to get the day component:

c.getDate()
> 28

But in general the object can still serve its purpose, as long as we rely on normalized values and call the appropriate methods.
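To illustrate (a sketch; the results of the local accessors depend on the machine's time zone, while the UTC accessors do not):

```javascript
var c = new Date("2015-03-01"); // a date-only ISO string is parsed as UTC
// Local accessors like c.getDate() reflect the machine's time zone and
// may return 28 in a UTC-xx:xx zone. The UTC accessors always return
// the normalized components:
console.log(c.getUTCFullYear()); // 2015
console.log(c.getUTCMonth());    // 2 (months are zero-based, so 2 means March)
console.log(c.getUTCDate());     // 1
```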

The problem is that not all Date constructors behave in the same way. Let’s try this one:

var b = new Date('03/01/2015');
b.toString();
> Sun Mar 01 2015 00:00:00 GMT+0100

Now the parsed date still contains time zone information derived from the operating system, but the time value is not shifted by the corresponding offset. In the first example we had a 01:00 time corresponding to GMT+01:00; here we have just a 00:00 time and, of course, we still have the GMT+01:00 time zone information. This time zone information without a correctly shifted time value is actually catastrophic. Look what happens when .toJSON() is called:

b.toJSON()
> 2015-02-28T23:00:00.000Z
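One way to sidestep this ambiguity (a sketch, not a universal recipe) is to build the Date explicitly from UTC components with Date.UTC, so that the normalized value is exactly the date we intend to send:

```javascript
// Date.UTC interprets its arguments as UTC components (month is zero-based),
// so the resulting object normalizes to the intended date in every time zone:
var b = new Date(Date.UTC(2015, 2, 1)); // 2 means March
console.log(b.toJSON()); // "2015-03-01T00:00:00.000Z"
```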

The result is a wrong date sent to the server. And this is not the end of the observations. The same phenomenon can also happen the other way round, i.e. when transferring date values from a server to a client. Now let’s assume the server sent the following date and we are parsing it. Please keep in mind that the actual parsing may happen implicitly in some framework’s code, for instance when specifying a data source for a Kendo grid. So the one who parses it for us may be the framework.

var d = new Date("2015-03-01T00:00:00.000Z");
d.toString();
> Sun Mar 01 2015 01:00:00 GMT+0100

As we can see, this constructor results in a shifted time value, just like the Date("2015-03-01") one. But when it comes to displaying these retrieved values, we inevitably have to decide whether we aim to show local time or server time. Remember that when the client’s browser is in a GMT-xx:xx time zone and we show a parsed component value (as in the c.getDate() example) rather than a normalized one, the user may see a wrong date. I say “may”, because this can actually be the desired behaviour, depending on the requirements. For example, in Angular we can enforce displaying the normalized value by providing the optional time zone parameter to $filter('date').

var c = new Date("2015-03-01");
$filter('date')(c, "yyyy-MM-dd", "+0000");
> 2015-03-01

Here we do not worry about the internal component values of object c, whose prototype is Date. It may internally store February 28th, but that does not matter: $filter is told to output values in UTC. It is also worth mentioning that for this string format the Date constructor assumes its argument is specified in UTC. So we populate the Date object with a UTC value and also output a UTC value, without worrying about the internal representation, which is local. This approach makes the output date and time equal to the intended input.

As a conclusion I should write some recommendations on how to use the Date object and what to avoid. But honestly, I have not yet gained enough knowledge in this area to offer any kind of guidelines; I can only afford some general advice. Pay attention to what your library components — for instance, a date picker control — operate on. Is their output a Date object or a string representation of the date? What do you do with that output? Is their input a Date object, or a string they parse on their own? Examine carefully and do not blindly trust your framework. I personally do not accept a situation where something works and I do not know why. I tend to dig into the details and find the nitty-gritty. I have always believed that deep research (even when it takes much time) and understanding of the underlying technology are worthwhile, and I often recall a post by Scott Hanselman who also appears to follow this principle.

The bookmarks problem

I have been using Mozilla-based web browsers since 2003. Back in the day the application was called Mozilla Suite; then, in 2004, Firefox showed up using the same engine but a completely new front end. I migrated my profile many times over the years, but I always kept my bookmarks. Some of them surely remember those early days before Firefox (though the majority of the oldest no longer work, because the sites were shut down). The total number of bookmarks gathered over that time is over 1,000. And this is “the problem”.

I have made several attempts to clean up and organise this huge collection. I have tried to remove dead links and to group bookmarks into folders. I have tried using keywords and descriptions to search more effectively. All with no success. Now I have about a dozen folders, but I still find myself in trouble when I need to find a particular piece of information. The problem boils down to this: I remember exactly what the site is about, I am absolutely sure I have it in my collection, but I cannot find it, because either it has a strange title or the words in the URL are meaningless (Firefox searches only within titles and URLs, because obviously that is all it can do).

I realized I needed a tool that is much more powerful when it comes to searching bookmarks. I could not find anything satisfying my requirements, so I implemented it myself. Today I am introducing BookmarksBase, an open source tool written in C# that solves this issue.

BookmarksBase.Search

BookmarksBase embraces a concept that may seem ridiculous: why don’t we pull the entire textual contents of every bookmarked site? Do you think that is a lot of data? How much would it be? Even if you were to sacrifice a few hundred megabytes in order to search really effectively, wouldn’t it be worth the space?

Well, it turns out it takes much less space than I originally expected, and the tool works surprisingly fast, although it is implemented in managed code without any particular optimizations. First we run a separate tool to collect the data (BookmarksBase Importer); downloading and parsing take a minute or two. The produced index file containing all the text from all bookmarked sites, which I call bookmarksbase.xml, is only 12 MiB in my case (over 1,000 bookmarks). Then we can run BookmarksBase Search, which performs the actual searching within contents, addresses and titles. Of course, once bookmarksbase.xml is created, you can use whatever tool serves your purpose, e.g. grep, findstr (on Windows), or any decent text editor that can handle large amounts of text. I crafted the XML so that it is easily readable by a human: there are newlines, and the text is laid out in a nice column of fixed width (thanks to Lynx — see the source for details).

More details and a download link are available on GitHub.

PowerShell — my points of interest

I had never used PowerShell until quite recently. I have successfully solved problems with a bunch of other scripting languages, e.g. Python, Perl, Bash, AWK. They all served their purpose really well, and I did not feel I needed yet another scripting language. Furthermore, PowerShell looks nothing like any of the technologies I am familiar with, so I refused to start learning it many times.

However, when you work as a .NET developer, chances are that sooner or later you will come across a solution implemented in PowerShell. It could be, for instance, a deployment script you will have to maintain. This happened to me a while ago. Although the modification I committed was relatively simple and I figured it out rather quickly with a little help from Google, I decided to dig into the subject and check a few more things out. What I found after a bit of random research was quite impressive. I would like to share the three main features I have found so far that I consider valuable in a scripting technology. At the bottom of this post I also include some code snippets as a quick reference on how to accomplish particular tasks.

1. Out-GridView

In PowerShell you can manipulate the format of the output in many ways. You can generate HTML, CSV, whitespace-formatted text tables, etc. But there is also an option to view the output of a command in a WPF grid with a built-in filter. Look at the effect of the Get-Process | Out-GridView command — this is functionality you get out of the box with just a few keystrokes!

Out-GridView

2. Embedding C# code

This feature seems quite powerful. If you need more advanced techniques in your script, you can implement them inline in C# and then just invoke them.

Add-Type @'
using System;
using System.IO;
using System.Text;
      
public static class Program
{
    public static void Main()
    {
        Console.WriteLine("Hello World!");
    }
}
'@
 
[Program]::Main()

3. XML parsing done simply right

Whenever I had to do some XML parsing in scripts in other languages, I always felt somewhat confused. This is not the sort of thing you just recall from memory and type out as code. You have to use specific APIs, call them in a specific way, in a specific order, etc. I do not mean it is complicated in any way — it is not — but it is cumbersome in many languages, and I always had to look things up in a cheat sheet. Not any more 🙂 From now on I will always lean toward the simplest, and perhaps the best, implementation of XML parsing:

$d = [xml] "<a><b>1</b><c>2</c></a>"
$d.a.b

This outputs 1. Yes, it is as simple as that. You basically access member properties whose names match the XML nodes.

I am sharing these features because I had not imagined a scripting language could offer something so powerful. And this is probably only the tip of the iceberg, as I have just scratched the surface of the PowerShell world. I also suggest checking out a little script I wrote to explore PowerShell functionality: managesites.ps1. It may be useful for ASP.NET developers — it lets you delete sites from the IIS Express config file.

Miscellaneous code snippets:

  • if (test-path "c:\deploy"){ "aaa" }
  • $f="\file.txt";(gc $f) -replace "a","d" | out-file $f — this one is particularly important, because the equivalent in-place editing functionality in the MinGW implementations of Perl and sed does not seem to work correctly
  • foreach ($line in [System.IO.File]::ReadLines($filename)){ }
  • -match regex
  • ( Invoke-WebRequest URL | select content | ft -autosize -wrap | out-string )
  • [reflection.assembly]::LoadWithPartialName("Microsoft.VisualBasic") | Out-Null
    $input = [Microsoft.VisualBasic.Interaction]::InputBox("Prompt", "Title", "Default", -1, -1);
  • foreach ($file in dir *.vhd) { }
  • Set-ExecutionPolicy unrestricted