Archive for pjsen

The Boy Scout rule and little things that matter

There is a rule called The Boy Scout Rule. In essence, it says that whenever you attempt to modify code, you should also try to improve something in the existing code so that you leave it a little bit better than it was before. By following the rule we can gradually and seamlessly get rid of technical debt and prevent deterioration in software systems.

The rule can be addressed at the organizational level in an interesting way. I came across the idea of a project manager who was responsible for multiple teams dealing with significant amounts of legacy code. They introduced a kind of gamification into the development process: the teams were supposed to register as many improvements in code as they could, and the team with the biggest number won the game. The prize was some extra budget to spend on a team party. Such an idea may not be applicable in all organizations, but it clearly shows how to methodically approach the problem of technical debt at the management level.

Although I do not immediately recommend the idea of gamification, I certainly recommend creating a static (not assigned to any sprint) ticket for all the improvements and encouraging developers to make even the smallest refactoring commits under that ticket during their normal development tasks. Below I would like to show some basic indicators that, in my opinion, qualify a piece of code for being improved as soon as they are discovered.

  1. Improper naming causing an API ambiguity
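    Consider a minimal sketch of the shape of such an API (the DocumentNumber type is made up for illustration; the original type was a domain-specific one):

    public class DocumentNumber
    {
        private readonly string value;

        public DocumentNumber(string value) => this.value = value;

        // 10 usages: the name suggests some domain-specific type, yet it returns a string
        public string GetValue() => value;

        // 45 usages: concise at the call site, but easy to overlook when reading
        public static implicit operator string(DocumentNumber number) => number.value;

        // 0 usages: by convention, nobody called ToString
        public override string ToString() => value;
    }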

    I see a few problems here. When I first saw client code calling GetValue, I thought it returned some custom, domain-specific type. I needed to search for a method returning a string, and I skipped GetValue because it did not look like it returned one. It was only later that I realized it actually does return a string. If it returns a string, it should be named appropriately.

    A more general observation is that we have 3 ways of converting the type into a string. In my particular case I had 10 usages of GetValue, 45 usages of the operator and 0 usages of ToString in the codebase. When talking to the maintainers, I was told there was a convention not to use the ToString method. That situation clearly shows some adjustments are needed both at the level of code quality and at the level of the development process. I have nothing against operator overloading, however it is not very frequently used in business code. Code readability is a top priority in such cases, and being as explicit as possible is in fact beneficial from the perspective of long-term maintenance.

    The unused method should obviously be removed, and the one returning a string should be named ToString. I would keep the overloaded operators, because why not, but I am still a little bit hesitant about using them in new code. Operator overloading is a cool language feature when you write code, but it appears not so cool when you have to read it. Even here, I would consider sacrificing the aesthetics of the operator in favor of the simplistic ToString.

  2. Misused pattern causing an API ambiguity

    This one is very similar to the previous one, as it boils down to the fact that we can instantiate an object in two ways. When I was introducing some modifications to the code, I was first forced to answer the question: should I use the constructor or the Create method? Of course, it turns out there is indeed a slight difference, because Create returns a result object, which is a common way to model logic in a somewhat functional way. But still, at the level of the API surface we do not see the difference clearly enough.

    The gist of this case is that there is a pattern in tactical Domain-Driven Design (that is, at the level of the actual source code) to use private constructors and provide static factory methods. Its primary purpose is to prevent default construction of an object, which would leave it in a default state that is not meaningful from the business point of view. Also, factory methods can have more expressive names to indicate the specific extra tasks they do.

    The constructor should be made private, and the factory method can be named CreateAsResult if the wrapper type is prevalent in the codebase.
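    Sketched in code, with a hypothetical Shipment entity and a minimal Result<T> standing in for whatever result wrapper the codebase already uses:

    public sealed class Result<T>
    {
        public bool IsSuccess { get; }
        public T Value { get; }
        public string Error { get; }

        private Result(bool isSuccess, T value, string error)
        {
            IsSuccess = isSuccess;
            Value = value;
            Error = error;
        }

        public static Result<T> Success(T value) => new Result<T>(true, value, null);
        public static Result<T> Failure(string error) => new Result<T>(false, default(T), error);
    }

    public class Shipment
    {
        public string TrackingNumber { get; }

        // private: no default construction into a business-meaningless state
        private Shipment(string trackingNumber) => TrackingNumber = trackingNumber;

        // the name tells the caller both what is created and how it is returned
        public static Result<Shipment> CreateAsResult(string trackingNumber) =>
            string.IsNullOrWhiteSpace(trackingNumber)
                ? Result<Shipment>.Failure("Tracking number is required.")
                : Result<Shipment>.Success(new Shipment(trackingNumber));
    }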

The ideas behind such improvements can actually be very simple. Some of them have to do with trivial but extremely relevant conclusions about engineering software. For example:

  • any piece of code that slows down a programmer maintaining it can potentially be considered not good enough
  • code is written once but read multiple times, so when writing code we should optimize for the ease of reading it

A vital part of that mindset of clearly expressing intention is proper naming. I highly recommend watching the excellent presentation CppCon 2019: Kate Gregory “Naming is Hard: Let’s Do Better”. It helps develop a proper way of thinking when writing code.

A solution for Hangfire Dashboard authentication

Let’s assume we have a typical ASP.NET Web API back-end and a Single Page Application front-end. The front-end is authenticated with JWT tokens. The problem is that the Hangfire Dashboard is a classic ASP.NET MVC-like application (more precisely, a Razor Pages application) and will not seamlessly integrate with the existing JWT token authentication approach used in the back-end Web API.

I came up with the following solution: let’s create a new MVC endpoint authenticated using the existing attributes, but with the token included in the URL. Then use the browser’s session to communicate with the Hangfire Dashboard and mark a request as authenticated if it is. A user accesses the dashboard by navigating to the new endpoint; if authentication succeeds, they are redirected to the main dashboard URL, which is then authenticated just by a flag set in the session. The biggest advantage of this solution is that it requires no changes in the existing authentication and authorization mechanisms.

  1. Create a new MVC controller in the back-end Web API application. Use whatever authorization techniques and attributes are already used in the application for the API controllers (a combined sketch of all the pieces follows this list)

  2. Enable the session mechanism. It may look strange for a Web API, but it is needed here. Call app.UseSession before app.UseMvc
  3. In the controller’s action, set a flag in the session. This way, it will be set if, and only if, authentication and authorization succeed
  4. Redirect to the Hangfire Dashboard endpoint
  5. Create a class that implements IDashboardAuthorizationFilter. This is the customization point for Hangfire Dashboard authentication. Try to read the flag from the session and decide whether the request is authenticated

    Use Authorization = new[] { new HangfireAuthorizationFilter() } in the DashboardOptions

  6. Now the most important part, which enables the existing token-based authentication to work with the Hangfire Dashboard. Create a new middleware class that rewrites the token from the URL into the headers. It will allow the existing authentication mechanism to do its job without any modifications

    Call app.UseMiddleware<TokenFromUrlMiddleware> before app.UseAuthentication.
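Putting the pieces together, here is a minimal sketch of steps 1-6, assuming ASP.NET Core 2.x-style APIs. The controller class, the route, the session key and the access_token query parameter name are illustrative; the filter and middleware names follow the steps above:

    using System.Threading.Tasks;
    using Hangfire.Dashboard;
    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;

    [Authorize] // step 1: reuse whatever authorization the API already has
    public class HangfireLoginController : Controller
    {
        [HttpGet("hangfire-login")]
        public IActionResult Login()
        {
            // step 3: the flag is set if, and only if, authentication and authorization succeeded
            HttpContext.Session.SetString("HangfireAuthenticated", "true");
            // step 4: redirect to the dashboard (adjust the leading / for subdirectory hosting)
            return Redirect("/hangfire");
        }
    }

    // step 5: Hangfire consults this filter for every dashboard request
    public class HangfireAuthorizationFilter : IDashboardAuthorizationFilter
    {
        public bool Authorize(DashboardContext context) =>
            context.GetHttpContext().Session.GetString("HangfireAuthenticated") == "true";
    }

    // step 6: rewrite the token from the URL into the Authorization header,
    // so the existing JWT authentication middleware works without modifications
    public class TokenFromUrlMiddleware
    {
        private readonly RequestDelegate next;

        public TokenFromUrlMiddleware(RequestDelegate next) => this.next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            if (context.Request.Query.TryGetValue("access_token", out var token))
                context.Request.Headers["Authorization"] = "Bearer " + token;
            await next(context);
        }
    }

As steps 2 and 6 say, the ordering matters: app.UseSession must come before app.UseMvc, and app.UseMiddleware<TokenFromUrlMiddleware> before app.UseAuthentication.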

There are some caveats to this solution, though.

  • The solution may require some additional session setup in a multi-instance back-end configuration. By default, the session is stored in memory. Each instance will have its own copy of the session store, so the session flag set on one instance will not be recognized on another unless the session is configured to use a distributed cache such as Redis
  • The security of a token included in the URL is disputable. However, given my architectural drivers, it is acceptable, because the application is internal
  • There are some rough edges if the application is hosted in a virtual subdirectory of the domain using Kestrel, not IIS. Please notice the Redirect action begins with /, which is the root of the domain. We must adjust it accordingly if a subdirectory approach is used (append the subdirectory name). Also, we must somehow inform Hangfire about the subdirectory. If a subdirectory is used, then the real URL of the dashboard is not the main Hangfire path set in its options, but the path prefixed with the subdirectory. The dashboard is a black box from our point of view, and we cannot influence the way it makes its own HTTP requests. The only way of configuring its behavior is through its APIs. We use the PrefixPath property of the DashboardOptions to configure this
  • In my setup I also had to use IgnoreAntiforgeryToken = true because of some errors which occurred only in the containerized environment under Kubernetes. The final settings are as follows:
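    For example (a sketch; the /myapp subdirectory name is illustrative):

    app.UseHangfireDashboard("/hangfire", new DashboardOptions
    {
        Authorization = new[] { new HangfireAuthorizationFilter() },
        PrefixPath = "/myapp",        // only when hosted in a virtual subdirectory
        IgnoreAntiforgeryToken = true // see the Kubernetes note above
    });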

  • Due to the discrepancies between the containerized environment and the local one, it is worth considering separate, conditionally compiled setup calls for the DEBUG local build and the RELEASE build. This way we can skip the prefixes required for subdirectory-based hosting when we run locally
  • There is an interesting SO post describing the differences between the Path and PathBase properties of the HttpRequest. These are used internally by Hangfire to dynamically generate URLs for the requests sent by the dashboard. It turns out that these properties are used to detect the subdirectory part of the URL. They behave differently under IIS and Kestrel, unless a particular middleware is additionally plugged into the pipeline
  • By default, the session expires 20 minutes after closing the browser’s tab, and the session cookie is dropped right after closing the entire browser
  • One can imagine a very unlikely corner case when the real token is invalidated while the Hangfire session is open. In such a case, the dashboard will remain logged in. I consider these properties acceptable, though

Things I’ve learned about SQL Server the hard way

In this post I am presenting a couple of things I’ve learned from the analysis of a problem that manifested itself in occasional HTTP 500 errors in a production instance of an ASP.NET application. This time I don’t aim at exhaustively explaining every single point, because each of them could be the subject of a dedicated blog post.

The story begins with a SQL error: SQLEXCEPTION: Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim.

  1. In any reasonably modern version of SQL Server Management Studio there is an XEvent session called system_health under Management → Extended Events. It allows for viewing some important server logs, among which xml_deadlock_report is particularly interesting. It is very important to have access to the production instance of the database server in order to be able to watch these logs.
     (Screenshot: the system_health XEvent session)
  2. In this particular case, these xml_deadlock_reports contained one suspicious attribute: isolationlevel = Serializable (4), and the SQL code was a SELECT. I would not expect my SELECTs to run with the Serializable isolation level.
     (Screenshot: details of a deadlock)
  3. The isolation level is an attribute of a connection between a client and the database server. A connection is called a session in SQL Server terminology. An explicit BEGIN TRAN is not necessary for the isolation level to be applied. Every SQL statement runs in its own statement-wide transaction. However, for such narrow-scoped transactions, in practice it may not make any difference whether you raise the isolation level or not. The difference can be observed when a transaction is explicit and spans multiple SQL statements.
  4. The cause of setting the isolation level to Serializable was the behaviour of TransactionScope [1]. If you use it with its defaults, it raises the isolation level. It is just a peculiarity of this particular .NET Framework API, and it is good to know about it (a minimal sketch follows this list).
  5. SQL Server, at least in 2012 and some (I am not sure exactly which) later versions, does not reset the isolation level when ADO.NET disposes of a connection. The connection returns to the connection pool [2] and is reused by subsequent SqlConnection objects unless they have a different connection string.
  6. The connection pool size, if connection pooling is active, limits how many concurrent connections to a database server a .NET application can make. If there are no free connections in the pool, an exception is thrown [3].
  7. Eliminating the usage of TransactionScope did not solve the issue. Even if you run SELECTs under the default Read Committed isolation level, they still issue Shared locks, which may deadlock with the Exclusive locks of UPDATEs. Under any reasonably high production traffic, where SELECTs span multiple tables which are also very frequently updated, it is highly probable that a deadlock will occur.
  8. The difference between running a SELECT under the Serializable isolation level and under Read Committed is that in the former, the locks are kept from the moment of executing the SELECT until the transaction ends. You can observe it by manually beginning a Serializable transaction, running any SELECT, inspecting the sys.dm_tran_locks DMV, and only then committing (or rolling back, whatever) the transaction (a short demo follows this list). With Read Committed, locks are not kept until an explicit transaction ends; they are released immediately after the execution of the SELECT finishes. These are the same kind of locks, Shared locks. This implies one cannot observe the difference between executing a SELECT under Serializable and Read Committed when there is no explicit transaction and thus only a statement-wide transaction, which releases locks immediately after the results are returned.
  9. Setting the isolation level to Read Uncommitted is practically equivalent to running a SELECT with the WITH(NOLOCK) hint, even if you don’t explicitly open a transaction.
  10. In Entity Framework a SqlConnection is opened for every materialization of a query: the results are returned, and the connection is immediately closed and returned to the pool [5]. The connection lifetime is by no means related to the scope of the DbContext object. I can see a kind of similarity between how Entity Framework uses SqlConnections and how ASP.NET makes use of threads when executing async methods. A thread is released on every await and can be used for doing something more valuable than waiting. Similarly, a SqlConnection is released right after materialization and can be used for executing a different command, in a different request (in the case of ASP.NET), even before the DbContext is disposed of.
  11. It is not that obvious how to reset the isolation level of a connection. You see, every time your C# code using Entity Framework results in sending SQL to the SQL Server, it can take a different connection from the pool (if anyone knows whether there is any ordering applied when retrieving connections from the pool, please feel free to comment). It may or may not be the same connection you used previously. Consequently, it is not easy to ‘catch’ the underlying connection using Entity Framework. You can call BeginTransaction every time you use DbContext, and then you are guaranteed to own the connection for all your SQL commands. But that way you are forcing a transaction to open when you don’t really need one. What I recommend is to handle the StateChange event of the DbConnection object as described in [4] (a sketch follows this list). You can do it either on opening the connection or on closing it.
  12. In SQL Server you can monitor open sessions with the following query:
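     For example, based on the sys.dm_exec_sessions DMV:

     SELECT session_id,
            login_name,
            host_name,
            program_name,
            status,
            transaction_isolation_level -- 1 = ReadUncommitted ... 4 = Serializable, 5 = Snapshot
     FROM sys.dm_exec_sessions
     WHERE is_user_process = 1;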
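As for point 4, here is a minimal sketch of the TransactionScope peculiarity and the usual way around it:

    using System.Transactions;

    // new TransactionScope() defaults to the Serializable isolation level;
    // passing explicit TransactionOptions yields Read Committed instead.
    var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    {
        // ... database work ...
        scope.Complete();
    }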
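The experiment from point 8 can be scripted like this (the table name is illustrative):

    -- run under the Serializable isolation level, inside an explicit transaction
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRAN;
    SELECT * FROM SomeTable;

    -- still inside the transaction: the Shared (range) locks are visible here
    SELECT resource_type, request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID;

    COMMIT; -- only now are the locks released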
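And a sketch of the StateChange approach from point 11, following the idea described in [4] (the handler resets a possibly leaked isolation level whenever a pooled connection is reopened):

    using System.Data;
    using System.Data.Common;

    static class ConnectionExtensions
    {
        // Subscribe once per connection; the handler runs again after the
        // pooled connection is reopened by a subsequent SqlConnection.
        public static void ResetIsolationLevelOnOpen(this DbConnection connection)
        {
            connection.StateChange += (sender, e) =>
            {
                if (e.OriginalState != ConnectionState.Open && e.CurrentState == ConnectionState.Open)
                {
                    using (var command = ((DbConnection)sender).CreateCommand())
                    {
                        command.CommandText = "SET TRANSACTION ISOLATION LEVEL READ COMMITTED;";
                        command.ExecuteNonQuery();
                    }
                }
            };
        }
    }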

References:

[1]    https://stackoverflow.com/questions/11292763/why-is-system-transactions-transactionscope-default-isolationlevel-serializable

[2]    https://stackoverflow.com/questions/9851415/sql-server-isolation-level-leaks-across-pooled-connections

[3]    https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling

[4]    https://stackoverflow.com/questions/28442558/entity-framework-and-transactionscope-doesnt-revert-the-isolation-level-after-d

[5]    https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/bb896325(v=vs.100)#connections-and-the-entity-framework

The worst Entity Framework pitfall

I work with quite a big enterprise system in my job. Not surprisingly, it uses Entity Framework (Core, but it does not matter) and SQL Server. The system consists of multiple reusable components, also in the data access layer. I had to modify a DbContext and write a flexible, reusable method accepting a predicate as an argument and applying it to the DbContext. Let’s assume we are using the table A from the previous post. I happily coded the signature of the method to use Func. Let’s simulate this in LINQPad and run our Func against a DbContext.

It did not work. Or… did it? The generated SQL was a plain SELECT without any WHERE clause, but I promise the results correctly showed the one expected record. The problem is, the predicate was applied in memory, after all the records from table A had been pulled into memory as well. I am not going to explain what this means for any reasonably sized system. The correct way of doing this is to use Expression<Func<A, bool>>.
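A minimal sketch of the difference (MyContext with a DbSet<A> named As is assumed, standing in for the real context):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    static class PredicateExamples
    {
        // Func<A, bool> is an opaque delegate: Where binds to Enumerable.Where,
        // so the whole table is materialized and filtered in memory.
        public static IEnumerable<A> FilterInMemory(MyContext db, Func<A, bool> predicate)
            => db.As.Where(predicate);

        // Expression<Func<A, bool>> is an inspectable tree: Where binds to
        // Queryable.Where, and the provider translates the predicate into SQL.
        public static IQueryable<A> FilterInDatabase(MyContext db, Expression<Func<A, bool>> predicate)
            => db.As.Where(predicate);
    }

Both call sites look identical, e.g. FilterInMemory(db, a => a.Id == 1), which is exactly what makes this pitfall so easy to miss.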

The explanation is in fact really obvious to anyone who deeply understands how ORMs work. The data structure which allows for inspecting a predicate on the fly and building the final SQL query is Expression. There is already an infrastructure of so-called expression visitors for this. Please also note that you can always get your Func from an Expression<Func> by calling the Compile method on it.

Where to put a condition in SQL?

Let’s suppose I am modeling a business domain with entities A, B and C. These entities have the following properties:

  • An entity A can have both an entity B and an entity C
  • An entity A can have only an entity B
  • An entity A can exist without B and C
  • An entity B has a non-nullable property Active

I am implementing the domain with the following SQL. I omit foreign key constraints for brevity.
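-- A sketch of the schema and the example data used below;
-- the original DDL is not shown, so column names are illustrative.
CREATE TABLE B (Id INT PRIMARY KEY, Active BIT NOT NULL);
CREATE TABLE C (Id INT PRIMARY KEY);
CREATE TABLE A (Id INT PRIMARY KEY, BId INT NULL, CId INT NULL);

INSERT INTO B (Id, Active) VALUES (10, 1), (20, 0), (30, 1), (40, 1);
INSERT INTO C (Id) VALUES (100), (200), (300);
INSERT INTO A (Id, BId, CId) VALUES
    (1, 10, 100), (2, 20, 200), (3, 30, 300), (4, 40, NULL), (5, NULL, NULL);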

Now let’s suppose my task is to perform a validity check according to special rules. I am given an Id of an entity A as input and I have to check:

  1. If the entity exists and
  2. If it is valid

The existence will be checked by simply looking whether the corresponding row is present in the result set, and for the validity check I will write a simple CASE expression. These are the rules for my example data:

  • A.1 exists and has active B.10 and has C.100 => exists, correct
  • A.2 exists and has inactive B.20 and has C.200 => exists, incorrect
  • A.3 exists and has active B.30 and has C.300 => exists, correct
  • A.4 exists and has active B.40 and DOES NOT HAVE C => exists, incorrect
  • A.5 exists and HAS NEITHER B NOR C => exists, incorrect
  • A.6 does not exist, incorrect

I write the following query to do the task:
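-- A reconstruction of the query, as a sketch against the schema above
SELECT A.Id AS AId,
       CASE WHEN B.Active = 1 AND C.Id IS NOT NULL THEN 1 ELSE 0 END AS Correct
FROM A
LEFT JOIN B ON B.Id = A.BId
LEFT JOIN C ON C.Id = A.CId
WHERE A.Id IN (1, 2, 3, 4, 5, 6)
  AND B.Active = 1;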

My rules include checking if B.Active is true, so I just put this into the WHERE clause. The result is:

AId  Correct 
---- --------
1    1       
3    1       
4    0       

The problem is, I have been given the exact set of Ids of A to check: 1, 2, 3, 4, 5, 6. But my result does not include 2, 5, 6. My application logic fails here, because it considers those A records missing. For 6 this is fine, because it is absent from table A, but 2 and 5 must be present in the results for my validity check. The fix is extremely easy:
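-- The condition moves from WHERE into the JOIN; everything else stays the same
SELECT A.Id AS AId,
       CASE WHEN B.Active = 1 AND C.Id IS NOT NULL THEN 1 ELSE 0 END AS Correct
FROM A
LEFT JOIN B ON B.Id = A.BId AND B.Active = 1
LEFT JOIN C ON C.Id = A.CId
WHERE A.Id IN (1, 2, 3, 4, 5, 6);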

Now the result is:

AId  Correct 
---- --------
1    1       
2    0       
3    1       
4    0       
5    0       

It is very easy to understand that WHERE is applied to filter all the results, no matter what my intention for the JOIN was. When a record is LEFT JOINed without a match, the condition is not met, because the values from B are NULL. But I still need the A record in my results. Thus, what I have to do is include my condition in the JOIN.

It is also very easy to fall into this trap of thoughtlessly writing all intended conditions in the WHERE clause.

A few random ASP.NET Core and .NET Core tips

I’ve been working with .NET Core recently and I’d like to post some random observations on this subject for future reference.

  1. It is possible to create a NuGet package upon build. This option is actually also available from the VS2017 Project properties GUI. Add this code to the csproj (a sketch):
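    <!-- produce a .nupkg on every build -->
    <PropertyGroup>
      <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    </PropertyGroup>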

  2. It is possible to add a local folder as a NuGet feed. The folder can also be in the current user’s profile. This one is actually not Core specific. Nuget.config should look like this (a sketch; the key and the path are illustrative):
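    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <add key="LocalFeed" value="C:\Users\me\LocalNugetFeed" />
      </packageSources>
    </configuration>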

  3. You can compile for multiple targets in a .NET Core compatible csproj. Please note the trailing s in the tag name. You can also conditionally include items in the csproj. Use the following snippets (sketches; the target monikers and items are illustrative):
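    <!-- note the trailing s: TargetFrameworks, not TargetFramework -->
    <PropertyGroup>
      <TargetFrameworks>netstandard2.0;net461</TargetFrameworks>
    </PropertyGroup>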

    and:
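    <!-- items included only for one of the targets -->
    <ItemGroup Condition="'$(TargetFramework)' == 'net461'">
      <Reference Include="System.Web" />
    </ItemGroup>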

    There is reference documentation for the available targets on docs.microsoft.com.

  4. The listening port in Kestrel can be configured in multiple ways. It can be read from an environment variable or passed as a command-line argument. An asterisk is required to bind to physical interfaces; it is needed e.g. when trying to display the application on a mobile phone while it is served from the development machine. The following are equivalent:

    set ASPNETCORE_URLS=http://*:11399
    --urls http://*:11399
    
  5. The preferred way to pass hosting parameters to Kestrel is the launchSettings.json file located in the Properties folder of the project. You can select a profile defined there when running:

    dotnet run --launch-profile "Dev"
    

    dotnet run is used to build and run from the directory where the csproj resides. It is not a good idea to run the app’s dll directly: the settings file can be missing from the bin folder and/or the launch profile may not be present there.