Creating ASP.Net Web APIs and generating clients with NSwag – Part 1

This is the first part of a two-post series about creating a Web API REST service written in ASP.Net Core and consuming it using the NSwag toolchain to automatically generate the C# client code. This way of consuming a web service puts some requirements on the OpenAPI definition (previously known as the Swagger definition) of the service, so in this first part I’ll describe how to set up a service so that it can be consumed properly using NSwag.

The second part shows how to generate client code for the service.

Example code is available on GitHub.

Swashbuckle

When a new Web Api project (this post describes Asp.Net Core 3.0) is created in Visual Studio, it contains a working sample service but the code is missing a few crucial ingredients.

The most notable omission is that the service is missing an OpenAPI service description (also known as a Swagger description). On the .Net platform, such a service description is usually created automatically based on the API controllers and contracts using the Swashbuckle library.

This is how to install it and enable it in Asp.Net Core:

  1. Add a reference to the Swashbuckle.AspNetCore Nuget package (the example code uses version 5.0.0-rc5).
  2. Add the following to ConfigureServices(IServiceCollection services) in Startup.cs:
    services.AddSwaggerGen(c =>
      {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
      });
    
  3. Also add these lines to Configure(IApplicationBuilder app, IWebHostEnvironment env) in the same file:
    app.UseSwagger();
    app.UseSwaggerUI(c =>
      {
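        // ApiName is assumed to be a string constant holding the display name of the API, e.g. "My API".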
        c.SwaggerEndpoint($"/swagger/v1/swagger.json", ApiName);
      });
    

The UseSwagger call enables generation of the service description file and UseSwaggerUI enables the UI that shows the endpoints with documentation and basic test functionality.

After starting the service, the description is available at this address: /swagger/v1/swagger.json

More useful than this is the automatically created Swagger UI, with documentation and basic endpoint testing features, found at /swagger:

Swashbuckle configuration

The service description is important for two reasons:

  1. It’s an always up-to-date documentation of the service
  2. It allows for generating API clients (data contracts and methods)

For both of these use cases it’s important that the service description is both complete and correct in the details. Swashbuckle does a reasonable job of creating a service description with default settings, but there are a few behaviors that need to be overridden to get the full client experience.

XML Comments

By default, the endpoints and data contracts are not associated with any human-readable documentation. Swashbuckle has a feature that copies all XML comments from the code into the service description, making the documentation much more complete.

To include the comments in the service description, they must first be saved to an XML file during the project build so that this file can be merged into the service description. This is done in the project settings:

After the path to the XML file to create is given in the XML documentation file field, Visual Studio will start to show warning CS1591 for public members without an XML comment. This is most likely not the preferred behavior, and the warning can be turned off by adding it to the Suppress warnings field as seen in the image above.
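
If you prefer editing the project file over the UI, the corresponding settings in an SDK-style csproj look something like this (a sketch; GenerateDocumentationFile writes the XML next to the build output, so adjust if your Swagger setup expects it elsewhere):

    <PropertyGroup>
      <!-- Produce the XML documentation file during build -->
      <GenerateDocumentationFile>true</GenerateDocumentationFile>
      <!-- Don't warn about public members that lack XML comments (CS1591) -->
      <NoWarn>$(NoWarn);1591</NoWarn>
    </PropertyGroup>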

Once the file is generated, it can be referenced by Swashbuckle in the Swagger setup using the IncludeXmlComments option. This is an example of how to compose the path to the XML file and supply it to Swashbuckle inside ConfigureServices:

            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });

                // Set the comments path for the Swagger JSON and UI.
                var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
                var xmlPath = Path.Combine(_baseFolder, xmlFile);
                c.IncludeXmlComments(xmlPath, includeControllerXmlComments: true);
            });

The field _baseFolder is set like this in the Configure method:

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            _baseFolder = env.ContentRootPath;
            ...
        }

The result should look like this:

 

Override data types

Some data types are not correctly preserved in the service description by default, for example the decimal type found in .Net, which is commonly used in financial applications. This can be fixed by overriding the behavior for those particular types.

This is how to set the options to serialize decimal values correctly in the call to AddSwaggerGen:

                // Mark decimal properties with the "decimal" format so client contracts can be correctly generated.
                c.MapType<decimal>(() => new OpenApiSchema { Type = "number", Format = "decimal" });

(This fix is from the discussion about a bug report for Swashbuckle.)

Before:

After:
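
In the generated swagger.json, the difference looks roughly like this (a sketch using a hypothetical price property; the exact format emitted by the default mapping may vary between Swashbuckle versions):

    Before: "price": { "type": "number", "format": "double" }
    After:  "price": { "type": "number", "format": "decimal" }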

 

Other service adjustments

In addition to adding and configuring Swashbuckle, there are a couple of other adjustments I like to make when creating services.

Serializing enums as strings

Enums are very useful in .Net for representing a limited number of values such as states, options, modes and similar. In .Net, enum values are normally just disguised integers, which is evident in Web Api responses containing enum values: they are returned as integers whose interpretation is unclear to the client. Returning integers is not very descriptive, so I typically configure my services to return enum values as strings. The easiest way to do this is to set an option for the API controllers in the ConfigureServices method of the Startup class:

services.AddControllers()
    .AddJsonOptions(options =>
    {
        // Serialize enums as strings to be more readable and decrease
        // the risk of incorrect conversion of values.
        options.JsonSerializerOptions.Converters.Add(
            new JsonStringEnumConverter());
    });

 

Serializing dates as strings without time parts

In the .Net framework there is no dedicated data type for dates, so we normally rely on the good old DateTime even in cases where the time is irrelevant. This also means that API responses with dates often contain time parts:

{
    "date": "2020-01-06T00:00:00+01:00",
    "temperatureC": 49,
    "temperatureF": 120,
    "summary": "Warm"
},

This is confusing and may lead to bugs because of the time zone specification part.

To fix this, we can create a JSON converter that extracts the date part from the DateTime value, as described in this StackOverflow discussion.

public class ShortDateConverter : JsonConverter<DateTime>
{
  public override DateTime Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
  {
    return DateTime.Parse(reader.GetString());
  }

  public override void Write(Utf8JsonWriter writer, DateTime value, JsonSerializerOptions options)
  {
    writer.WriteStringValue(value.ToUniversalTime().ToString("yyyy-MM-dd"));
  }
}

The converter must then be applied to the data contract properties as required:

public class WeatherForecast
{
  [JsonConverter(typeof(ShortDateConverter))]
  public DateTime Date { get; set; }
}

This is the result:

{
    "date": "2020-01-06",
    "temperatureC": 49,
    "temperatureF": 120,
    "summary": "Warm"
},

Much better. 🙂

Returning errors in a separate response type

In case of an error, information about it should be returned in a structured way, just like any other content. I sometimes see error messages and similar added as extra fields on the normal response types, but I think this is bad practice since it pollutes the data contracts with (generally) unused properties. It’s much clearer for the client if there is a specific data contract for returning errors, for example something like this:

{
  "errorMessage": "An unhandled error occurred."
}

Having separate contracts for normal responses and errors requires controller actions to return different types of values depending on whether there’s an error or not, and the Swagger service description must include information about all the types involved. Luckily Swashbuckle supports this well, as long as the return types are declared using the ProducesResponseType attribute.
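
The ApiError type used below can be a minimal class; here is a sketch (the property name follows the controller code in this post, while the JSON example above suggests it may be called ErrorMessage in the actual contract):

    public class ApiError
    {
        // Human-readable description of the error returned to the client.
        public string Message { get; set; }
    }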

Here’s a complete controller action where we return different classes depending on whether an error occurred:

        [HttpGet]
        [Route("fivedays")]
        [ProducesResponseType(
          statusCode: (int)HttpStatusCode.OK,
          type: typeof(IEnumerable<WeatherForecast>))]
        [ProducesResponseType(
          statusCode: (int)HttpStatusCode.InternalServerError,
          type: typeof(ApiError))]
        public ActionResult<IEnumerable<WeatherForecast>> Get()
        {
            try
            {
                var rng = new Random();
                return Enumerable.Range(1, 5)
                    .Select(index => new WeatherForecast
                    {
                        Date = DateTime.Now.AddDays(index),
                        TemperatureC = rng.Next(-20, 55),
                        Summary = GetRandomSummary(rng)
                    }).ToArray();
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Unhandled error");
                return StatusCode(
                    (int)HttpStatusCode.InternalServerError,
                    new ApiError { Message = "Server error" });
            }
        }

The Swagger page correctly displays the different data structures used for the different status codes:

 

Summary

This concludes the first part in this mini-series. We now have a complete Web Api service with the following properties:

  • A Swagger description file can be found at /swagger/v1/swagger.json
  • A Swagger documentation and testing page can be found at /swagger
  • Code comments are included in the service description
  • Enums are serialized as strings
  • Dates are serialized as short dates, without time part
  • Decimal values are correctly described in the service description

In part 2 we’ll see how to consume this service from a client.

/Emil

 

Listing remote branches with commit author and age of last commit

Here’s a simple bash script that shows how to enumerate remote Git branches:

for k in `git branch -r | perl -pe 's/^..(.*?)( ->.*)?$/\1/'`; do echo -e `git show --pretty=format:"%ci\t%cr\t%an" $k -- | head -n 1`\\t$k; done | sort -r

The resulting output looks like this:

2019-09-26 20:42:44 +0200 8 weeks ago Emil Åström origin/master
2019-09-26 20:42:44 +0200 8 weeks ago Emil Åström origin/HEAD
2019-05-17 23:06:41 +0200 6 months ago Emil Åström origin/fix/travis-test-errors
2018-04-21 23:16:57 +0200 1 year, 7 months ago Emil Åström origin/fix/nlog-levels

For a selection of other fields to include, see the git-show documentation.

(Adapted from https://stackoverflow.com/a/2514279/736684).

Visual Studio Code configuration tips

These days I tend to leave Visual Studio more and more often for simpler tasks that don’t require the full power of Visual Studio’s development tools, like IntelliSense, the powerful debugging tools and ReSharper refactorings. My preferred text editor for this more light-weight work has become Visual Studio Code which, despite the name, does not have anything to do with Visual Studio, except that it’s Microsoft that is behind it (their marketing department no doubt had some influence in the naming process). Just like in the Visual Studio case, I cannot help myself from meddling with the configuration options, so I thought it might be a good idea to write down what I have done.

Color theme

Just like for Visual Studio, I’m using Solarized Dark and switching to that is easy since it’s built in into Code (File/Preferences/Color Theme).

What might be less well-known is that a new feature was recently added that allows for overriding the colors of the selected theme, namely the editor.tokenColorCustomizations setting. It works like this:

First, investigate which TextMate scope you want to change the color for. A TextMate scope is just an identifier which is normally associated with a regular expression that defines the character pattern for the scope. Since most language extensions in Visual Studio Code define scopes with the standardized identifiers for comments, strings, etc., all languages can be consistently colored by the color theme (a comment in C# looks the same as a comment in Javascript). By the way, TextMate is a text editor for Mac that first defined the syntax used for language support in Visual Studio Code (it’s also used in, for example, Sublime Text).

To find the name of the token to change, do the following:

  1. Place the cursor on an item whose color should be changed (e.g. a comment)
  2. Open the command palette by pressing Ctrl + Shift + P and then type Developer: Inspect Editor Tokens and Scopes (was previously called “Developer: Inspect TM Scopes“).
  3. A window opens, showing information about the current scope. In this case it’s comment. Moving the cursor will update the scope information in the window. Close it with the Esc key.
  4. To override the appearance of the identified scope, use the editor.tokenColorCustomizations setting:
    "editor.tokenColorCustomizations": {
        "textMateRules": [
            {
                "scope": "comment",
                "settings": {
                    "foreground": "#af1f1f",
                    "fontStyle": "bold"
                }
            }
        ]
    }
    
  5. Saving the settings file makes it take effect immediately, without restarting the editor. In addition to foreground color, it is also possible to set the font style to “bold”, “italic”, “underline” or a combination.

You can also override colors in the editor UI framework using the workbench.colorCustomizations setting but I won’t cover that here. Microsoft has some good documentation of what can be done.

Visual Studio Code Extensions

The above takes care of the basic color highlighting of files but I also use a few extensions for Visual Studio Code that I feel are worthy of mentioning in a post like this.

Bracket Pair Colorizer

Adds the rainbow parenthesis feature to Visual Studio Code (matching parentheses and brackets have the same color which is different from the colors used by nested parentheses). It works really well and has configurable colors.

More info here.

Color Highlight

Visualizes web color constants in source code. Very useful for all sorts of work, not just web. It also works in the user settings file, for example, so you can get a preview of colors when modifying the current theme colors 😉

More info here.

Log File Highlighter 

This extension is actually written by yours truly and is used for visualizing different items in log files, such as dates, strings, numbers, etc.

By default it associates with the .log file extension but it can be configured for other file types as well. It’s also easy to temporarily switch to the log file format by pressing Ctrl + K, M and then typing Log File in the panel that opens.
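
For example, to treat additional file extensions as log files, something like this can be added to the user settings (a sketch; it assumes the extension registers the log language id, and the extensions shown are just examples):

    "files.associations": {
        "*.trace": "log",
        "*.out": "log"
    }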

More info here.

Nomo Dark Icon Theme

This extension adds really nice icons to the folder view in Visual Studio Code. It looks much better with this installed.

More info here.

Sort-lines

Sort lines is a little extension that has nothing to do with colors, but I think it’s worth mentioning anyway since I find it very useful from time to time. It adds several commands for sorting text; the one that is a bit unique is called Sort lines (unique). What it does is sort the selected lines and remove all duplicates. Very useful for aggregating data from data queries, log files, etc.

More info here.

Roaming settings with symbolic links and DropBox

Once you get everything configured the way you want it, you have to repeat all the above steps on the other computers you use Visual Studio Code on, since it has no built-in support for roaming user settings, which is a bit cumbersome. However, there’s an old trick that fixes that, namely to use a symbolic directory link to a DropBox folder (or another similar file-sync service such as OneDrive) instead of the standard settings folder for Visual Studio Code.

To do this, follow these steps:

  1. Close Visual Studio Code if it’s open.
  2. Move the settings folder %APPDATA%\Code\User\ to somewhere in your DropBox (or OneDrive) directory tree.
  3. Create a symbolic directory link replacing the old folder with a link to its new location inside DropBox:
    mklink /d %APPDATA%\Code\User\ "C:\Users\easw3p\Dropbox\Shared\VS Code\AppData_Code_User"
    

    (adjust the DropBox path so it matches your setup).

  4. Start Visual Studio Code again to verify that everything works.

On all other machines that should use the same settings, just remove the default settings folder and create the symbolic link as shown above, and all settings will be synchronized across all machines. As a bonus, you have just enabled version-controlled settings, since it’s easy to show file histories and roll back changes in DropBox.

And with that I’ll end this post. Feel free to add suggestions or comment on the above 🙂

/Emil

Uninstalling McAfee LiveSafe

I recently bought a new computer and as usual there was quite a bit of unwanted software on it, such as the McAfee LiveSafe anti-virus software. Getting rid of it proved near impossible: using the normal Windows uninstall procedure resulted in strange error messages, and there were services, registry entries and McAfee folders in many places, so manual deletion was not really an option either.

Luckily, McAfee has created a utility for uninstalling their software, named McAfee Consumer Product Removal, MCPR. I find it interesting that a separate tool had to be developed for something as basic as uninstalling the product, but at least it worked. I’m writing this small post to remind myself of it the next time I need it, but maybe someone else will find it useful as well…

The tool is described and can be downloaded here: https://service.mcafee.com/webcenter/portal/cp/home/articleview?articleId=TS101331

/Emil

Generating semantic version build numbers in Teamcity

Background

This is what we want to achieve: build numbers with major, minor and build counter parts together with the branch name.

What is a good version number? That depends on who you ask I suppose, but one of the most popular versioning schemes is semantic versioning. A version number following that scheme can look like the following:

1.2.34-beta

In this post I will show an implementation for generating this type of version number in Teamcity, with the following key aspects:

  1. The major and minor version numbers (“1” and “2” in the example above) will be coming from the AssemblyInfo.cs file.
    • It is important that these numbers come from files in the VCS repository (I’m using Git) where the source code is stored so that different VCS branches can have different version numbers.
  2. The third group (“34”) is the build counter in Teamcity and is used to separate different builds with the same major and minor versions from each other.
  3. The last part is the pre-release version tag and we use the branch name from the Git repository for this.
    • We will apply some filtering and formatting for our final version since the branch name may contain unsuitable characters.

In the implementation I’m using a Teamcity feature called File Content Replacer which was introduced in Teamcity 9.1. Before this, we had to use another feature called Assembly Info Patcher, but that method had several disadvantages, mainly that there was no easy way of using the generated version number in other build steps.

In the solution described below, we will replace the default Teamcity build number (which is equal to the build counter) with a custom one following the version format outlined above. This makes it possible to reuse the version number everywhere, e.g. in file names, Octopus Release names, etc. This is a great advantage since it helps connect everything produced in a build chain together (assemblies, deployments, build monitors, etc).

Solution outline

The steps involved to achieve this are the following:

  1. Assert that the VCS root used is properly configured
  2. Use the File Content Replacer to update AssemblyInfo.cs
  3. Use a custom build step with a Powershell script to extract the generated version number from AssemblyInfo.cs, format it if needed, and then tell Teamcity to use this as the build number rather than the default one
  4. After the build is complete, we use the VCS Labeling build feature to add a build number tag into the VCS

VCS root

The first step is to make sure we get proper branch names when we use the Teamcity %teamcity.build.branch% variable. This will contain the branch name from the version control system (Git in our case) but there is a special detail worth mentioning, namely that the branch specification’s wildcard part should be surrounded with parentheses:

VCS root branch specification with parentheses.

If we don’t do this, then the default branch (“develop” in this example) will be shown as <default>, which is not what we want. The default branch should be shown with its branch name just like every other branch, and adding the parentheses ensures that.
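
As an example, a branch specification where the wildcard part is surrounded with parentheses can look like this (a sketch assuming the branches live under refs/heads):

    +:refs/heads/(*)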

Updating AssemblyInfo.cs using the File Content Replacer build feature

In order for the generated assembly to have the correct version number we update the AssemblyInfo.cs before building it. We want to update the following two lines:


[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]

The AssemblyFileVersion attribute is used to generate the File version property of the file and, in the absence of an AssemblyInformationalVersion attribute, also the Product version, while AssemblyVersion is the version the .Net runtime uses to identify the assembly.

File version and product version of the assembly.

We keep the first two integers to use them as major and minor versions in the version number. Note that the AssemblyVersion attribute is restricted to numeric version parts separated by dots, while AssemblyFileVersion does not have this restriction and can contain our branch name as well.

To accomplish the update, we use the File Content Replacer build feature in Teamcity two times (one for each attribute) with the following settings:

AssemblyVersion

  • Find what:
    (^\s*\[\s*assembly\s*:\s*((System\s*\.)?\s*Reflection\s*\.)?\s*AssemblyVersion(Attribute)?\(")([0-9\*]+\.[0-9\*]+\.)([0-9\*]+\.[0-9\*]+)("\)\])$
  • Replace with:
    $1$5\%build.counter%$7

AssemblyFileVersion

  • Find what:
    (^\s*\[\s*assembly\s*:\s*((System\s*\.)?\s*Reflection\s*\.)?\s*AssemblyFileVersion(Attribute)?\(")([0-9\*]+\.[0-9\*]+\.)([0-9\*]+\.[0-9\*]+)("\)\])$
  • Replace with:
    $1$5\%build.counter%-%teamcity.build.branch%$7

As you can see, the “Find what” parts are regular expressions that find the part of AssemblyInfo.cs we want to update, and the “Replace with” parts are replacement expressions in which we can reference the matching groups of the regex and also use Teamcity variables. The latter is used to insert the Teamcity build counter and the branch name.

In our case we keep the first two numbers, but if the patch number (the third integer) should also be kept, then these two expressions can be adjusted to accommodate this.
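
To illustrate, with a build counter of 34 and a branch named feature/xyz (illustrative values), the two attributes end up looking something like this after the replacement:

    [assembly: AssemblyVersion("1.2.34")]
    [assembly: AssemblyFileVersion("1.2.34-feature/xyz")]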

The File Content Replacer build feature in Teamcity.

When we have done the above, the build will produce an assembly with proper version numbers, similarly to what we could accomplish with the old Assembly Info Patcher. The difference from the old method is that we now have a patched AssemblyInfo.cs file, whereas before it was left unchanged since only the generated assembly DLL file was patched. This allows us to extract the generated version number in the next step.

Setting the Teamcity build number

Up to now, the Teamcity build number has been unchanged from the default of being equal to the build counter (a single integer, increased after every build). The format of the build number is set in the General Settings tab of the build configuration.

The General Settings tab for a build configuration.

The build number is just a string uniquely identifying a build and it’s displayed in the Teamcity build pages and everywhere else where builds are displayed, so it would be useful to include our full version number in it. Doing that also makes it easy to use the version number in other build steps and configurations since the build number is always accessible with the Teamcity variable %system.build.number%.

To update the Teamcity build number, we rely on a Teamcity service message for setting the build number. The only thing we have to do is to make sure that our build process outputs a string like the following to the standard output stream:

##teamcity[buildNumber '1.2.34-beta']

When Teamcity sees this string, it will update the build number with the supplied new value.

To output this string, we’re using a separate Powershell script build step that extracts the version string from the AssemblyInfo.cs file, filters it and truncates it if needed. The latter is not strictly necessary, but in our case we want the build number to be usable as the name of a release in Octopus Deploy, so we format it to be valid in that regard and truncate it if it grows beyond 20 characters in length.

Build step for setting the Teamcity build number

The actual script looks like this (%MainAssemblyInfoFilePath% is a variable pointing to the relative location of the AssemblyInfo.cs file):

function TruncateString([string] $s, [int] $maxLength)
{
	return $s.substring(0, [System.Math]::Min($maxLength, $s.Length))
}

# Fetch AssemblyFileVersion from AssemblyInfo.cs for use as the base for the build number. Example of what
# we can expect: "1.1.82.88-releases/v1.1"
# We need to filter out some invalid characters and possibly truncate the result and then we're good to go.  
$info = (Get-Content %MainAssemblyInfoFilePath%)

Write-Host $info

$matches = ([regex]'AssemblyFileVersion\(\"([^\"]+)\"\)').Matches($info)
$newBuildNumber = $matches[0].Groups[1].Value

# Split in two parts:  "1.1.82.88" and "releases/v1.1"
$newBuildNumber -match '^([^-]*)-(.*)$'
$baseNumber = $Matches[1]
$branch = $Matches[2]

# Remove "parent" folders from branch name.
# Example "1.0.119-bug/XENA-5834" => "1.0.119-XENA-5834"
$branch = ($branch -replace '^([^/]+/)*(.+)$','$2' )

# Filter out illegal characters, replace with '-'
$branch = ($branch -replace '[/\\ _\.]','-')

$newBuildNumber = "$baseNumber-$branch"

# Limit build number to 20 characters to make it work with Octopack
$newBuildNumber = (TruncateString $newBuildNumber 20)

Write-Host "##teamcity[buildNumber '$newBuildNumber']"

(The script is based on a script in a blog post by the Octopus team.)

When starting a new build with this build step in it, the build number will at first be the one set in the General Settings tab, but when Teamcity sees the output service message, it will be updated to our version number pattern. Pretty nifty.

Using the Teamcity build numbers

To wrap this post up, here’s an example of how to use the updated build number to create an Octopus release.

Creating an Octopus release with proper version number from within Teamcity.

 

In Octopus, the releases will now have the same names as the build numbers in Teamcity, making it easy to know what code is part of the different releases.

The releases show up in Octopus with the same names as the build number in Teamcity.

Tagging the VCS repository with the build number

The final step for adding full traceability to our build pipeline is to make sure that successful builds add a tag to the last commit included in the build. This makes it easy to see in the VCS exactly what code is part of a given version, all the way out to deployed Octopus releases. This is very easy to accomplish using the Teamcity VCS labeling build feature. Just add it to the build configuration with values like in the image below, and tags will be created automatically every time a build succeeds.
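
The key setting is the labeling pattern; to make the tag equal to the full version number, it can be set to something like this (a sketch, since the exact pattern is a matter of taste):

    %system.build.number%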

The VCS Labeling build feature in Teamcity.

The tags show up in Git like this:

Tags in the Git repository connect the versions of successful builds with the commits included in the build.

Mission accomplished.

/Emil

NDepend v6

I have written about NDepend a few times before and now that version 6 has been released this summer it’s time to mention it again, as I was given a licence for testing it by the kind NDepend guys 🙂

Trend monitoring

The latest version I had used prior to version 6 was version 4, so my favorite new feature is the trend monitoring functionality (which was actually introduced in version 5). Such a great idea to integrate it into the client tool! Normally you would define different metrics and let the build server store the history, but being able to work with this right inside Visual Studio makes it so much easier to experiment with metrics without having to configure anything on the build server.

Here is a screen shot of what it may look like (the project name is blurred so as not to reveal the customer):

Dashboard with metrics and deltas compared to a given baseline plus two trend charts

  • At the top of the dashboard there is information about the NDepend project that has been analyzed and the baseline analysis used for comparison.
  • Below this there are several different groups with metrics and deltas, e.g. # Lines of Code (apparently the code base has grown by 1.12%, or 255 lines, in this case when compared to the baseline).
  • Next to the numeric metrics are the trend charts, in my case just two of them, showing the number of lines of code and critical rule violations, respectively. Many more are available and it’s easy to create your own charts with custom metrics. BTW, “critical” refers to the rules deemed critical in this project. These rules will differ from project to project.
    • In the image we can see that the number of lines of code grows steadily which is to be expected in a project which is actively developed.
    • The number of critical errors also grows steadily which probably indicates an insufficient focus on code quality.
    • There is a sudden decrease in rule violations in the beginning of July where one of the developers of the project decided to refactor some “smelly” code.

This is just a simple example but I’m really liking how easy it now is to get a feeling for the code trends of a project with just a glance on the dashboard every now and then.

The Dependency Matrix

The trend monitoring features may be very useful, but the trademark feature of NDepend is probably the dependency matrix. Most people who have started up NDepend have probably seen the following rather bewildering matrix:

The dependency matrix can be used to discover all sorts of structural properties of the code base

I must confess that I haven’t really spent too much time with this view before since I’ve had some problems grasping it fully, but this time around I decided it was time to dive into it a little more. I think it might be appropriate to write a few words on my findings, so here we go.

Since it’s a little difficult to see what’s going on with a non-trivial code base, I started with something trivial: code in a main namespace referencing code in NamespaceA that in turn references code in NamespaceB. If the view does not show my namespaces (which is what I normally want), then the first thing to do when opening the matrix is to set the most suitable row/column filter with the dropdown:

The dependency matrix filtering dropdown

I tend to use View Application Namespaces Only most of the time since this filters out all third party namespaces and also expands all my application namespaces (the top level of the row/column headers are assemblies which is not normally what I want).

Also note that the calculation of the number shown inside a dependency cell can be changed independently of the filtering. In my case it’s 0 on all cells which seems strange since there are in fact dependencies, but the reason for this is that it shows the number of members used in the target namespace and in this case I only refer to types. Changing this is done in another dropdown in the matrix window.

Another thing I learned recently is that it may be very useful to switch back and forth between the Dependency Matrix and the Dependency Graph, and in the image below I show both windows next to each other. In this simple case they show the same thing, but when the code base grows the dependencies become too numerous to be shown visually in a useful way. Luckily there are options in the matrix to show parts of it in the graph, and vice versa. For example, right-clicking a namespace row heading opens a menu with a View Internal Dependencies On Graph option so that only a subset of the code base dependencies is shown. Very useful indeed.

Here’s what it may look like:

A simple sample project with just three namespaces

Also note that hovering over a dependency cell displays useful popups and changes the cursor into an arrow indicating the direction of the dependency, which is also reflected by the color of the cell (by the way, look out for black cells, they indicate circular references!)

Another way to invoke the graph is to right click a dependency cell in the matrix:

Context menu for a dependency

 

The top option, Build a Graph made of Code Elements involved in this dependency, does just what it says. Also very useful.

By using the expand/collapse functionality of the dependency matrix together with the option to display dependencies in the graph view, it becomes easier to pinpoint structural problems in the code base. It takes a bit of practise because of the sheer amount of information in the matrix, but I’m growing to like it more and more. I would suggest anyone interested to spend some time on this view and its many options. I found this official description to be useful for a newcomer: Dependency Structure Matrix

Wrapping it up

NDepend 6 has many more features than what I have described here and it’s such a useful tool that I would suggest anyone interested in code quality to download a trial and play around with it. Just be prepared to invest some time into learning the tool to get the most out of it.

A very good way to get started is the Pluralsight course by Eric Dietrich. It describes NDepend version 5 but all of it applies to version 6 as well and it covers the basics of the tool very well. Well worth a look if you’re a Pluralsight subscriber.

Global text highlighting in Sublime Text 3

Sublime Text is currently my favorite text editor that I use whenever I have to leave Visual Studio, and this post is about how to make it highlight ISO dates in any text file, regardless of file format.

Sublime Text obviously has great syntax coloring support for highlighting keywords, strings and comments in many different languages, but what if you want to highlight text regardless of the type of file you’re editing? In my case I want to highlight dates to make it easier to read log files and other places where ISO dates are used, but the concept is general. You might want to indicate TODO and HACK items and whatnot, and this post is about how to do that in Sublime Text.

Here’s an example of what we want to achieve:

Log file with highlighted date

Here is the wanted result, a log file with a clearly indicated date, marking the start of a logged item.

 

We’re going to solve this using a great Sublime package written by Scott Kuroda, PersistentRegexHighlight. To add highlighting of dates in Sublime text, follow these steps:

  1. Install the package by pressing Ctrl + Shift + P to open the command palette, type “ip” and select the “Package Control: Install Package” command. Press Enter to show a list of packages. Select the PersistentRegexHighlight package and press Enter again.
  2. Next we need to start configuring the package. Select the menu item Preferences / Package Settings / PersistentRegexHighlight / Settings – User to show an empty settings file for the current user. Add the following content:
    
    {
       // Array of objects containing a regular expression
       // and an optional coloring scheme
       "regex":[
         {
           // Match 2015-06-02, 2015-06-02 12:00, 2015-06-02 12:00:00,
           // 2015-06-02 12:00:00,100
           "pattern": "\\d{4}-\\d{2}-\\d{2}( \\d{2}:\\d{2}(:\\d{2}(,\\d{3})?)?)?",
           "color": "F5DB95",
           "ignore_case": true
         },
         {
           "pattern": "\\bTODO\\b",
           "color_scope": "keyword",
           "ignore_case": true
         }
       ],
    
       // If highlighting is enabled
       "enabled": true,
    
       // If highlighting should occur when a view is loaded
       "on_load": true,
    
       // If highlighting should occur as modifications happen
       "on_modify": true,
    
       // File pattern to disable on. Should be specified as Unix style patterns
       // Note, this looks at the absolute path to match the pattern. So if trying
       // ignore a single file (e.g. README.md), you will need to specify
       // "**/README.md"
       "disable_pattern": [],
    
   // Maximum file size to run the PersistentRegexHighlight on.
       // Any value less than or equal to zero will be treated as a non
       // limiting value.
       "max_file_size": 0
    }
    
  3. Most of the settings should be pretty self-explanatory, basically we’re using two highlighting rules in this example:
    1. First we specify a regex to find all occurrences of ISO dates (e.g. “2015-06-02”, with or without a time part appended) and mark these with a given color (using the color property).
    2. The second regex specifies that all TODO items should be colored like code keywords (using the color_scope property). Other valid values for the scope are “name”, “comment”, “string”.
  4. When saving the settings file you will be asked to create a custom color theme. Click OK in this dialog.

Done! Now, when you open any file with content matching the regexes given in the settings file, that content will be colored.

Tips

  1. Sometimes it’s necessary to touch the file to trigger a repaint (type a character and delete it).
  2. The regex option is an array so it’s easy to add as many items as we want with different colors.
  3. To find more values for the color_scope property, you can place the cursor in a code file of choice and press Ctrl + Alt + Shift + P. The current scope is then displayed in the status bar. However it’s probably easier to just use the color property instead and set the wanted color directly.

Happy highlighting!

/Emil

Devart LINQ Insight

I was recently approached by a representative from Devart who asked if I wanted to have a look at some of their products, so I decided to try out the LINQ Insight add-in for Visual Studio.

LINQ Insight has two main functions:

  • Profiler for LINQ expressions
  • Design-time LINQ query analyzer and editor

If you work much with LINQ queries you probably know that Visual Studio is somewhat lacking in functionality around LINQ queries by default, so the functions that LINQ Insight offers should be pretty welcome for any database developer out there on the .Net platform (which should be pretty many of us these days). Let’s discuss the two main features of LINQ Insight in some more detail.

Profiling LINQ queries

If you’re using Entity Framework (LINQ Insight apparently also supports NHibernate, RavenDB, and a few others, but I have not tested any of those) and LINQ, it can be a little difficult to know exactly what database activity occurs during the execution of the application. After all, the main objective of OR mappers is to abstract away the details of the database and instead let the developer focus on the domain model. But when you’re debugging errors or analyzing performance it’s crucial to analyze the database activity as well, and that’s what LINQ Insight’s profiling function helps with.

There are other tools for this of course, such as IntelliTrace in Visual Studio Ultimate, but since it’s only included in Ultimate, not many developers have access to it. The LINQ Insight profiler is very easy to use and gives access to a lot of information.

To enable profiling, follow these steps:

  1. Make sure that IIS Express process is not started. Stop it if it is. (This assumes we’re using IIS Express of course. I’m not quite sure how to work with the full IIS server in conjunction with LINQ Insight.)
  2. Open the profiling window by selecting View/Other Windows/LINQ Profiler, or pressing Ctrl+W, F
  3. Press the “Start profiler session” button in the upper left corner of the window (it looks like a small “Play” icon)
  4. Start debugging your application, for example by pressing F5.
  5. Debugging information such as this should now start to fill the profiler window:
    The profiler displays all LINQ activity in the application.

    As you can see, in this case we have several different contexts that have executed LINQ queries. For example, ApplicationContext is used by ASP.Net Identity and HistoryContext is used by Code First Database Migrations. Context is our application context.

  6. We can now drill down into the queries and see what triggered them and what SQL statements were executed.
    Drilling down into profiler data.

    We can see the LINQ query that was executed, the SQL statements, duration, call stack, etc. Very useful stuff indeed.

Query debugger and editor

The other feature LINQ Insight brings into Visual Studio is to help writing LINQ queries and debug them. To debug a query, follow these steps:

  1. To open a query in the query editor, just right-click on it in the standard C# code editor window and select the “Run LINQ Query” option:

    To debug or edit a LINQ query, use the right-click menu.

  2. If the query contains one or more parameters, a popup will be shown where values for the parameters can be given.
  3. Next, the query will be executed, and the results will be displayed:

    Query results are displayed in the Results tab.

  4. This is of course useful in itself, and even better is that the generated Sql statements are displayed in the SQL tab and the original LINQ query is in the LINQ tab, where it can be edited and re-executed, after which the Sql and Results tab are updated. Really, really useful!

If an error is displayed in the Results tab, the most probable reason is that the database connection could not be found in the project’s config file, or that it could not be interpreted correctly. The latter is the case when using the LocalDB provider with the "|DataDirectory|" placeholder, which can only be evaluated at runtime in an ASP.Net project. To make LINQ Insight find a database MDF file in App_Data in a web project, you can follow these steps:

  1. Make sure that your DbContext sub-class (for Entity Framework, that is) has an overloaded constructor that takes a single string parameter, namely the connection string to use:
    public Context(string connString) : base(connString) {}
    

    This is required if LINQ Insight cannot deduce the connection string from the project’s config file. This is usually a problem in my projects since I like to separate domain logic into a separate project (normally a class library) from my “host application”.

  2. Double-click the MDF file in the App_Data folder to make sure it’s present in the Server Explorer panel in Visual Studio.
  3. Select the database in the Server Explorer and right-click it and select Properties. Copy its Connection String property.
  4. In the LINQ Interactive window, click the Edit Connection String button, which is only enabled if the DbContext class has a constructor overload with a connection string parameter, which we ensured in step 1.
  5. Paste the connection string to the Data/ConnectionString field in the panel:
    Use the connection string dialog to override the "guessed" connection string.

    Click OK to close the dialog.

  6. Re-run the query with the Run LINQ Query button in the LINQ Interactive window, and it should now work correctly. If it doesn’t, try to Run LINQ Query command in the C# code editor again, since it re-initializes the query.

The ability to freely set the connection string should make it possible to work against any database, be it a local MDF file, a full SQL Server database or a Windows Azure database. This could be used as a simple way to try out new or modified LINQ queries against a staging or production database, right from the development environment. It could be very useful in some situations, for example when debugging nasty errors.

Summary

All in all, I think the LINQ Insight is a very useful tool and I recommend you try it out if you find yourself writing LINQ queries from time to time.

I should also mention that if you have tried LINQ Insight before and found it to be slightly unstable, Devart has recently fixed a few errors that make the tool much more robust and useful. If unsure, just download the trial version and test it out.

Happy Linqing!

Emil

AutoHotkey – the essential tool for all “automate-it-all” practitioners

AutoHotkey logo

Introduction

This post is about a tool I always feel very crippled without, namely the incredibly useful AutoHotkey. I always install this onto new machines and I rely on it heavily every day. The strange thing is that rather few people I meet know about it so I thought I’d remedy that ignorance somewhat by posting an overview here on my blog.

So what is it? AutoHotkey allows you to map keypresses to custom macros.

This key press mapping works in any application by default and the macros that are possible are very flexible.

Some examples

The most common use is probably to start applications for a given key combination. Here’s one I use:

!#n::Run Notepad++

This starts Notepad++ when I press Alt + Win + N (in AutoHotkey syntax, ! is Alt and # is the Windows key).

You can also write a little more complex macro code (the macro language is a custom one for AutoHotkey, here’s some documentation)

^!Enter::
	FormatTime, CurrentDateTime,, yyyy-MM-dd
	EnvGet, UserName, UserName
	SendInput /%CurrentDateTime% %UserName%
	return

This inserts a timestamp with my username when I press Ctrl-Alt-Enter.
Example: /2013-11-02 emila

Note that this works in any application since AutoHotkey sends standard Windows messages simulating keypresses for the text. So there’s no problem to use this macro in Notepad, for example.

To take this one step further, the key trigger does not have to be based on modifier keys such as Ctrl, Alt, etc; it can also be a sequence of standard characters. Here’s one example of this:

; Today
::tdy::
	FormatTime, CurrentDateTime,, yyyy-MM-dd
	SendInput %CurrentDateTime%
	return

This creates a macro that inserts the current date every time I type the sequence “tdy” followed by a word delimiter (space, tab, dot, comma, etc). This simple macro is probably the one I use the most, it’s so incredibly useful to have easy access to the current date when taking notes, creating folders, etc.

I also have a few code snippets I use a lot when programming:

::isne::String.IsNullOrEmpty
::isnw::String.IsNullOrWhiteSpace
::sfm::String.Format 

This way I never feel tempted to write string comparisons such as if (s == null) { ... }, it’s just as easy to write if (String.IsNullOrEmpty(s)) { ... } using my snippet. And this kind of snippet works even in Notepad 🙂

This ability to replace character sequences is also very useful for correcting my common spelling errors:

::coh::och
::elelr::eller
::perosn::person
::teh::the

I try to detect common operations that I often perform that can be automated and try to write an AutoHotkey macro for them. An example of this is that I have noted I often have to write a valid Swedish social security number (personnummer) when testing applications I write. This can be a pain since the number has to end with a correct checksum digit, so I wrote a simple web service that creates a random SSN and returns it. This service can be called from AutoHotkey like this:

; Call a web service to create a personnummer (Swedish social security number)
::pnr::
	EnvGet tmpDir,TEMP
	UrlDownloadToFile http://kreverautils.azurewebsites.net/api/testdata/personnummer,%tmpDir%\random.pnr.txt
	FileRead, pnrRaw, %tmpDir%\random.pnr.txt
	StringReplace, pnrClean, pnrRaw, ", , All
	SendInput %pnrClean%
	return

This is really convenient, I just type “pnr” and it’s replaced with a valid SSN. This really lowers the mental barrier when testing applications where this data is required, resulting in better testing. (Mental barriers when testing applications are very interesting and is perhaps worth a separate blog post some time…)

Summing it up

The above examples absolutely just scratch the surface of what you can do with AutoHotkey, so why not give it a try? It’s free, so it’s just a matter of downloading it and start experimenting. Download it from its home page.

Final tip

I like to have all my computers share the same AutoHotkey setup, so I have created my main macro file (general.ahk) in a DropBox folder. I also create a shortcut in my startup folder (find it by “running” shell:startup using Win + R) with this target string:

C:\Users\emila\Dropbox\Utils\AutoHotKey\general.ahk

Since AutoHotkey associates itself with the “.ahk” file extension, this is enough to start the script on startup. Any change I make to the macro file is automatically propagated to all my computers.

Good luck with your automations!

/Emil

Using NDepend 4 to analyze code usage

This week I was assigned the task of analyzing the usage of an ASMX web service that we are planning to remove, since it has a number of problems (which is another story), and replace with new and better written WCF services. As the service is rather large and has been around for years, the first step was to find out whether it had methods that no client actually used anymore.

For this task I decided to use the brilliant code query functionality built into NDepend 4. I have briefly reviewed earlier versions of this tool on this blog but this time I thought an actual example of how to use it in a specific situation would be illuminating.

The first step was to retrieve a list of the methods in the web service. To do that, I added an NDepend project to the web service solution. See below for an example of the dialog used for this:

Attaching an NDepend project to a Visual Studio solution

After this NDepend performed an analysis of my solution, after which I was able to start querying my code using the CQLinq querying language. NDepend has for a long time had its SQL-like CQL (Code Querying Language) but for some reason I never got around to using it. NDepend 4 introduces CQLinq which is much nicer syntactically and has a good editor for writing code queries, including IntelliSense. For more info about CQLinq, see this introduction.

What I needed was a list of methods on my Web Service class. To retrieve this, I opened the “Queries and Rules Edit” window (Alt-Q) and typed:

from m in Methods
where m.ParentType.FullName == 
   "ActiveSolution.Lernia.Butler.EducationSI.Education" && m.IsPublic
select m

The CQLinq query window.

The results are displayed in the bottom pane. I exported the list to an Excel file for further processing.

The next step was to see which of the web service methods the different clients used, so I analyzed each client with NDepend. Note that I excluded their test projects from the NDepend analysis to make sure that no lingering old integration tests affected the results.

For each client I listed those methods of their respective web service proxy classes that they were actually calling. A query for that can look like this:

from m in Methods  
where m.ParentType.FullName == "ActiveSolution.Lernia.SFI.WinClientFacade.Butler_EducationSI.Education"  
&& m.IsPublic  
&& !m.Name.StartsWith("Begin") 
&& !m.Name.StartsWith("End") 
&& !m.Name.Contains("Completed") 
&& !m.Name.Contains("Async") 
&& m.NbMethodsCallingMe > 0 
select m

The ParentType is of course the proxy class that gets generated when adding web service references. For this type, I list all public methods (except the asynchronous helper methods that we don’t use anyway) that are used by at least one other method. The results were copied into the already mentioned Excel document and when all clients’ data was retrieved I was able to do some Excel magic to get this result:

The resulting Excel report listing the reference count for each method in the web service class.

The columns B through G contain a ‘1’ if the respective client is calling the method. Rows with a sum of zero in column H are not used by any client and can safely be removed. Mission accomplished.

This has just scratched the surface of what can be done using CQLinq, and there is much more functionality in NDepend than queries (the diagram tools, for example). It’s a great product for anyone who is seriously interested in code quality. And we all should be, right?

/Emil