Generating semantic version build numbers in Teamcity

Background

This is what we want to achieve: build numbers with major, minor and build counter parts, together with the branch name.

What is a good version number? That depends on who you ask I suppose, but one of the most popular versioning schemes is semantic versioning. A version number following that scheme can look like the following:

1.2.34-beta

In this post I will show an implementation for generating this type of version number in Teamcity, with the following key aspects:

  1. The major and minor version numbers (“1” and “2” in the example above) will come from the AssemblyInfo.cs file.
    • It is important that these numbers come from files in the VCS repository (I’m using Git) where the source code is stored so that different VCS branches can have different version numbers.
  2. The third group (“34”) is the build counter in Teamcity and is used to separate different builds with the same major and minor versions from each other.
  3. The last part is the pre-release version tag and we use the branch name from the Git repository for this.
    • We will apply some filtering and formatting for our final version since the branch name may contain unsuitable characters.

In the implementation I’m using a Teamcity feature called File Content Replacer which was introduced in Teamcity 9.1. Before this, we had to use another feature called Assembly Info Patcher, but that method had several disadvantages, mainly that there was no easy way of using the generated version number in other build steps.

In the solution described below, we will replace the default Teamcity build number (which is equal to the build counter) with a custom one following the version format outlined above. This makes it possible to reuse the version number everywhere, e.g. in file names, Octopus Release names, etc. This is a great advantage since it helps connect everything produced in a build chain (assemblies, deployments, build monitors, etc.).

Solution outline

The steps involved to achieve this are the following:

  1. Assert that the VCS root used is properly configured
  2. Use the File Content Replacer to update AssemblyInfo.cs
  3. Use a custom build step with a Powershell script to extract the generated version number from AssemblyInfo.cs, format it if needed, and then tell Teamcity to use this as the build number rather than the default one
  4. After the build is complete, we use the VCS Labeling build feature to add a build number tag into the VCS

VCS root

The first step is to make sure we get proper branch names when we use the Teamcity %teamcity.build.branch% variable. This will contain the branch name from the version control system (Git in our case) but there is a special detail worth mentioning, namely that the branch specification’s wildcard part should be surrounded with parentheses:

VCS root branch specification with parentheses.

If we don’t do this, then the default branch (“develop” in this example) will be shown as <default> which is not what we want. The default branch should use its branch name just like every other branch, and adding the parentheses ensures that.
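
For reference, a branch specification where the wildcard part is surrounded by parentheses could look something like this (a sketch, adjust the refs to match your own branching model). The part matched inside the parentheses becomes the logical branch name that %teamcity.build.branch% resolves to:

+:refs/heads/(*)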

Updating AssemblyInfo.cs using the File Content Replacer build feature

In order for the generated assembly to have the correct version number we update the AssemblyInfo.cs before building it. We want to update the following two lines:


[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]

The AssemblyVersion attribute is used to generate the File version property of the file and the AssemblyFileVersion attribute is used for the Product version.

File version and product version of the assembly.

We keep the first two integers to use them as major and minor versions in the version number. Note that the AssemblyVersion attribute is restricted to numeric versions (at most four integers separated by dots), so it cannot contain the branch name, while AssemblyFileVersion does not have this restriction and can contain our branch name as well.

To accomplish the update, we use the File Content Replacer build feature in Teamcity two times (one for each attribute) with the following settings:

AssemblyVersion

  • Find what:
    (^\s*\[\s*assembly\s*:\s*((System\s*\.)?\s*Reflection\s*\.)?\s*AssemblyVersion(Attribute)?\(")([0-9\*]+\.[0-9\*]+\.)([0-9\*]+\.[0-9\*]+)("\)\])$
  • Replace with:
    $1$5\%build.counter%$7

AssemblyFileVersion

  • Find what:
    (^\s*\[\s*assembly\s*:\s*((System\s*\.)?\s*Reflection\s*\.)?\s*AssemblyFileVersion(Attribute)?\(")([0-9\*]+\.[0-9\*]+\.)([0-9\*]+\.[0-9\*]+)("\)\])$
  • Replace with:
    $1$5\%build.counter%-%teamcity.build.branch%$7

As you can see, the “Find what” parts are regular expressions that find the parts of AssemblyInfo.cs that we want to update, and the “Replace with” parts are replacement expressions in which we can reference the matching groups of the regex and also use Teamcity variables. The latter is used to insert the Teamcity build counter and the branch name.

In our case we keep the first two numbers, but if the patch number (the third integer) should also be included, then these two expressions can be adjusted to accommodate that.
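
To illustrate, if the build counter is 34 and the branch is develop, the two attribute lines shown earlier would end up looking something like this after the replacements (illustrative values, not taken from a real build):

[assembly: AssemblyVersion("1.2.34")]
[assembly: AssemblyFileVersion("1.2.34-develop")]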

The File Content Replacer build feature in Teamcity.

When we have done the above, the build will produce an assembly with proper version numbers, similar to what we could accomplish with the old Assembly Info Patcher. The difference is that we now have a patched AssemblyInfo.cs file, whereas with the old method it was left unchanged since only the generated assembly DLL file was patched. This allows us to extract the generated version number in the next step.

Setting the Teamcity build number

Up to now, the Teamcity build number has been unchanged from the default of being equal to the build counter (a single integer, increased after every build). The format of the build number is set in the General Settings tab of the build configuration.
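
The build number format field itself can be left at the Teamcity default, which is simply the build counter:

%build.counter%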

The General Settings tab for a build configuration.

The build number is just a string uniquely identifying a build and it’s displayed in the Teamcity build pages and everywhere else where builds are displayed, so it would be useful to include our full version number in it. Doing that also makes it easy to use the version number in other build steps and configurations since the build number is always accessible with the Teamcity variable %system.build.number%.

To update the Teamcity build number, we rely on a Teamcity service message for setting the build number. The only thing we have to do is to make sure that our build process outputs a string like the following to the standard output stream:

##teamcity[buildNumber '1.2.34-beta']

When Teamcity sees this string, it will update the build number with the supplied new value.

To output this string, we’re using a separate Powershell script build step that extracts the version string from the AssemblyInfo.cs file and then does some filtering and truncation. The latter is not strictly necessary, but in our case we want the build number to be usable as the name of a release in Octopus Deploy, so we format it to be correct in that regard and truncate it if it grows beyond 20 characters in length.

Build step for setting the Teamcity build number

The actual script looks like this (%MainAssemblyInfoFilePath% is a variable pointing to the relative location of the AssemblyInfo.cs file):

function TruncateString([string] $s, [int] $maxLength)
{
	return $s.substring(0, [System.Math]::Min($maxLength, $s.Length))
}

# Fetch AssemblyFileVersion from AssemblyInfo.cs for use as the base for the build number. Example of what
# we can expect: "1.1.82.88-releases/v1.1"
# We need to filter out some invalid characters and possibly truncate the result and then we're good to go.  
$info = (Get-Content %MainAssemblyInfoFilePath%)

Write-Host $info

$matches = ([regex]'AssemblyFileVersion\(\"([^\"]+)\"\)').Matches($info)
$newBuildNumber = $matches[0].Groups[1].Value

# Split in two parts:  "1.1.82.88" and "releases/v1.1"
$newBuildNumber -match '^([^-]*)-(.*)$'
$baseNumber = $Matches[1]
$branch = $Matches[2]

# Remove "parent" folders from branch name.
# Example "1.0.119-bug/XENA-5834" =&amp;amp;amp;gt; "1.0.119-XENA-5834"
$branch = ($branch -replace '^([^/]+/)*(.+)$','$2' )

# Filter out illegal characters, replace with '-'
$branch = ($branch -replace '[/\\ _\.]','-')

$newBuildNumber = "$baseNumber-$branch"

# Limit build number to 20 characters to make it work with Octopack
$newBuildNumber = (TruncateString $newBuildNumber 20)

Write-Host "##teamcity[buildNumber '$newBuildNumber']"

(The script is based on a script in a blog post by the Octopus team.)

When starting a new build with this build step in it, the build number will at first be the one set in the General Settings tab, but when Teamcity sees the output service message, it will be updated to our version number pattern. Pretty nifty.

Using the Teamcity build numbers

To wrap this post up, here’s an example of how to use the updated build number to create an Octopus release.

Creating an Octopus release with proper version number from within Teamcity.
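
The screenshot is not reproduced here, but the essential part is to pass the Teamcity build number on as the release number in the OctopusDeploy: Create release build step (this assumes the Octopus Deploy Teamcity plugin; the exact field name may vary between plugin versions):

Release number: %system.build.number%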


In Octopus, the releases will now have the same names as the build numbers in Teamcity, making it easy to know what code is part of the different releases.

The releases show up in Octopus with the same names as the build number in Teamcity.

Tagging the VCS repository with the build number

The final step for adding full traceability to our build pipeline is to make sure that successful builds add a tag to the last commit included in the build. This makes it easy to know exactly what code in the VCS is part of a given version, all the way out to deployed Octopus releases. This is very easy to accomplish using the Teamcity VCS Labeling build feature. Just add it to the build configuration with values like in the image below, and tags will be created automatically every time a build succeeds.
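
The screenshot is not reproduced here, but the configuration essentially consists of selecting the VCS root to label and a labeling pattern based on the build number, along the lines of the feature's default (an assumption, adjust as needed):

build-%system.build.number%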

The VCS Labeling build feature in Teamcity.

The tags show up in Git like this:

Tags in the Git repository connect the versions of successful builds with the commits included in the builds.

Mission accomplished.

/Emil

Debugging a memory leak in an ASP.Net MVC application

Background

We recently had a nasty memory leak after deploying a new version of a project I’m involved in. The new version contained 4 months of changes compared to the previous version and because so many things had changed it was not obvious what caused the problem. We tried to go through all the changes using code compare tools (Code Compare is a good one) but could not find anything suspicious. It took us about a week to finally track down the problem and in this post I’ll write down a few tips that we found helpful. The next time I’m having a difficult memory problem I know where to look, and even if this post is a bit unstructured I hope it contains something useful for other people as well.

.Net memory vs native memory

The first few days we were convinced that we had made a logical error or had made some mistakes in I/O access. We also suspected that our caching solutions had gone bad, but it was hard to be sure. We used Dynatrace to get .Net memory dumps from environments other than our production environment, which has zero downtime requirements. We also used a memory profiler (dotMemory) to see if we could spot any trends in memory usage on local dev machines with a crawler running, but nothing conclusive could be found.

Then we got a tip to have a look at a few Windows performance counters that can help track down this kind of problem:

  1. Process / Private Bytes – the total memory a process has allocated (.Net and native combined)
  2. .NET CLR Memory / # Bytes in all Heaps – the memory allocated for .Net objects

We added these two for our IIS application pool process (w3wp.exe) and it turned out that the total memory allocation increased but that the .Net memory heap did not:

Total memory usage (red line) is increasing but the .Net memory heap allocations (blue) are not.

This means that it’s native memory that gets leaked and we could rule out our caching and other .Net object allocations.
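
For reference, the same counters can also be sampled programmatically, for example from a small console program. Here is a minimal C# sketch (assuming the worker process instance is named w3wp, which may need adjusting if several application pools are running):

using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        // Instance names follow the process name, e.g. "w3wp", "w3wp#1" etc.
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", "w3wp");
        var managedHeapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "w3wp");

        while (true)
        {
            // If Private Bytes keeps growing while # Bytes in all Heaps stays flat,
            // the memory being leaked is native rather than managed.
            Console.WriteLine("{0:HH:mm:ss}  Private Bytes: {1:N0}  .NET heaps: {2:N0}",
                DateTime.Now, privateBytes.NextValue(), managedHeapBytes.NextValue());
            Thread.Sleep(5000);
        }
    }
}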

What is allocated?

So we now knew it was native memory that was consumed, but not what kind of memory.

One classic type of memory leak is to not release files and other I/O objects properly and we got another tip for how to check for that, namely to add the Process / Handle Count performance counter. Handles are small objects used to reference different types of Windows objects, such as files, registry items, window items, threads, etc, etc, so it’s useful to see if that number increases. And it did:

The handle count (green) followed the increased memory usage very closely.

By clicking on a counter in the legend we could see that the number of active handles increased to completely absurd levels; a few hours after an app pool recycle we had 200,000 to 300,000 active handles, which definitely indicates a serious problem.

What type of handles are created?

The next step was to try to figure out what type of handles were being created. We suspected some network problem but were not sure. We then found out about this little gem of a tool: Sysinternals Handle. It’s a command line tool that can list all active handles in a process, and to function properly it must be executed with administrative privileges (i.e. start the Powershell console with “Run as Administrator”). It also has a handy option to summarize the number of handles of each type, which we used like this:

PS C:\utils\Handle> .\Handle.exe -p 13724 -s

Handle v4.0
Copyright (C) 1997-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Handle type summary:
  ALPC Port       : 8
  Desktop         : 1
  Directory       : 5
  EtwRegistration : 158
  Event           : 14440
  File            : 226
  IoCompletion    : 8
  IRTimer         : 6
  Job             : 1
  Key             : 96
  Mutant          : 59
  Section         : 258
  Semaphore       : 14029
  Thread          : 80
  Timer           : 1
  Token           : 5
  TpWorkerFactory : 3
  WaitCompletionPacket: 15
  WindowStation   : 2
Total handles: 29401

It was obvious that we had a problem with handles of the Event and Semaphore types. To focus on just those two when experimenting we used simple PowerShell string filtering to make these two stand out better:

PS C:\utils\Handle> .\Handle.exe -p 13724 -s | select-string "event|semaphore"

  Event           : 14422
  Semaphore       : 14029

At this point we had another look at the code changes made during the 4 months but could still not see what could be causing the problems. There was a new XML file that was accessed, but that code used an existing code pattern we already had, and since we were looking at Event and Semaphore handles it did not seem related.

Non-suspending memory dumps

After a while someone suggested using Sysinternals Procdump to get a memory dump from the production environment without suspending the process being dumped (which happens when using the Create dump file option in Task Manager), using a command line like this:


PS C:\Utils\Procdump> .\procdump64.exe -ma 13724 -r

ProcDump v8.0 - Writes process dump files
Copyright (C) 2009-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
With contributions from Andrew Richards

[00:31:19] Dump 1 initiated: C:\Utils\Procdump\iisexpress.exe_160619_003119.dmp
[00:31:20] Waiting for dump to complete...
[00:31:20] Dump 1 writing: Estimated dump file size is 964 MB.
[00:31:24] Dump 1 complete: 967 MB written in 4.4 seconds
[00:31:24] Dump count reached.

 

The -r option means that a clone of the process being dumped is created, so that the dump can be taken without bringing the site to a halt. We monitored the number of requests per second during the dump file creation using the ASP.NET Applications / Requests/Sec performance counter and it was not affected at all.

Now that we had a dump file, we analyzed it in the Debug Diagnostic Tool v2 from Microsoft. We used the MemoryAnalysis option and loaded the previously created dump under Data Files:

Using the memory analysis function on a dump file.

The report showed a warning about the finalize queue being very long but that did not explain very much to us, except that something was wrong with deallocating some types of objects.

Debug Diagnostic Tool report

There was just one warning after the memory analysis of the dump: that there were a lot of objects that were not finalized.

The report also contained a section about the types of objects in the finalizer queue:

The finalizer queue in the Debug Diagnostic Tool report

The most frequent type of object in the queue is undeniably related to our Event and Semaphore handles.

The solution

The next day, one of the developers thought again about what we had changed in the code with regards to handles and again landed on the code that opened an XML file. The code looked like this:

private static IEnumerable<Country> GetLanguageList(string fileFullPath)
{
    List<Country> languages;
    var serializer = new XmlSerializer(typeof(List<Country>),
        new XmlRootAttribute("CodeList"));
    using (var reader = XmlReader.Create(fileFullPath))
    {
        languages = (List<Country>)serializer.Deserialize(reader);
        foreach (var c in languages)
            c.CountryName = c.CountryName.TrimStart().TrimEnd();
    }
    return languages;
}

It looks pretty innocent, but he decided to Google “XmlSerializer memory leak”, and what do you know, the first match is a blog post by Tess Ferrandez called .NET Memory Leak: XmlSerializing your way to a Memory Leak… It turns out that there is an age-old bug (there is no other way of classifying this behavior) in XmlSerializer: for some of its constructors it will not return all memory when instances are deallocated. This is even documented by Microsoft themselves in the docs for the XmlSerializer class, where under the Dynamically Generated Assemblies heading it says:

If you use any of the other constructors, multiple versions of the same assembly are generated and never unloaded, which results in a memory leak and poor performance.

Yes, indeed it does… Since .Net Framework 1.1, it seems. It turns out we should not create new instances of the XmlSerializer class, but cache and reuse them instead. So we implemented a small cache class that handles the allocation and caching of these instances:

using System.Collections.Concurrent;
using System.Xml.Serialization;

namespace Xena.Web.Services
{
    public interface IXmlSerializerFactory
    {
        XmlSerializer GetXmlSerializer<T>(string rootAttribute);
    }

    public class XmlSerializerFactory : IXmlSerializerFactory
    {
        private readonly ConcurrentDictionary<string, XmlSerializer> _xmlSerializerCache;

        public XmlSerializerFactory()
        {
            _xmlSerializerCache = new ConcurrentDictionary<string, XmlSerializer>();
        }

        public XmlSerializer GetXmlSerializer<T>(string rootAttribute)
        {
            var key = typeof(T).FullName + "#" + rootAttribute;

            var serializer = _xmlSerializerCache.GetOrAdd(key,
                k => new XmlSerializer(typeof (T), new XmlRootAttribute(rootAttribute)));

            return serializer;
        }
    }
}

This class has to be a singleton, of course, which was configured in our DI container StructureMap like this:

container.Configure(c => c.For(typeof(IXmlSerializerFactory)).Singleton().Use(typeof(XmlSerializerFactory)));
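
For completeness, this is roughly what the original GetLanguageList method looks like when changed to use the factory (a sketch; the actual production code may differ in details such as how the factory is injected). Note that only the XmlSerializer(Type) and XmlSerializer(Type, String) constructors cache their generated assemblies internally, which is why the overload taking an XmlRootAttribute needs this kind of external caching:

private readonly IXmlSerializerFactory _xmlSerializerFactory; // injected, e.g. via the constructor

private IEnumerable<Country> GetLanguageList(string fileFullPath)
{
    List<Country> languages;
    // Reuse a cached XmlSerializer instead of creating a new one on every call
    var serializer = _xmlSerializerFactory.GetXmlSerializer<List<Country>>("CodeList");
    using (var reader = XmlReader.Create(fileFullPath))
    {
        languages = (List<Country>)serializer.Deserialize(reader);
        foreach (var c in languages)
            c.CountryName = c.CountryName.TrimStart().TrimEnd();
    }
    return languages;
}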

And finally, everything worked like a charm, with horizontal memory graphs. 🙂

Using handle.exe it was easy to verify on the developer machines that the XmlSerializerFactory actually solved the problem, since the Semaphore handle count now remained constant after page views. If we only had had the memory graphs to go by, it would have taken much longer to verify the non-growing memory trend since the total memory allocations always fluctuate during execution.

/Emil

Analyzing battery health on a Windows PC

Recently, my wife’s laptop has started to exhibit shorter and shorter battery life after each full charge, so I suspected that it was time to replace the battery. But how can one be sure that replacing the battery solves the problem? Laptop batteries are not exactly cheap; the one I found for my wife’s Asus A53S costs about 50 €.

It turns out that there’s a useful command line tool built into Windows for this: powercfg.exe

PS C:\WINDOWS\system32> powercfg.exe -energy
Enabling tracing for 60 seconds...
Observing system behavior...
Analyzing trace data...
Analysis complete.

Energy efficiency problems were found.

14 Errors
17 Warnings
24 Informational

See C:\WINDOWS\system32\energy-report.html for more details.
PS C:\WINDOWS\system32> start C:\WINDOWS\system32\energy-report.html

A lot of errors and warnings, apparently. Viewing the generated file reveals the details, and apart from the errors and warnings (that were mostly related to energy savings not being enabled when running on battery) there was this interesting tidbit of information:

powercfg_output

This is obviously in Swedish; in English the labels would be Design Capacity 56160 and Last Full Charge 28685, so it seems that the battery cannot be charged to its full capacity anymore but rather only to about half of it. Seems it’s indeed time for a new battery.

/Emil

(I wrote this post to remind myself of this useful utility. I did by the way buy a new battery and now battery life is back to normal.)

 

NDepend v6

I have written about NDepend a few times before and now that version 6 has been released this summer it’s time to mention it again, as I was given a licence for testing it by the kind NDepend guys 🙂

Trend monitoring

The last version I used prior to version 6 was version 4, so my favorite new feature is the trend monitoring functionality (which was actually introduced in version 5). Such a great idea to integrate it into the client tool for experimenting! Normally you would define different metrics and let the build server store the history, but having the possibility of working with this right inside Visual Studio makes it so much easier to experiment with metrics without having to configure them in the build server.

Here is a screen shot of what it may look like (the project name is blurred so as not to reveal the customer):

Dashboard with metrics and deltas compared to a given baseline plus two trend charts

  • At the top of the dashboard there is information about the NDepend project that has been analyzed and the baseline analysis used for comparison.
  • Below this there are several different groups with metrics and deltas, e.g. # Lines of Code (apparently the code base has grown by 1.12%, or 255 lines, in this case when compared to the baseline).
  • Next to the numeric metrics are the trend charts, in my case just two of them, showing the number of lines of code and critical rule violations, respectively. Many more are available and it’s easy to create your own charts with custom metrics. BTW, “critical” refers to the rules deemed critical in this project. These rules will differ from project to project.
    • In the image we can see that the number of lines of code grows steadily which is to be expected in a project which is actively developed.
    • The number of critical errors also grows steadily which probably indicates an insufficient focus on code quality.
    • There is a sudden decrease in rule violations in the beginning of July where one of the developers of the project decided to refactor some “smelly” code.

This is just a simple example but I’m really liking how easy it now is to get a feeling for the code trends of a project with just a glance at the dashboard every now and then.

The Dependency Matrix

The trend monitoring features may be very useful, but the trademark feature of NDepend is probably the dependency matrix. Most people who have started up NDepend have probably seen the following rather bewildering matrix:

The dependency matrix can be used to discover all sorts of structural properties of the code base

I must confess that I haven’t really spent too much time with this view before since I’ve had some problems grasping it fully, but this time around I decided it was time to dive into it a little more. I think it might be appropriate to write a few words on my findings, so here we go.

Since it’s a little difficult to see what’s going on in a non-trivial code base, I started with something trivial: code in a main namespace referencing code in NamespaceA, which in turn references code in NamespaceB. If the view does not show my namespaces (which is what I normally want), then the first thing to do when opening the matrix is to set the most suitable row/column filter with the dropdown:

The dependency matrix filtering dropdown

I tend to use View Application Namespaces Only most of the time since this filters out all third party namespaces and also expands all my application namespaces (the top level of the row/column headers are assemblies which is not normally what I want).

Also note that the calculation of the number shown inside a dependency cell can be changed independently of the filtering. In my case it’s 0 on all cells which seems strange since there are in fact dependencies, but the reason for this is that it shows the number of members used in the target namespace and in this case I only refer to types. Changing this is done in another dropdown in the matrix window.

Another thing I learned recently is that it may be very useful to switch back and forth between the Dependency Matrix and the Dependency Graph, and in the image below I show both windows next to each other. In this simple case they show the same thing, but when the code base grows the dependencies become too numerous to be shown visually in a useful way. Luckily there are options in the matrix to show parts of it in the graph, and vice versa. For example, right clicking on a namespace row heading opens a menu with a View Internal Dependencies On Graph option so that only a subset of the code base’s dependencies is shown. Very useful indeed.

Here’s what it may look like:

A simple sample project with just three namespaces

Also note that hovering over a dependency cell displays useful popups and changes the cursor into an arrow indicating the direction of the dependency, which is also reflected by the color of the cell (by the way, look out for black cells, they indicate circular references!)

Another way to invoke the graph is to right click a dependency cell in the matrix:

Context menu for a dependency

 

The top option, Build a Graph made of Code Elements involved in this dependency, does just what it says. Also very useful.

By using the expand/collapse functionality of the dependency matrix together with the option to display dependencies in the graph view, it becomes easier to pinpoint structural problems in the code base. It takes a bit of practice because of the sheer amount of information in the matrix, but I’m growing into liking it more and more. I would suggest anyone interested to spend some time on this view and its many options. I found this official description to be useful for a newcomer: Dependency Structure Matrix

Wrapping it up

NDepend 6 has many more features than what I have described here and it’s such a useful tool that I would suggest anyone interested in code quality to download a trial and play around with it. Just be prepared to invest some time into learning the tool to get the most out of it.

A very good way to get started is the Pluralsight course by Eric Dietrich. It describes NDepend version 5 but all of it applies to version 6 as well, and it covers the basics of the tool very well. Well worth a look if you’re a Pluralsight subscriber.

Global text highlighting in Sublime Text 3

Sublime Text is currently my favorite text editor that I use whenever I have to leave Visual Studio, and this post is about how to make it highlight ISO dates in any text file, regardless of file format.

Sublime Text obviously has great syntax coloring support for highlighting keywords, strings and comments in many different languages, but what if you want to highlight text regardless of the type of file you’re editing? In my case I want to highlight dates to make it easier to read logfiles and other places where ISO dates are used, but the concept is general. You might want to indicate TODO or HACK items and whatnot, and this post is about how to do that in Sublime Text.

Here’s an example of what we want to achieve:

Log file with highlighted date

Here is the wanted result, a log file with a clearly indicated date, marking the start of a logged item.

 

We’re going to solve this using a great Sublime package written by Scott Kuroda, PersistentRegexHighlight. To add highlighting of dates in Sublime text, follow these steps:

  1. Install the package by pressing Ctrl + Shift + P to open the command palette, type “ip” and select the “Package Control: Install Package” command. Press Enter to show a list of packages. Select the PersistentRegexHighlight package and press Enter again.
  2. Next we need to start configuring the package. Select the menu item Preferences / Package Settings / PersistentRegexHighlight / Settings – User to show an empty settings file for the current user. Add the following content:
    
    {
       // Array of objects containing a regular expression
       // and an optional coloring scheme
       "regex":[
         {
           // Match 2015-06-02, 2015-06-02 12:00, 2015-06-02 12:00:00,
           // 2015-06-02 12:00:00,100
           "pattern": "\\d{4}-\\d{2}-\\d{2}( \\d{2}:\\d{2}(:\\d{2}(,\\d{3})?)?)?",
           "color": "F5DB95",
           "ignore_case": true
         },
         {
           "pattern": "\\bTODO\\b",
           "color_scope": "keyword",
           "ignore_case": true
         }
       ],
    
       // If highlighting is enabled
       "enabled": true,
    
       // If highlighting should occur when a view is loaded
       "on_load": true,
    
       // If highlighting should occur as modifications happen
       "on_modify": true,
    
       // File pattern to disable on. Should be specified as Unix style patterns
       // Note, this looks at the absolute path to match the pattern. So if trying
       // ignore a single file (e.g. README.md), you will need to specify
       // "**/README.md"
       "disable_pattern": [],
    
       // Maximum file size to run the the PersistentRegexHighlight on.
       // Any value less than or equal to zero will be treated as a non
       // limiting value.
       "max_file_size": 0
    }
    
  3. Most of the settings should be pretty self-explanatory, basically we’re using two highlighting rules in this example:
    1. First we specify a regex to find all occurrences of ISO dates (e.g. “2015-06-02”, with or without a time part appended) and mark these with a given color (using the color property).
    2. The second regex specifies that all TODO items should be colored like code keywords (using the color_scope property). Other valid values for the scope are “name”, “comment”, “string”.
  4. When saving the settings file you will be asked to create a custom color theme. Click OK in this dialog.

Done! Now, when you open any file with content matching the regexes given in the settings file, that content will be colored.

Tips

  1. Sometimes it’s necessary to touch the file to trigger a repaint (type a character and delete it).
  2. The regex option is an array so it’s easy to add as many items as we want with different colors.
  3. To find more values for the color_scope property, you can place the cursor in a code file of choice and press Ctrl + Alt + Shift + P. The current scope is then displayed in the status bar. However it’s probably easier to just use the color property instead and set the wanted color directly.

Happy highlighting!

/Emil

Unit testing an EPiServer ContentArea

Background

Consider the following simple method that returns the names of the items of a content area:

public class ContentAreaHelper
{
    public static IEnumerable<string> GetItemNames(ContentArea contentArea)
    {
        return contentArea.Items.Select(item => item.GetContent().Name);
    }
}

The method is made up to be as simple as possible to illustrate how unit testing against a ContentArea can be done, which turns out to be non-trivial.

You might often have more complex logic that really needs to be unit tested, but I’d argue that even simple methods like these are entitled to a few unit tests. Rather than just describe a finished solution, this post describes how to do this step by step and also includes some error messages you’re likely to encounter. The idea is to make it easier to find the post when googling for the subject matter or problems you might have. (For the record, this code has been tested with EPiServer 7.19, for other versions some adjustments may be required.)

To get started, let’s assume that we want to create a simple test, something like this:

[Test]
public void GetNames_WithContentAreaItem_ShouldReturnTheName()
{
    // Arrange
    var contentReference = new ContentReference(1);
    var content = new BasicContent { Name = "a" };

    // TODO: Associate GetContent calls to content

    var contentArea = new ContentArea();
    contentArea.Items.Add(new ContentAreaItem {ContentLink = contentReference});

    // Act
    var names = ContentAreaHelper.GetItemNames(contentArea);

    // Assert
    Assert.That(names.Count(), Is.EqualTo(1));
    Assert.That(names.First(), Is.EqualTo("a"));
}

However, this is not a finished test since we don’t connect the content variable to be returned from the call to item.GetContent() in the ContentAreaHelper.GetItemNames() method. So how do we do that?

Well it’s not immediately obvious but if you dig into the GetContent() method you’ll find that it’s an extension method that retrieves an instance of IContentRepository from the ServiceLocator and then calls an overloaded extension method that takes an IContentRepository as parameter. That repository’s TryGet method is then called, so that’s where we need to hook up our content.

So first we need an IContentRepository that returns our content when queried. The easiest way to do that is by using a mocking framework; in this example I’m using FakeItEasy.

var contentRepository = A.Fake<IContentRepository>();
IContent outContent;
A.CallTo(() => contentRepository.TryGet(contentReference,
        A<ILanguageSelector>.Ignored, out outContent))
    .Returns(true)
    .AssignsOutAndRefParameters(content);

This basically tells FakeItEasy that we need an IContentRepository instance that returns our content when queried for the given content reference. EPiServer also passes an ILanguageSelector object but we’re not interested in its value so we ignore that parameter. The code is further complicated by the fact that TryGet is a method with an output parameter.

We’re not finished yet, but if you’re tempted and try to run the test right now, you’ll probably get an exception such as this:

EPiServer.ServiceLocation.ActivationException : Activation error occurred while trying to get instance of type ISecuredFragmentMarkupGeneratorFactory, key ""
  ----> StructureMap.StructureMapException : StructureMap Exception Code:  202
No Default Instance defined for PluginFamily EPiServer.Core.Html.StringParsing.ISecuredFragmentMarkupGeneratorFactory, EPiServer, Version=7.19.2.0, Culture=neutral, PublicKeyToken=8fe83dea738b45b7

This might seem very cryptic but basically the problem is that EPiServer’s service locator is not set up which is needed when adding items to a content area, for some obscure reason. So we now have two options, either we do just that, or we take a short cut. I like short cuts, so let’s try that first.

A shortcut – faking a ContentArea

As I mentioned earlier, the GetContent() extension method has an overload that takes an IContentRepository as a parameter. We could use that overload instead and our test should work. So we rewrite the class under test like this:

public class ContentAreaHelper
{
    public static IEnumerable<string> GetItemNames(ContentArea contentArea)
    {
        var rep = ServiceLocator.Current.GetInstance<IContentRepository>();
        return GetItemNames(contentArea, rep);
    }

    public static IEnumerable<string> GetItemNames(ContentArea contentArea,
        IContentRepository rep)
    {
        return contentArea.Items.Select(item => item.GetContent(rep).Name);
    }
}

And then we can change our test to call the second overload, passing in our fake IContentRepository. However, if we run the test now, which feels like it should work, it still gives the ActivationException mentioned above. It occurs when adding items to the ContentArea as EPiServer apparently needs a live datasource for this to work. This is utterly confusing of course, and it might seem that the short cut is doomed. Not so!

Here comes the next trick. We don’t really need a “real” content area for the test, all we need is an object that looks like a ContentArea and behaves like it. If it looks like a duck, and all that. 🙂

So what we can do is to fake a ContentArea object and define ourselves what the Items collection contains:

//var contentArea = new ContentArea();
//contentArea.Items.Add(new ContentAreaItem {ContentLink = contentReference});
var contentArea = A.Fake<ContentArea>();
A.CallTo(() => contentArea.Items).Returns(new List<ContentAreaItem> {contentAreaItem});

If we run the test now, we get a new error:

EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not initialized

Most people will now give up on the $@& EPiServer framework deeming unit testing impossible, but since we’re not quitters we add this to our test:

// Avoid "EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not 
// initialized"
EPiServer.BaseLibrary.ClassFactory.Instance =
    new EPiServer.Implementation.DefaultBaseLibraryFactory(string.Empty);
EPiServer.BaseLibrary.ClassFactory.RegisterClass(
    typeof(EPiServer.BaseLibrary.IRuntimeCache),
    typeof(EPiServer.Implementation.DefaultRuntimeCache));
EPiServer.Globalization.ContentLanguage.PreferredCulture =
    new CultureInfo("en");

And finally, our test is green! This is what the final short cut version of the test looks like:

private IContentRepository _contentRepository;

[SetUp]
public void SetUp()
{
    _contentRepository = A.Fake<IContentRepository>();

    // Avoid "EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not
    // initialized"
    ClassFactory.Instance = new DefaultBaseLibraryFactory(string.Empty);
    ClassFactory.RegisterClass(typeof(IRuntimeCache),
        typeof(DefaultRuntimeCache));
    ContentLanguage.PreferredCulture = new CultureInfo("en");
}



[Test]
public void GetNames_WithContentAreaItem_ShouldReturnTheName()
{
    // Arrange
    var contentReference = new ContentReference(1);
    var content = new BasicContent { Name = "a" };

    // Create fake IContentRepository that returns our content
    IContent outContent;
    A.CallTo(() => _contentRepository.TryGet(contentReference,
            A<ILanguageSelector>.Ignored, out outContent))
        .Returns(true)
        .AssignsOutAndRefParameters(content);

    // Create fake ContentArea with a ContentAreaItem
    var contentAreaItem = new ContentAreaItem {
        ContentLink = contentReference
    };
    var contentArea = A.Fake<ContentArea>();
    A.CallTo(() => contentArea.Items)
        .Returns(new List<ContentAreaItem> { contentAreaItem });

    // Act
    var names = ContentAreaHelper.GetItemNames(contentArea, _contentRepository);

    // Assert
    Assert.That(names.Count(), Is.EqualTo(1));
    Assert.That(names.First(), Is.EqualTo("a"));
}

I moved parts of the code into a SetUp method that NUnit executes prior to each test to make the actual test method a little more clean, but it still isn’t very pretty. Extracting some of the setup into some helper methods is probably a good idea, but for brevity we’ll leave it like it is.

Ok, that was the shortcut version with a fake ContentArea, but what if we don’t want to rewrite our method to take an IContentRepository parameter? Or perhaps we’re writing tests against other methods that don’t have these handy overloads? Well, then we need to set up a basic service locator registry and initialize the EPiServer framework’s ServiceLocator prior to the test.

Running the test with a configured ServiceLocator

Ok, time to go back to our original method under test:

public class ContentAreaHelper
{
    public static IEnumerable<string> GetItemNames(ContentArea contentArea)
    {
        return contentArea.Items.Select(item => item.GetContent().Name);
    }
}

And for the test, remember that we had created a IContentRepository fake that we want EPiServer to use. This is how we create a StructureMap object factory and tell EPiServer to use it for its ServiceLocator:

// Clear the StructureMap registry and reinitialize
ObjectFactory.Initialize(expr => { });
ObjectFactory.Configure(expr => expr.For<IContentRepository>()
    .Use(contentRepository));
ObjectFactory.Configure(expr => 
    expr.For<ISecuredFragmentMarkupGeneratorFactory>()
        .Use(A.Fake<ISecuredFragmentMarkupGeneratorFactory>()));
ObjectFactory.Configure(expr => expr.For<IContentTypeRepository>()
    .Use(A.Fake<IContentTypeRepository>()));
ObjectFactory.Configure(expr => expr.For<IPublishedStateAssessor>()
    .Use(A.Fake<IPublishedStateAssessor>()));

// Set up the EPiServer service locator with our fakes
ServiceLocator.SetLocator(
    new StructureMapServiceLocator(ObjectFactory.Container));

The fakes for ISecuredFragmentMarkupGeneratorFactory, IContentTypeRepository and IPublishedStateAssessor were added because StructureMap complained that it did not know where to find instances for those interfaces when running the tests.

We still get the “ClassFactory not initialized” exception as above so we must apply the same fix again. After that, the test works.

After some refactoring, this is what the test looks like:

private IContentRepository _contentRepository;

[SetUp]
public void SetUp()
{
    _contentRepository = A.Fake<IContentRepository>();

    // Clear the StructureMap registry and reinitialize
    ObjectFactory.Initialize(expr => { });
    ObjectFactory.Configure(expr => expr.For<IContentRepository>()
        .Use(_contentRepository));
    ObjectFactory.Configure(expr => 
        expr.For<ISecuredFragmentMarkupGeneratorFactory>()
            .Use(A.Fake<ISecuredFragmentMarkupGeneratorFactory>()));
    ObjectFactory.Configure(expr => expr.For<IContentTypeRepository>()
        .Use(A.Fake<IContentTypeRepository>()));
    ObjectFactory.Configure(expr => expr.For<IPublishedStateAssessor>()
        .Use(A.Fake<IPublishedStateAssessor>()));

    // Set up the EPiServer service locator with our fakes
    ServiceLocator.SetLocator(
        new StructureMapServiceLocator(ObjectFactory.Container));

    // Avoid "EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not
    // initialized"
    ClassFactory.Instance = new DefaultBaseLibraryFactory(string.Empty);
    ClassFactory.RegisterClass(typeof(IRuntimeCache),
        typeof(DefaultRuntimeCache));
    ContentLanguage.PreferredCulture = new CultureInfo("en");
}


[Test]
public void GetNames_WithContentAreaItem_ShouldReturnTheName()
{
    // Arrange
    var contentReference = new ContentReference(1);
    var content = new BasicContent { Name = "a" };

    // Associate GetContent calls to 'content'
    IContent outContent;
    A.CallTo(() => _contentRepository.TryGet(contentReference,
            A<ILanguageSelector>.Ignored, out outContent))
        .Returns(true)
        .AssignsOutAndRefParameters(content);

    var contentArea = new ContentArea();
    contentArea.Items.Add(new ContentAreaItem {ContentLink = contentReference});

    // Act
    var names = ContentAreaHelper.GetItemNames(contentArea);

    // Assert
    Assert.That(names.Count(), Is.EqualTo(1));
    Assert.That(names.First(), Is.EqualTo("a"));
} 

As before we do general initialization in the SetUp method and only do test-specific stuff in the actual test. This is so we can reuse as much setup as possible for the next test.

Final thoughts

So there you go, two ways of writing unit tests against an EPiServer ContentArea. Use the one most suitable for you. I tend to like the “faked ContentArea” version since you don’t have to get quite as messy with EPiServer internals, but sometimes it is not enough and I then use the other one. It’s useful to have both in your toolbox, and now you do as well. 🙂

There are probably other ways of accomplishing the same task, so feel free to comment below if you have opinions!

Cheers,

Emil

New blog theme

I just changed the theme of the blog since I got tired of the old one, and also because I’m working on a post where the old theme’s low width became a problem. The new theme is called Catch Responsive and I really like it so far. It works much better on mobile devices and is very configurable.

This is what the old theme looked like; I thought it would be nice to have a record of it to look back on in the future.

Old blog theme

Visual Studio slow? Try disabling hardware acceleration.

Ever since I got a new computer I’ve been frustrated by how slow Visual Studio 2013 has been. Granted, the computer is a little weak performance-wise, it’s of the Ultrabook type, but it’s been slow even at basic tasks such as moving the cursor around. Really strange.

But after a tip from an article at Code Project I decided to disable hardware acceleration:

Disable hardware acceleration in Visual Studio 2013

And what do you know, it worked! Moving around with the cursor keys in source code is now much more responsive and no longer a source of frustration. Oh joy!

/Emil

Changing a dynamic disk into a basic one

I have just removed a SATA hard disk from my old desktop computer and inserted it into an external case from Deltaco so that it could be used from a laptop. Unfortunately it did not work straight away; no disk showed up in the File Explorer.

Slightly worried, I opened the Disk Manager and there it was, shown as “Dynamic” and “Invalid”. More worried, I started googling and found a solution that involved using a hex editor to modify a byte directly on the hard drive to switch it from dynamic to basic. It worked perfectly and the drive now works as expected. I’m not sure what the change means exactly but I’m very happy right now. It felt kind of hard core to use a hex editor to fix a problem like this, that does not happen every day. 🙂

/Emil

Setting the font of a PowerShell console to Lucida Console won’t work

Ever tried changing the font of a PowerShell console to Lucida Console, only to see the setting gone the next time you open the console? In that case, you’re not alone! I’ve been pulling my hair over this problem many times but today I decided to investigate it further.

There are several different solutions and none of them worked for me. For some people it helps to set the font size to something other than 12 points, but not for me. For others it helps to start the console as administrator, but not for me. And here’s a strange thing: in a good old Command Prompt (CMD.EXE), Lucida Console works as a default font with no problem at all. It’s only in the PowerShell console that I can’t set it as default. A few of these tricks are discussed at superuser.com.

My problem turned out to be different, as it was related to the regional settings of my Windows installation. The problem is described very briefly by a Microsoft support engineer here. Apparently Lucida Console “is not properly supported on CJK languages and other languages that are not localized by PowerShell (e.g. Arabic, Hebrew, etc.)”. It seems that Swedish belongs to the same category of languages that for some reason is not deemed compatible with Lucida Console. The strange thing is that it works perfectly when setting it on the console instance…

Anyway, to fix the problem all I had to do was to change my system locale to “English (United States)”:

Setting system locale to a language that is "supported" for Lucida Console solves the problem...

Voila, my PowerShell prompt is pretty every time I open it, instead of using the ugly “Raster Fonts” which it fell back to before.

The problem description that Lucida Console is not compatible with all languages makes very little sense to me, but at least my problem is solved.

/Emil