NDepend v6

I have written about NDepend a few times before, and now that version 6 has been released this summer it’s time to mention it again, as the kind people at NDepend gave me a licence for testing it 🙂

Trend monitoring

The last version I used prior to version 6 was version 4, so my favorite new feature is the trend monitoring functionality (which was actually introduced in version 5). Such a great idea to integrate it into the client tool for experimenting! Normally you would define different metrics and let the build server store the history, but being able to work with this right inside Visual Studio makes it so much easier to experiment with metrics without having to configure anything on the build server.

Here is a screenshot of what it may look like (the project name is blurred so as not to reveal the customer):

Dashboard with metrics and deltas compared to a given baseline plus two trend charts

  • At the top of the dashboard there is information about the NDepend project that has been analyzed and the baseline analysis used for comparison.
  • Below this there are several groups of metrics and their deltas, e.g. # Lines of Code (apparently the code base has grown by 1.12%, or 255 lines, in this case compared to the baseline).
  • Next to the numeric metrics are the trend charts, in my case just two of them, showing the number of lines of code and critical rule violations, respectively. Many more are available and it’s easy to create your own charts with custom metrics (a sketch of a custom rule follows below this list). BTW, “critical” refers to the rules deemed critical in this project. These rules will differ from project to project.
    • In the image we can see that the number of lines of code grows steadily which is to be expected in a project which is actively developed.
    • The number of critical errors also grows steadily which probably indicates an insufficient focus on code quality.
    • There is a sudden decrease in rule violations in the beginning of July where one of the developers of the project decided to refactor some “smelly” code.
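Custom metrics and rules are written in CQLinq, NDepend’s C#/LINQ-based query language. As a rough sketch (not taken from the project above, and the exact metric names may vary between NDepend versions), a rule flagging overly long methods could look something like this:

// <Name>Methods too long (hypothetical custom rule)</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30   // threshold chosen purely for illustration
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }

If such a rule is marked as critical, its violations feed into the critical rule trend discussed above.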

This is just a simple example but I’m really liking how easy it now is to get a feeling for the code trends of a project with just a glance at the dashboard every now and then.

The Dependency Matrix

The trend monitoring features may be very useful, but the trademark feature of NDepend is probably the dependency matrix. Most people who have started up NDepend have probably seen the following rather bewildering matrix:

The dependency matrix can be used to discover all sorts of structural properties of the code base

I must confess that I haven’t really spent too much time with this view before since I’ve had some problems grasping it fully, but this time around I decided it was time to dive into it a little more. I think it might be appropriate to write a few words on my findings, so here we go.

Since it’s a little difficult to see what’s going on in a non-trivial code base, I started with something trivial: code in a main namespace referencing code in NamespaceA, which in turn references code in NamespaceB. If the view does not show my namespaces (which is what I normally want), then the first thing to do when opening the matrix is to set the most suitable row/column filter with the dropdown:

The dependency matrix filtering dropdown

I tend to use View Application Namespaces Only most of the time since this filters out all third party namespaces and also expands all my application namespaces (the top level of the row/column headers is assemblies, which is not normally what I want).

Also note that the calculation of the number shown inside a dependency cell can be changed independently of the filtering. In my case it’s 0 on all cells which seems strange since there are in fact dependencies, but the reason for this is that it shows the number of members used in the target namespace and in this case I only refer to types. Changing this is done in another dropdown in the matrix window.

Another thing I learned recently is that it may be very useful to switch back and forth between the Dependency Matrix and the Dependency Graph, and in the image below I show both windows next to each other. In this simple case they show the same thing, but when the code base grows the dependencies become too numerous to be shown visually in a useful way. Luckily there are options in the matrix to show parts of it in the graph, and vice versa. For example, right clicking on a namespace row heading opens a menu with a View Internal Dependencies On Graph option so that only a subset of the code base dependencies is shown. Very useful indeed.

Here’s what it may look like:

A simple sample project with just three namespaces

Also note that hovering over a dependency cell displays useful popups and changes the cursor into an arrow indicating the direction of the dependency, which is also reflected by the color of the cell (by the way, look out for black cells, they indicate circular references!)

Another way to invoke the graph is to right click a dependency cell in the matrix:

Context menu for a dependency

The top option, Build a Graph made of Code Elements involved in this dependency, does just what it says. Also very useful.

By using the expand/collapse functionality of the dependency matrix together with the option to display dependencies in the graph view, it becomes easier to pinpoint structural problems in the code base. It takes a bit of practice because of the sheer amount of information in the matrix, but I’m growing to like it more and more. I would suggest that anyone interested spend some time on this view and its many options. I found this official description to be useful for a newcomer: Dependency Structure Matrix

Wrapping it up

NDepend 6 has many more features than what I have described here and it’s such a useful tool that I would suggest that anyone interested in code quality download a trial and play around with it. Just be prepared to invest some time into learning the tool to get the most out of it.

A very good way to get started is the Pluralsight course by Eric Dietrich. It describes NDepend version 5 but all of it applies to version 6 as well and it covers the basics of the tool very well. Well worth a look if you’re a Pluralsight subscriber.

Global text highlighting in Sublime Text 3

Sublime Text is currently my favorite text editor and the one I use whenever I have to leave Visual Studio. This post is about how to make it highlight ISO dates in any text file, regardless of file format.

Sublime Text obviously has great syntax coloring support for highlighting keywords, strings and comments in many different languages, but what if you want to highlight text regardless of the type of file you’re editing? In my case I want to highlight dates to make it easier to read log files and other places where ISO dates are used, but the concept is general. You might want to indicate TODO and HACK items and whatnot, and this post shows how to do that in Sublime Text.

Here’s an example of what we want to achieve:

Log file with highlighted date

Here is the desired result: a log file with a clearly indicated date, marking the start of a logged item.

We’re going to solve this using a great Sublime package written by Scott Kuroda, PersistentRegexHighlight. To add highlighting of dates in Sublime Text, follow these steps:

  1. Install the package by pressing Ctrl + Shift + P to open the command palette, type “ip” and select the “Package Control: Install Package” command. Press Enter to show a list of packages. Select the PersistentRegexHighlight package and press Enter again.
  2. Next we need to start configuring the package. Select the menu item Preferences / Package Settings / PersistentRegexHighlight / Settings – User to show an empty settings file for the current user. Add the following content:
    
    {
       // Array of objects containing a regular expression
       // and an optional coloring scheme
       "regex":[
         {
           // Match 2015-06-02, 2015-06-02 12:00, 2015-06-02 12:00:00,
           // 2015-06-02 12:00:00,100
           "pattern": "\\d{4}-\\d{2}-\\d{2}( \\d{2}:\\d{2}(:\\d{2}(,\\d{3})?)?)?",
           "color": "F5DB95",
           "ignore_case": true
         },
         {
           "pattern": "\\bTODO\\b",
           "color_scope": "keyword",
           "ignore_case": true
         }
       ],
    
       // If highlighting is enabled
       "enabled": true,
    
       // If highlighting should occur when a view is loaded
       "on_load": true,
    
       // If highlighting should occur as modifications happen
       "on_modify": true,
    
       // File pattern to disable on. Should be specified as Unix style patterns
       // Note, this looks at the absolute path to match the pattern. So if trying
       // ignore a single file (e.g. README.md), you will need to specify
       // "**/README.md"
       "disable_pattern": [],
    
       // Maximum file size to run the the PersistentRegexHighlight on.
       // Any value less than or equal to zero will be treated as a non
       // limiting value.
       "max_file_size": 0
    }
    
  3. Most of the settings should be pretty self-explanatory. Basically we’re using two highlighting rules in this example:
    1. First we specify a regex to find all occurrences of ISO dates (e.g. “2015-06-02”, with or without a time part appended) and mark these with a given color (using the color property).
    2. The second regex specifies that all TODO items should be colored like code keywords (using the color_scope property). Other valid values for the scope are “name”, “comment”, “string”.
  4. When saving the settings file you will be asked to create a custom color theme. Click OK in this dialog.

Done! Now, when you open any file with content matching the regexes given in the settings file, that content will be colored.

Tips

  1. Sometimes it’s necessary to touch the file to trigger a repaint (type a character and delete it).
  2. The regex option is an array so it’s easy to add as many items as we want, with different colors.
  3. To find more values for the color_scope property, you can place the cursor in a code file of choice and press Ctrl + Alt + Shift + P. The current scope is then displayed in the status bar. However, it’s probably easier to just use the color property instead and set the desired color directly.

Happy highlighting!

/Emil

Unit testing an EPiServer ContentArea

Background

Consider the following simple method that returns the names of the items of a content area:

public class ContentAreaHelper
{
    public static IEnumerable<string> GetItemNames(ContentArea contentArea)
    {
        return contentArea.Items.Select(item => item.GetContent().Name);
    }
}

The method is made up to be as simple as possible to illustrate how unit testing against a ContentArea can be done, which turns out to be non-trivial.

You might often have more complex logic that really needs to be unit tested, but I’d argue that even simple methods like these are entitled to a few unit tests. Rather than just describe a finished solution, this post describes how to do this step by step and also includes some error messages you’re likely to encounter. The idea is to make it easier to find the post when googling for the subject matter or problems you might have. (For the record, this code has been tested with EPiServer 7.19, for other versions some adjustments may be required.)

To get started, let’s assume that we want to create a simple test, something like this:

[Test]
public void GetNames_WithContentAreaItem_ShouldReturnTheName()
{
    // Arrange
    var contentReference = new ContentReference(1);
    var content = new BasicContent { Name = "a" };

    // TODO: Associate GetContent calls to content

    var contentArea = new ContentArea();
    contentArea.Items.Add(new ContentAreaItem {ContentLink = contentReference});

    // Act
    var names = ContentAreaHelper.GetItemNames(contentArea);

    // Assert
    Assert.That(names.Count(), Is.EqualTo(1));
    Assert.That(names.First(), Is.EqualTo("a"));
}

However, this is not a finished test since we don’t connect the content variable to be returned from the call to item.GetContent() in the ContentAreaHelper.GetItemNames() method. So how do we do that?

Well, it’s not immediately obvious, but if you dig into the GetContent() method you’ll find that it’s an extension method that retrieves an instance of IContentRepository from the ServiceLocator and then calls an overloaded extension method that takes an IContentRepository as a parameter. That repository’s TryGet method is then called, so that’s where we need to hook up our content.

So first we need an IContentRepository that returns our content when queried. The easiest way to do that is by using a mocking framework; in this example I’m using FakeItEasy.

var contentRepository = A.Fake<IContentRepository>();
IContent outContent;
A.CallTo(() => contentRepository.TryGet(contentReference,
        A<ILanguageSelector>.Ignored, out outContent))
    .Returns(true)
    .AssignsOutAndRefParameters(content);

This basically tells FakeItEasy that we need an IContentRepository instance that returns our content when queried for the given content reference. EPiServer also passes an ILanguageSelector object but we’re not interested in its value, so we ignore that parameter. The code is further complicated by the fact that TryGet is a method with an output parameter.

We’re not finished yet, but if you’re tempted to run the test right now, you’ll probably get an exception such as this:

EPiServer.ServiceLocation.ActivationException : Activation error occurred while trying to get instance of type ISecuredFragmentMarkupGeneratorFactory, key ""
  ----> StructureMap.StructureMapException : StructureMap Exception Code:  202
No Default Instance defined for PluginFamily EPiServer.Core.Html.StringParsing.ISecuredFragmentMarkupGeneratorFactory, EPiServer, Version=7.19.2.0, Culture=neutral, PublicKeyToken=8fe83dea738b45b7

This might seem very cryptic, but basically the problem is that EPiServer’s service locator is not set up, which for some obscure reason is needed when adding items to a content area. So we now have two options: either we do just that, or we take a shortcut. I like shortcuts, so let’s try that first.

A shortcut – faking a ContentArea

As I mentioned earlier, the GetContent() extension method has an overload that takes an IContentRepository as a parameter. We could use that instead of GetContent() and our test should work. So we rewrite the class under test like this:

public class ContentAreaHelper
{
    public static IEnumerable<string> GetItemNames(ContentArea contentArea)
    {
        var rep = ServiceLocator.Current.GetInstance<IContentRepository>();
        return GetItemNames(contentArea, rep);
    }

    public static IEnumerable<string> GetItemNames(ContentArea contentArea,
        IContentRepository rep)
    {
        return contentArea.Items.Select(item => item.GetContent(rep).Name);
    }
}

And then we can change our test to call the second overload, passing in our fake IContentRepository. However, if we run the test now, which feels like it should work, it still gives the ActivationException mentioned above. It occurs when adding items to the ContentArea, as EPiServer apparently needs a live datasource for this to work. This is utterly confusing of course, and it might seem that the shortcut is doomed. Not so!

Here comes the next trick. We don’t really need a “real” content area for the test, all we need is an object that looks like a ContentArea and behaves like it. If it looks like a duck, and all that. 🙂

So what we can do is fake a ContentArea object and define for ourselves what the Items collection contains:

//var contentArea = new ContentArea();
//contentArea.Items.Add(new ContentAreaItem {ContentLink = contentReference});
var contentAreaItem = new ContentAreaItem { ContentLink = contentReference };
var contentArea = A.Fake<ContentArea>();
A.CallTo(() => contentArea.Items).Returns(new List<ContentAreaItem> {contentAreaItem});

If we run the test now, we get a new error:

EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not initialized

Most people will now give up on the $@& EPiServer framework, deeming unit testing impossible, but since we’re not quitters we add this to our test:

// Avoid "EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not 
// initialized"
EPiServer.BaseLibrary.ClassFactory.Instance =
    new EPiServer.Implementation.DefaultBaseLibraryFactory(string.Empty);
EPiServer.BaseLibrary.ClassFactory.RegisterClass(
    typeof(EPiServer.BaseLibrary.IRuntimeCache),
    typeof(EPiServer.Implementation.DefaultRuntimeCache));
EPiServer.Globalization.ContentLanguage.PreferredCulture =
    new CultureInfo("en");

And finally, our test is green! This is what the final short cut version of the test looks like:

private IContentRepository _contentRepository;

[SetUp]
public void SetUp()
{
    _contentRepository = A.Fake<IContentRepository>();

    // Avoid "EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not
    // initialized"
    ClassFactory.Instance = new DefaultBaseLibraryFactory(string.Empty);
    ClassFactory.RegisterClass(typeof(IRuntimeCache),
        typeof(DefaultRuntimeCache));
    ContentLanguage.PreferredCulture = new CultureInfo("en");
}



[Test]
public void GetNames_WithContentAreaItem_ShouldReturnTheName()
{
    // Arrange
    var contentReference = new ContentReference(1);
    var content = new BasicContent { Name = "a" };

    // Create fake IContentRepository that returns our content
    IContent outContent;
    A.CallTo(() => _contentRepository.TryGet(contentReference,
            A<ILanguageSelector>.Ignored, out outContent))
        .Returns(true)
        .AssignsOutAndRefParameters(content);

    // Create fake ContentArea with a ContentAreaItem
    var contentAreaItem = new ContentAreaItem {
        ContentLink = contentReference
    };
    var contentArea = A.Fake<ContentArea>();
    A.CallTo(() => contentArea.Items)
        .Returns(new List<ContentAreaItem> { contentAreaItem });

    // Act
    var names = ContentAreaHelper.GetItemNames(contentArea, _contentRepository);

    // Assert
    Assert.That(names.Count(), Is.EqualTo(1));
    Assert.That(names.First(), Is.EqualTo("a"));
}

I moved parts of the code into a SetUp method that NUnit executes prior to each test to make the actual test method a little cleaner, but it still isn’t very pretty. Extracting some of the setup into helper methods is probably a good idea, but for brevity we’ll leave it as it is.

Ok, that was the shortcut version with a fake ContentArea, but what if we don’t want to rewrite our method to take an IContentRepository parameter? Or perhaps we’re writing tests against other methods that don’t have these handy overloads? Well, then we need to set up a basic service locator registry and initialize the EPiServer framework’s ServiceLocator prior to the test.

Running the test with a configured ServiceLocator

Ok, time to go back to our original method under test:

public class ContentAreaHelper
{
    public static IEnumerable<string> GetItemNames(ContentArea contentArea)
    {
        return contentArea.Items.Select(item => item.GetContent().Name);
    }
}

And for the test, remember that we had created an IContentRepository fake that we want EPiServer to use. This is how we create a StructureMap object factory and tell EPiServer to use it for its ServiceLocator:

// Clear the StructureMap registry and reinitialize
ObjectFactory.Initialize(expr => { });
ObjectFactory.Configure(expr => expr.For<IContentRepository>()
    .Use(contentRepository));
ObjectFactory.Configure(expr => 
    expr.For<ISecuredFragmentMarkupGeneratorFactory>()
        .Use(A.Fake<ISecuredFragmentMarkupGeneratorFactory>()));
ObjectFactory.Configure(expr => expr.For<IContentTypeRepository>()
    .Use(A.Fake<IContentTypeRepository>()));
ObjectFactory.Configure(expr => expr.For<IPublishedStateAssessor>()
    .Use(A.Fake<IPublishedStateAssessor>()));

// Set up the EPiServer service locator with our fakes
ServiceLocator.SetLocator(
    new StructureMapServiceLocator(ObjectFactory.Container));

The fakes for ISecuredFragmentMarkupGeneratorFactory, IContentTypeRepository and IPublishedStateAssessor were added because StructureMap complained that it did not know where to find instances for those interfaces when running the tests.

We still get the “ClassFactory not initialized” exception as above so we must apply the same fix again. After that, the test works.

After some refactoring, this is what the test looks like:

private IContentRepository _contentRepository;

[SetUp]
public void SetUp()
{
    _contentRepository = A.Fake<IContentRepository>();

    // Clear the StructureMap registry and reinitialize
    ObjectFactory.Initialize(expr => { });
    ObjectFactory.Configure(expr => expr.For<IContentRepository>()
        .Use(_contentRepository));
    ObjectFactory.Configure(expr => 
        expr.For<ISecuredFragmentMarkupGeneratorFactory>()
            .Use(A.Fake<ISecuredFragmentMarkupGeneratorFactory>()));
    ObjectFactory.Configure(expr => expr.For<IContentTypeRepository>()
        .Use(A.Fake<IContentTypeRepository>()));
    ObjectFactory.Configure(expr => expr.For<IPublishedStateAssessor>()
        .Use(A.Fake<IPublishedStateAssessor>()));

    // Set up the EPiServer service locator with our fakes
    ServiceLocator.SetLocator(
        new StructureMapServiceLocator(ObjectFactory.Container));

    // Avoid "EPiServer.BaseLibrary.ClassFactoryException : ClassFactory not
    // initialized"
    ClassFactory.Instance = new DefaultBaseLibraryFactory(string.Empty);
    ClassFactory.RegisterClass(typeof(IRuntimeCache),
        typeof(DefaultRuntimeCache));
    ContentLanguage.PreferredCulture = new CultureInfo("en");
}


[Test]
public void GetNames_WithContentAreaItem_ShouldReturnTheName()
{
    // Arrange
    var contentReference = new ContentReference(1);
    var content = new BasicContent { Name = "a" };

    // Associate GetContent calls to 'content'
    IContent outContent;
    A.CallTo(() => _contentRepository.TryGet(contentReference,
            A<ILanguageSelector>.Ignored, out outContent))
        .Returns(true)
        .AssignsOutAndRefParameters(content);

    var contentArea = new ContentArea();
    contentArea.Items.Add(new ContentAreaItem {ContentLink = contentReference});

    // Act
    var names = ContentAreaHelper.GetItemNames(contentArea);

    // Assert
    Assert.That(names.Count(), Is.EqualTo(1));
    Assert.That(names.First(), Is.EqualTo("a"));
} 

As before we do general initialization in the SetUp method and only do test-specific stuff in the actual test. This is so we can reuse as much setup as possible for the next test.

Final thoughts

So there you go, two ways of writing unit tests against an EPiServer ContentArea. Use the one most suitable for you. I tend to like the “faked ContentArea” version since you don’t have to get quite as messy with EPiServer internals, but sometimes it is not enough and I then use the other one. It’s useful to have both in your toolbox, and now you do as well. 🙂

There are probably other ways of accomplishing the same task, so feel free to comment below if you have opinions!

Cheers,

Emil

New blog theme

I just changed the theme of the blog since I got tired of the old one, and also because I’m working on a post where the old theme’s low width became a problem. The new theme is called Catch Responsive and I really like it so far. It works much better on mobile devices and is very configurable.

This is what the old theme looked like; I thought it would be nice to have a record of it to look back on in the future.

Old blog theme

Visual Studio slow? Try disabling hardware acceleration.

Ever since I got a new computer I’ve been frustrated by how slow Visual Studio 2013 has been. Granted, the computer is a little weak performance-wise (it’s of the Ultrabook type), but it’s been slow even at basic tasks such as moving the cursor around. Really strange.

But after a tip from an article at Code Project I decided to disable hardware acceleration:

Disable hardware acceleration in Visual Studio 2013

And what do you know, it worked! Moving around with the cursor keys in source code is now much more responsive and no longer a source of frustration. Oh joy!

/Emil

Changing a dynamic disk into a basic one

I have just removed a SATA hard disk from my old desktop computer and inserted it into an external case from Deltaco so that it could be used from a laptop. Unfortunately it did not work straight away; no disk showed up in the File Explorer.

Slightly worried, I opened the Disk Manager and there it was, shown as “Dynamic” and “Invalid”. More worried, I started googling and found a solution that involved using a hex editor to modify a byte directly on the hard drive to switch it from dynamic to basic. It worked perfectly and the drive now works as expected. I’m not sure what the change means exactly, but I’m very happy right now. It felt kind of hard core to use a hex editor to fix a problem like this; that does not happen every day. 🙂

/Emil

Setting the font of a PowerShell console to Lucida Console won’t work

Ever tried changing the font of a PowerShell console to Lucida Console, only to see the setting gone the next time you open the console? In that case, you’re not alone! I’ve been pulling my hair out over this problem many times, but today I decided to investigate it further.

There are several suggested solutions and none of them worked for me. For some people it helps to set the font size to something other than 12 points, but not for me. For others it helps to start the console as administrator, but not for me. And here’s a strange thing: in a good old Command Prompt (CMD.EXE) Lucida Console works as the default font with no problem at all; it’s only in the PowerShell console that I can’t set it as default. A few of these tricks are discussed at superuser.com.

My problem turned out to be different, as it was related to the regional settings of my Windows installation. The problem is described very briefly by a Microsoft support engineer here. Apparently Lucida Console “is not properly supported on CJK languages and other languages that are not localized by PowerShell (e.g. Arabic, Hebrew, etc.)”. It seems that Swedish belongs to the same category of languages that for some reason is not deemed compatible with Lucida Console. The strange thing is that it works perfectly when setting it on the console instance…

Anyway, to fix the problem all I had to do was to change my system locale to “English (United States)”:

Setting system locale to a language that is "supported" for Lucida Console solves the problem...

Voila, my PowerShell prompt is pretty every time I open it, instead of using the ugly “Raster Fonts” it fell back to before.

The problem description that Lucida Console is not compatible with all languages makes very little sense to me, but at least my problem is solved.

/Emil

Devart LINQ Insight

I was recently approached by a representative from Devart who asked if I wanted to have a look at some of their products, so I decided to try out the LINQ Insight add-in for Visual Studio.

LINQ Insight has two main functions:

  • Profiler for LINQ expressions
  • Design-time LINQ query analyzer and editor

If you work much with LINQ queries you probably know that Visual Studio is somewhat lacking in functionality around LINQ queries by default, so the features that LINQ Insight offers should be pretty welcome for any database developer out there on the .Net platform (which should be pretty many of us these days). Let’s discuss the two main features of LINQ Insight in some more detail.

Profiling LINQ queries

If you’re using Entity Framework (LINQ Insight apparently also supports NHibernate, RavenDB, and a few others but I have not tested any of those) and LINQ, it can be a little difficult to know exactly what database activity occurs during the execution of the application. After all, the main objective of OR mappers is to abstract away the details of the database and instead let the developer focus on the domain model. But when you’re debugging errors or analyzing performance it’s crucial to analyze the database activity as well, and that’s what LINQ Insight’s profiling function helps with.

There are other tools for this of course, such as IntelliTrace in Visual Studio Ultimate, but since that is only included in Ultimate, not many developers have access to it. The LINQ Insight profiler is very easy to use and gives access to a lot of information.
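To give a feel for what ends up in the profiler, here is a minimal sketch of the kind of Entity Framework query it captures. Context matches the application context mentioned below, but the Orders set and its properties are made-up names used purely for illustration:

using (var db = new Context())
{
    // A typical LINQ-to-Entities query; 'Orders' and 'CreatedUtc' are assumed names
    var recentOrders = db.Orders
        .Where(o => o.CreatedUtc >= DateTime.UtcNow.AddDays(-7))
        .OrderByDescending(o => o.CreatedUtc)
        .Take(10)
        .ToList(); // The query executes here and shows up as a SQL statement in the profiler
}

The profiler then shows both the LINQ expression and the SQL that was generated for it.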

To enable profiling, follow these steps:

  1. Make sure that the IIS Express process is not started. Stop it if it is. (This assumes we’re using IIS Express of course. I’m not quite sure how to work with the full IIS server in conjunction with LINQ Insight.)
  2. Open the profiling window by selecting View/Other Windows/LINQ Profiler, or pressing Ctrl+W, F
  3. Press the “Start profiler session” button in the upper left corner of the window (it looks like a small “Play” icon)
  4. Start debugging your application, for example by pressing F5.
  5. Debugging information such as this should now start to fill the profiler window:
    The profiler displays all LINQ activity in the application.

    As you can see, in this case we have several different contexts that have executed LINQ queries. For example, ApplicationContext is used by ASP.Net Identity and HistoryContext is used by Code First Database Migrations. Context is our application context.

  6. We can now drill down into the queries and see what triggered them and what SQL statements were executed.
    Drilling down into profiler data.

    We can see the LINQ query that was executed, the SQL statements, duration, call stack, etc. Very useful stuff indeed.

Query debugger and editor

The other feature LINQ Insight brings into Visual Studio is help with writing LINQ queries and debugging them. To debug a query, follow these steps:

  1. To open a query in the query editor, just right-click on it in the standard C# code editor window and select the “Run LINQ Query” option:

    To debug or edit a LINQ query, use the right-click menu.

  2. If the query contains one or more parameters, a popup will be shown where values for the parameters can be given.
  3. Next, the query will be executed, and the results will be displayed:

    Query results are displayed in the Results tab.

  4. This is of course useful in itself, and even better is that the generated SQL statements are displayed in the SQL tab and the original LINQ query in the LINQ tab, where it can be edited and re-executed, after which the SQL and Results tabs are updated. Really, really useful!

If an error is displayed in the Results tab, then the most probable reason is that the database could not be found in the project’s config file, or that it could not be interpreted correctly. The latter is the case when using the LocalDB provider with the "|DataDirectory|" placeholder, which can only be evaluated at runtime in an ASP.Net project. To make LINQ Insight find a database MDF file in App_Data in a web project, you can follow these steps:

  1. Make sure that your DbContext sub-class (for Entity Framework, that is) has an overloaded constructor that takes a single string parameter, namely the connection string to use:
    public Context(string connString) : base(connString) {}
    

    This is required if LINQ Insight cannot deduce the connection string from the project’s config file. This is usually a problem in my projects since I like to separate domain logic into a separate project (normally a class library) from my “host application”.

  2. Double-click the MDF file in the App_Data folder to make sure it’s present in the Server Explorer panel in Visual Studio.
  3. Select the database in the Server Explorer, right-click it and select Properties. Copy its Connection String property.
  4. In the LINQ Interactive window, click the Edit Connection String button, which is only enabled if the DbContext class has a constructor overload with a connection string parameter, which we ensured in step 1.
  5. Paste the connection string to the Data/ConnectionString field in the panel:
    Use the connection string dialog to override the "guessed" connection string.

    Click OK to close the dialog.

  6. Re-run the query with the Run LINQ Query button in the LINQ Interactive window, and it should now work correctly. If it doesn’t, try the Run LINQ Query command in the C# code editor again, since it re-initializes the query.

The ability to freely set the connection string should make it possible to work against any database, be it a local MDF file, a full SQL Server database or a Windows Azure database. This could be used as a simple way to try out new or modified LINQ queries against a staging or production database, right from the development environment. Could be very useful in some situations for debugging nasty errors and such.

Summary

All in all, I think LINQ Insight is a very useful tool and I recommend you try it out if you find yourself writing LINQ queries from time to time.

If you have tried LINQ Insight before and found it to be slightly unstable, I should mention that Devart have recently fixed a few errors that really make the tool much more robust and useful. If unsure, just download the trial version and test it out.

Happy Linqing!

Emil

Grouping by feature in ASP.Net MVC

Motivation

When I initially switched from using ASP.Net Web Forms to MVC as my standard technology for building web sites on the Microsoft stack I really liked the clean separation of the models, views and controllers. No more messy code-behind files with scattered business logic all over the place!

After a while though, I started to get kind of frustrated when editing the different parts of the code. I often find that a change in one type of file (e.g. the model) tends to result in corresponding changes in other related files (the view or the controller), and for any reasonably large project you’ll start to spend considerable time in the Solution Explorer trying to find the files that are affected by your modifications. It turns out that the default project structure of ASP.Net MVC actually does a pretty poor job of limiting change propagation between the files used by a certain feature. It’s still much better than Web Forms, but it’s by no means perfect.

The problem is that the default project structure separates the files by file type first and then by controller. This image shows the default project structure for ASP.Net MVC:

Default ASP.Net MVC project structure

The interesting folders are Controllers, Models (which really should be called ViewModels) and Views. A given feature, for example the Index action on the Account controller, is implemented using files in all these folders and potentially also one or more files in the Scripts folder, if Javascript is used (which is very likely these days). Even if you need to make only a simple change, such as adding a property to the model and showing it in a view, you’ll probably need to edit files in several of these directories, which is fiddly since they are completely separated in the folder structure.

Wouldn’t it be much better to group files by feature instead? Yes, of course it would, when you think about it. Fortunately it’s quite easy to reconfigure ASP.Net MVC a bit to accomplish this. This is the goal:

Grouping by feature

Instead of the Controllers, Models and Views folders, we now have a Features folder. Each controller now has a separate sub-folder and each action method has a sub-folder of its own:

  • Features/
    • Controller1/
      • Action 1/
      • Action 2/
    • Controller2/
      • Action 1/
      • Action 2/

Each action folder contains the files needed for its implementation, such as the view, view model and specific Javascript files. The controller is stored one level up, in the controller folder.

What we accomplish by doing this is the following:

  1. All the files that are likely to be affected by a modification are stored together and are therefore much easier to find.
  2. It’s also easier to get an overview of the implementation of a feature and this in turn makes it easier to understand and work with the code base.
  3. It’s much easier to delete a feature and all related files. All that has to be done is to delete the action folder for the feature, and the corresponding controller action.

Implementation

After that rather long motivation, I’ll now show how to implement the group by feature structure. Luckily, it’s not very hard once you know the trick. This is what has to be done:

  1. Create a new folder called Features, and sub-folders for the controllers.
  2. Move the controller classes into their respective sub-folders. They don’t need to be changed in any way for this to work (although it might be nice to adjust their namespaces to reflect their new locations, as in the sketch after this list). It turns out that MVC does not assume that the controllers are located in any specific folder; they can be placed anywhere we like.
  3. Create sub-folders for each controller action and move their view files there. Rename the view files to View.cshtml as there’s no need to reflect the action name in the file name anymore.
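As a purely hypothetical example (MyApp.Web is a made-up root namespace), this is what a moved Account controller could look like; only the namespace is adjusted to match the new folder location:

using System.Web.Mvc;

namespace MyApp.Web.Features.Account
{
    public class AccountController : Controller
    {
        public ActionResult Index()
        {
            // Nothing else changes; the view will be resolved from
            // Features/Account/Index/View.cshtml by the custom view engine shown further down
            return View();
        }
    }
}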

If you try to run your application at this point, you’ll get an error saying that the view cannot be found:

The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Home/Index.aspx
~/Views/Home/Index.ascx
~/Views/Shared/Index.aspx
~/Views/Shared/Index.ascx
~/Views/Home/Index.cshtml
~/Views/Home/Index.vbhtml
~/Views/Shared/Index.cshtml
~/Views/Shared/Index.vbhtml

This is exactly what we’d expect, as we just moved the files.

What we need to do is to tell MVC to look for the view files in their new locations and this can be accomplished by creating a custom view engine. That sounds much harder than it is, since we can simply inherit the standard Razor view engine and override its folder setup:

using System.Web.Mvc;

namespace RetailHouse.ePos.Web.Utils
{
    /// <summary>
    /// Modified from the suggestion at
    /// http://timgthomas.com/2013/10/feature-folders-in-asp-net-mvc/
    /// </summary>
    public class FeatureViewLocationRazorViewEngine : RazorViewEngine
    {
        public FeatureViewLocationRazorViewEngine()
        {
            var featureFolderViewLocationFormats = new[]
            {
                // First: Look in the feature folder
                "~/Features/{1}/{0}/View.cshtml",
                "~/Features/{1}/Shared/{0}.cshtml",
                "~/Features/Shared/{0}.cshtml",
                // If needed: standard  locations
                "~/Views/{1}/{0}.cshtml",
                "~/Views/Shared/{0}.cshtml"
            };

            ViewLocationFormats = featureFolderViewLocationFormats;
            MasterLocationFormats = featureFolderViewLocationFormats;
            PartialViewLocationFormats = featureFolderViewLocationFormats;
        }
    }
}

The above creates a view engine that searches the following folders in order (assuming the Url is /Foo/Index):

  1. ~/Features/Foo/Index/View.cshtml
  2. ~/Features/Foo/Shared/Index.cshtml
  3. ~/Features/Shared/Index.cshtml
  4. ~/Views/Foo/Index.cshtml
  5. ~/Views/Shared/Index.cshtml

The last two are just used for backward compatibility so that it isn’t necessary to refactor all controllers at once.

To use the new view engine, do the following on application startup:

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());

The view will now load and the application will work as before, but with better structure.

The last step is to move view models and custom Javascript files into the action folders as well (note that the latter requires adjusting the paths in the HTML code that includes the Javascript files to reflect the new locations).

Once everything is up and running, the project becomes much easier to work with and when you get used to working like this you really start to wonder why Microsoft is not doing it by default in their Visual Studio templates. Maybe in a future version?

/Emil

Updates
2014-11-21 Updated the images to clarify the concept

Overriding the placement of validation error messages in knockout-validation

By default, knockout-validation places error messages just after the input element containing the error. This may not be what you want. You can create UI elements of your own and bind them to validation error messages, but that means writing a lot of boilerplate markup and it messes up the HTML. An alternative is to override the knockout-validation function that creates the error messages on the page. For example:

ko.validation.insertValidationMessage = function(element) {
    var span = document.createElement('SPAN');
    span.className = "myErrorClass";

    if ($(element).hasClass("error-before"))
        element.parentNode.insertBefore(span, element);
    else
        element.parentNode.insertBefore(span, element.nextSibling);

    return span;
};  

In this example I check whether the input element contains the custom class “error-before”, and if so, I move the error message to before rather than after the input field.

Another example, suitable for Bootstrap when using input groups:

ko.validation.insertValidationMessage = function(element) {
    var span = document.createElement('SPAN');
    span.className = "myErrorClass";

    var inputGroups = $(element).closest(".input-group");
    if (inputGroups.length > 0) {
        // We're in an input-group so we place the message after
        // the group rather than inside it in order to not break the design
        $(span).insertAfter(inputGroups);
    } else {
        // The default in knockout-validation
        element.parentNode.insertBefore(span, element.nextSibling);
    }
    return span;
};

One disadvantage of this is that we have no access to the knockout-validation options such as “errorMessageClass”, so we have to hard code that value here (“myErrorClass” in the examples above).

/Emil