NDepend v6

I have written about NDepend a few times before and now that version 6 has been released this summer it’s time to mention it again, as I was given a licence for testing it by the kind NDepend guys :-)

Trend monitoring

The latest version I had used prior to version 6 was version 4, so my favorite new feature is the trend monitoring functionality (which was actually introduced in version 5). Such a great idea to integrate it into the client tool! Normally you would define different metrics and let the build server store the history, but being able to work with this right inside Visual Studio makes it much easier to experiment with metrics without having to configure anything on the build server.

Here is a screen shot of what it may look like (the project name is blurred so as not to reveal the customer):

Dashboard with metrics and deltas compared to a given baseline plus two trend charts

  • At the top of the dashboard there is information about the NDepend project that has been analyzed and the baseline analysis used for comparison.
  • Below this there are several different groups with metrics and deltas, e.g. # Lines of Code (apparently the code base has grown by 1.12%, or 255 lines, compared to the baseline).
  • Next to the numeric metrics are the trend charts, in my case just two of them, showing the number of lines of code and critical rule violations, respectively. Many more are available and it’s easy to create your own charts with custom metrics (see the CQLinq sketch after this list). BTW, “critical” refers to the rules deemed critical in this project. These rules will differ from project to project.
    • In the image we can see that the number of lines of code grows steadily, which is to be expected in a project under active development.
    • The number of critical rule violations also grows steadily, which probably indicates an insufficient focus on code quality.
    • There is a sudden decrease in rule violations at the beginning of July, when one of the developers decided to refactor some “smelly” code.
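
Speaking of custom metrics: the trend charts are backed by CQLinq queries. As a hedged sketch (the exact TrendMetric header syntax may differ between NDepend versions), a custom metric counting public types might look something like this:

// <TrendMetric Name="# Public Types" Unit="types" />
from t in Application.Types
where t.IsPublic
select t

If I understand the mechanism correctly, the charted value at each analysis is the number of matches the query returns.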

This is just a simple example but I really like how easy it now is to get a feeling for the code trends of a project with just a glance at the dashboard every now and then.

The Dependency Matrix

The trend monitoring features may be very useful, but the trademark feature of NDepend is probably the dependency matrix. Most people who have started up NDepend have probably seen the following rather bewildering matrix:

The dependency matrix can be used to discover all sorts of structural properties of the code base

I must confess that I haven’t really spent too much time with this view before since I’ve had some problems grasping it fully, but this time around I decided it was time to dive into it a little more. I think it might be appropriate to write a few words on my findings, so here we go.

Since it’s a little difficult to see what’s going on with a non-trivial code base, I started with something trivial: code in a main namespace referencing code in NamespaceA, which in turn references code in NamespaceB. If the view does not show my namespaces (which is what I normally want), the first thing to do when opening the matrix is to select the most suitable row/column filter in the dropdown:

The dependency matrix filtering dropdown

I tend to use View Application Namespaces Only most of the time since this filters out all third-party namespaces and also expands all my application namespaces (the top level of the row/column headers consists of assemblies, which is normally not what I want).

Also note that the calculation of the number shown inside a dependency cell can be changed independently of the filtering. In my case it’s 0 in all cells, which seems strange since there are in fact dependencies, but the reason is that it shows the number of members used in the target namespace, and in this case I only refer to types. Changing this is done in another dropdown in the matrix window.
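
To make this concrete, here is a minimal sketch of the kind of trivial code base described above (the class names are illustrative); note that only types are referenced, never any members, which is why the member count is 0:

namespace Main
{
    public class Program
    {
        // Main references a *type* in NamespaceA but uses none of its members
        private NamespaceA.ClassA _a;
    }
}

namespace NamespaceA
{
    public class ClassA
    {
        // NamespaceA in turn references a type in NamespaceB
        private NamespaceB.ClassB _b;
    }
}

namespace NamespaceB
{
    public class ClassB { }
}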

Another thing I learned recently is that it may be very useful to switch back and forth between the Dependency Matrix and the Dependency Graph, and in the image below I show both windows next to each other. In this simple case they show the same thing, but when the code base grows the dependencies become too numerous to be shown visually in a useful way. Luckily there are options in the matrix to show parts of it in the graph, and vice versa. For example, right-clicking a namespace row heading opens a menu with a View Internal Dependencies On Graph option so that only a subset of the code base dependencies is shown. Very useful indeed.

Here’s what it may look like:

A simple sample project with just three namespaces

Also note that hovering over a dependency cell displays useful popups and changes the cursor into an arrow indicating the direction of the dependency, which is also reflected by the color of the cell. (By the way, look out for black cells, they indicate circular references!)

Another way to invoke the graph is to right click a dependency cell in the matrix:

Context menu for a dependency

The top option, Build a Graph made of Code Elements involved in this dependency, does just what it says. Also very useful.

By using the expand/collapse functionality of the dependency matrix together with the option to display dependencies in the graph view, it becomes easier to pinpoint structural problems in the code base. It takes a bit of practice because of the sheer amount of information in the matrix, but I’m growing to like it more and more. I would suggest that anyone interested spend some time on this view and its many options. I found this official description to be useful for a newcomer: Dependency Structure Matrix

Wrapping it up

NDepend 6 has many more features than what I have described here and it’s such a useful tool that I would suggest anyone interested in code quality download a trial and play around with it. Just be prepared to invest some time in learning the tool to get the most out of it.

A very good way to get started is the Pluralsight course by Erik Dietrich. It describes NDepend version 5 but all of it applies to version 6 as well, and it covers the basics of the tool very well. Well worth a look if you’re a Pluralsight subscriber.

Devart LINQ Insight

I was recently approached by a representative from Devart who asked if I wanted to have a look at some of their products, so I decided to try out the LINQ Insight add-in for Visual Studio.

LINQ Insight has two main functions:

  • Profiler for LINQ expressions
  • Design-time LINQ query analyzer and editor

If you work much with LINQ queries you probably know that Visual Studio is somewhat lacking in functionality around LINQ queries by default, so the features that LINQ Insight offers should be pretty welcome to any database developer on the .Net platform (which should be pretty many of us these days). Let’s discuss the two main features of LINQ Insight in some more detail.

Profiling LINQ queries

If you’re using Entity Framework (LINQ Insight apparently also supports NHibernate, RavenDB, and a few others, but I have not tested any of those) and LINQ, it can be a little difficult to know exactly what database activity occurs during the execution of the application. After all, the main objective of OR mappers is to abstract away the details of the database and instead let the developer focus on the domain model. But when you’re debugging errors or analyzing performance it’s crucial to analyze the database activity as well, and that’s what LINQ Insight’s profiling function helps with.

There are other tools for this of course, such as IntelliTrace, but since that is only included in Visual Studio Ultimate, not many developers have access to it. The LINQ Insight profiler is very easy to use and gives access to a lot of information.
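
To make the discussion concrete, this is the kind of Entity Framework code whose database activity the profiler surfaces (Context matches the application context mentioned below, but the Orders entity set and the query itself are hypothetical):

// A typical EF query; the profiler shows the LINQ expression together with
// the SELECT statement EF generates for it, its duration and the call stack.
using (var context = new Context())
{
    var weekAgo = DateTime.Today.AddDays(-7);

    var recentOrders = context.Orders
        .Where(o => o.CreatedAt >= weekAgo)
        .OrderByDescending(o => o.CreatedAt)
        .ToList();
}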

To enable profiling, follow these steps:

  1. Make sure that the IIS Express process is not running; stop it if it is. (This assumes we’re using IIS Express, of course. I’m not quite sure how to work with the full IIS server in conjunction with LINQ Insight.)
  2. Open the profiling window by selecting View/Other Windows/LINQ Profiler, or pressing Ctrl+W, F
  3. Press the “Start profiler session” button in the upper left corner of the window (it looks like a small “Play” icon)
  4. Start debugging your application, for example by pressing F5.
  5. Debugging information such as this should now start to fill the profiler window:
    The profiler displays all LINQ activity in the application.

    As you can see, in this case we have several different contexts that have executed LINQ queries. For example, ApplicationContext is used by ASP.Net Identity and HistoryContext is used by Code First Database Migrations. Context is our application context.

  6. We can now drill down into the queries and see what triggered them and what SQL statements were executed.
    Drilling down into profiler data.

    We can see the LINQ query that was executed, the SQL statements, duration, call stack, etc. Very useful stuff indeed.

Query debugger and editor

The other feature LINQ Insight brings into Visual Studio is to help writing LINQ queries and debug them. To debug a query, follow these steps:

  1. To open a query in the query editor, just right-click on it in the standard C# code editor window and select the “Run LINQ Query” option:

    To debug or edit a LINQ query, use the right-click menu.

  2. If the query contains one or more parameters, a popup will be shown where values for the parameters can be given.
  3. Next, the query will be executed, and the results will be displayed:
    Query results are displayed in the Results tab.

  4. This is of course useful in itself, and even better is that the generated SQL statements are displayed in the SQL tab and the original LINQ query in the LINQ tab, where it can be edited and re-executed, after which the SQL and Results tabs are updated. Really, really useful!

If an error is displayed in the Results tab, then the most probable reason is that the database could not be found in the project’s config file, or that the connection string could not be interpreted correctly. The latter is the case when using the LocalDB provider with the "|DataDirectory|" placeholder, which can only be evaluated at runtime in an ASP.Net project. To make LINQ Insight find a database MDF file in App_Data in a web project, you can follow these steps:

  1. Make sure that your DbContext sub-class (for Entity Framework, that is) has an overloaded constructor that takes a single string parameter, namely the connection string to use:
    // Constructor overload that lets LINQ Insight supply an explicit connection string
    public Context(string connString) : base(connString) { }

    This is required if LINQ Insight cannot deduce the connection string from the project’s config file. This is usually a problem in my projects since I like to separate the domain logic from my “host application” into a separate project (normally a class library).

  2. Double-click the MDF file in the App_Data folder to make sure it’s present in the Server Explorer panel in Visual Studio.
  3. Select the database in the Server Explorer and right-click it and select Properties. Copy its Connection String property.
  4. In the LINQ Interactive window, click the Edit Connection String button, which is only enabled if the DbContext class has a constructor overload with a connection string parameter, which we ensured in step 1.
  5. Paste the connection string to the Data/ConnectionString field in the panel:
    Use the connection string dialog to override the "guessed" connection string.

    Click OK to close the dialog.

  6. Re-run the query with the Run LINQ Query button in the LINQ Interactive window, and it should now work correctly. If it doesn’t, try the Run LINQ Query command in the C# code editor again, since that re-initializes the query.

The ability to freely set the connection string should make it possible to work against any database, be it a local MDF file, a full SQL Server database or a Windows Azure database. This could be used as a simple way to try out new or modified LINQ queries against a staging or production database, right from the development environment. Could be very useful in some situations for debugging nasty errors and such.

Summary

All in all, I think LINQ Insight is a very useful tool and I recommend that you try it out if you find yourself writing LINQ queries from time to time.

If you have tried LINQ Insight before and found it to be slightly unstable, I should mention that Devart has recently fixed a few errors that make the tool much more robust and useful. If unsure, just download the trial version and test it out.

Happy Linqing!

Emil

Grouping by feature in ASP.Net MVC

Motivation

When I initially switched from using ASP.Net Web Forms to MVC as my standard technology for building web sites on the Microsoft stack I really liked the clean separation of the models, views and controllers. No more messy code-behind files with scattered business logic all over the place!

After a while though, I started to get kind of frustrated when editing the different parts of the code. I often find that a change in one type of file (e.g. the model) tends to result in corresponding changes in other related files (the view or the controller), and for any reasonably large project you’ll start to spend considerable time in the Solution Explorer trying to find the files that are affected by your modifications. It turns out that the default project structure of ASP.Net MVC actually does a pretty poor job of limiting change propagation between the files used by a given feature. It’s still much better than Web Forms, but it’s by no means perfect.

The problem is that the default project structure separates the files by file type first and then by controller. This image shows the default project structure for ASP.Net MVC:

Default ASP.Net MVC project structure

The interesting folders are Controllers, Models (which really should be called ViewModels) and Views. A given feature, for example the Index action on the Account controller, is implemented using files in all these folders and potentially also one or more files in the Scripts folder, if Javascript is used (which is very likely these days). Even if you need to make only a simple change, such as adding a property to the model and show it in a view, you’ll probably need to edit files in several of these directories which is fiddly since they are completely separated in the folder structure.

Wouldn’t it be much better to group files by feature instead? Yes, of course it would, when you think about it. Fortunately it’s quite easy to reconfigure ASP.Net MVC a bit to accomplish this. This is the goal:

Grouping by feature

Instead of the Controllers, Models and Views folders, we now have a Features folder. Each controller now has a separate sub-folder and each action method has a sub-folder of its own:

  • Features/
    • Controller1/
      • Action 1/
      • Action 2/
    • Controller2/
      • Action 1/
      • Action 2/

Each action folder contains the files needed for its implementation, such as the view, view model and feature-specific Javascript files. The controller itself is stored one level up, in the controller folder.

What we accomplish by doing this is the following:

  1. All the files that are likely to be affected by a modification are stored together and are therefore much easier to find.
  2. It’s also easier to get an overview of the implementation of a feature and this in turn makes it easier to understand and work with the code base.
  3. It’s much easier to delete a feature and all related files. All that has to be done is to delete the action folder for the feature, and the corresponding controller action.

Implementation

After that rather long motivation, I’ll now show how to implement the group by feature structure. Luckily, it’s not very hard once you know the trick. This is what has to be done:

  1. Create a new folder called Features, and sub-folders for the controllers.
  2. Move the controller classes into their respective sub-folders. They don’t need to be changed in any way for this to work (although it might be nice to adjust their namespaces to reflect their new locations). It turns out that MVC does not assume that the controllers are located in any specific folder; they can be placed anywhere we like.
  3. Create sub-folders for each controller action and move their view files there. Rename the view files to View.cshtml as there’s no need to reflect the action name in the file name anymore.

If you try to run your application at this point, you’ll get an error saying that the view cannot be found:

The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Home/Index.aspx
~/Views/Home/Index.ascx
~/Views/Shared/Index.aspx
~/Views/Shared/Index.ascx
~/Views/Home/Index.cshtml
~/Views/Home/Index.vbhtml
~/Views/Shared/Index.cshtml
~/Views/Shared/Index.vbhtml

This is exactly what we’d expect, as we just moved the files.

What we need to do is to tell MVC to look for the view files in their new locations and this can be accomplished by creating a custom view engine. That sounds much harder than it is, since we can simply inherit the standard Razor view engine and override its folder setup:

using System.Web.Mvc;

namespace RetailHouse.ePos.Web.Utils
{
    /// <summary>
    /// Modified from the suggestion at
    /// http://timgthomas.com/2013/10/feature-folders-in-asp-net-mvc/
    /// </summary>
    public class FeatureViewLocationRazorViewEngine : RazorViewEngine
    {
        public FeatureViewLocationRazorViewEngine()
        {
            var featureFolderViewLocationFormats = new[]
            {
                // First: Look in the feature folder
                "~/Features/{1}/{0}/View.cshtml",
                "~/Features/{1}/Shared/{0}.cshtml",
                "~/Features/Shared/{0}.cshtml",
                // If needed: standard locations
                "~/Views/{1}/{0}.cshtml",
                "~/Views/Shared/{0}.cshtml"
            };

            ViewLocationFormats = featureFolderViewLocationFormats;
            MasterLocationFormats = featureFolderViewLocationFormats;
            PartialViewLocationFormats = featureFolderViewLocationFormats;
        }
    }
}

The above creates a view engine that searches the following folders in order (assuming the Url is /Foo/Index):

  1. ~/Features/Foo/Index/View.cshtml
  2. ~/Features/Foo/Shared/Index.cshtml
  3. ~/Features/Shared/Index.cshtml
  4. ~/Views/Foo/Index.cshtml
  5. ~/Views/Shared/Index.cshtml

The last two are just used for backward compatibility so that it isn’t necessary to refactor all controllers at once.

To use the new view engine, do the following on application startup (e.g. in Application_Start in Global.asax):

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());

The view will now load and the application will work as before, but with better structure.

The last step is to move view models and custom Javascript files into the action folders as well (note that the latter requires adjusting the paths in the HTML code that includes the Javascript files to reflect the new locations).

Once everything is up and running, the project becomes much easier to work with, and when you get used to working like this you really start to wonder why Microsoft doesn’t do this by default in their Visual Studio templates. Maybe in a future version?

/Emil

Updates
2014-11-21 Updated the images to clarify the concept

Connection strings used for logging should not enlist in transactions

I was just reminded of an old truth when it comes to logging errors in a database: Do not enlist the error logging operation in the current transaction, if there is one! The default is to do just that, so be sure to add Enlist=false in your connection string used for logging:

Example:

  <connectionStrings>
    <add name="myDb" connectionString="..."  providerName="System.Data.SqlClient"/>
    <add name="logging" connectionString="...;Enlist=false"  providerName="System.Data.SqlClient"/>
  </connectionStrings>

If you don’t, then the error logging will be rolled back with the transaction in case of an error. Not so good…

Also note that it’s a good idea to use separate connection strings for the normal database operations (which should be done inside transactions) and the logging operations (which shouldn’t).
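
To illustrate the pattern, here is a hedged sketch (DoWork and LogError are hypothetical helpers; the connection string names come from the config above):

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

using (var scope = new TransactionScope())
using (var db = new SqlConnection(ConfigurationManager.ConnectionStrings["myDb"].ConnectionString))
using (var log = new SqlConnection(ConfigurationManager.ConnectionStrings["logging"].ConnectionString))
{
    db.Open();   // enlists in the ambient transaction
    log.Open();  // Enlist=false keeps this connection outside the transaction

    try
    {
        DoWork(db);        // normal database operations, transactional
        scope.Complete();
    }
    catch (Exception ex)
    {
        LogError(log, ex); // committed immediately, survives the rollback
        throw;
    }
}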

Now that I have written this down I will hopefully remember this the next time I do error logging in a database…

/Emil

Fixing CLR trigger error after installing .Net Framework updates

Today I discovered an error in one of our CLR triggers in a SQL Server database:

System.Data.SqlClient.SqlException: A .NET Framework error occurred during execution of user-defined routine or aggregate "PersonalChangedTrigger": 
System.IO.FileLoadException: Could not load file or assembly 'System.Messaging, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Assembly in host store has a different signature than assembly in GAC. (Exception from HRESULT: 0x80131050) See Microsoft Knowledge Base article 949080 for more information.

This was kind of unexpected since we didn’t modify the trigger. It turns out that it was probably a Windows Update patch that modified the System.Messaging assembly so we had to “re-import” it into the database. This is how to do that, in our case:

ALTER ASSEMBLY Messaging
FROM 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll'
WITH PERMISSION_SET = UNSAFE

/Emil

Enumerating the current user’s NT groups

Here’s how to enumerate the currently logged in user’s NT groups (local and domain groups):

using System.Linq;
using System.Security.Principal;

// Enumerate current user's NT groups
var i = WindowsIdentity.GetCurrent();

// Translate groups from type System.Security.Principal.SecurityIdentifier
// into System.Security.Principal.NTAccount.
var groups = i.Groups.Select(g => g.Translate(typeof(NTAccount)) as NTAccount);
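
A quick way to inspect the result is to print the account names (a hypothetical console snippet; the null filter guards against entries where the cast returned null):

foreach (var group in groups.Where(g => g != null))
    Console.WriteLine(group.Value); // e.g. "BUILTIN\Users" or "MYDOMAIN\Developers"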

/Emil

Queued WCF services

Wcf contains a great feature called queued services. This is basically a Wcf binding, netMsmqBinding, that uses MSMQ queues for message transport. It’s very easy to use (once set up) and allows for asynchronous and decoupled communication. It’s useful in several scenarios:

  • Handling load peaks
  • Robustness: if the service is down, messages are queued up in the MSMQ queue and are consumed by the service again when it becomes available.

Requirements

The following is required for queued services to work:

  • MSMQ must be installed
  • WAS must be enabled and running
  • All service methods must be one-way

How it works

The client adds a service reference using a normal HTTP metadata endpoint and gets a reference to endpoints that are exposed with the netMsmqBinding. The client endpoint’s address is the MSMQ queue to post to, in the following Uri format:

<endpoint address="net.msmq://localhost/private/myservice/myservice.svc" binding="netMsmqBinding"  ... />

The client calls the client proxy just like for any other Wcf service:

var client = new MyService.MyServiceClient();
client.MyMethod(...);

The service is hosted in IIS and WAS (Windows Process Activation Service) is used for listening for messages in the queue. When a message is detected, WAS starts the IIS application containing the service that should receive the message. The service then consumes the message using the netMsmqBinding and processes it.

As already mentioned, the exposed service methods must be one-way:

[ServiceContract]
public interface IMyService
{
    [OperationContract(IsOneWay = true)]
    void MyMethod(...);
}

Throttling

If there are many messages in the queue then our service will process many of them in parallel. The default number of concurrent calls is 16 (multiplied by the processor count from WCF 4 onwards), and this may be too much for our service if the reason we’re doing queued services is to handle peak loads. Luckily this is really easy to configure:

<behaviors>
  <serviceBehaviors>
    <behavior name="MsmqWithMex">
      <!-- Expose metadata via HTTP GET. -->
      <serviceMetadata httpGetEnabled="true"/>
  
      <!-- At most 3 concurrent requests are allowed -->
      <serviceThrottling maxConcurrentCalls="3" />
    </behavior>
  </serviceBehaviors>
</behaviors>

Note: Throttling is available in all Wcf bindings, not just the netMsmqBinding.
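
For self-hosted services the same limit can be set in code via the ServiceThrottlingBehavior (a sketch; MyService is a placeholder for your service implementation):

using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(MyService));

// Reuse the throttling behavior if it's already configured, otherwise add it
var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 3; // same limit as in the config above

host.Open();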

Setting it all up

So far we have only discussed the implementation of the service, but we also have to do some configuration in the operating system and in IIS:

  • Install Messaging – if this is not already done, then install it
  • Create queue – Wcf relies on the necessary queues existing. A good pattern is to check for their existence on application startup and create them if they’re missing (see the sketch after this list).
  • Install/configure WAS for MSMQ in IIS – This is probably the hardest step, so I’ll describe it in detail:
    1. Enable the net.msmq protocol for the site containing the service:
      %windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" -+bindings.[protocol='net.msmq',bindingInformation='localhost']
      

      The result should be that “net.msmq” is included in the list of protocols:

    2. We can also enable the same protocol for the individual application containing the Wcf service:
      %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/DeltagarloggService" /enabledProtocols:http,net.msmq
      

      Make sure that the protocol is added:

  • Make sure that Net Msmq Listener Adapter service is running:
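
As mentioned in the Create queue step above, creating a missing queue from code is straightforward with System.Messaging (a sketch; the queue path is an example and must match the queue used in the endpoint address):

using System.Messaging;

const string queuePath = @".\private$\myservice/myservice.svc";

if (!MessageQueue.Exists(queuePath))
{
    // netMsmqBinding requires a transactional queue by default (exactly-once delivery)
    MessageQueue.Create(queuePath, transactional: true);
}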

Troubleshooting
If something goes wrong, here are a few tips:

  • Are the Message Queueing and Net Msmq Listener Adapter services running?
  • Is the net.msmq protocol enabled on the IIS site and application?
  • Messages still in queue? If empty: check dead-letter queues and outgoing queues.
  • Enable msmq journaling
  • Check event log
  • Restart app pool for the service
  • Restart web site
  • If queue is transactional, check the DTC
  • Browse to the service’s svc file. If that consumes the messages it’s a WAS problem.
  • If the service was hosted in Windows Server AppFabric, then see if it has any logged errors.

For more tips, Tom Hollander has written great blog posts about queued services here.

Happy queueing!

/Emil

Viewing assembly binding logs using fuslogvw.exe

Sometimes it’s very useful to analyze how an application binds to referenced assemblies, but this process is fairly hidden from us. However, Microsoft has given us a way to look into it via the fuslogvw tool. This tool is not overly documented, so this post describes how to install it on a computer that has neither Visual Studio nor the Windows SDK installed, such as a server.

Follow these steps:

Copy the following files from a computer with Visual Studio or the Windows SDK installed:

  • "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\FUSLOGVW.exe"
  • "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\1033\flogvwrc.dll"

Put them into a folder on the target computer.

Create a logging folder, e.g. c:\fuslog

Start FUSLOGVW.exe. Update the following settings:

  • Log all binds to disk
  • Enable Custom log path
  • Custom log path = c:\fuslog\

Click Refresh in FUSLOGVW.exe and behold! A list of assembly binding events is displayed:

The individual events contain useful details:

/Emil

2014-11-10 Update: Removed the mention of editing the registry. I added that a little too quickly, fuslogvw.exe is supposed to do that stuff for us…

Retrieving the message count for MSMQ queues

Sometimes it can be useful to retrieve the number of messages in an MSMQ queue, for example for monitoring. However, it’s not immediately apparent how to do it if you google it, so here are my thoughts on the subject.

Other blog posts suggest iterating over messages (e.g. Counting Messages in an MSMQ MessageQueue from C#) or doing it using WMI. WMI is the best alternative in my opinion and if you want a quick way of doing it then PowerShell is the easiest:

$queues = Get-WmiObject Win32_PerfFormattedData_msmq_MSMQQueue
$queues | ft -property Name,MessagesInQueue

The result will be something similar to this:

Name                                                         MessagesInQueue
----                                                         ---------------
active714\private$\notify_queue$                                           0
active714\private$\deltagarloggservice\deltagarloggservi...                0
active714\private$\order_queue$                                            0
active714\private$\admin_queue$                                            0
Computer Queues                                                           27

This can also be done on remote machines:

# $host is a reserved automatic variable in PowerShell, so use another name
$computer = ...
$cred = get-credential
$queues = Get-WmiObject Win32_PerfFormattedData_msmq_MSMQQueue -computer $computer -credential $cred
$queues | ft -property Name,MessagesInQueue

The Get-Credential Cmdlet will display a login dialog which is fine in interactive sessions but if you need to set the credentials in a non-interactive script, then the tip in this blog post might help: PowerShell – How to create a PSCredential object.

Retrieving message counts from code takes a little more coding but here’s an example in C# that searches for a given queue and returns its message count:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Management;

private static int GetMsmqMessageCount(string queuePath, string machine,
  string username, string password)
{
  var options = new ConnectionOptions
    { Username = username, Password = password };
  var path = string.Format(@"\\{0}\root\CIMv2", machine);
  var scope = new ManagementScope(path, options);
  scope.Connect();

  string queryString =
    String.Format("SELECT * FROM Win32_PerfFormattedData_msmq_MSMQQueue WHERE Name = '{0}'",
      queuePath);
  var query = new ObjectQuery(queryString);

  var searcher = new ManagementObjectSearcher(scope, query);

  IEnumerable<int> messageCountEnumerable =
    from ManagementObject queue in searcher.Get()
    select (int)(UInt64)queue.GetPropertyValue("MessagesInQueue");

  return messageCountEnumerable.First();
}

This code also uses WMI, but this time we do a query rather than enumerating all queues.
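
Hypothetical usage (the machine name, queue path and credentials are examples; note that the queue path uses the machine-prefixed format seen in the PowerShell output above):

int count = GetMsmqMessageCount(@"myserver\private$\myqueue", "myserver",
                                @"MYDOMAIN\monitor", "secret");
Console.WriteLine("Messages in queue: {0}", count);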

The above snippets have worked well for us, so I felt it would be useful to post them here. Please leave a comment if you have issues with them or suggestions for improvement!

/Emil

Detecting unhandled errors in ASP.Net applications

For my own reference, here’s how to detect unhandled exceptions in Global.asax:

protected void Application_Error(object sender, EventArgs e)
{
  // Log error
  Exception ex = Server.GetLastError().GetBaseException();
  string msg = String.Format("Unhandled error!\r\nAddress: {0}\r\nError message: {1}", Request.Url, ex.Message);
  LogHelper.Error(msg, ex);

  Server.ClearError();

  // Display simple error page
  Response.Write("An unexpected error has occurred...\r\n");
  Response.Write("<br/>\r\n<pre>\r\n");
  Response.Write(ex.ToString());
  Response.Write("<br/>\r\n</pre>\r\n");
  Response.End();
}

We mostly do it like this; if you have a better suggestion, please post a comment :-)

/Emil