Archive for the ‘.Net programming’ Category

Devart LINQ Insight

July 29th, 2014

I was recently approached by a representative from Devart who asked if I wanted to have a look at some of their products, so I decided to try out the LINQ Insight add-in for Visual Studio.

LINQ Insight has two main functions:

  • Profiler for LINQ expressions
  • Design-time LINQ query analyzer and editor

If you work much with LINQ queries you probably know that Visual Studio is somewhat lacking in functionality around LINQ queries by default, so the functions that LINQ Insight offers should be pretty welcome for any database developer out there on the .Net platform (which is quite a few of us these days). Let’s discuss the two main features of LINQ Insight in some more detail.

Profiling LINQ queries

If you’re using Entity Framework (LINQ Insight apparently also supports NHibernate, RavenDB, and a few others but I have not tested any of those) and LINQ, it can be a little difficult to know exactly what database activity occurs during the execution of the application. After all, the main objective of OR mappers is to abstract away the details of the database and instead let the developer focus on the domain model. But when you’re debugging errors or analyzing performance it’s crucial to analyze the database activity as well, and that’s what LINQ Insight’s profiling function helps with.

There are other tools for this of course, such as IntelliTrace in Visual Studio Ultimate, but since that is only included in Ultimate, not many developers have access to it. The LINQ Insight profiler is very easy to use and gives access to a lot of information.

To enable profiling, follow these steps:

  1. Make sure that the IIS Express process is not running; stop it if it is. (This assumes we’re using IIS Express of course. I’m not quite sure how to work with the full IIS server in conjunction with LINQ Insight.)
  2. Open the profiling window by selecting View/Other Windows/LINQ Profiler, or pressing Ctrl+W, F
  3. Press the “Start profiler session” button in the upper left corner of the window (it looks like a small “Play” icon)
  4. Start debugging your application, for example by pressing F5.
  5. Debugging information such as this should now start to fill the profiler window:
    The profiler displays all LINQ activity in the application.

    As you can see, in this case we have several different contexts that have executed LINQ queries. For example, ApplicationContext is used by ASP.Net Identity and HistoryContext is used by Code First Database Migrations. Context is our application context.

  6. We can now drill down into the queries and see what triggered them and what SQL statements were executed.
    Drilling down into profiler data.

    We can see the LINQ query that was executed, the SQL statements, duration, call stack, etc. Very useful stuff indeed.

Query debugger and editor

The other feature LINQ Insight brings into Visual Studio is help with writing and debugging LINQ queries. To debug a query, follow these steps:

  1. To open a query in the query editor, just right-click on it in the standard C# code editor window and select the “Run LINQ Query” option:

    To debug or edit a LINQ query, use the right-click menu.

  2. If the query contains one or more parameters, a popup will be shown where values for the parameters can be given.
  3. Next, the query will be executed, and the results will be displayed:
    Query results are displayed in the Results tab.

  4. This is of course useful in itself, and even better is that the generated SQL statements are displayed in the SQL tab and the original LINQ query in the LINQ tab, where it can be edited and re-executed, after which the SQL and Results tabs are updated. Really, really useful!

If an error is displayed in the Results tab, then the most probable reason is that the database could not be found via the project’s config file, or that the connection string could not be interpreted correctly. The latter is the case when using the LocalDB provider with the “|DataDirectory|” placeholder, which can only be evaluated at runtime in an ASP.Net project. To make LINQ Insight find a database MDF file in App_Data in a web project, you can follow these steps:

  1. Make sure that your DbContext sub-class (for Entity Framework, that is) has an overloaded constructor that takes a single string parameter, namely the connection string to use:
    public Context(string connString) : base(connString) {}
    

    This is required if LINQ Insight cannot deduce the connection string from the project’s config file. This is usually a problem in my projects since I like to separate domain logic out of my “host application” into a project of its own (normally a class library). A fuller sketch of such a context class follows after this list.

  2. Double-click the MDF file in the App_Data folder to make sure it’s present in the Server Explorer panel in Visual Studio.
  3. Select the database in the Server Explorer and right-click it and select Properties. Copy its Connection String property.
  4. In the LINQ Interactive window, click the Edit Connection String button. It is only enabled if the DbContext class has a constructor overload taking a connection string, which we ensured in step 1.
  5. Paste the connection string to the Data/ConnectionString field in the panel:
    Use the connection string dialog to override the “guessed” connection string.

    Click OK to close the dialog.

  6. Re-run the query with the Run LINQ Query button in the LINQ Interactive window, and it should now work correctly. If it doesn’t, try the Run LINQ Query command in the C# code editor again, since that re-initializes the query.
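
As promised in step 1, here is a minimal sketch of what such a context class could look like (the Person entity and People set are hypothetical, just to make the sketch self-contained):

using System.Data.Entity;

public class Context : DbContext
{
    // Default constructor: EF resolves the connection string
    // from the config file by convention.
    public Context() { }

    // The overload LINQ Insight needs when a connection string
    // is supplied via the Edit Connection String dialog.
    public Context(string connString) : base(connString) { }

    // Hypothetical entity set, for illustration only.
    public DbSet<Person> People { get; set; }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}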

The ability to freely set the connection string should make it possible to work against any database, be it a local MDF file, a full SQL Server database or a Windows Azure database. This could be used as a simple way to try out new or modified LINQ queries against a staging or production database, right from the development environment. Could be very useful in some situations for debugging nasty errors and such.

Summary

All in all, I think LINQ Insight is a very useful tool and I recommend you try it out if you find yourself writing LINQ queries from time to time.

If you have tried LINQ Insight before and found it to be slightly unstable, I should mention that Devart have recently fixed a few errors that really make the tool much more robust and useful. If unsure, just download the trial version and test it out.

Happy Linqing!

Emil

Grouping by feature in ASP.Net MVC

July 15th, 2014

Motivation

When I initially switched from using ASP.Net Web Forms to MVC as my standard technology for building web sites on the Microsoft stack I really liked the clean separation of the models, views and controllers. No more messy code-behind files with scattered business logic all over the place!

After a while though, I started to get kind of frustrated when editing the different parts of the code. I often find that a change in one type of file (e.g. the model) tends to result in corresponding changes in other related files (the view or the controller), and for any reasonably large project you’ll start to spend considerable time in the Solution Explorer trying to find the files that are affected by your modifications. It turns out that the default project structure of ASP.Net MVC actually does a pretty poor job of limiting change propagation between the files used by a certain feature. It’s still much better than Web Forms, but it’s by no means perfect.

The problem is that the default project structure separates the files by file type first and then by controller. This image shows the default project structure for ASP.Net MVC:

Default ASP.Net MVC project structure

The interesting folders are Controllers, Models (which really should be called ViewModels) and Views. A given feature, for example the Index action on the Home controller, is implemented using files in all these folders and potentially also one or more files in the Scripts folder, if Javascript is used (which is very likely these days). Even if you need to make only a simple change, such as adding a property to the model and showing it in a view, you’ll probably need to edit files in several of these directories, which is fiddly since they are completely separated in the folder structure.

Wouldn’t it be much better to group files by feature instead? Yes, of course it would, when you think about it. Fortunately it’s quite easy to reconfigure ASP.Net MVC a bit to accomplish this. This is the goal:

Grouping by feature

Instead of the Controllers, Models and Views folders, we now have a Features folder. Each controller now has a separate sub-folder and each action method has a sub-folder of its own:

  • Features/
    • Controller1/
      • Action 1/
      • Action 2/
    • Controller2/
      • Action 1/
      • Action 2/

Each action folder contains the files needed for its implementation, such as the view, view model and specific Javascript files. The controller is stored one level up, in the controller folder.

What we accomplish by doing this is the following:

  1. All the files that are likely to be affected by a modification are stored together and are therefore much easier to find.
  2. It’s also easier to get an overview of the implementation of a feature and this in turn makes it easier to understand and work with the code base.
  3. It’s much easier to delete a feature and all related files. All that has to be done is to delete the action folder for the feature, and the corresponding controller action.

Implementation

After that rather long motivation, I’ll now show how to implement the group by feature structure. Luckily, it’s not very hard once you know the trick. This is what has to be done:

  1. Create a new folder called Features, and sub-folders for the controllers.
  2. Move the controller classes into their respective sub-folders. They don’t need to be changed in any way for this to work (although it might be nice to adjust their namespaces to reflect their new locations; see the sketch after this list). It turns out that MVC does not assume that the controllers are located in any specific folder; they can be placed anywhere we like.
  3. Create sub-folders for each controller action and move their view files there. Rename the view files to View.cshtml as there’s no need to reflect the action name in the file name anymore.
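
For illustration, here is the sketch referred to in step 2: a controller moved into its feature folder with an adjusted namespace (MyWebApp is a hypothetical root namespace):

using System.Web.Mvc;

namespace MyWebApp.Features.Home
{
    // Lives in Features/Home/HomeController.cs; MVC finds it regardless of folder.
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Resolved by the custom view engine (shown below) to
            // ~/Features/Home/Index/View.cshtml
            return View();
        }
    }
}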

If you try to run your application at this point, you’ll get an error saying that the view cannot be found:

The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Home/Index.aspx
~/Views/Home/Index.ascx
~/Views/Shared/Index.aspx
~/Views/Shared/Index.ascx
~/Views/Home/Index.cshtml
~/Views/Home/Index.vbhtml
~/Views/Shared/Index.cshtml
~/Views/Shared/Index.vbhtml

This is exactly what we’d expect, as we just moved the files.

What we need to do is to tell MVC to look for the view files in their new locations and this can be accomplished by creating a custom view engine. That sounds much harder than it is, since we can simply inherit the standard Razor view engine and override its folder setup:

using System.Web.Mvc;

namespace RetailHouse.ePos.Web.Utils
{
    /// <summary>
    /// Modified from the suggestion at
    /// http://timgthomas.com/2013/10/feature-folders-in-asp-net-mvc/
    /// </summary>
    public class FeatureViewLocationRazorViewEngine : RazorViewEngine
    {
        public FeatureViewLocationRazorViewEngine()
        {
            var featureFolderViewLocationFormats = new[]
            {
                // First: Look in the feature folder
                "~/Features/{1}/{0}/View.cshtml",
                "~/Features/{1}/Shared/{0}.cshtml",
                "~/Features/Shared/{0}.cshtml",
                // If needed: standard locations
                "~/Views/{1}/{0}.cshtml",
                "~/Views/Shared/{0}.cshtml"
            };

            ViewLocationFormats = featureFolderViewLocationFormats;
            MasterLocationFormats = featureFolderViewLocationFormats;
            PartialViewLocationFormats = featureFolderViewLocationFormats;
        }
    }
}

The above creates a view engine that searches the following folders in order (assuming the Url is /Foo/Index):

  1. ~/Features/Foo/Index/View.cshtml
  2. ~/Features/Foo/Shared/Index.cshtml
  3. ~/Features/Shared/Index.cshtml
  4. ~/Views/Foo/Index.cshtml
  5. ~/Views/Shared/Index.cshtml

The last two are just used for backward compatibility so that it isn’t necessary to refactor all controllers at once.

To use the new view engine, do the following on application startup:

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());
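
In a standard MVC application this typically goes in Application_Start. A sketch of what that could look like in Global.asax.cs:

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Replace the default view engines with the feature-folder engine.
        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());

        // ... the usual route, filter and bundle registrations
    }
}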

The view will now load and the application will work as before, but with better structure.

The last step is to move view models and custom Javascript files into the action folders as well (note that the latter requires adjusting the paths in the HTML code that includes the Javascript files to reflect the new locations).

Once everything is up and running, the project becomes much easier to work with, and when you get used to working like this you really start to wonder why Microsoft is not doing it by default in their Visual Studio templates. Maybe in a future version?

/Emil


Connection strings used for logging should not enlist in transactions

April 15th, 2014

I was just reminded of an old truth when it comes to logging errors in a database: do not enlist the error logging operation in the current transaction, if there is one! The default is to do just that, so be sure to add Enlist=false to the connection string used for logging:

Example:

  <connectionStrings>
    <add name="myDb" connectionString="..." providerName="System.Data.SqlClient"/>
    <add name="logging" connectionString="...;Enlist=false" providerName="System.Data.SqlClient"/>
  </connectionStrings>

If you don’t, then the error logging will be rolled back with the transaction in case of an error. Not so good…

Also note that it’s a good idea to use separate connection strings for the normal database operations (which should be done inside transactions) and the logging operations (which shouldn’t).
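
To illustrate the point, here is a minimal sketch (the table names and SQL statements are made up) where the business operation enlists in an ambient transaction while the error logging, thanks to Enlist=false, does not:

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

public static class OrderService
{
    public static void DoWorkWithLogging()
    {
        var dbConnStr = ConfigurationManager.ConnectionStrings["myDb"].ConnectionString;
        var logConnStr = ConfigurationManager.ConnectionStrings["logging"].ConnectionString;

        using (var scope = new TransactionScope())
        {
            try
            {
                using (var conn = new SqlConnection(dbConnStr))
                {
                    conn.Open();  // enlists in the ambient transaction
                    new SqlCommand("UPDATE Orders SET Status = 1", conn).ExecuteNonQuery();
                }
                scope.Complete();
            }
            catch (Exception ex)
            {
                // Enlist=false: this insert is not part of the transaction,
                // so it survives the rollback that follows.
                using (var conn = new SqlConnection(logConnStr))
                {
                    conn.Open();
                    var cmd = new SqlCommand("INSERT INTO ErrorLog (Message) VALUES (@msg)", conn);
                    cmd.Parameters.AddWithValue("@msg", ex.Message);
                    cmd.ExecuteNonQuery();
                }
                throw;
            }
        }
    }
}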

Now that I have written this down I will hopefully remember this the next time I do error logging in a database…

/Emil

Fixing CLR trigger error after installing .Net Framework updates

May 25th, 2012

Today I discovered an error in one of our CLR triggers in a SQL Server database:

System.Data.SqlClient.SqlException: A .NET Framework error occurred during execution of user-defined routine or aggregate "PersonalChangedTrigger": 
System.IO.FileLoadException: Could not load file or assembly 'System.Messaging, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Assembly in host store has a different signature than assembly in GAC. (Exception from HRESULT: 0x80131050) See Microsoft Knowledge Base article 949080 for more information.

This was kind of unexpected since we didn’t modify the trigger. It turns out that it was probably a Windows Update patch that modified the System.Messaging assembly, so we had to “re-import” it into the database. This is how to do that, in our case:

ALTER ASSEMBLY Messaging
FROM 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll'
WITH PERMISSION_SET = UNSAFE

/Emil


Enumerating the current user’s NT groups

April 16th, 2012

Here’s how to enumerate the currently logged in user’s NT groups (local and domain groups):

using System.Linq;
using System.Security.Principal;

// Enumerate current user's NT groups
var i = WindowsIdentity.GetCurrent();

// Translate groups from type System.Security.Principal.SecurityIdentifier
// into System.Security.Principal.NTAccount.
var groups = i.Groups.Select(g => g.Translate(typeof(NTAccount)) as NTAccount);
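
A quick way to try it out, e.g. in a console application, is to print the translated account names:

// Prints each group in DOMAIN\GroupName form.
foreach (var group in groups)
{
    Console.WriteLine(group.Value);
}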

/Emil


Queued WCF services

March 7th, 2012

Wcf contains a great feature called queued services. This is basically a Wcf binding, netMsmqBinding, that uses MSMQ queues for message transport. It’s very easy to use (once set up) and allows for asynchronous and decoupled communication. It’s useful in several scenarios:

  • Handling load peaks
  • Robustness: if the service is down, the messages are queued up in the MSMQ queue and are consumed by the service again when it’s available.

Requirements

The following is required for queued services to work:

  • MSMQ must be installed
  • WAS must be enabled and running
  • All service methods must be one-way

How it works

The client adds a service reference using a normal HTTP metadata endpoint and gets a reference to endpoints that are exposed with the netMsmqBinding. The client endpoint’s address is the MSMQ queue to post to, in the following URI format:

<endpoint address="net.msmq://localhost/private/myservice/myservice.svc" binding="netMsmqBinding"  ... />

The client calls the client proxy just like for any other Wcf service:

var client = new MyService.MyServiceClient();
client.MyMethod(...);

The service is hosted in IIS and WAS (Windows Process Activation Service) is used for listening for messages in the queue. When a message is detected, WAS starts the IIS application containing the service that should receive the message. The service then consumes the message using the netMsmqBinding and processes it.

As already mentioned, the exposed service methods must be one-way:

[ServiceContract]
public interface IMyService
{
    [OperationContract(IsOneWay = true)]
    void MyMethod(...);
}
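
For completeness, a matching implementation might look like this sketch (the parameter list is elided in the contract above, so it is left as a comment here):

public class MyService : IMyService
{
    public void MyMethod(/* parameters as declared in the contract */)
    {
        // One-way operation: no reply is sent back to the client;
        // just process the queued message here.
    }
}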

Throttling

If there are many messages in the queue then our service will process many of them in parallel. I think the default number of simultaneous requests is 12 and this may be too much for our service if the reason we’re doing queued services is to handle peak loads. Luckily this is really easy to configure:

<behaviors>
  <serviceBehaviors>
    <behavior name="MsmqWithMex">
      <!-- Expose metadata via HTTP GET. -->
      <serviceMetadata httpGetEnabled="true"/>
  
      <!-- At most 3 concurrent requests are allowed -->
      <serviceThrottling maxConcurrentCalls="3" />
    </behavior>
  </serviceBehaviors>
</behaviors>

Note: Throttling is available in all Wcf bindings, not just the netMsmqBinding.

Setting it all up

So far we have only discussed the implementation of the service, but we also have to do some configuration in the operating system and in IIS:

  • Install Message Queuing (MSMQ) – if this is not already done, then install it
  • Create queues – Wcf relies on the necessary queues existing. A good pattern is to check for their existence on application startup and create them if they’re missing (see the sketch after this list).
  • Install/configure WAS for MSMQ in IIS – This is probably the hardest step, so I’ll describe it in detail:
    1. Enable the net.msmq protocol for the site containing the service:
      %windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" -+bindings.[protocol='net.msmq',bindingInformation='localhost']
      

      The result should be that “net.msmq” is included in the list of protocols.

    2. We can also enable the same protocol on the specific application containing the Wcf service:
      %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/DeltagarloggService" /enabledProtocols:http,net.msmq
      

      Make sure that the protocol was added.

  • Make sure that the Net Msmq Listener Adapter service is running.
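
Here is the queue-creation sketch referred to above. The queue path is hypothetical; for a WAS-hosted service the queue name should match the service address:

using System.Messaging;

public static class QueueSetup
{
    // Creates the MSMQ queue if it doesn't already exist.
    public static void EnsureQueueExists(string queuePath)
    {
        if (!MessageQueue.Exists(queuePath))
        {
            // Second argument = transactional; netMsmqBinding expects a
            // transactional queue when exactlyOnce is enabled (the default).
            MessageQueue.Create(queuePath, true);
        }
    }
}

// Hypothetical usage at application startup:
// QueueSetup.EnsureQueueExists(@".\private$\myservice/myservice.svc");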

Troubleshooting
If something goes wrong, here are a few tips:

  • Are the Message Queueing and Net Msmq Listener Adapter services running?
  • Is the net.msmq protocol enabled on the IIS site and application?
  • Are messages still in the queue? If it is empty: check dead-letter queues and outgoing queues.
  • Enable MSMQ journaling
  • Check event log
  • Restart app pool for the service
  • Restart web site
  • If queue is transactional, check the DTC
  • Browse to the service’s svc file. If that consumes the messages it’s a WAS problem.
  • If the service was hosted in Windows Server AppFabric, then see if it has any logged errors.

For more tips, Tom Hollander has written great blog posts about queued services here.

Happy queueing!

/Emil


Viewing assembly binding logs using fuslogvw.exe

March 7th, 2012

Sometimes it’s very useful to analyze how an application binds to referenced assemblies, but this process is fairly hidden from us. However, Microsoft has given us a way to look into it via the fuslogvw tool. This tool is not overly documented, so this post describes how to install it on a computer that has neither Visual Studio nor the Windows SDK installed, such as a server.

Follow these steps:

Copy the following files from a computer with Visual Studio or the Windows SDK installed:

  • "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\FUSLOGVW.exe"
  • "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\1033\flogvwrc.dll"

Put them into a folder on the target computer.

Create a logging folder, e.g. c:\fuslog

Start FUSLOGVW.exe. Update the following settings:

  • Log all binds to disk
  • Enable Custom log path
  • Custom log path = c:\fuslog\

Finally, enable assembly binding logging in the registry by setting the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\ForceLog value (a DWORD) to 1.
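
If you’d rather script this than use regedit, a small sketch in C# (run elevated):

using Microsoft.Win32;

// Enables fusion (assembly binding) logging for all processes.
Registry.SetValue(
    @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion",
    "ForceLog", 1, RegistryValueKind.DWord);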

Click Refresh in FUSLOGVW.exe and behold! A list of assembly binding events is displayed.

The individual events contain useful details.

/Emil


Retrieving the message count for MSMQ queues

December 11th, 2011

Sometimes it can be useful to retrieve the number of messages in an MSMQ queue, for example for monitoring. However, it’s not immediately apparent how to do it if you google it, so here are my thoughts on the subject.

Other blog posts suggest iterating over the messages (e.g. Counting Messages in an MSMQ MessageQueue from C#) or doing it using WMI. WMI is the best alternative in my opinion, and if you want a quick way of doing it then PowerShell is the easiest:

$queues = Get-WmiObject Win32_PerfFormattedData_msmq_MSMQQueue
$queues | ft -property Name,MessagesInQueue

The result will be something similar to this:

Name                                                         MessagesInQueue
----                                                         ---------------
active714\private$\notify_queue$                                           0
active714\private$\deltagarloggservice\deltagarloggservi...                0
active714\private$\order_queue$                                            0
active714\private$\admin_queue$                                            0
Computer Queues                                                           27

This can also be done on remote machines:

$computerName = ...   # note: don't use $host here, it's a reserved automatic variable
$cred = get-credential
$queues = Get-WmiObject Win32_PerfFormattedData_msmq_MSMQQueue -computer $computerName -credential $cred
$queues | ft -property Name,MessagesInQueue

The Get-Credential cmdlet will display a login dialog, which is fine in interactive sessions, but if you need to set the credentials in a non-interactive script, then the tip in this blog post might help: PowerShell – How to create a PSCredential object.

Retrieving message counts from code takes a little more coding but here’s an example in C# that searches for a given queue and returns its message count:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Management;

private static int GetMsmqMessageCount(string queuePath, string machine,
  string username, string password)
{
  var options = new ConnectionOptions
    { Username = username, Password = password };
  var path = string.Format(@"\\{0}\root\CIMv2", machine);
  var scope = new ManagementScope(path, options);
  scope.Connect();

  string queryString =
    String.Format("SELECT * FROM Win32_PerfFormattedData_msmq_MSMQQueue WHERE Name = '{0}'",
      queuePath);
  var query = new ObjectQuery(queryString);

  var searcher = new ManagementObjectSearcher(scope, query);

  IEnumerable<int> messageCountEnumerable =
    from ManagementObject queue in searcher.Get()
    select (int)(UInt64)queue.GetPropertyValue("MessagesInQueue");

  return messageCountEnumerable.First();
}

This code also uses WMI but this time we do a query rather than enumerate all queues.
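
A hypothetical call, using the same queue name format as the WMI performance counter output above (host\private$\queuename):

// Hypothetical host, queue and credentials, for illustration only.
int count = GetMsmqMessageCount(
    @"active714\private$\order_queue$",   // queuePath (the WMI "Name" property)
    "active714",                          // machine
    "someuser", "somepassword");
Console.WriteLine("Messages in queue: {0}", count);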

The above snippets have worked well for us, so I felt it would be useful to post them here. Please leave a comment if you have issues with them or suggestions for improvement!

/Emil

Detecting unhandled errors in ASP.Net applications

September 9th, 2011

For my own reference, here’s how to detect unhandled exceptions in Global.asax:

protected void Application_Error(object sender, EventArgs e)
{
  // Log error
  Exception ex = Server.GetLastError().GetBaseException();
  string msg = String.Format("Unhandled error!\r\nAddress: {0}\r\nError message: {1}", Request.Url, ex.Message);
  LogHelper.Error(msg, ex);

  Server.ClearError();

  // Display simple error page
  Response.Write("An unexpected error has occurred...\r\n");
  Response.Write("<br/>\r\n<pre>\r\n");
  Response.Write(ex.ToString());
  Response.Write("<br/>\r\n</pre>\r\n");
  Response.End();
}

We mostly do it like this; if you have a better suggestion, please post a comment :-)

/Emil

log4net configuration examples

August 17th, 2011

Almost every application requires, or at least benefits greatly from, some kind of logging of events and errors. I suspect all of us have written custom code for logging to text files, database tables, the event log and other destinations. We all know that we shouldn’t do this since there are several frameworks out there to help us, such as Microsoft’s own Tracing functionality, NLog and log4net. The obstacle to using them is often that it can be difficult to know how to configure them to work properly, which is why this post will describe a set of configuration examples for log4net to make that framework do its magic.

For those of you new to log4net, it’s a really useful package but somewhat difficult to configure, which is what I will try to help with in this post. I will not try to write a tutorial for it since others have already done so. Instead I will show a set of configuration examples I personally find useful.

To use log4net, you need to add a reference to log4net.dll. Then a new section in the application’s config file must be added:

<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
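
log4net also needs to be told to actually read this configuration at startup, which the XmlConfigurator class handles. A minimal sketch (the Program class is hypothetical):

using log4net;
using log4net.Config;

public class Program
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    public static void Main()
    {
        // Reads the <log4net> section from the application's config file.
        XmlConfigurator.Configure();

        Log.Info("log4net is now configured");
    }
}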

Log4net logging is based on a provider-like design where each type of logging is done in a so-called appender. There are appenders for file system files, databases, etc., and it’s in these appenders that log4net’s power lies.

To configure which appenders to use, it’s easiest to use the root configuration element like this:

<log4net>
  ...
  <root>
    <level value="Debug"/>
    <appender-ref ref="ColoredConsoleAppender"/>
    <appender-ref ref="RollingFileAppender"/>
  </root>
</log4net>

We haven’t configured the referenced appenders yet, so that is the next step. The remainder of the post will be about some of the appenders that come with log4net.

APPENDERS

Colored console appender
This appender is very useful for console applications and enables different types of events (debug, info, error, etc) to be displayed in different colors.

Here’s how to configure it:

    <appender name="ColoredConsoleAppender" type="log4net.Appender.ColoredConsoleAppender">
      <mapping>
        <level value="ERROR"/>
        <foreColor value="Red, HighIntensity"/>
      </mapping>
      <mapping>
        <level value="INFO"/>
        <foreColor value="White"/>
      </mapping>
      <mapping>
        <level value="DEBUG"/>
        <foreColor value="Green"/>
      </mapping>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%-5level - %message%newline"/>
      </layout>
    </appender>

RollingFileAppender
This appender is used to write to a file while limiting the file size. When the maximum file size is reached, a new file is created, which means that no log file will become unmanageably large.

    <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="d:\Logs\WorkflowAborter.log"/>
      <rollingStyle value="Size" />
      <appendToFile value="true"/>
      <maximumFileSize value="2000KB"/>
      <maxSizeRollBackups value="5"/>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %identity %property{log4net:HostName}: %message%newline%exception"/>
      </layout>
    </appender>

The maxSizeRollBackups setting controls how many files will be created before the first one is deleted.

You can control when log4net creates a new file (rolls the file) by setting the rollingStyle attribute. Its default setting is Composite, which combines date and size limitations, whichever occurs first, to decide when a new file is created. The other possible values are Size and Date, which are self-explanatory, and Once, which creates a new log file every time the application is executed.

Note however that, for some reason, the maxSizeRollBackups setting does not work when using Composite or Date, so the number of files cannot be limited when using those settings.

NetOutputDebugStringAppender

Logs events using the OutputDebugString Win32 API call, whose output is shown in the Visual Studio output window. The appender is also useful in deployed applications since the output is captured by Mark Russinovich’s DebugView tool. (Just remember to enable the “Capture Global Win32” setting of DebugView first, or else nothing will be displayed!)

    <appender name="NetOutputDebugStringAppender" type="log4net.Appender.OutputDebugStringAppender" >
      <layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern" value="Butler: [%t] %-5p - %m%n" />
        <!--<conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />-->
      </layout>
    </appender>

NetTraceAppender

This appender logs to Microsoft’s standard tracing framework in .Net. In Visual Studio, output is shown in the output window by default, and for deployed applications it’s possible to configure the tracing to store events in different destinations. The appender could be useful if parts of an application are using the tracing framework and other parts are using log4net, since it makes it possible to channel all events through the same framework (trace), but this is probably not used very often.

A more useful side effect is that many test runners (e.g. nUnit and ReSharper’s built-in test runner) capture trace calls and store them with the test result, making test failure analysis much easier.

Screen shot of ReSharper with log4net trace output

Configuration:

    <!-- Logs to the .Net trace listeners, e.g. the Output window in Visual Studio. -->
    <!-- Shows up in the Output window when run in VS; can otherwise be viewed in DebugView. -->
    <appender name="NetTraceAppender" type="log4net.Appender.TraceAppender, log4net">
      <layout type="log4net.Layout.PatternLayout,log4net">
        <param name="ConversionPattern" value="%d [%t] %-5p - %m%n" />
      </layout>
    </appender>

SmtpAppender
This appender is often overlooked; at least I have not seen many references to using it. It’s very useful for sending mail when an error occurs:

<appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
  <to value="foo@foobar.se, bar@foobar.se" />
  <from value="helpdesk@foobar.se" />
  <subject value="Error occurred in MyApp" />
  <smtpHost value="smtp.foobar.se" />
  <bufferSize value="512" />
  <lossy value="true" />
  <evaluator type="log4net.Core.LevelEvaluator">
  	<threshold value="WARN"/>
  </evaluator>
  <layout type="log4net.Layout.PatternLayout">
  	<conversionPattern value="%n%d %-5level [%t] - %m" />
  </layout>
</appender>

Using configuration similar to the above, a mail such as the following is sent when a WARN or ERROR log occurs:

2011-02-04 03:00:06,958 INFO  [1] - ###############################################
2011-02-04 03:00:07,328 INFO  [1] - Starting...
2011-02-04 03:00:07,362 INFO  [1] - ###############################################
2011-02-04 03:00:07,429 INFO  [1] - Reading course participants
2011-02-04 03:00:31,265 INFO  [1] - 6437 to be processed.
2011-02-04 03:21:17,153 INFO  [1] - Hours per week recalculated
2011-02-04 03:21:17,263 INFO  [1] - Time taken for hours per week: 00:21:09
2011-02-04 03:21:17,278 INFO  [1] - ----------------------------------------------------------------------------
2011-02-04 03:21:17,479 INFO  [1] - Reading course participants
2011-02-04 03:22:08,793 INFO  [1] - 26532 to be processed.
2011-02-04 04:10:28,050 ERROR [1] - Could not change the preliminary end date of the hour quota for sfi participant 24085 (Butler participant 83237)

Note that the 10 events prior to the ERROR event that triggered the mail are also included, which makes error analysis very much easier.

MemoryAppender
This appender is useful for automated tests.

<appender name="unitTestMemoryAppender" type="log4net.Appender.MemoryAppender">
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %identity %property{log4net:HostName} %logger: %message%newline%exception"/>
  </layout>
</appender>

To create unit tests that check whether logging is performed correctly, code such as this can be used:


using System;
using ActiveSolution.Lernia.Utils.Logging;
using log4net;
using log4net.Appender;
using log4net.Core;
using log4net.Repository.Hierarchy;
using NUnit.Framework;

namespace UtilsTests
{
    [TestFixture]
    public class LoggingTests
    {
        [SetUp]
        public void SetUp()
        {
            var appender = GetAppender();
            if (appender != null)   // The appender will be null if nothing has been logged yet.
            {
                appender.Clear();
            }
        }

        [Test]
        public void Logger_ErrorFormat_WithException_ShouldLogMessage()
        {
            // Arrange -----------------------------
            var ex = new ArgumentNullException("foo");

            // Act ---------------------------------
            LogHelper.ErrorFormat(ex, "Unexpected error for id = {0}", 42);    // Call custom wrapper function for log4net

            // Assert ------------------------------
            var loggedEvents = GetLoggedEvents();
            Assert.That(loggedEvents.Length, Is.EqualTo(1));
            Assert.That(loggedEvents[0].RenderedMessage, Contains.Substring("Unexpected error for id = 42"));
            Assert.That(loggedEvents[0].RenderedMessage, Contains.Substring("Value cannot be null"));
            Assert.That(loggedEvents[0].RenderedMessage, Contains.Substring("Parameter name: foo"));
        }

        private static LoggingEvent[] GetLoggedEvents()
        {
            MemoryAppender ma = GetAppender();
            return ma.GetEvents();
        }

        private static MemoryAppender GetAppender()
        {
            var h = LogManager.GetRepository() as Hierarchy;
            return h.Root.GetAppender("unitTestMemoryAppender") as MemoryAppender;
        }
    }
}

AdoNetAppender

The AdoNetAppender is used to store events in a database. Here’s a configuration example that uses a stored procedure to store the events:

<appender name="ErrorHandlerAppender" type="log4net.Appender.AdoNetAppender">
  <filter type="log4net.Filter.LevelRangeFilter">
    <param name="LevelMin" value="WARN" />
  </filter>

  <bufferSize value="1" />
  <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  <connectionString value="Data Source=XXXXXXXXX;Initial Catalog=Errorhand;User ID=XXX;Password=XXX" />
  <commandType value="StoredProcedure" />
  <commandText value="ap_errorhandler_ins" />

  <parameter>
    <parameterName value="@APPLICATIONID" />
    <dbType value="String" />
    <size value="16"/>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="Butler.EducationSI" />
    </layout>
  </parameter>

  <parameter>
    <parameterName value="@USERID" />
    <dbType value="String" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%identity" />
    </layout>
  </parameter>

  <parameter>
    <parameterName value="@SEVERITY" />
    <dbType value="String" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%level" />
    </layout>
  </parameter>

  <parameter>
    <parameterName value="@DESCRIPTION" />
    <dbType value="String" />
    <size value="4000"/>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%message%newline%exception" />
    </layout>
  </parameter>

  <parameter>
    <parameterName value="@LOCAL_ADDR" />
    <dbType value="String" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%property{log4net:HostName}" />
    </layout>
  </parameter>
  <parameter>
    <parameterName value="@REMOTE_ADDR" />
    <dbType value="String" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%property{log4net:HostName}" />
    </layout>
  </parameter>
</appender>

The configuration segment for AdoNetAppenders tends to be rather long, but the example is fairly self-explanatory. For more conversion pattern fields to use in the parameters, look here. These are the same fields used in the other appenders to format the log strings. There’s quite a lot of interesting data that can be retrieved, so make sure to have a look at the list.

FILTERING

Another powerful feature of log4net is the possibility of filtering. For example, if you’re referencing a 3rd party assembly such as NHibernate.dll that logs its events using log4net, then those events can be configured separately like this:

<logger name="NHibernate">
  <level value="Error" />
</logger>

This tells log4net that DEBUG and INFO events from NHibernate should be ignored. The logger name is matched against the beginning of the names of the loggers the filter should apply to. Other loggers can be configured in the same way, and loggers that aren’t configured fall back to the root logger we defined at the beginning of this post.

TROUBLESHOOTING

Q: What do you do when having trouble with log4net logging?
A: You enable internal logging!

<appSettings>
  <add key="log4net.Internal.Debug" value="true"/>
</appSettings>

Note that these events are logged using the System.Diagnostics.Trace system, so that may have to be configured as well. For more info about internal logging, see here.

That’s all for now. I’ll keep updating this post when I find more interesting settings.

/Emil
