Deciding whether an NHibernate object reference is a proxy

Have you ever had a need to decide whether an object reference to an NHibernate-mapped object is the real instantiated object or an NHibernate-generated proxy used for lazy loading? Here’s how to do it:

if (entityObject is NHibernate.Proxy.INHibernateProxy)
{
  ...
}

You don’t often need to distinguish between proxies and the real objects, but it can be useful when traversing object structures without triggering lazy loading.

A related method is NHibernateUtil.IsInitialized, which may come in handy in use cases similar to the above. It tells whether a proxy or persistent collection has been initialized (i.e. loaded from the database) or not.

if (NHibernateUtil.IsInitialized(entityObject))
{
  ...
}
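
Combining the two checks gives a simple guard for walking an object graph without hitting the database. Here’s a minimal sketch (the EntityHelper class is hypothetical, just for illustration; INHibernateProxy and NHibernateUtil are the real NHibernate types):

using NHibernate;
using NHibernate.Proxy;

public static class EntityHelper
{
    // True if the reference can be traversed without a database round-trip:
    // either it's a plain instantiated object or an already-initialized proxy.
    public static bool CanTraverseWithoutLoading(object entity)
    {
        return entity != null && NHibernateUtil.IsInitialized(entity);
    }

    // True if the reference is an uninitialized lazy-loading proxy.
    public static bool IsUninitializedProxy(object entity)
    {
        return entity is INHibernateProxy && !NHibernateUtil.IsInitialized(entity);
    }
}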

/Emil

Listing and resuming workflow instances

We recently had a need to list suspended Workflow Foundation 4 instances hosted in Windows Server AppFabric from code, and here’s how to do it:

// Query the AppFabric persistence store for suspended workflow instances
SqlInstanceQueryProvider provider = new SqlInstanceQueryProvider();
var providerArgs = new NameValueCollection { { "connectionString", _instanceStoreConnectionString } };
provider.Initialize("WfInstanceProviderA", providerArgs);

InstanceQuery query = provider.CreateInstanceQuery();
var args = new InstanceQueryExecuteArgs { InstanceStatus = InstanceStatus.Suspended };

// The query API is asynchronous; here we simply block for up to 30 seconds
IAsyncResult asyncResult = query.BeginExecuteQuery(args, new TimeSpan(0, 0, 0, 30), null, null);
IEnumerable<InstanceInfo> instances = query.EndExecuteQuery(asyncResult);

By the way, this requires a reference to Microsoft.ApplicationServer.StoreManagement.dll, which is installed with AppFabric.

Resuming the workflow instances can be done like this:

SqlInstanceControlProvider provider = new SqlInstanceControlProvider();
var providerArgs = new NameValueCollection { { "connectionString", _instanceStoreConnectionString } };
provider.Initialize("WfInstanceProviderA", providerArgs);

InstanceControl control = provider.CreateInstanceControl();

foreach (InstanceInfo instanceInfo in instances)
{
    InstanceCommand cmd = new InstanceCommand
    {
        CommandType = CommandType.Resume,
        InstanceId = instanceInfo.InstanceId,
        ServiceIdentifier = instanceInfo.HostInfo.HostMetadata
    };

    // Not included in instanceInfo.HostInfo.HostMetadata...
    cmd.ServiceIdentifier["VirtualPath"] = 
         "/EducationWorkflowServices/AktivitetService.xamlx";

    IAsyncResult asyncResult = control.CommandSend.BeginSend(cmd,
        new TimeSpan(0, 0, 0, 30), null, null);
    control.CommandSend.EndSend(asyncResult);
}

AppFabric also has some great PowerShell cmdlets for administration that can be used for the same thing. Here are a few examples:

Import-Module ApplicationServer
Get-ASAppServiceInstance -status 'suspended'
Get-ASAppServiceInstance -status 'suspended' | Resume-ASAppServiceInstance
Get-ASAppServiceInstance -status 'suspended' | Stop-ASAppServiceInstance -Terminate
Get-ASAppServiceInstance -status 'suspended' | Remove-ASAppServiceInstance
Get-ASAppServiceInstance -InstanceId 95a25419-0d71-42c4-ab70-aa523ba603fc
Get-ASAppServiceInstance -InstanceId 95a25419-0d71-42c4-ab70-aa523ba603fc | Suspend-ASAppServiceInstance

Yet another alternative is to query the AppFabric persistence store database directly using SQL:

SELECT *
FROM [AppFabric_Application_Extensions].[System.Activities.DurableInstancing].[InstancesTable]
WHERE IsSuspended=1

Choose the alternative most to your liking 🙂

/Emil

The case of the slow for-loop

We recently had a problem with a page in an MVC application that was very slow; it took minutes before the server finished creating it. It turned out that this for-loop caused the problem:

for (int i = 0; i < Model.Deltagare.Count(); i++)
{
  var deltagare = Model.Deltagare.ElementAt(i);
  ...
}

where Model.Deltagare is an IEnumerable. This is a common construct when you need to track the loop index for some reason, but it turns out that its use really should be discouraged. The problem is that ElementAt(i) can take a really long time depending on the underlying list type of the enumerable. If the source implements IList<T> (e.g. an array), ElementAt is a quick index lookup, but in our case it was a System.Linq.Enumerable.WhereSelectListIterator, so ElementAt(i) had to traverse the enumerator from the start each time it was called, turning the loop into an O(n²) operation. That’s not what you want to use an iterator for…

We changed the above loop to a foreach-loop instead, so that the iterator can be used more naturally:

int i = 0;
foreach (var deltagare in Model.Deltagare)
{
  ...
  i++;
}

The duration of our loop went from about 10 minutes to 1 second for a list of about 6000 items.
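
If you only need the index for a projection, the Select overload that exposes the element index is another way to avoid both ElementAt and manual counters. A small sketch (reusing Model.Deltagare from above):

// Select((item, index) => ...) enumerates the source exactly once
var indexed = Model.Deltagare.Select((deltagare, index) => new { Index = index, Deltagare = deltagare });

foreach (var entry in indexed)
{
  // entry.Index is the position, entry.Deltagare is the element
}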

Lesson learned.

/Emil

NDepend 3

In case you missed it, there’s a very powerful tool called NDepend that helps you analyze your code structure and detect potential problems very easily. A new version, NDepend 3, was released earlier this year and the biggest new feature is probably Visual Studio integration.

It still has all the code metrics you’ll probably ever need, a custom-made code query language, and diagrams and graphs to help with analysis (some of the diagrams are still a little difficult to read). See my post about the previous version for some examples.

Here are a few live examples of using NDepend: http://www.ndepend.com/Features.aspx#Tour

If you’re into code analysis you should give NDepend a try. With this new version it’s more usable than before and it still has the same very powerful analysis engine under the hood. Check out http://www.ndepend.com for more info.

/Emil

NUnit with SQLite and .Net 4.0 Beta 2

SQLite and unit testing is a great combination for testing database operations without having to manage database files. You can simply create an in-memory database in your setup code and work with that instance. Perfect in combination with NHibernate, for example.
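
Creating the in-memory database is just a matter of opening a connection with the right connection string. A minimal sketch using System.Data.SQLite (note that the database only lives as long as the connection stays open, so keep it open for the duration of the test):

using System.Data.SQLite;

// ":memory:" tells SQLite to create a transient in-memory database
var connection = new SQLiteConnection("Data Source=:memory:");
connection.Open();
// ... create the schema and run tests against this connection ...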

If you want to do this in the current .Net 4.0 beta you’re out of luck though; you’ll get an exception:

System.IO.FileLoadException: Mixed mode assembly is built against version 'v2.0.50727' of the runtime
and cannot be loaded in the 4.0 runtime without additional configuration information.

The solution is pointed out by Jomo Fisher. What you do is include this snippet in the application config file:

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0"/>
</startup>

When unit testing assemblies that reference System.Data.SQLite.DLL, you have to put that snippet in NUnit’s config file (C:\Program Files\NUnit 2.5.2\bin\net-2.0\nunit.exe.config).

If you combine this with the tip in my post NUnit with Visual Studio 2010 Beta 2, you should insert the following

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0"/>
  <requiredRuntime version="v4.0.20506" />
</startup>

plus

<loadFromRemoteSources enabled="true" />

under the runtime tag.
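
Put together, the relevant parts of nunit.exe.config would look something like this (a sketch; surrounding elements abbreviated):

<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0"/>
    <requiredRuntime version="v4.0.20506" />
  </startup>
  <runtime>
    <loadFromRemoteSources enabled="true" />
    <!-- existing runtime settings stay as they are -->
  </runtime>
</configuration>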

This works for me, hopefully it will for you as well.

/Emil

Distributed transactions with WCF and NHibernate

I recently started working on a new project in which we wanted to use WCF services that utilize NHibernate for database work. We also wanted those services to support distributed transactions, so that several calls to one or more services would be done within the same client transaction. This is possible thanks to functionality in the System.Transactions namespace, WCF’s support for transaction flow, and of course the Distributed Transaction Coordinator in the operating system (see MSDN for more info on the DTC).

Note: The code below has been tested on NHibernate 2.1.1, Windows XP and .Net 4 Beta 2. Older versions of the .Net Framework should also work, but not necessarily older versions of NHibernate. I believe distributed transaction support was introduced in NHibernate 2.1.0, but since ADO.Net itself supports the System.Transactions namespace, older versions may or may not work in ways similar to what is described here.

The goal is to write code like this on the WCF client side:

// TransactionScope is from the System.Transactions namespace
using (TransactionScope tx = new TransactionScope())
{
    service1.MyMethod();
    service2.MyMethod();
    tx.Complete();
}

If all goes well, the results of both service calls are committed to the database. If the call to service2 fails and we get an exception so that tx.Complete() is never executed, then all database updates are rolled back and nothing is persisted, even if service1 is hosted in another process or on another machine.

Note also that we’re not limited to database updates; any resource that supports transactions and knows about System.Transactions will be able to roll back its updates.

For the above to work, we have to do several things:

  • Configure the service binding to support transaction flow (on both the client and service side). Example:
    <system.serviceModel>
      <services>
        <service ... >
          <endpoint ... bindingConfiguration="TransactionalWsHttp">
             ...
          </endpoint>
          ...
        </service>
      </services>
      <bindings>
        <wsHttpBinding>
          <binding name="TransactionalWsHttp" transactionFlow="true"></binding>
        </wsHttpBinding>
      </bindings>
    </system.serviceModel>
    
  • Mark the methods that should be enlisted in the distributed transaction with the TransactionFlow attribute. Do this on the service contract interface:
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void SaveMyObject(Foobar obj);
    

    That setting dictates that the client’s transaction will be used, if there is one. If there isn’t, a new one will be created automatically for the method call and it will be auto-committed when the method returns.

    You can also use TransactionFlowOption.Mandatory to require the client to supply the transaction. If it doesn’t then WCF will throw an exception.

  • Mark the method implementation with the OperationBehavior attribute like this:
    [OperationBehavior(TransactionScopeRequired=true)]
    public void SaveMyObject(Foobar obj)
    {
       ...
    }
    

    This will supply the method with the transaction (the other settings are not enough on their own; they merely indicate that WCF should be prepared to supply the transaction, while this setting says that the method really wants it).

This is actually all that is required! NHibernate will now detect if there is a so-called ambient transaction (to check this yourself, look at the static System.Transactions.Transaction.Current property; if it’s non-null, there is a transaction) and will enlist its session in it. When the transaction completes, the saved data is committed to the database. If an exception prevents the transaction from ever completing, all data is rolled back.
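
For example, you can inspect the ambient transaction from inside the service method, e.g. for diagnostics. A small sketch:

using System.Transactions;

// Somewhere inside the service method body:
Transaction ambient = Transaction.Current;
if (ambient != null)
{
    // A transaction is flowing into this call
    Console.WriteLine("Ambient transaction: " + ambient.TransactionInformation.DistributedIdentifier);
}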

Important notes:

  • Don’t close the NHibernate ISession explicitly when using the above pattern, as NHibernate will do that itself when the distributed transaction completes or is rolled back. If you do, you will get an exception later when the transaction tries to do it.
  • Don’t create transactions explicitly; let System.Transactions do that for you.
  • If you get this error:
    Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for
    network access in the security configuration for MSDTC using the Component Services Administrative tool.
    

    then you have to do as it says and enable network access in the DTC on the server where the WCF service is hosted (and also on the database server, I would assume, but I haven’t actually checked). Here are instructions on how to do that:
    Network access for Distributed Transaction Manager (MSDTC) has been disabled

I think this is really cool stuff. Not only does it simplify transaction management in NHibernate, it also allows us to write much more robust distributed service-oriented applications with very little effort. You also get support in the operating system, for example for statistics:

DTC statistics

I haven’t tried with databases other than SQL Server, but as NHibernate seems to support System.Transactions it is possible that it works with other DB systems as well. If you have any experience with that, please leave a comment 🙂

I will continue to update this post as I learn more on this subject. When I googled this there wasn’t much information available, so hopefully this post will help others with the same needs.

/Emil

Data transforms using LINQ

Here’s some simple code to transform an array or list from one type to another:

Kursdeltagare[] matchingKd = ...

// Alternative 1
var kdDtos = matchingKd.Select(kd => new KursdeltagareDto()
                                    {
                                        PersonId = kd.PersonID,
                                        Fornamn = kd.Fornamn,
                                        Efternamn = kd.Efternamn
                                    }).ToList();

// Alternative 2
var kdDtos2 = (from kd in matchingKd
             select new KursdeltagareDto()
                        {
                            PersonId = kd.PersonID,
                            Fornamn = kd.Fornamn,
                            Efternamn = kd.Efternamn
                        }
            ).ToList();

Both alternatives give the same result, but the first one contains slightly less code. Performance-wise they should be identical, since query syntax is just compiler sugar that is translated into the same Select call used in alternative 1. If you measure a difference, feel free to leave a comment…
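
If the same mapping is needed in several places, it can also be factored out into a method and passed as a method group (a sketch; ToDto is a hypothetical helper, not from the original code):

// A third alternative: reuse the mapping as a named method
static KursdeltagareDto ToDto(Kursdeltagare kd)
{
    return new KursdeltagareDto
    {
        PersonId = kd.PersonID,
        Fornamn = kd.Fornamn,
        Efternamn = kd.Efternamn
    };
}

var kdDtos3 = matchingKd.Select(ToDto).ToList();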

/Emil

NDepend – first impressions

I was recently contacted by Patrick Smacchia, one of the developers behind NDepend, and he asked me if I’d like to try it out, which I did. This is a report of my first impressions.

If you’re not familiar with NDepend, it can be described as a tool for analyzing a set of .Net assemblies for code quality. I suppose its name indicates that it was originally focused on analyzing dependencies, but these days that name is rather misleading, in my opinion, as it can do much more than that. I’ll go through some of its capabilities below.

When you first analyze a set of assemblies, NDepend displays a screen similar to this:

NDepend main screen

If you’re like me, then you didn’t check out any tutorials or webcasts before running the tool and your first reaction is likely “Holy cow! What’s all this?” or something similar. 🙂

It turns out that there is a ton of useful information in these displays but you have to know how to interpret them which takes a little learning.

Briefly, here’s what I gathered so far:

  • The Metrics display (with the gray blobs) lets you select a metric (such as the number of lines of code) and display it at the selected code level (method, field, type, namespace or assembly). The blobs represent code entities of the given level and their sizes correspond to the metric’s value. All child blobs of the same parent are grouped together to indicate which parents have high accumulated values of the metric. This display was really bewildering to me at first, but once you decode its structure it’s really powerful.
  • The Dependency graph is more intuitive and lets you see call dependencies between code components at different levels. It works well for small numbers of components, but once you start to display larger dependency chains, e.g. between methods, it quickly becomes very difficult to use.
  • The Dependency matrix is also fairly unintuitive at first, but very powerful.
    NDepend dependency matrix
    It contains the so-called Dependency Structure Matrix, which displays dependencies between code elements. Each non-empty cell in the matrix represents a dependency, but the number can have different meanings depending on user selections in the GUI. Green and blue cells indicate one-way dependencies (hover the mouse over a cell to display a small arrow indicating the direction) while black cells represent bidirectional dependencies, which might be a cause for concern. This display can reveal many interesting aspects of usage dependencies and it can also be used to generate dependency graphs for subsets of these dependencies (try the popup menu to see what I mean).
  • Finally, there is the CQL query display, where you can use the NDepend Code Query Language to further analyze your code. By default a set of standard queries is executed, listing potential problems with naming, performance and many other things (see the example just below this list).
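
As a taste of what CQL looks like, here is a sketch of a rule that flags long methods (the 30-line threshold is arbitrary, and the exact set of built-in rules may differ between versions):

// Warn if any method has grown beyond 30 lines of code
WARN IF Count > 0 IN
SELECT METHODS WHERE NbLinesOfCode > 30 ORDER BY NbLinesOfCode DESC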

In addition to the displays described above, an HTML report is also generated after analyzing assemblies. It displays much of the same information as the graphical and interactive displays above, but also contains the “Abstractness vs. Instability” diagram:

NDepend AbstractnessVSInstability diagram

Obviously you want to be in the green area with all your assemblies, but the dimensions used might not be completely clear at first sight: instability is roughly the proportion of an assembly’s dependencies that point outward (how easy the assembly is to change), and abstractness is the ratio of abstract types to all types in the assembly. This seems to be a good article describing it in detail.

So, what are my first impressions? Well, I must say that NDepend really is a very powerful tool for analyzing code but it does require investing some time to use its many features properly. To me, that seems like an investment well worth doing.

/Emil

Working with cultureinfo in PowerShell

As a simple example of working with culture information in PowerShell, here’s how to list the Swedish month names:

(New-Object System.Globalization.CultureInfo("sv-SE")).DateTimeFormat.MonthNames

The result is an array:

januari
februari
mars
april
maj
juni
juli
augusti
september
oktober
november
december
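
If you need the culture object in more than one place, the static GetCultureInfo method returns a cached instance. A small variant sketch:

$sv = [System.Globalization.CultureInfo]::GetCultureInfo("sv-SE")
$sv.DateTimeFormat.GetMonthName(3)   # "mars"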

Try doing that in a BAT file… 🙂

/Emil