Visual Studio slow? Try disabling hardware acceleration.

Ever since I got a new computer I’ve been frustrated by how slow Visual Studio 2013 has been. Granted, the computer is a little weak performance-wise (it’s an Ultrabook), but it’s been slow even at basic tasks such as moving the cursor around. Really strange.

But after a tip from an article on Code Project I decided to disable hardware acceleration (the setting is found under Tools > Options > Environment > General):

Disable hardware acceleration in Visual Studio 2013

And what do you know, it worked! Moving around with the cursor keys in source code is now much more responsive and no longer a source of frustration. Oh joy!

/Emil

Changing a dynamic disk into a basic one

I have just removed a SATA hard disk from my old desktop computer and inserted it into an external case from Deltaco so that it could be used from a laptop. Unfortunately it did not work straight away; no disk showed up in the File Explorer.

Slightly worried, I opened Disk Management and there the disk was shown as “Dynamic” and “Invalid”. More worried, I started googling and found a solution that involved using a hex editor to modify a byte directly on the hard drive to switch it from dynamic to basic. It worked perfectly and the drive now works as expected. I’m not sure what the change means exactly but I’m very happy right now. It felt kind of hard core to use a hex editor to fix a problem like this; that does not happen every day. 🙂

/Emil

Setting the font of a PowerShell console to Lucida Console won’t work

Ever tried changing the font of a PowerShell console to Lucida Console, only to see the setting gone the next time you open the console? In that case, you’re not alone! I’ve been pulling my hair out over this problem many times, but today I decided to investigate it further.

There are several suggested solutions out there, but none of them worked for me. For some people it helps to set the font size to something other than 12 points, but not for me. For others it helps to start the console as administrator, but again not for me. A few of these tricks are discussed at superuser.com. And here’s a strange thing: in a good old Command Prompt (CMD.EXE), Lucida Console works as a default font with no problem at all. It’s only in the PowerShell console that I can’t set it as the default.

My problem turned out to be different, as it was related to the regional settings of my Windows installation. The problem is described very briefly by a Microsoft support engineer here. Apparently Lucida Console “is not properly supported on CJK languages and other languages that are not localized by PowerShell (e.g. Arabic, Hebrew, etc.)”. It seems that Swedish belongs to the same category of languages that for some reason is not deemed compatible with Lucida Console. The strange thing is that it works perfectly when set on an individual console instance…

Anyway, to fix the problem all I had to do was to change my system locale to “English (United States)”:

Setting system locale to a language that is “supported” for Lucida Console solves the problem…

Voilà, my PowerShell prompt is pretty every time I open it, instead of using the ugly “Raster Fonts” that it fell back to before.

The problem description that Lucida Console is not compatible with all languages makes very little sense to me, but at least my problem is solved.

/Emil

Devart LINQ Insight

I was recently approached by a representative from Devart who asked if I wanted to have a look at some of their products, so I decided to try out the LINQ Insight add-in for Visual Studio.

LINQ Insight has two main functions:

  • Profiler for LINQ expressions
  • Design-time LINQ query analyzer and editor

If you work much with LINQ queries you probably know that Visual Studio is somewhat lacking in functionality around LINQ queries by default, so the features that LINQ Insight offers should be pretty welcome for any database developer on the .Net platform (which is quite a few of us these days). Let’s discuss the two main features of LINQ Insight in some more detail.

Profiling LINQ queries

If you’re using Entity Framework (LINQ Insight apparently also supports NHibernate, RavenDB, and a few others but I have not tested any of those) and LINQ, it can be a little difficult to know exactly what database activity occurs during the execution of the application. After all, the main objective of OR mappers is to abstract away the details of the database and instead let the developer focus on the domain model. But when you’re debugging errors or analyzing performance it’s crucial to analyze the database activity as well, and that’s what LINQ Insight’s profiling function helps with.
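
To make that concrete, here is a small example of my own (the context and entity names are hypothetical, not taken from LINQ Insight’s documentation). Nothing in the C# code reveals what SQL Entity Framework will send to the database – the join, the parameters and any lazy-loading round-trips are generated behind the scenes, and that is exactly what the profiler makes visible:

using System.Linq;

// A hypothetical EF context with Orders and Customers sets, just for illustration.
using (var db = new Context())
{
    var recentOrders = db.Orders
        .Where(o => o.Customer.Name.StartsWith("A"))  // becomes a JOIN + WHERE in SQL
        .OrderByDescending(o => o.Id)
        .Take(10)                                     // becomes TOP(10) or similar
        .ToList();                                    // this is when the SQL is actually sent
}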

There are other tools for this, of course, such as IntelliTrace in Visual Studio Ultimate, but since that is only included in Ultimate, not many developers have access to it. The LINQ Insight profiler is very easy to use and gives access to a lot of information.

To enable profiling, follow these steps:

  1. Make sure that the IIS Express process is not running; stop it if it is. (This assumes we’re using IIS Express, of course. I’m not quite sure how to work with the full IIS server in conjunction with LINQ Insight.)
  2. Open the profiling window by selecting View/Other Windows/LINQ Profiler, or pressing Ctrl+W, F
  3. Press the “Start profiler session” button in the upper left corner of the window (it looks like a small “Play” icon)
  4. Start debugging your application, for example by pressing F5.
  5. Debugging information such as this should now start to fill the profiler window:
    The profiler displays all LINQ activity in the application.

    As you can see, in this case we have several different contexts that have executed LINQ queries. For example, ApplicationContext is used by ASP.Net Identity and HistoryContext is used by Code First Database Migrations. Context is our application context.

  6. We can now drill down into the queries and see what triggered them and what SQL statements were executed.
    Drilling down into profiler data.

    We can see the LINQ query that was executed, the SQL statements, duration, call stack, etc. Very useful stuff indeed.

Query debugger and editor

The other feature LINQ Insight brings into Visual Studio is to help writing LINQ queries and debug them. To debug a query, follow these steps:

  1. To open a query in the query editor, just right-click on it in the standard C# code editor window and select the “Run LINQ Query” option:

    To debug or edit a LINQ query, use the right-click menu.

  2. If the query contains one or more parameters, a popup will be shown where values for the parameters can be given.
  3. Next, the query will be executed, and the results will be displayed:

    Query results are displayed in the Results tab.

  4. This is of course useful in itself, and even better is that the generated SQL statements are displayed in the SQL tab and the original LINQ query is shown in the LINQ tab, where it can be edited and re-executed, after which the SQL and Results tabs are updated. Really, really useful!

If an error is displayed in the Results tab, then the most probable reason is that the database connection string could not be found in the project’s config file, or that it could not be interpreted correctly. The latter is the case when using the LocalDB provider with the "|DataDirectory|" placeholder, which can only be evaluated at runtime in an ASP.Net project. To make LINQ Insight find a database MDF file in App_Data in a web project, you can follow these steps:

  1. Make sure that your DbContext sub-class (for Entity Framework, that is) has an overloaded constructor that takes a single string parameter, namely the connection string to use:
    public Context(string connString) : base(connString) {}
    

    This is required if LINQ Insight cannot deduce the connection string from the project’s config file. This is usually a problem in my projects since I like to keep the domain logic in a separate project (normally a class library), apart from my “host application”. (See the sketch after this list for what such a context class might look like.)

  2. Double-click the MDF file in the App_Data folder to make sure it’s present in the Server Explorer panel in Visual Studio.
  3. Select the database in the Server Explorer and right-click it and select Properties. Copy its Connection String property.
  4. In the LINQ Interactive window, click the Edit Connection String button, which is only enabled if the DbContext class has a constructor overload with a connection string parameter, which we ensured in step 1.
  5. Paste the connection string to the Data/ConnectionString field in the panel:
    Use the connection string dialog to override the “guessed” connection string.

    Click OK to close the dialog.

  6. Re-run the query with the Run LINQ Query button in the LINQ Interactive window, and it should now work correctly. If it doesn’t, try the Run LINQ Query command in the C# code editor again, since that re-initializes the query.
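
For reference, here is a minimal sketch of what such a context class (step 1 above) might look like; the entity types are hypothetical and only included to make the example complete:

using System.Collections.Generic;
using System.Data.Entity;

public class Context : DbContext
{
    // Default constructor: Entity Framework looks for a connection string
    // named "Context" in the config file.
    public Context() : base("name=Context") { }

    // Overload used when the connection string is supplied explicitly,
    // for example via LINQ Insight's Edit Connection String dialog.
    public Context(string connString) : base(connString) { }

    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public virtual Customer Customer { get; set; }
}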

The ability to freely set the connection string should make it possible to work against any database, be it a local MDF file, a full SQL Server database or a Windows Azure database. This could be used as a simple way to try out new or modified LINQ queries against a staging or production database, right from the development environment. Could be very useful in some situations for debugging nasty errors and such.

Summary

All in all, I think LINQ Insight is a very useful tool and I recommend you try it out if you find yourself writing LINQ queries from time to time.

If you have tried LINQ Insight before and found it to be slightly unstable, I should mention that Devart has recently fixed a few bugs that make the tool much more robust and useful. If unsure, just download the trial version and test it out.

Happy Linqing!

Emil

Grouping by feature in ASP.Net MVC

Motivation

When I initially switched from using ASP.Net Web Forms to MVC as my standard technology for building web sites on the Microsoft stack I really liked the clean separation of the models, views and controllers. No more messy code-behind files with scattered business logic all over the place!

After a while though, I started to get kind of frustrated when editing the different parts of the code. I often find that a change in one type of file (e.g. the model) tends to result in corresponding changes in other related files (the view or the controller), and for any reasonably large project you’ll start to spend considerable time in the Solution Explorer trying to find the files that are affected by your modifications. It turns out that the default project structure of ASP.Net MVC actually does a pretty poor job of limiting change propagation between the files used by a certain feature. It’s still much better than Web Forms, but it’s by no means perfect.

The problem is that the default project structure separates the files by file type first and then by controller. This image shows the default project structure for ASP.Net MVC:

Default ASP.Net MVC project structure

The interesting folders are Controllers, Models (which really should be called ViewModels) and Views. A given feature, for example the Index action on the Account controller, is implemented using files in all of these folders and potentially also one or more files in the Scripts folder, if Javascript is used (which is very likely these days). Even if you need to make only a simple change, such as adding a property to the model and showing it in a view, you’ll probably need to edit files in several of these directories, which is fiddly since they are completely separated in the folder structure.

Wouldn’t it be much better to group files by feature instead? Yes, of course it would, when you think about it. Fortunately it’s quite easy to reconfigure ASP.Net MVC a bit to accomplish this. This is the goal:

Grouping by feature

Instead of the Controllers, Models and Views folders, we now have a Features folder. Each controller now has a separate sub-folder and each action method has a sub-folder of its own:

  • Features/
    • Controller1/
      • Action 1/
      • Action 2/
    • Controller2/
      • Action 1/
      • Action 2/

Each action folder contains the files needed for its implementation, such as the view, the view model and any specific Javascript files. The controller is stored one level up, in the controller folder.

What we accomplish by doing this is the following:

  1. All the files that are likely to be affected by a modification are stored together and are therefore much easier to find.
  2. It’s also easier to get an overview of the implementation of a feature and this in turn makes it easier to understand and work with the code base.
  3. It’s much easier to delete a feature and all related files. All that has to be done is to delete the action folder for the feature, and the corresponding controller action.

Implementation

After that rather long motivation, I’ll now show how to implement the group by feature structure. Luckily, it’s not very hard once you know the trick. This is what has to be done:

  1. Create a new folder called Features, and sub-folders for the controllers.
  2. Move the controller classes into their respective sub-folders. They don’t need to be changed in any way for this to work (although it might be nice to adjust their namespaces to reflect their new locations). It turns out that MVC does not assume that the controllers are located in any specific folder; they can be placed anywhere we like.
  3. Create sub-folders for each controller action and move their view files there. Rename the view files to View.cshtml as there’s no need to reflect the action name in the file name anymore.

If you try to run your application at this point, you’ll get an error saying that the view cannot be found:

The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Home/Index.aspx
~/Views/Home/Index.ascx
~/Views/Shared/Index.aspx
~/Views/Shared/Index.ascx
~/Views/Home/Index.cshtml
~/Views/Home/Index.vbhtml
~/Views/Shared/Index.cshtml
~/Views/Shared/Index.vbhtml

This is exactly what we’d expect, as we just moved the files.

What we need to do is to tell MVC to look for the view files in their new locations and this can be accomplished by creating a custom view engine. That sounds much harder than it is, since we can simply inherit the standard Razor view engine and override its folder setup:

using System.Web.Mvc;

namespace RetailHouse.ePos.Web.Utils
{
    /// <summary>
    /// Modified from the suggestion at
    /// http://timgthomas.com/2013/10/feature-folders-in-asp-net-mvc/
    /// </summary>
    public class FeatureViewLocationRazorViewEngine : RazorViewEngine
    {
        public FeatureViewLocationRazorViewEngine()
        {
            var featureFolderViewLocationFormats = new[]
            {
                // {1} is the controller name and {0} is the view name.
                // Feature folder locations are searched first:
                "~/Features/{1}/{0}/View.cshtml",
                "~/Features/{1}/Shared/{0}.cshtml",
                "~/Features/Shared/{0}.cshtml",
                // Standard locations, kept for backward compatibility:
                "~/Views/{1}/{0}.cshtml",
                "~/Views/Shared/{0}.cshtml"
            };

            ViewLocationFormats = featureFolderViewLocationFormats;
            MasterLocationFormats = featureFolderViewLocationFormats;
            PartialViewLocationFormats = featureFolderViewLocationFormats;
        }
    }
}

The above creates a view engine that searches the following folders in order (assuming the Url is /Foo/Index):

  1. ~/Features/Foo/Index/View.cshtml
  2. ~/Features/Foo/Shared/Index.cshtml
  3. ~/Features/Shared/Index.cshtml
  4. ~/Views/Foo/Index.cshtml
  5. ~/Views/Shared/Index.cshtml

The last two are just used for backward compatibility so that it isn’t necessary to refactor all controllers at once.

To use the new view engine, do the following on application startup:

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());
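
In a typical ASP.Net MVC project these two lines go in Application_Start in Global.asax.cs. A sketch of what that might look like (the other registration calls depend on what your project template generated):

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using RetailHouse.ePos.Web.Utils;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);

        // Swap out the default view engines for the feature folder aware one
        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new FeatureViewLocationRazorViewEngine());
    }
}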

The view will now load and the application will work as before, but with better structure.

The last step is to move view models and custom Javascript files into the action folders as well (note that the latter requires adjusting the paths in the HTML code that includes the Javascript files to reflect the new locations).

Once everything is up and running, the project becomes much easier to work with and when you get used to working like this you really start to wonder why Microsoft is not doing it by default in their Visual Studio templates. Maybe in a future version?

/Emil

Updates
2014-11-21 Updated the images to clarify the concept

Overriding the placement of validation error messages in knockout-validation

By default, knockout-validation places error messages just after the input element containing the error. This may not be what you want. You can create UI elements of your own and bind them to the validation error messages, but that means writing a lot of boilerplate markup and it messes up the HTML. An alternative is to override the knockout-validation function that creates the error messages on the page. For example:

ko.validation.insertValidationMessage = function(element) {
    var span = document.createElement('SPAN');
    span.className = "myErrorClass";

    if ($(element).hasClass("error-before"))
        element.parentNode.insertBefore(span, element);
    else
        element.parentNode.insertBefore(span, element.nextSibling);

    return span;
};  

In this example I check whether the input element contains the custom class “error-before”, and if so, I move the error message to before rather than after the input field.

Another example, suitable for Bootstrap when using input groups:

ko.validation.insertValidationMessage = function(element) {
    var span = document.createElement('SPAN');
    span.className = "myErrorClass";

    var inputGroups = $(element).closest(".input-group");
    if (inputGroups.length > 0) {
        // We're in an input-group so we place the message after
        // the group rather than inside it in order to not break the design
        $(span).insertAfter(inputGroups);
    } else {
        // The default in knockout-validation
        element.parentNode.insertBefore(span, element.nextSibling);
    }
    return span;
};

One disadvantage of this approach is that we have no access to the knockout-validation options such as “errorMessageClass”, so we have to hard-code that value here (“myErrorClass” in the examples above).

/Emil

Connection strings used for logging should not enlist in transactions

I was just reminded of an old truth when it comes to logging errors in a database: Do not enlist the error logging operation in the current transaction, if there is one! The default is to do just that, so be sure to add Enlist=false in your connection string used for logging:

Example:

  <connectionStrings>
    <add name="myDb" connectionString="..."  providerName="System.Data.SqlClient"/>
    <add name="logging" connectionString="...;Enlist=false"  providerName="System.Data.SqlClient"/>
  </connectionStrings>

If you don’t, then the error logging will be rolled back with the transaction in case of an error. Not so good…

Also note that it’s a good idea to use separate connection strings for the normal database operations (which should be done inside transactions) and the logging operations (that shouldn’t).
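
To make the difference concrete, here is a minimal sketch of my own (the ErrorLog table and the method names are made up) where the normal database work runs inside a TransactionScope while the logging connection, thanks to Enlist=false, stays out of the transaction and survives the rollback:

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

public static class OrderService
{
    public static void PlaceOrder()
    {
        using (var scope = new TransactionScope())
        {
            try
            {
                using (var conn = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["myDb"].ConnectionString))
                {
                    conn.Open(); // enlists in the ambient transaction
                    // ... normal database work that may throw ...
                }
                scope.Complete();
            }
            catch (Exception ex)
            {
                // The ambient transaction is still active here and will be
                // rolled back, but the log row below is kept because the
                // "logging" connection string has Enlist=false.
                LogError(ex);
                throw;
            }
        }
    }

    private static void LogError(Exception ex)
    {
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["logging"].ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO ErrorLog (Message) VALUES (@msg)", conn)) // hypothetical table
        {
            cmd.Parameters.AddWithValue("@msg", ex.ToString());
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}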

Now that I have written this down I will hopefully remember this the next time I do error logging in a database…

/Emil

AutoHotkey – the essential tool for all “automate-it-all” practitioners

AutoHotkey logo

Introduction

This post is about a tool I always feel very crippled without, namely the incredibly useful AutoHotkey. I always install this onto new machines and I rely on it heavily every day. The strange thing is that rather few people I meet know about it so I thought I’d remedy that ignorance somewhat by posting an overview here on my blog.

So what is it? AutoHotkey allows you to map keypresses to custom macros.

This key press mapping works in any application by default and the macros that are possible are very flexible.

Some examples

The most common use is probably to start applications for a given key combination. Here’s one I use:

!#n::Run Notepad++

This starts Notepad++ when I press Win+Alt+N (in AutoHotkey syntax, # is the Windows key and ! is Alt).

You can also write a little more complex macro code (the macro language is a custom one for AutoHotkey, here’s some documentation)

^!Enter::
	FormatTime, CurrentDateTime,, yyyy-MM-dd
	EnvGet, UserName, UserName
	SendInput /%CurrentDateTime% %UserName%
	return

This inserts a timestamp with my username when I press Ctrl-Alt-Enter.
Example: /2013-11-02 emila

Note that this works in any application since AutoHotkey sends standard Windows messages simulating keypresses for the text. So there’s no problem to use this macro in Notepad, for example.

To take this one step further, the trigger does not have to be based on modifier keys such as Ctrl, Alt etc; it can also be a sequence of ordinary characters. Here’s one example of this:

; Today
::tdy::
	FormatTime, CurrentDateTime,, yyyy-MM-dd
	SendInput %CurrentDateTime%
	return

This creates a macro that inserts the current date every time I type the sequence “tdy” followed by a word delimiter (space, tab, dot, comma, etc). This simple macro is probably the one I use the most; it’s incredibly useful to have easy access to the current date when taking notes, creating folders, and so on.

I also have a few code snippets I use a lot when programming:

::isne::String.IsNullOrEmpty
::isnw::String.IsNullOrWhiteSpace
::sfm::String.Format 

This way I never feel tempted to write string comparisons such as if (s == null) { ... }, since it’s just as easy to write if (String.IsNullOrEmpty(s)) { ... } using my snippet. And this kind of snippet works even in Notepad 🙂

This ability to replace character sequences is also very useful for correcting my common spelling errors:

::coh::och
::elelr::eller
::perosn::person
::teh::the

I try to spot common operations that I perform often and that can be automated, and then write an AutoHotkey macro for them. One example is that I often have to enter a valid Swedish social security number (personnummer) when testing applications I write. This can be a pain since the number has to end with a correct checksum digit, so I wrote a simple web service that creates a random SSN and returns it. This service can be called from AutoHotkey like this:

; Call a web service to generate a personnummer (Swedish SSN)
::pnr::
	EnvGet tmpDir,TEMP
	UrlDownloadToFile http://kreverautils.azurewebsites.net/api/testdata/personnummer,%tmpDir%\random.pnr.txt
	FileRead, pnrRaw, %tmpDir%\random.pnr.txt
	StringReplace, pnrClean, pnrRaw, ", , All
	SendInput %pnrClean%
	return

This is really convenient: I just type “pnr” and it’s replaced with a valid SSN. It really lowers the mental barrier when testing applications where this data is required, resulting in better testing. (Mental barriers when testing applications are very interesting and perhaps worth a separate blog post some time…)

Summing it up

The above examples only scratch the surface of what you can do with AutoHotkey, so why not give it a try? It’s free, so it’s just a matter of downloading it and starting to experiment. Download it from its home page.

Final tip

I like to have all my computers share the same AutoHotkey setup, so I have created my main macro file (general.ahk) in a Dropbox folder. I also create a shortcut in my startup folder (find it by “running” shell:startup using Win + R) with this target string:

C:\Users\emila\Dropbox\Utils\AutoHotKey\general.ahk

Since AutoHotkey associates itself with the “.ahk” file extension, this is enough to start the script on startup. Any change I make to the macro file is automatically propagated to all my computers.

Good luck with your automations!

/Emil

Tablet platform overview – iPad, Android and Windows 8

This post is an opinionated discussion of the advantages and disadvantages of the different tablet platforms out there. It’s not the kind of post I normally write, but for once I feel the need to express my personal opinion about a subject rather than solve a particular programming problem. The reason is the continuing bashing of Windows 8 tablets (especially the RT variant) that occurs online and in printed media. As it happens I have personal experience of Android tablets, the iPad and both Windows RT and Windows 8 tablets, so I’m in a good position to compare these systems to each other.

Much of this post is obviously subjective opinions but hopefully one or two of you will learn something new about these platforms.

Android

Acer Iconia Tab A500
My first tablet was an Acer Iconia A500 that I bought a few years ago. I was really happy with this tablet from the beginning and I chose it over an iPad for several reasons:

  • I could configure it and tweak it so that it worked exactly the way I wanted
  • Widgets were very useful
  • There were plenty of apps, even though not as many as for the iPad. There are several kinds of Android apps that simply do not exist for iPad, such as custom keyboards (the one I like best is the amazing SwiftKey) or utilities such as the really cool Tasker app that triggers actions on events such as geographical location, currently running app, etc.
  • It had a full-size USB port enabling it to read files directly from a USB memory stick or an external hard drive
  • It had a microSD card slot making it easy to extend the built-in memory
  • It was cheaper than the iPad

My Android home screen, configured the way I want it with folders and a calendar widget.


All these are valid reasons and the tablet did indeed live up to my expectations in these areas. It’s pretty cool to be able to connect a PlayStation 3 hand controller and play games on it, for example.

There are downsides to this platform of course, as there always are. The reason I don’t use this tablet very much these days comes down to one thing – it has a really laggy user interface. The GUI wasn’t very smooth even when it was new; there was always some “stuttering” in animated transitions. It was disturbing but I could live with it. The problem is that it got worse over time. I’m not sure about the reason for this, but I have experienced the exact same thing on all the Android devices I have owned (the HTC Desire and even the Samsung Galaxy S3 have the same problem). The reason might be that the accumulated amount of installed apps requires more and more resources, but it’s not very evident what’s happening when the GUI locks up for a second or two and there is no apparent activity. CPU monitors don’t indicate extensive CPU usage either, so I’m not sure what the problem is.

The standard web browser is not so good either. It displays most pages correctly but is rather slow. Chrome is the new standard browser and it’s even worse in that respect: it takes ages to download enough of a page to start displaying it, and when you scroll down a page it’s incredibly laggy while downloading new content coming into view. Other browsers such as Dolphin do a much better job, making web browsing bearable.

For these reasons I was tempted to buy an iPad, and when I found an iPad Mini on sale I took the plunge.

iPad

iPad Mini

What immediately strikes you when you buy an “iDevice” is that it feels really classy, from the box design and packing to the actual design of the device. Apple products have a reputation for “just working” without hassle and I think that the philosophy of good looking devices that you simply turn on and start using is a really neat one.

You can’t configure an iPad in particularly many ways, but that was fine by me. This time around I was tired of having to troubleshoot performance problems and the like. I bought the iPad to use on the commuter train and I wanted it to be easy to handle, quick to start up and to have stable apps. I have mostly used it for watching video courses using the PluralSight app, watching movies or playing the odd game. The iPad does these things really well. And there are never any unexplained delays in the user interface! The reason for this is likely the iPad’s simpler architecture and APIs; it doesn’t, for example, allow background processes as freely as Android, but for me the user experience easily outweighs the limitations.

This is where you see the magazines you subscribe to.

There are many magazines available…


The other thing that Apple has really been successful at is integrating a lot of apps and content in the App Store. I was aware of the large number of very polished and stable apps that were available (and I wasn’t disappointed), but I did not know about the excellent integration of magazines and podcasts. Essentially all you need to do to get easy access to this content is to select the ones you’re interested in, pay, and you’re done. No other platform makes it this easy; you often have to install a special app, Google for feed URLs, etc, but for iOS devices all this is already prepared for you.

The worst problem with the iPad, in my opinion, is its reliance on iTunes for importing media to the device. This is incredibly limiting as it supports very few video formats, and if you try to import non-supported files you don’t even get an error message – it just fails silently. It took me a while to understand what the problem was. The built-in video player is good for supported files and it supports subtitles in m4v files, for example. But if you need to play files on your home network you need a more powerful media player.

The best media player I have found is nPlayer. It supports many formats and can play files over the network (from file shares and DLNA servers, for example). Recently I also learned that it actually supports copying non-iTunes-compatible video files directly to the iPad by using the Apps tab of iTunes when my iPad is connected to the computer. That removes the need to convert all files to specific formats, as nPlayer supports most file types. The files are only available to nPlayer, not to any other app, though. It’s a little clumsy, but it works.

Windows 8

Surface Pro

Surface RT

In spite of the video annoyances I was rather happy with the iPad, but when I attended a Microsoft conference (TechEd Europe, to be precise) there was an offer to purchase the Microsoft Surface RT and Surface Pro tablets rather cheaply. Being a technology buff, I couldn’t resist the temptation, so I got both of them, mostly out of curiosity. I have been using Windows 8 on my home computer for some time and haven’t really seen much point in this Windows version for a desktop computer, but I thought it would be nice to try it on devices that have been designed from the start to work well with touch and the “Modern” UI.

The Surface Pro is a real PC with an Intel Core i5 CPU and Windows 8 Pro. You can install all Modern UI apps and any old Windows program on it and it just works. I have even tried installing Oblivion, a 3D role-playing game a few years old, and it played really well. The unit is a little heavier and thicker than a normal tablet, but that can be forgiven when you consider that it’s actually a full PC.

The device comes with a pressure sensitive pen that feels very natural and is more useful than I expected it to be, both for taking notes and drawing in OneNote and for operating the desktop UI, which sports rather tiny UI items because of the full HD screen.

The Surface RT is in many ways the opposite of the Surface Pro. It has a really neat design, and it’s thin and light. It uses much weaker ARM-based hardware than the Surface Pro and it’s a wide-screen device, which makes it much better suited for watching movies than the iPad. The battery life is much better than the Surface Pro’s.

The main objections people seem to have about the Surface RT are

  1. There are too few good apps in the Windows Store
  2. You can’t run desktop apps on it

The first point is absolutely valid; the really good and useful apps in the Windows Store are few, and the ones that are there don’t seem to get much attention from their developers. Wordfeud, for example, is several versions behind its iOS and Android siblings.

The second point is not so much of a problem, since running desktop apps does indeed require more powerful hardware and costs battery life. If you really need that, go with the Surface Pro, otherwise get a Surface RT. The criticism is also unfair when comparing the Surface RT to Android tablets and iPads – they can’t run Windows desktop applications either! And besides, Microsoft Office is included with the Surface RT and it works really well. I haven’t had any problems with opening documents or spreadsheets.

Actually, apart from the meagre app supply I can’t really see many problems with the Surface RT. What surprised me was that the desktop mode is actually available and it looks almost identical to a “real” desktop computer. There is the normal File Explorer and it supports network shares just as well as a normal laptop/desktop. This is a really powerful feature no other tablet OS can match. It just works! Also, Modern UI apps automatically support file shares and the like, since that support is built into the OS. On other platforms, each app has to rely on a third-party library to cope with this and it seldom works robustly.

Movie file format support is limited in the standard player, so I use PressPlay Video instead, which is great. Moving files from other computers, memory sticks or the Internet is done in the same way as on a “normal” computer, using the File Explorer or saving directly from the web browser.

Being a Windows computer, my Surface RT tablet automatically discovered my Canon network printer and installed the correct printer driver. I can now print web pages from Internet Explorer or documents from Office and it didn’t take any configuration at all. “It just works.” 🙂 No other platform can do this as far as I know.

Windows 8.1: Clicking a link in a mail now displays the web browser side by side with the mail app

So I’m actually rather happy with my Surface tablets, and after I installed the Windows 8.1 Preview on my Surface RT it’s even better. There are numerous small improvements, such as much better side-by-side running of apps (see the image on the right). Again, no other tablet can do this (Samsung has something similar for their tablets, but it’s a more manual process).

I have used both my Surface tablets more than I expected. On the Surface Pro, Netflix in the Chrome browser with the MediaHint add-on has been awesome for watching US Netflix content. I have also started to appreciate Microsoft Office OneNote (I think the pen made it feel very natural to use that application). The Surface RT has also been used a lot, mainly for web browsing and watching movies (from my NAS or the standard Netflix Windows 8 app). Windows 8.1 makes the Windows 8 experience much more streamlined and I think it’s a shame that Windows 8 wasn’t like that from the first version.

Summary

Of the three kinds of tablets I own, the Android is used the least because of the UI lag. Google promises that Android 4.3 addresses this problem, but I’ll believe that when I see it. It’s also not likely to be an update that Acer will ship for an older tablet like mine.

The iPad is actively used in exactly the ways I expected, mainly when commuting and also by my kids as it has a bunch of great games that they like.

My Windows 8 tablets are also actively used and I like them both more than I thought I would. They have more features that are usable to me than both the Android tablet and my iPad.

The main problem with Windows 8 is, as everybody already knows, the lack of useful and robust apps. There are apps like that, but they’re too few. I don’t think that’s going to change either; iOS and Android have gained so much momentum that it’s too late for Microsoft to catch up. Microsoft has one unique, absolute killer feature however: the Surface Pro and its brethren can run Windows desktop applications. When that kind of tablet becomes smaller and gets better battery life, I think much will happen in the tablet market. My Chrome + Netflix + MediaHint use case is an example of the power that this platform brings to the tablet format.

To sum it up, I think the Windows RT and Windows 8 Pro tablets have received too much and exaggerated criticism so far. They’re not perfect, but then the competition is not fault-free either. The problem is not that Microsoft and its allies deliver bad products; it’s that they’re so late that the other app markets have had time to grow too large.

Just my two cents…

/Emil

TechEd Europe 2013 – Friday

The last day of my TechEd was cut in half because of an early flight home (I had a wedding to attend on Saturday), so this post will be rather short on the sessions of the day, and then I’ll conclude with a summary of my conference experience.

I attended three sessions, the first of which was about 3D DirectX development with Visual Studio 2012. That’s a rather unusual subject for TechEd but it was interesting for me as a general developer; I’d like to see more of this kind in future conferences. The second was a session about what’s new in Entity Framework 6, and that was also good. It was actually a repeat of a session from earlier in the week, since the first one was full.

Mark Russinovich

The last session I attended at this year’s TechEd was the annual edition of Mark Russinovich’s “Case of the Unexplained: Windows Troubleshooting” series. These sessions are truly amazing and if you ever get a chance to attend one of them I can warmly recommend it. Mark is one of the best speakers I’ve seen in our industry (together with Scott Hanselman), entertaining and incredibly knowledgeable. In short, he demonstrates real-world use of the Sysinternals utilities (of which he is the creator) to find and solve Windows problems such as crashes, performance problems, hangs, etc. As Windows users we all experience these kinds of problems, so the sessions are useful both for getting insights into problems that can occur and for picking up tips on how to use the Sysinternals tools. I’ve attended a few of these sessions before and they’re always very useful. This was the best session of the conference, so it was a great way to end this year’s TechEd for me.

With that, it’s time to wrap up TechEd Europe 2013 with some thoughts about the experience. This year there were about 5,000 delegates according to officials, which can be compared to the 10,000 of TechEd North America. It’s a much smaller conference than TechEd NA, but there was still an abundance of sessions to attend, generally about 15–20 in parallel for each time slot. That’s a lot indeed, but if you look in more detail at the contents of these sessions you find that most were targeted at the IT pro community. There were also one or two Business Intelligence sessions for each time slot, leaving perhaps room for 2–4 developer sessions. Most of these remaining developer sessions fell into two categories:

  • Windows Azure technologies
  • Microsoft SaaS offerings such as Team Foundation Services, BizTalk Services, and the like

This makes for a very heavy cloud focus on the developer parts of the conference. There wasn’t much about Windows 8 development, ASP.Net web development, Javascript (which is otherwise mentioned everywhere in the industry these days), etc. There also wasn’t much on software architecture or best practices (except a little about Azure, of course). To me, being an architect and developer, this makes TechEd a less relevant conference than I’d hoped it would be. There was some consensus (at least among some of us delegates) that the IT pro side of things has been given more priority at TechEd than before. I don’t know if this is deliberate from Microsoft or if it just turned out this way, but in either case I will probably choose another conference the next time I get the chance to attend one.

This year the Build conference took place at the same time as TechEd Europe. Build is very focused on the developer and it’s starting to feel like a better match for my work role than TechEd. It seems very strange to schedule two Microsoft conferences of this scale at the same time, but I suspect there are explanations we aren’t informed of. Earlier editions of Build have taken place in the autumn, but I think Microsoft wanted to announce Windows 8.1 before the summer in order not to lose more ground than they already have, and therefore needed to squeeze in Build before the summer. Build was also announced much, much later than TechEd, indicating some urgency on Microsoft’s part.

The problem for me as a TechEd delegate is that it definitely made TechEd feel a bit like a second-class conference for developers, as I mentioned in one of my earlier posts. That’s not a feeling you want your delegates to have if you’re organizing world-class conferences…

However, I attended some quite useful sessions and it was interesting to see what’s happening with Windows Azure and other online services. TechEd is still a useful conference for developers, especially if you have some interest in the IT pro side of things, and for me, having some architectural responsibilities and an interest in systems integration, the conference scope would be quite a good match if only there were a little more general developer content.

That about sums it up, I guess. This is the final post in this mini-series of TechEd Europe 2013 reports. Thank you for reading.

/Emil