16 December 2010

F#, xUnit theories and InlineData

How do you know you haven’t blogged much lately? When you want to write a blog post and you forgot the name of the thing you’re actually writing it with. “Wait, it’s called Window Live something something…what did that icon look like again?”. Also, Windows Live Writer now has a ribbon! Good stuff.

Anyway, just a short heads-up to save people some time: in the xUnit extensions there is a Theory attribute which, in combination with the InlineData attribute, lets you write a parameterized xUnit test. The InlineData attribute specifies the values the test should run with (you can’t test everything randomly with FsCheck, you know). That and other xUnit extension goodies are explained here.

My point is – this attribute does not work in F# (or managed C++), because the AttributeUsage attribute on the InlineData attribute is defined on its parent class, DataAttribute. It is correctly defined with Inherited = true, but only the C# compiler seems to honour this. There’s a more detailed explanation in this stackoverflow post. If you care about this, please vote for the bug on the xUnit site!

Luckily, the workaround is not too bad – just use the PropertyData attribute instead:

open Xunit
open Xunit.Extensions

let symbolTestData = 
    [ "an1 rest",           "an1"
      "?test bla",          "?test"
      "?+_est bla",         "?+_est"
      "+_123 bla",          "+_123"
      "+._1.2.3 bla",       "+._1.2.3"
      "+_q1r-2g3! bla",     "+_q1r-2g3!"
      "abc.def.feg/q bla",  "abc.def.feg/q"
      "ab/cd? bla",         "ab/cd?" ]
    // xUnit wants a sequence of (boxed) argument arrays
    |> Seq.map (fun (a,b) -> [| box a; box b |])

[<Theory>]
[<PropertyData("symbolTestData")>]
let ``should parse symbol``(toParse:string, result:string) =
    Assert.Equal(result, run symbol toParse)

Note that xUnit actually expects a sequence of arrays, but I think the list of tuples looks better, at the small cost of an extra conversion step.


10 July 2010

F# and Code Contracts: not quite there yet

In this post I’ll show how you can get Code Contracts sort of working for F#. From my very initial explorations, I would conclude that they seem basically usable for F# programming – but you’ll need some glue and tape, and not everything works as you’d expect.

Code Contracts: the short story

Code contracts are a way to express pre- and postconditions and invariants for (theoretically) any CLR based programming language. This is a Very Good Thing, as such contracts are a great way to specify the expected behaviour of your program (i.e. your intent).

Pragmatically, contracts are usually compiled into a debug build of a program, and checked at runtime. That’s how Eiffel does it, where this whole contract thing got started. So every time your program runs, it’s actually being tested in a very deep way. That’s why a tool like Eiffel’s ARTOO, which basically randomly constructs objects and calls method sequences on them, works well: you can find bugs by just executing your program randomly! Of course contracts are also good for documentation: no more looking in the code to find out what happens when you pass in 0 for a particular argument – the pre- and postcondition should tell you.

In the .NET world Code Contracts delivers this kind of functionality by means of a rewriter. That is, you write your actual contracts in a contract language using the System.Diagnostics.Contracts namespace in .NET 4.0, but the methods you’re calling do not actually do much: instead the rewriter runs after the compilation phase on the resulting assembly, recognizes the calls and writes the actual contract checking code in the assembly. This is necessary for example because it’s very useful in a contract to be able to refer to the old value of a parameter, or to the result of the method. Without a rewriter, you’d basically just have something like Debug.Assert, which is not very expressive.
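In F#, such a postcondition might look like this (my own minimal sketch, not from the Code Contracts docs; without the rewriter the Contract calls are simply stripped out):

```fsharp
open System.Diagnostics.Contracts

type Counter() =
    let mutable count = 0
    member x.Increment() : int =
        // Contract.Result and Contract.OldValue are only given meaning by
        // ccrewrite, which moves this check to the end of the method.
        Contract.Ensures( Contract.Result<int>() = Contract.OldValue(count) + 1 )
        count <- count + 1
        count
```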

Microsoft Research has also made a static checker for contracts, which tries to statically determine whether your contracts hold, and if so, is able to feed that information back to the rewriter so that statically checked contracts do not need to be checked at runtime. Personally I don’t think that is useful because it seems very imprecise. Time will tell – happy to be proven wrong.

Since the rewriter works on the assemblies, you can just use the Contracts API from any .NET language, at least in theory. In practice, different compilers tend to output different representations of types or whatever construct they offer, so it’s not unthinkable that the rewriter gets confused. In fact, in the case of F# it gets confused when you try to write contracts in constructors, as we’ll see.

So while nice in theory, this turns out to be another case of “It works for all .NET languages, as long as they’re exactly like C# or VB.” At least until the contracts folks get round to actually supporting F#, I guess.

The nuts and bolts

‘Nuff complaining. Let’s see how we get code contracts to work with F#. When you install the code contracts add-on, in C# you get a nice extra tab in your project properties. Not so in F# – but that’s the easy part. The whole rewriter thing is actually nicely abstracted away in a custom targets file that the Code Contracts folks have kindly provided, so we just need some manual editing of the F# project file to get it to work. Here’s an example:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <CodeContractsAssemblyMode>1</CodeContractsAssemblyMode>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DefineConstants>DEBUG;CODECONTRACTS_FULL</DefineConstants>
    <CodeContractsEnableRuntimeChecking>true</CodeContractsEnableRuntimeChecking>
    <CodeContractsCustomRewriterAssembly />
    <CodeContractsCustomRewriterClass />
    <CodeContractsLibPaths />
    <CodeContractsExtraRewriteOptions />
    <CodeContractsExtraAnalysisOptions />
    <CodeContractsBaseLineFile />
  </PropertyGroup>
  <ItemGroup>
    <Reference Include="mscorlib" />
    <Reference Include="FSharp.Core" />
    <Reference Include="System" />
    <Reference Include="System.Core" />
    <Reference Include="System.Numerics" />
  </ItemGroup>
  <ItemGroup>
    <Compile Include="Module1.fs" />
    <None Include="Script.fsx" />
  </ItemGroup>
  <Import Project="$(MSBuildExtensionsPath32)\FSharp\1.0\Microsoft.FSharp.Targets" Condition="!Exists('$(MSBuildBinPath)\Microsoft.Build.Tasks.v4.0.dll')" />
  <Import Project="$(MSBuildExtensionsPath32)\..\Microsoft F#\v4.0\Microsoft.FSharp.Targets" Condition=" Exists('$(MSBuildBinPath)\Microsoft.Build.Tasks.v4.0.dll')" />
</Project>

The changes from a vanilla F# project file are the CodeContracts* properties and the CODECONTRACTS_FULL constant. I won’t go over all of them – you can experiment for yourself by playing with the UI in a C# project and checking what the effect on the project file is. The three important bits are:

  • CodeContractsAssemblyMode: per assembly, Code Contracts lets you choose either compatibility mode (0) or standard mode (1). For new assemblies, 1 is what you want.
  • CODECONTRACTS_FULL compile constant: if you don’t define this, the rewriter won’t write the contract checking methods into your assembly. So define this in your Debug configuration (or in a special Contracts configuration).
  • CodeContractsEnableRuntimeChecking: otherwise the rewriter doesn’t kick in. Not sure why you’d need both this and the constant, but there you go.

Add this to your F# project and you should see the rewriter kick in after the build – it’s called ccrewrite.exe.

Let’s write some contracts

That was the easy part. Now, let’s try to convert the program given in the Code Contracts documentation to F#:

open System.Diagnostics.Contracts

type Rational(numerator,denominator) =
    do Contract.Requires( denominator <> 0 )
    [<ContractInvariantMethod>]
    let ObjectInvariant() = 
        Contract.Invariant ( denominator <> 0 )
    member x.Denominator =
        Contract.Ensures( Contract.Result<int>() <> 0 )
        denominator

So this is a simple rational type. It shows you basically how contracts work – you call methods on the Contract static class:

  • Contract.Requires to impose a precondition on arguments – in this case an argument of the constructor.
  • Contract.Ensures to express a postcondition – in this case a postcondition on the result of a getter, which is nicely accessible using Contract.Result.
  • ContractInvariantMethodAttribute and Contract.Invariant to impose an invariant on a class – something that should hold after every execution of a method.

So far so good. Alas, when we try to build this, we get:

warning CC1041: Invariant method must be private
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1038: Member 'Module1+Rational.denominator' has less visibility than the enclosing method 'Module1+Rational.#ctor(System.Object,System.Int32)'.
error CC1069: Detected expression statement evaluated for potential side-effect in contracts of method 'Module1+Rational.#ctor(System.Object,System.Int32)'. (Did you mean to put the expression into a Requires, Ensures, or Invariant call?)
error CC1004: Malformed contract. Found Requires after assignment in method 'Module1+Rational.#ctor(System.Object,System.Int32)'.


And it turns out that the rewriter is confused by F#’s representation of constructors. When we comment out the Contract.Requires in the constructor, everything works fine.

The warning about the invariant method is because F# actually compiles private members as internal. The rewriter flags this with the warning above; it’s otherwise not much to worry about, I guess, though slightly annoying if you’re – like me – fairly obsessive about getting your code to compile without warnings.


I’ve also checked code contracts with various pre- and postconditions on functions, and that seems to work fine. So overall it’s not a bad story, but it would be nice to see some better integration and the bug(s) fixed.

I haven’t checked other features of code contracts, like contract inheritance or expressing code contracts on interfaces, which needs a few tricks as well, so there is a chance that you’ll run into problems there.

Finally, a tip: if you don’t like the C#-ish feel of the Contracts API, you can define your own inline functions that call into the actual API – given that they’re inlined, the rewriter will not see the difference.
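For example (hypothetical wrapper names of my own; because they are inline, the rewriter still sees the underlying Contract.* calls at each call site):

```fsharp
open System.Diagnostics.Contracts

let inline requires (condition:bool) = Contract.Requires condition
let inline ensures  (condition:bool) = Contract.Ensures condition

// Usage sketch: reads more like F# than Contract.Requires(...)
let divide x y =
    requires (y <> 0)
    x / y
```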

So close, but not quite there yet. I asked a year ago if the situation was going to improve, and again a few weeks back, but it’s basically unchanged since then. So F#-ers: head over to the Code Contracts forum and ask for F# support! I’m feeling a bit alone in the desert there at the moment…



09 June 2010

Thinking outside the Visual Studio box

Visual Studio and .NET development go hand in hand. As far as I know, every .NET developer uses it. Compared to the Java IDE landscape, there isn’t even competition. And what little competition there is, is pretty much a carbon copy of the approach Visual Studio takes to development. Visual Studio is inescapable, and there is little on the horizon that will change this situation.

What toothpicks and Visual Studio have in common

I did some Java development in Eclipse over 3 years ago, and as I remember it, it worked better than Visual Studio 2010. Let me repeat that:

Eclipse 2007 (for Java) is a better IDE than Visual Studio 2010 (for C#).

I don’t want to go into a feature-by-feature comparison. No doubt Visual Studio would win. I’m saying that for actual development – you know, the thing you’re doing 95% of the time in an IDE – it’s just better. I believe these activities are compiling, testing, refactoring, browsing code and source control.

Compiling. The most important feature of compilation is that you don’t see it. The incremental compilation in Eclipse is just way better. Even for medium sized projects, press Ctrl+S and your code is ready to run. Instantly. This is different from the incremental “compilation” Visual Studio does – while this updates intellisense info on the fly, it does not actually build your code. So if you want to test or run, you’ll have to wait.

Testing. Eclipse comes with excellent JUnit integration out of the box. I guess there’s MSTest for Visual Studio, which is ok, but it doesn’t have that Red-Green-Refactor feel. Open source testing frameworks, as far as I know, don’t reach the same level of integration. If you want code coverage, be prepared to pay some money for at least the Premium edition.

Refactoring. The refactorings in Visual Studio pale in comparison to those in Eclipse. Hands down.

Browsing Code. I think Visual Studio actually has many features in this area, but the UI doesn’t make them particularly discoverable. I’ve found that following references, finding overrides and such in Eclipse is faster and easier than in Visual Studio.

Source control. If you fork out some money, you get Team Foundation integration with Visual Studio. My feeling is that integration with other source control systems like Mercurial and Subversion is just not as good as the integration in Eclipse.

Products like Resharper and Testdriven.NET mitigate the refactoring, browsing and testing issues for Visual Studio. But Eclipse comes with these things out of the box. It’s just there when you download it, which is first of all free and second, convenient.

Then, Visual Studio just doesn’t have a good UI. I’m not excited that I can finally make code windows free-floating. The solution explorer naturally makes you focus on the files and the file system – while this actually matters little for your day to day development activities. You want to focus on modules, classes and namespaces instead (yes, I know about the class browser, but somehow I keep getting pulled back to the solution explorer by Visual Studio). And then that annoyance among annoyances – the much-hated Add Reference dialog. In 2010 they made it asynchronous, so we can now pretend that it is not as slow and painful as amputating your arm with a toothpick – except that it’s worse! At least before, you knew when you could start searching for a reference. Now you’re left to wonder – is my reference not there, or is it still loading in the background?

What rocks and Visual Studio have in common

Both SharpDevelop and MonoDevelop seem to focus on conforming with Visual Studio. I guess the argument is that this makes it easier for people to migrate from Visual Studio. Wake up people: if it looks like Visual Studio, nobody is going to bother anyway.

I don’t want to talk down to the developers of these projects. I’m sure they’re passionate about what they’re doing. But let’s face it – they’re not going to be able to compete on features, as long as Microsoft is throwing money at Visual Studio. On marketing alone, they’re overwhelmed. As far as I can see there is no buzz surrounding these projects at all. Nobody particularly likes or hates them. They’re boring. Because Visual Studio is. It’s just there. Like a rock. A boring, grey rock.

A call to arms

So, dear SharpDevelop or MonoDevelop developer, or whoever is looking to develop a new software project: here’s my call to arms for you.

Dare to innovate! Forget about Visual Studio. It’s there, and it’s going to stay – but give us a ray of hope. Give us something to look forward to. Give us something that we can love, or hate – something that at least provokes some feelings.

Need some ideas?

Stop bothering with trying to be compatible with Visual Studio. Sure, the assemblies need to be compatible with .NET, but who really cares about MSBuild and project files? Why even duplicate the solution-project-file organization of Visual Studio – is there a better way? Can you finally free us from the tyranny of the file system and let us focus on the abstractions provided by the language? It’s called software development, not file shuffling.

Can we make better compilers for C# or VB or F# that are incremental? Note I am talking about the compiler here, not the build system. The Eclipse Java compiler proves that incremental compilation can be pretty much as fast as no-compilation (as in dynamic languages).

Source control – can we go beyond superficial integration with popular SCMs and really integrate change tracking in the language and IDE? With the disk space currently available, is there even a use for the distinction between in memory undo/redo, saving a file to disk and committing a change in source control? Can’t I just type in my code and expect it to be saved, and undoable, and in source control? In addition, can we make “diff” tools language aware please – it’s silly that conflict resolution is line-based, while pretty much every coherent change to code spans multiple files and multiple lines. I’ve never heard of a line-based programming language. Let’s talk about modules and functions and objects and interfaces all the way through please.

Have a look at Smalltalk IDEs. They have a browser-oriented perspective on development, and have had another view on code organization and source control for ages now. There are lots of good ideas there, and it’s about time they were finally acknowledged and brought to the mainstream. The Hopscotch browser for Newspeak looks interesting too.

As far as UI goes, Code Bubbles (built on Eclipse for Java) seems to have some very well thought out UI ideas on browsing, cooperating and debugging.

I believe that any project that does just some of the above has the potential to become huge in the .NET space. Even with respect to mainstream Java IDEs which are nowhere near revolutionary, the .NET community seems to be behind the curve IDE-wise. Developers can see how computing devices all around them are becoming smarter, better integrated and user-oriented. Mainstream IDEs are lagging way behind.

Time for a revolution?


04 June 2010

FsCheck 0.7: Pleasure and Pain

I just released FsCheck 0.7 on the codeplex page. Whereas previous releases have been mostly incremental, this one is quite different. With the official release of F# 2.0 and all that it entails, I’ve refactored the FsCheck API from a usability perspective.

The previous API was inherited from Haskell QuickCheck, which means that it wasn’t very .NET like. For example, once you opened the FsCheck namespace, all the relevant functions just came into scope. So you, dear user, had to rely mostly on your memory to find the function you need, and also you were not empowered to discover new functions. The new API is a much needed improvement on that situation: it is consistent with .NET idioms, works well with intellisense and the documentation should tell you all you need to know.

I’ve also tried to improve FsCheck’s output, tried to improve the story with registering generators by type (which was a real pain before, and I hope it has now got at least a bit better), removed concepts that nobody seemed to understand (Hi, coarbitrary!) and generally improved the documentation and names of functions.

That’s the good news – the bad news is that this release is not backwards compatible. I’m afraid that was not possible given the breadth and depth of the refactoring that I wanted to do. I really believe that FsCheck is in a much better place now, also as far as plans for the future are concerned.

Short attention span?

Get a visually appealing (but much less detailed) overview in this prezi.

The whole truth

Well, at least as far as I can remember.

  • Factored the API into three type-and-module pairs – cf. the collection types in FSharp.Core. So there is now:
    • Gen<'a> type and the Gen module that contains utility functions to define your own random generators. This is essentially the same as before, except there have been many renames to make everything consistent both internally and with .NET, and understandable (e.g. vectorOf has been renamed to listOfLength, and fmapGen has been renamed to Gen.map). Also, where functions used to take a list as argument, they now take a more general and .NET-like seq instead.
    • Arbitrary<'a> type and Arb module. Arbitrary, as before, packages up a generator and a shrinker. Coarbitrary was removed (without removing FsCheck’s functionality to generate functions). The Arb module is new, and plays a fairly central role in FsCheck now – a few useful functions that operate on a generator and a shrinker at the same time were added.
      • Arb.Default type contains the default generators and shrinkers that ship with FsCheck. I’ve cleaned up most of the mess, but am not completely finished yet. Anyway, you should find many useful Arbitrary instances here, and they are now easily accessible (I believe in the previous version you had to type something like Arbitrary.Arbitrary.String().Arbitrary – now it is Arb.Default.String)
    • Property type and Prop module. Here is where all the functions to build the actual properties live. Not much changes here, except that Prop.forAll now takes an Arbitrary instance instead of a generator – I believe this is a first step in making FsCheck more easily extendable to other kinds of generators (e.g. file based or exhaustive generators) by abstracting away the Gen type from Property. (This is currently only skin-deep though)
  • Similar refactoring with running tests.
    • Instead of quickCheck, verboseCheck, and all the other functions, there is now a Check type with relevant static methods that do the same thing. This made it possible to overload method names, which was annoying about the functions – I felt like I had to make up meaningless names. Also, the various checking types that you can do are now discoverable using intellisense by typing Check.<intellisense>.
    • Similarly, the default configurations have now been added as static members on the Config record type, which is where you would expect them to be.
    • The Runner module has had some cleanup as well, with some new names for old friends, and the IRunner interface has been extended a bit. Like the other modules, it should only expose what is useful to you, so it should be fairly easy to figure out what you need from intellisense alone.
  • Output has been improved in various places.
    • When registering generators, FsCheck now automatically overrides the previous generators, and also produces some output showing which types have been newly registered and which were overridden. Hopefully this makes it easier to fix those frustrating issues where types are not registered properly.
    • When checking all the members on a type or module, and an exception is thrown, the TargetInvocationException is now removed and you just get to see the actual exception, which reduces much of the noise.
    • Various small output improvements.
  • Size control has been changed in the Config. Instead of a function, you can now give a begin and end size and FsCheck will increase the size linearly between those two values. I did this to make the size independent of the number of test runs that you choose – the final size is always guaranteed to be the end size you ask for. So it is easy to change the number of tests while keeping the same size. I believe this is the least surprising result, as changing the number of tests should not directly influence the test size.
  • Added many more tests, especially for the generators. I’m still not happy with the coverage, but we’re getting there.
  • Wrote a simple documentation generator. I’m not very happy with it, but at least I reviewed all the documentation and at least the output shown in the documentation is now consistent with the actual output, and it is easy to keep that way.
  • Cleaned up the examples a bit. The examples used in the documentation are now moved to the FsCheck.Documentation project (which also generates the codeplex-formatted output).
  • Typeclass simulation has had a pretty much bottom-up rewrite. Hopefully this cleans up some nasty issues with static state I’ve seen, and the API is, I think, much improved. Also various bug fixes.
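To give a flavour of the renamed API (a sketch – the property itself is plain F#, while the commented lines assume FsCheck 0.7 is referenced):

```fsharp
// A property is just a function returning bool.
let revRevIsOriginal (xs:list<int>) = List.rev (List.rev xs) = xs

// With FsCheck 0.7 referenced, the Check type replaces quickCheck and
// friends, and the default configurations live on the Config type:
//   Check.Quick revRevIsOriginal
//   Check.One({ Config.Quick with MaxTest = 1000 }, revRevIsOriginal)
```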

That’s pretty much it. I’ve got more in store still, but not sure when it will materialize into something that I can release. My spare time is getting scarce…but do post a comment on the forum or on my blog if you get stuck in moving your old FsCheck code to the new API, or any other feedback you may have.

Happy FsChecking!


15 May 2010

Accessing Visual Studio’s Automation API from F# Interactive

Because why anyone would want to write VBA in the Visual Studio macro editor is beyond me. All the bits and pieces below are pretty much known, but I think the overall result is new. For the bewildered (I don’t blame you): what you’ll be able to do from F# Interactive is, for example, enumerate the current projects in the open solution:

> myDTE.Solution.Projects |> Seq.cast<Project> |> Seq.map (fun p -> p.FullName);;
val it : seq<string> =
  seq
    ["C:\Users\Kurt\Projects\FsCheck\doc\FsCheck.Examples\FsCheck.Examples.fsproj";
     "";
     "C:\Users\Kurt\Projects\FsCheck\doc\FsCheck.Checker\FsCheck.Checker.fsproj";
     ...]

You get access to the DTE top level automation object of the Visual Studio instance the current FSI is running under. This DTE is a COM interface that basically allows you to do programmatically what you can do using the Visual Studio UI interface – it’s the thing you get access to in the Macro IDE as well. So you can do pretty cool things – like add files, generate code, listen to build events, find dependencies of projects and such.


The Macro IDE gives access to the DTE object automagically. But fsi is actually a separate process that just happens to input and output through Visual Studio. So basically we need to get access to a Visual Studio instance’s DTE object from outside Visual Studio. Luckily, each instance of Visual Studio registers its DTE object in the Running Object Table (hereafter called ROT) with two keys:

  • As a string of the form “!VisualStudio.DTE.<VS version>.0:<processid>”. E.g. for VS2008 with process id 1234: !VisualStudio.DTE.9.0:1234
  • As the path to an open solution file, if any.

We’ll use the first option. That means we need to find out the process id of the Visual Studio instance the current FSI instance is running under. FSI is started as a child process of Visual Studio – each Visual Studio has its own FSI.

Let’s tackle each of these in turn – first finding the process id of the current Visual Studio instance, then its DTE object via the ROT.

Step 1: Finding the Visual Studio process id

Not as straightforward as it sounds. System.Diagnostics.Process does not give you direct access to a process’ parent. There are at least two ways to go about it – using the Windows API or, maybe surprisingly, through performance counters. I used the latter because that didn’t require me to do interop.

The code is pretty straightforward – remember the point is that this is being executed in F# interactive:

open System
open System.Diagnostics

let getParentProcess (processId:int) =
    let proc = Process.GetProcessById processId
    // The "Creating Process ID" counter holds the id of the parent process.
    let parentId = new PerformanceCounter("Process", "Creating Process ID", proc.ProcessName)
    parentId.NextValue() |> int
let currentProcess = Process.GetCurrentProcess().Id

let myVS = getParentProcess currentProcess

You should just be able to paste that into an F# Interactive window and myVS will contain the process id of the Visual Studio instance it’s running under. (No doubt this will crash spectacularly when you’re running FSI standalone or something.)

Step 2: Getting the DTE object

This was a bit harder, as we’re now going to have to do some interop with OLE32. Code for this is floating around on the internet if you know where to look – I just translated it to F#. Here’s what we need:

#r "EnvDTE"
#r "EnvDTE80.dll" 
#r "EnvDTE90.dll" 

open EnvDTE
open EnvDTE80
open EnvDTE90
open System.Runtime.InteropServices
open System.Runtime.InteropServices.ComTypes

module Msdev =
    [<DllImport("ole32.dll")>]
    extern int GetRunningObjectTable([<In>]int reserved, [<Out>] IRunningObjectTable& prot)
    [<DllImport("ole32.dll")>]
    extern int CreateBindCtx([<In>]int reserved, [<Out>]IBindCtx& ppbc)

These two interop functions allow us to enumerate the ROT and get the registered object with the key we want:

let tryFindInRunningObjectTable (name:string) =
    //let result = new Dictionary<_,_>()
    let mutable rot = null
    if Msdev.GetRunningObjectTable(0,&rot) <> 0 then failwith "GetRunningObjectTable failed."
    // Without this call monikerEnumerator stays null and Next would throw.
    let mutable monikerEnumerator = null
    rot.EnumRunning(&monikerEnumerator)
    let mutable numFetched = IntPtr.Zero
    let monikers = Array.init<ComTypes.IMoniker> 1 (fun _ -> null)
    let mutable result = None
    while result.IsNone && (monikerEnumerator.Next(1, monikers, numFetched) = 0) do
        let mutable ctx = null
        if Msdev.CreateBindCtx(0, &ctx) <> 0 then failwith "CreateBindCtx failed"
        let mutable runningObjectName = null
        monikers.[0].GetDisplayName(ctx, null, &runningObjectName)
        if runningObjectName = name then
            let mutable runningObjectVal = null
            if rot.GetObject( monikers.[0], &runningObjectVal) <> 0 then failwith "GetObject failed"
            result <- Some runningObjectVal
        //result.[runningObjectName] <- runningObjectVal
    result

I won’t go into the virtues of this particular piece of code. Suffice to say it’s an acquired taste. (Bonus points for anyone who can write a function that returns the ROT as a nice, lazy seq<string*obj>. Yes I know it seems easy – but please try and prepare to be frustrated.). So we now have a function that returns the COM object in the ROT based on the name. If you put in the code in comments and change a few bits and pieces, it should be easy to get a function that outputs the ROT completely.

Putting two and two together

Armed with these functions, here’s the DTE object that you can use to call the Automation API:

let getVS2008ROTName id = 
    sprintf "!VisualStudio.DTE.9.0:%i" id
let myDTE = (tryFindInRunningObjectTable (getVS2008ROTName myVS) |> Option.get) :?> DTE2

This is for Visual Studio 2008. For 2010, it should be as easy as changing the 9 to 10 in the ROT key string. It’s possible to make this more general if you can come up with a way to check whether the current process is .NET 4 or .NET 3.5 – the former would indicate an FSI running in VS2010, the latter in VS2008.
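One way to do that check – my own sketch, based on the fact that FSI under VS2010 runs on .NET 4 while under VS2008 it runs on an older CLR:

```fsharp
// Pick the DTE version from the CLR version of the current process:
// CLR 4 or later implies VS2010 (DTE 10.0), older CLRs imply VS2008 (DTE 9.0).
let getROTNameForCurrentVS id =
    let dteVersion = if System.Environment.Version.Major >= 4 then 10 else 9
    sprintf "!VisualStudio.DTE.%i.0:%i" dteVersion id
```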

And that’s it. Have a look around using Intellisense to see what is possible. Also check out the Macro IDE – it has some code examples in VBA that involve the DTE. Some ideas:

  • myDTE.ExecuteCommand lets you execute any command – the things you can bind keyboard shortcuts to in the Options. For example, use ProjectandSolutionContextMenus.Item.MoveDown to move the currently selected item one place down.
  • myDTE.Events to register event handlers for all sort of Visual Studio events. For example, myDTE.Events.BuildEvents.add_OnBuildDone notifies you when a build is done. Nice for post-build steps that need to run after a solution is built (as opposed to after a particular project is built).
  • myDTE.Quit(). The l33t hax0r way to exit Visual Studio. You’ll be the envy of your less tech-savvy colleagues.

If you find an imaginative use for this, please let me know.


09 April 2010

Moving FsCheck to Mercurial

I’ve been using Mercurial as my source control system of choice for a few years now, and I’ve been very happy with it. The only problem I had was that codeplex, where FsCheck is hosted, only supported SVN. I used Mercurial for daily development, and when a release was imminent I just copied the final snapshot to my local SVN checkout and updated codeplex from there. Needless to say, the SVN repository was out of date and didn’t represent the actual history.

Well, no more! Codeplex now supports Mercurial as well. One email to the kind folks at codeplex support, and they wiped my SVN repository from codeplex and provided me with a shiny new hg repository. Great.

That left me with a problem: since I was used to the hg repository being private, I also dumped some, well, private stuff in it. Before you start imagining things, there is nothing special there – just a few pdfs of papers and such, but nonetheless things I didn’t want to put in a public repository. If only because some authors prefer that they alone control dissemination of their work.

And obviously Mercurial isn’t keen on letting you delete history.

Luckily it has the excellent convert extension. This is usually used to convert SVN, CVS or other repositories to a Mercurial repository. But you can also convert a Mercurial repository to a new one, and rewrite and update history along the way. One of the things the convert extension lets you do is to specify files and folders that should not be migrated. It also lets you remove branches, and even splice a certain set of changesets on a parent in the new repository.

I used the first two features to remove some superfluous history and files. I used the last because I’m used to cloning my main repository into a bunch of feature repositories, to work on specific features in relative isolation. I have a few of those in progress, but since I was converting my main repository to a new one, the feature repositories could no longer be used to push changes back to the new main. This makes them pretty much useless, unless they are converted as well.

So here’s what I did:

  1. Convert the old main repository to the new main, stripping out files and branches along the way.
  2. Clone the new repository for each feature I’m currently working on. Now I have a mirror image of my old folder where the old main and the old feature repositories are – except the changesets in the old feature repositories need to be migrated to the new feature repositories.
  3. Using hg convert’s splice feature, and the excellent TortoiseHg repository explorer to find out the changeset numbers, I then additionally converted only the extra changesets from each old feature repository to its corresponding new feature repository. One thing to keep in mind is that hg convert uses the .hg/shamap file in the new repository to keep track of which changesets from the old repository it has already converted. However, this file is not copied automatically when cloning, so I had to copy it over manually from the new main to the new feature repositories – otherwise convert would try to convert all of the changesets again.

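In hg terms, steps 1 and 3 might look roughly like the sketch below. The repository paths and map-file contents are made up for illustration; the `--filemap` and `--splicemap` options and their file formats are those documented for the convert extension.

```
# Step 1: convert old-main to new-main, dropping the private folder.
# filemap.txt contains lines like:
#   exclude "private/papers"
hg convert --filemap filemap.txt old-main new-main

# Step 3: splice the extra feature changesets onto the new history.
# splicemap.txt maps a child changeset id to its new parent id:
#   <child-hash-in-old-feature> <parent-hash-in-new-main>
hg convert --splicemap splicemap.txt old-feature new-feature

# Copy new-main/.hg/shamap into new-feature/.hg first, so convert
# skips the changesets that were already migrated.
```

The splicemap is what grafts the feature-only changesets onto the rewritten main history instead of duplicating everything.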
Confused yet? Well, here’s my first Prezi to explain it all…

You can check out the result in FsCheck’s source tab – although the main result is that there is now much less friction in my development process.

Share this post :

15 February 2010

FsCheck 0.6.3 for Visual Studio 2010 RC and F# February 2010 CTP

A new release of F# means a new release of FsCheck.

  • Removed the dependency on the F# PowerPack. This makes FsCheck easier to use, especially since the PowerPack is now separately distributed. Besides, it was only used to pretty print functions.
  • Cleaned up reflective generators a bit.
  • All projects are upgraded to build with Visual Studio 2010 now. For backwards compatibility, I’ve added an FsCheck2008.sln solution which allows you to build in Visual Studio 2008 with the February 2010 CTP. You’ll get a warning from MSBuild saying that ToolsVersion 4.0 is not supported and that it is treating the build as a 3.5 build. Good – that’s exactly what you want. FsCheck is still targeting .NET 3.5 until .NET 4 is out.


11 February 2010

Visug presentation in Leuven, Belgium on 8 Feb 2010

Thanks to everyone who attended – hope you got something out of it (besides the free food and drinks…).

Here are the slides I used, as well as the (completed) demo project. There are some extra slides and some extra parts to the demo which I didn’t present. Apparently the whole thing will also appear on MSDN chopsticks, at which point I’ll update this post with the link.