07 May 2012

Making F# reflection faster

I don’t really believe I’m the first one to do this, but I couldn’t find anything that’s publicly available. If you know about a mature implementation, please leave a comment!

I put up some code on https://bitbucket.org/kurt/fsreflect/wiki/Home that has a small API, mirroring FSharpValue’s PreComputeXXX reflective construction and deconstruction methods for unions and records. It does the exact same thing as the original methods, only faster.

As explained on the project page, the code uses two techniques. The first is on-the-fly IL code generation using DynamicMethod. This is used for the fast record and union construction code. The second is using delegates instead of MethodInfo.Invoke for the record and union readers, using a great trick introduced by Jon Skeet. The former is explained in a lot of places – the latter is explained perfectly by Mr Skeet already.
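To give a flavour of the second technique, here is a minimal sketch of the delegate trick (the names and the sample record are mine, not the fsreflect API):

```fsharp
open System
open System.Reflection

// Illustrative record – stands in for the MyRecord used in the benchmarks below.
type ExampleRecord = { S : string; I : int }

// The delegate trick, sketched: bind the property getter to a strongly
// typed delegate once, then call through the delegate instead of going
// through MethodInfo.Invoke on every read.
let makeGetter<'record, 'field> (prop : PropertyInfo) : 'record -> 'field =
    let d =
        Delegate.CreateDelegate(typeof<Func<'record, 'field>>, prop.GetGetMethod())
        :?> Func<'record, 'field>
    fun record -> d.Invoke record

let getS = makeGetter<ExampleRecord, string> (typeof<ExampleRecord>.GetProperty "S")
```

Delegate.CreateDelegate here creates an open instance delegate: the record instance becomes the delegate’s first argument, so a single bound delegate serves every instance of the type.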

Anyway the code is pretty short and sweet, so if you’re interested please do have a browse.

Here are some gratuitous micro-benchmarks.

> repeat (fun i -> fastRecordCtor [| "2"; i; 3. |] :?> MyRecord)
Real: 00:00:00.198, CPU: 00:00:00.202, GC gen0: 73, gen1: 2, gen2: 1
val it : unit = ()
> repeat (fun i -> standardRecordCtor [| "2"; i; 3. |] :?> MyRecord) 
Real: 00:00:02.811, CPU: 00:00:02.808, GC gen0: 115, gen1: 0, gen2: 0
val it : unit = ()
> repeat (fun i -> fastUnionCtor [| "3"; i |] :?> MyUnion) 
Real: 00:00:00.150, CPU: 00:00:00.156, GC gen0: 50, gen1: 0, gen2: 0
val it : unit = ()
> repeat (fun i -> standardUnionCtor [| "3"; i |] :?> MyUnion) 
Real: 00:00:02.551, CPU: 00:00:02.542, GC gen0: 72, gen1: 0, gen2: 0
val it : unit = ()
> repeat (fun i -> fastRecordReader { S = "2"; i = i; f = 3.0 }) 
Real: 00:00:00.209, CPU: 00:00:00.218, GC gen0: 76, gen1: 0, gen2: 0
val it : unit = ()
> repeat (fun i -> standardRecordReader { S = "2"; i = i; f = 3.0 }) 
Real: 00:00:05.390, CPU: 00:00:05.397, GC gen0: 77, gen1: 1, gen2: 0
val it : unit = ()
> repeat (fun i -> fastUnionReader (Two ("s",i))) 
Real: 00:00:00.160, CPU: 00:00:00.171, GC gen0: 50, gen1: 0, gen2: 0
val it : unit = ()
> repeat (fun i -> standardUnionReader (Two ("s",i))) 
Real: 00:00:03.477, CPU: 00:00:03.478, GC gen0: 49, gen1: 0, gen2: 0
val it : unit = ()

04 January 2012

API design: Record types and backwards compatibility

If you’re designing an API in F#, be very careful when exposing any of the public types as record types. Record types, as they stand in F# 2.0, by default are impossible to change while keeping backwards binary compatibility.

Let’s look at a couple of changes that you might want to make to a record type:

  1. Adding a new field.
  2. Changing the type of a field.
  3. Changing the order of the fields.

None of these changes is backwards compatible. A record type is compiled by F# to a normal .NET class, with a constructor that takes the fields as arguments. This means the order of the fields matters, as do their number and types. Record fields are read through a getter per field. So if your clients follow a very limited usage pattern – only reading an existing record type through its getters – you may be alright with changes 1 and 3, and even 2 if they don’t happen to read the particular field whose type you’re changing. Anything else, including ‘with’ syntax, is a no-no.

To make this abundantly clear – a record type definition and usage:

type MyRecord =
    { Field : int
      SecondField : string }

let instance = { Field = 3; SecondField = "3" }

Is translated by the F# compiler to:

type MyRecordTranslated(field:int,secondfield:string) =
    member this.Field = field
    member this.SecondField = secondfield

let instanceTranslated = new MyRecordTranslated(3,"3")

Now in the translated case, it’s intuitively clear that changing the order of the arguments in the constructor is not backwards compatible. The record syntax, however, makes the type look like an unordered bag of fields. Yet the compiler looks up the one and only constructor for the type and calls it explicitly. So if you change the constructor in any way, clients that are not recompiled will fail at runtime.

Two solutions (sort of)

The first solution is not to use record types as part of a public API that needs to be backwards compatible – use class types instead.

If record types are still handy, say because they come with automatic value-based comparison and equality, then with some planning you can still use them – but to your clients they won’t look much like record types anymore, because we’re going to hide the constructor and the getters (unfortunately there’s no way to set accessibility on those two separately). There is also a fair amount of tedious code involved. Here’s the first version of a “record type” that can be kept backwards compatible:

module A_v0 =
    type MyRecord =
        internal { _Field : int } with
        static member Create(field:int) = { _Field = field }
        member this.Field = this._Field
        member this.With(?Field:int) = { this with _Field = defaultArg Field this.Field }  

Note that the most important change is that we made the constructor internal (private does not make much sense, as everything in the assembly of the record type itself should be trivial to update in concert with any change to the record type). To create the record type from outside of the assembly there is the factory method ‘Create’. Note that using F# method call syntax, we can still make this look much like a record type: ‘MyRecord.Create(field=3)’ for example.

Then we need to provide a getter for each field ourselves (because we’ve made them internal...). I’ve chosen to start the actual field names with an underscore above, just to allow the explicit getters.

Finally, using optional fields, we can regain some sort of with syntax. Here’s an example of usage:

let a = A_v0.MyRecord.Create(2).With(Field = 3).Field

Now, suppose we want to add a new field. We can do that as follows, without breaking any clients:

module A_v1 =
    type MyRecord =
        internal { _Field : int 
                   _Foo   : string } with
        static member Create(field:int) = { _Field = field; _Foo = "default" }
        static member Create(field:int,foo:string) = { _Field = field; _Foo = foo }
        member this.Field = this._Field
        member this.Foo = this._Foo
        member this.With(?Field:int) = { this with _Field = defaultArg Field this.Field }                                   
        member this.With(?Field:int,?Foo:string) = { this with _Field = defaultArg Field this.Field
                                                               _Foo = defaultArg Foo this.Foo }

Note how we can now use overloading of both Create and With to our advantage.

In this way, we prevent clients from using the record constructor directly, but pay the cost of re-implementing most of the useful functionality for record types ourselves. It’s not much less work than basically doing the same with a class type.

Can F# v_next solve this?

The problem is that the syntax is deceiving: it looks like the order of the arguments does not matter – both in the definition syntax and in the construction syntax. Also, if you use ‘with’ it looks like the call is resilient to adding a new field. And in fact it is, as long as you recompile – which makes the problem worse in some sense, as a programmer will see her intuitions confirmed with every compilation.

Given that the syntax is pretty much set in stone at this point, I don’t see a good way around it. If you want to keep the illusion of order-independence of the fields, one option is to have the compiler generate some kind of discovery phase at runtime, which would incur an unreasonable performance cost. Another approach would be to have a Set method per field that returns a new record instance, and make the constructor private – but then setting many fields would involve as many object creations, again unreasonable for performance.
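For concreteness, the per-field Set approach could be sketched like this (illustrative only – this is hand-written, not proposed compiler output):

```fsharp
// Sketch of the per-field Set approach: the constructor is private, and
// every setter returns a fresh instance – so setting n fields in a row
// allocates n intermediate objects, which is the performance concern.
type SettableRecord private (field : int, secondField : string) =
    static member Create(field, secondField) = SettableRecord(field, secondField)
    member x.Field = field
    member x.SecondField = secondField
    member x.SetField v = SettableRecord(v, secondField)
    member x.SetSecondField v = SettableRecord(field, v)
```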

In fact, I don’t think this is such a big problem at all, as 95% of users will probably never encounter it, and for them the abstraction is valid. What v.next could address, though, is a way for the other 5% to control the backwards compatibility of record types better.

A first idea would be to allow the declaration of “optional fields”, and compile those as overloaded constructors – say you could write:

type MyRecord =
    { Field : int
      SecondField : string
      ?OptionalField = 5 }

which would be compiled to:

type MyRecordTranslated(field:int, secondfield:string, optionalField:int) =
    new(field:int, secondfield:string) = MyRecordTranslated(field, secondfield, 5)
    member this.Field = field
    member this.SecondField = secondfield
    member this.OptionalField = optionalField

I.e. make an overloaded constructor per optional field, and the F# compiler can enforce that optional fields should always be last in the definition. This shouldn’t be too much of a surprise, as the same restriction holds for optional arguments, and also helps somewhat to counter the wrong intuition that order of record fields does not matter. This would at least allow people to add new optional fields without breaking backwards compatibility.

Another option is to explicitly allow overloaded constructors – similar to the implicit class definition syntax, it could be enforced that all the overloads call into the same constructor. Syntax may be a bit of a pain, but I’m sure something can be worked out.

Finally, it would be good to be able to control the visibility of the constructor and the getters separately. In fact, controlling the accessibility of each field’s getter individually would be nice for other reasons too. internal/private could be allowed in front of the field definitions to control visibility of the getter, while the visibility in front of the curly brace would, as now, only influence visibility of the constructor.

In retrospect, I think there’s something to be said for having the compiler emit a warning or even error when constructing a record type with the fields in a different order from the record type definition, if only to counter the wrong intuition.

I believe the best option here is the second one, allowing overloaded constructors. The third and fourth ideas are not backwards compatible as far as the F# compiler itself is concerned, and although optional fields are nice, they are probably best left as syntactic sugar over real overloaded constructors, as the latter are more flexible.

Conclusion

Binary compatibility may not be a big issue for you. It’s certainly not an issue if you don’t expose any programmatic API as part of your F# projects. In that case, live happily ever after.

On the other hand, you may want to think about giving yourself some flexibility in keeping your API backwards compatible. In that case, hopefully this post has given you some tools to come up with an appropriate strategy. Note that if you can reasonably expect that your clients will recompile whenever you release a new version, the whole problem is moot too.

Overall many F# programmers don’t need to consider this at all. However it deserves a bit more attention than it’s getting, and might catch some people unaware (it certainly caught me out at some point...).


29 May 2011

FsCheck 0.7.1: NuGet packaging

I’ve released a minor update to FsCheck, mostly so it can now be downloaded as a NuGet package. It also comes with source server support, courtesy of SymbolSource.org.

Furthermore, there are a couple of bug fixes.

Enjoy.


02 January 2011

F# projects someone should start

Some ideas for projects that would scratch a few itches I have. However, it’s unlikely that I’ll ever have the time to actually get round to doing them myself, so by blogging about them I hope someone who is looking for ideas can pick one of these up and make them real. Hope there is some room after New Year’s resolutions! The difficulty and work involved varies widely. Look at it this way – if you pick one of these up, you can be assured you’ll have at least one user.

A usable documentation system

I’m not talking about xml doc comments here (although they are a part of it). It should be possible to come up with a good domain-specific language for writing documentation of any kind – be it straightforward API documentation, a tutorial or a cookbook – that is easy to keep up to date with the current API you’re documenting, and that can export to a variety of different formats.

My main inspiration here is Scribble, for Racket (formerly PLT Scheme). From the scribble paper’s abstract:

Scribble is a system for writing library documentation, user guides, and tutorials. It builds on PLT Scheme’s technology for language extension, and at its heart is a new approach to connecting prose references with library bindings. Besides the base system, we have built Scribble libraries for JavaDoc-style API documentation, literate programming, and conference papers. We have used Scribble to produce thousands of pages of documentation for PLT Scheme; the new documentation is more complete, more accessible, and better organized, thanks in large part to Scribble’s flexibility and the ease with which we cross-reference information across levels. This paper reports on the use of Scribble and on its design as both an extension and an extensible part of PLT Scheme.

The cool thing here is that you write your documentation in Scheme itself – you mention a function or member in your text, and Scribble uses the default name resolution rules to look up what you’re referring to; if you then export to, say, HTML, it adds a hyperlink to the actual API documentation for that particular function. You can also write a short program and have it typeset in place, along with the results of running it. That would be hugely useful for cookbook-style documentation.

I realize it is probably not possible to do this in F# in the same way due to the lack of macros. Maybe ClojureCLR would be a better choice. On the other hand, now that the F# compiler is open source – how about a compiler extension? For some inspiration closer to home, have a look at BumbleBee. Bonus points for Visual Studio integration.
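To make the idea concrete, here is a hypothetical F# sketch of the core trick – resolving a name mentioned in prose against real code via reflection, and only emitting a hyperlink when it actually exists (docLink is made up; a real system would need proper name-resolution rules and pluggable output formats):

```fsharp
open System

// Resolve a member name mentioned in prose against a real type. If it
// resolves, emit an HTML link to its (hypothetical) API documentation
// page; if not, leave the prose untouched.
let docLink (t : Type) (memberName : string) =
    match t.GetMember memberName with
    | [||] -> memberName   // unresolved: plain text
    | _ -> sprintf "<a href=\"%s.%s.html\">%s</a>" t.FullName memberName memberName
```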

Difficulty: Medium. The hard part is figuring out how to do this cleanly – but maybe a fairly straightforward pre-processor is enough to make people happy.

Work: Lots. Getting all this stuff to work together reliably and preferably inside Visual Studio...that’ll take some time.

Mirror-based reflection library

Let’s face it. The reflection APIs in .NET suck. First of all, because I have to write APIs, and not API – I mean there’s System.Reflection, System.Reflection.Emit, CodeDOM, the Debugger API (what else is debugging if not reflecting over your code?), a separate API for F# reflection (and probably for any language that’s not C# or VB), Cecil, ReflectionOnlyLoad, Metadata reading in the F# PowerPack, and so on.

This indicates insufficient abstraction of System.Reflection: if I’m playing with F# types, I cannot use GetType() – this exposes internal implementation details! The reflection API by itself cannot cope with other languages, or other types of reflection – hence the need for e.g. a separate debugging API, as it needs to cope with distributed debugging. Or for things like Cecil, to cope with situations where you don’t want to load an assembly and all its dependencies when you’re reflecting over it. Sadly, this sometimes makes it look like I'm learning the same old reflection API over and over again, except someone’s changed all the names. And don’t get me started on how many times Type is overloaded. High cohesion, anyone?

Enter mirror-based reflection, which is an idea that might solve these problems. Except of course, .NET does not have a library that uses this type of reflection. This is where you come in...for inspiration, Newspeak has the beginnings of such an API. It should be possible to have a specific API that is clear in the limits of what it can do, while still sharing much of the concepts with very similar systems – imagine being able to use Cecil and System.Reflection but with the same names for everything possible. And if you ever need to reflect over a running program, you can use the debugging API, except you can leverage your existing knowledge. Obviously, the library should be pluggable so that people can plug their own reflection libraries in there.

Difficulty: This may be a research problem. On the other hand, there are things to build on, and a motivated person or team could build, say, an introspection only API in a reasonable amount of time.

Work: lots and lots, obviously. However, a proof of concept API that just does introspection of structure (not behaviour) would be enough to get more people on board and would already be useful.

Contribute to NuGet so it knows about F#

NuGet is a package manager for .NET. The current release does not work with F# – boo! However, it looks like David Fowler has actually already gone ahead and contributed some code to make this work! Excellent. However, keep voting the issue up so the NuGet team adds it to the next release :)

Difficulty: shouldn’t be all too hard – I glanced at the source code and it looks like they’re using the Visual Studio Automation API. I know the F# project system’s implementation here is kind of shaky, so probably a bunch of workarounds is needed.

Work: Not too much – mostly finding your way around the NuGet codebase and finding what works and what doesn’t in the F# language service.

Add snippets and organize usings support to the F# project system

I miss snippets. Sure, everything in F# is nice and short, so the official story is that they aren’t needed as much as in C#, but still... Oh, and while you’re at it, now that the compiler is open source (I love saying that – it makes everything look so easy), can we have organize usings too please? Bonus points for adding Refactor –> Rename.

Difficulty: Straightforward. Only roadblock could be that the current F# language service is just not easily extensible for these kinds of things.

Work: Reasonable. Finding your way around the Visual Studio language service extensibility model, and the F# compiler. Knowledge about this seems very useful to have, but I can’t deny it would take some time, obviously.

 

That should give you enough to pass the time in 2011, I hope!


16 December 2010

F#, xUnit theories and InlineData

How do you know you haven’t blogged much lately? When you want to write a blog post and you forgot the name of the thing you’re actually writing it with. “Wait, it’s called Window Live something something…what did that icon look like again?”. Also, Windows Live Writer now has a ribbon! Good stuff.

Anyway, just a short heads up to save people some time: in the xUnit extensions, there is a Theory attribute which in combination with the InlineData attribute lets you specify a parameterized xUnit test. The InlineData attribute lets you specify the values the test should run with (you can’t test everything randomly with FsCheck, you know). That and other xUnit extension goodies are explained here.

My point is – this attribute does not work in F# (or managed C++), because the AttributeUsage attribute on the InlineData attribute is defined on its parent class, DataAttribute. It is correctly defined as Inherited = true, but only the C# compiler seems to honour this. There’s a more detailed explanation in this stackoverflow post. If you care about this, please vote for the bug on the xUnit site!

Luckily, the workaround is not too bad – just use the PropertyData attribute instead:

let symbolTestData = 
    [ "an1 rest",           "an1"
      "?test bla",          "?test"
      "?+_est bla",         "?+_est"
      "+_123 bla",          "?+_123"
      "+._1.2.3 bla",       "+._1.2.3"
      "+_q1r-2g3! bla",     "+_q1r-2g3!"
      "abc.def.feg/q bla",  "abc.def.feg/q"
      "ab/cd? bla",         "ab/cd?"
    ]
    |> Seq.map (fun (a,b) -> [|a; b|])

[<Theory>]
[<PropertyData("symbolTestData")>]
let ``should parse symbol``(toParse:string, result:string) =
    Assert.Equal(result, run symbol toParse)

Note that xUnit actually expects a sequence of arrays, but I think the list of tuples looks better, at the small cost of an extra conversion step.


10 July 2010

F# and Code Contracts: not quite there yet

In this post I’ll show how you can get Code Contracts sort of working for F#. From my very initial explorations, I would conclude that they seem basically usable for F# programming – but you’ll need some glue and tape, and not everything works as you’d expect.

Code Contracts: the short story

Code contracts are a way to express pre- and postconditions and invariants for (theoretically) any CLR based programming language. This is a Very Good Thing, as such contracts are a great way to specify the expected behaviour of your program (i.e. your intent).

Pragmatically, contracts are usually compiled into a debug build of a program, and checked at runtime. That’s how Eiffel does it, where this whole contract thing got started. So every time your program runs, it’s actually being tested in a very deep way. That’s why a tool like Eiffel’s ARTOO, which basically randomly constructs objects and calls method sequences on them, works well: you can find bugs by just executing your program randomly! Of course contracts are also good for documentation: no more looking in the code to find out what happens when you pass in 0 for a particular argument – the pre- and postcondition should tell you.

In the .NET world Code Contracts delivers this kind of functionality by means of a rewriter. That is, you write your actual contracts in a contract language using the System.Diagnostics.Contracts namespace in .NET 4.0, but the methods you’re calling do not actually do much: instead the rewriter runs after the compilation phase on the resulting assembly, recognizes the calls and writes the actual contract checking code in the assembly. This is necessary for example because it’s very useful in a contract to be able to refer to the old value of a parameter, or to the result of the method. Without a rewriter, you’d basically just have something like Debug.Assert, which is not very expressive.
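For example, a pre- and postcondition on a plain F# function might look like this (a minimal sketch; without the rewriter these calls compile away, as they are conditional on the CONTRACTS_FULL symbol):

```fsharp
open System.Diagnostics.Contracts

// Precondition on an argument and postcondition on the result; after
// ccrewrite runs, these become actual checks around the method body,
// and Contract.Result<int>() is wired up to the real return value.
let divide (x : int) (y : int) : int =
    Contract.Requires(y <> 0)
    Contract.Ensures(Contract.Result<int>() * y + x % y = x)
    x / y
```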

MS research also has made a static checker for contracts, which tries to statically determine whether your contracts hold, and if so, is able to feed that information back to the rewriter so that statically checked contracts do not need to be checked at runtime. Personally I don’t think that is useful because it seems very imprecise. Time will tell – happy to be proven wrong.

Since the rewriter works on the assemblies, you can just use the Contracts API from any .NET language, at least in theory. In practice, different compilers tend to output different representations of types or whatever construct they offer, so it’s not unthinkable that the rewriter gets confused. In fact, in the case of F# it gets confused when you try to write contracts in constructors, as we’ll see.

So while nice in theory, this turns out to be another case of “It works for all .NET languages, as long as they’re exactly like C# or VB.” At least until the contracts folks get round to actually supporting F#, I guess.

The nuts and bolts

‘Nuff complaining. Let’s see how we get code contracts to work with F#. When you install the code contracts add-on, in C# you get a nice extra tab in your project properties. Not so in F# – but that’s the easy part. The whole rewriter thing is nicely abstracted away in a custom targets file that the Code Contracts folks have kindly provided, so we just need some manual editing of the F# project file to get it to work. Here’s an example:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProductVersion>8.0.30703</ProductVersion>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{dd086c3a-9cbd-4dc9-89d2-4386df7ee986}</ProjectGuid>
    <OutputType>Exe</OutputType>
    <RootNamespace>CodeContractsFs</RootNamespace>
    <AssemblyName>CodeContractsFs</AssemblyName>
    <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
    <Name>CodeContractsFs</Name>
    <CodeContractsAssemblyMode>1</CodeContractsAssemblyMode>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <Tailcalls>false</Tailcalls>
    <OutputPath>bin\Debug\</OutputPath>
    <DefineConstants>TRACE;DEBUG;CONTRACTS_FULL</DefineConstants>
    <WarningLevel>3</WarningLevel>
    <DocumentationFile>bin\Debug\CodeContractsFs.XML</DocumentationFile>
    <CodeContractsEnableRuntimeChecking>True</CodeContractsEnableRuntimeChecking>
    <CodeContractsRuntimeOnlyPublicSurface>False</CodeContractsRuntimeOnlyPublicSurface>
    <CodeContractsRuntimeThrowOnFailure>True</CodeContractsRuntimeThrowOnFailure>
    <CodeContractsRuntimeCallSiteRequires>False</CodeContractsRuntimeCallSiteRequires>
    <CodeContractsRunCodeAnalysis>False</CodeContractsRunCodeAnalysis>
    <CodeContractsNonNullObligations>False</CodeContractsNonNullObligations>
    <CodeContractsBoundsObligations>False</CodeContractsBoundsObligations>
    <CodeContractsArithmeticObligations>False</CodeContractsArithmeticObligations>
    <CodeContractsRedundantAssumptions>False</CodeContractsRedundantAssumptions>
    <CodeContractsRunInBackground>True</CodeContractsRunInBackground>
    <CodeContractsShowSquigglies>False</CodeContractsShowSquigglies>
    <CodeContractsUseBaseLine>False</CodeContractsUseBaseLine>
    <CodeContractsEmitXMLDocs>False</CodeContractsEmitXMLDocs>
    <CodeContractsCustomRewriterAssembly />
    <CodeContractsCustomRewriterClass />
    <CodeContractsLibPaths />
    <CodeContractsExtraRewriteOptions />
    <CodeContractsExtraAnalysisOptions />
    <CodeContractsBaseLineFile />
    <CodeContractsRuntimeCheckingLevel>Full</CodeContractsRuntimeCheckingLevel>
    <CodeContractsReferenceAssembly>%28none%29</CodeContractsReferenceAssembly>
  </PropertyGroup>
  <ItemGroup>
    <Reference Include="mscorlib" />
    <Reference Include="FSharp.Core" />
    <Reference Include="System" />
    <Reference Include="System.Core" />
    <Reference Include="System.Numerics" />
  </ItemGroup>
  <ItemGroup>
    <Compile Include="Module1.fs" />
    <None Include="Script.fsx" />
  </ItemGroup>
  <Import Project="$(MSBuildExtensionsPath32)\FSharp\1.0\Microsoft.FSharp.Targets" Condition="!Exists('$(MSBuildBinPath)\Microsoft.Build.Tasks.v4.0.dll')" />
  <Import Project="$(MSBuildExtensionsPath32)\..\Microsoft F#\v4.0\Microsoft.FSharp.Targets" Condition=" Exists('$(MSBuildBinPath)\Microsoft.Build.Tasks.v4.0.dll')" />
</Project>

The changes from a vanilla F# project file are the CodeContracts* properties and the extra constant in DefineConstants. I won’t go over all of them – you can experiment for yourself by playing with the UI in a C# project and checking the effect. The three important bits are:

  • CodeContractsAssemblyMode: per assembly, code contracts lets you choose either a compatibility mode (0) or the standard mode (1). For new assemblies, 1 is what you want.
  • CONTRACTS_FULL compile constant: if you don’t define this, the calls to the Contract class are compiled away and there is nothing for the rewriter to rewrite. So define it in your Debug configuration (or in a dedicated Contracts configuration).
  • CodeContractsEnableRuntimeChecking: otherwise the rewriter doesn’t kick in. Not sure why you’d need both this and the constant, but there you go.

Add this to your F# project and you should see the rewriter kick in after the build – it’s called ccrewrite.exe.

Let’s write some contracts

That was the easy part. Now, let’s try to convert the program given in the Code Contracts documentation to F#:

open System.Diagnostics.Contracts

type Rational(numerator,denominator) =
    do Contract.Requires( denominator <> 0 )
    [<ContractInvariantMethod>]
    let ObjectInvariant() = 
        Contract.Invariant ( denominator <> 0 )
    member x.Denominator =
        Contract.Ensures( Contract.Result<int>() <> 0 )
        denominator

So this is a simple rational type. It shows you basically how contracts work – you call methods on the Contract static class:

  • Contract.Requires to impose a precondition on arguments – in this case an argument of the constructor.
  • Contract.Ensures to express a postcondition – in this case a postcondition on the result of a getter, which is nicely accessible using Contract.Result.
  • ObjectInvariantMethodAttribute and Contract.Invariant to impose an invariant on a class – something that should hold after every execution of a method.

So far so good. Alas, when we try to build this, we get:

warning CC1041: Invariant method must be private
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1011: This/Me cannot be used in Requires of a constructor
error CC1038: Member 'Module1+Rational.denominator' has less visibility than the enclosing method 'Module1+Rational.#ctor(System.Object,System.Int32)'.
error CC1069: Detected expression statement evaluated for potential side-effect in contracts of method 'Module1+Rational.#ctor(System.Object,System.Int32)'. (Did you mean to put the expression into a Requires, Ensures, or Invariant call?)
error CC1004: Malformed contract. Found Requires after assignment in method 'Module1+Rational.#ctor(System.Object,System.Int32)'.

 

And it turns out that the rewriter is confused by F#’s representation of constructors. When we comment out the Contract.Requires in the constructor, everything works fine.

The warning about the invariant method is because F# actually compiles private members as internal. The rewriter flags this using this warning, but it’s otherwise not much to worry about, I guess, though slightly annoying if you’re – like me – fairly obsessive about getting your code to compile without warnings.

Conclusion

I’ve also checked code contracts with various pre- and postconditions on functions, and those seem to work fine. So overall it’s not a bad story, but it would be nice to see some better integration and to fix the bug(s).

I haven’t checked other features of code contracts, like contract inheritance or expressing code contracts on interfaces, which needs a few tricks as well, so there is a chance that you’ll run into problems there.

Finally, a tip: if you don’t like the C#-ish feel of the Contracts API, you can define your own inline functions that call into the actual API – given that they’re inlined, the rewriter will not see the difference.
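For example (the wrapper names here are my own):

```fsharp
open System.Diagnostics.Contracts

// F#-friendly wrappers over the Contracts API. Because they are inline,
// the call sites contain the underlying Contract.* calls directly, so
// the rewriter treats them exactly like hand-written contract calls.
let inline requires (condition : bool) = Contract.Requires(condition)
let inline ensures (condition : bool) = Contract.Ensures(condition)

let safeSqrt (x : float) =
    requires (x >= 0.0)
    sqrt x
```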

So close, but not quite there yet. I’ve asked a year ago if the situation was going to improve, and again a few weeks back, but it’s basically unchanged since then. So F#-ers: head over to the Code Contracts forum and ask for F# support! I’m feeling a bit alone in the desert there at the moment…


09 June 2010

Thinking outside the Visual Studio box

Visual Studio and .NET development go hand in hand. As far as I know, every .NET developer uses it. Compared to the Java IDE landscape, there isn’t even competition. And what little competition there is, is pretty much a carbon copy of the approach Visual Studio takes to development. Visual Studio is unavoidable, and there is little on the horizon that will change this situation.

What toothpicks and Visual Studio have in common

I did some Java development in Eclipse over 3 years ago, and as I remember it, it worked better than Visual Studio 2010. Let me repeat that:

Eclipse 2007 (for Java) is a better IDE than Visual Studio 2010 (for C#).

I don’t want to go into a feature-by-feature comparison. No doubt Visual Studio would win. I’m saying that for actual development (you know, the thing you’re doing 95% of the time in an IDE) it’s just better. I believe these activities are compiling, testing, refactoring, browsing code, and source control.

Compiling. The most important feature of compilation is that you don’t see it. The incremental compilation in Eclipse is just way better. Even for medium-sized projects, press Ctrl+S and your code is ready to run. Instantly. This is different from the incremental “compilation” Visual Studio does: while that updates IntelliSense info on the fly, it does not actually build your code. So if you want to test or run, you’ll have to wait.

Testing. Eclipse comes with excellent JUnit integration out of the box. I guess there’s MSTest for Visual Studio, which is ok, but it doesn’t have that Red-Green-Refactor feel. Open source testing frameworks, as far as I know, don’t reach the same level of integration. If you want code coverage, be prepared to pay some money for at least the Premium edition.

Refactoring. The refactorings in Visual Studio pale in comparison to those in Eclipse. Hands down.

Browsing code. I think Visual Studio actually has many features in this area, but the UI doesn’t make them particularly discoverable. I’ve found that following references, finding overrides and the like is faster and easier in Eclipse than in Visual Studio.

Source control. If you fork out some money, you get Team Foundation integration with Visual Studio. My feeling is that integration with other source control systems like Mercurial and Subversion is just not as good as the integration in Eclipse.

Products like Resharper and Testdriven.NET mitigate the refactoring, browsing and testing issues for Visual Studio. But Eclipse comes with these things out of the box. It’s just there when you download it, which is first of all free and second, convenient.

Then, Visual Studio just doesn’t have a good UI. I’m not excited that I can finally make code windows free-floating. The solution explorer naturally makes you focus on the files and the file system, while these actually matter little for your day-to-day development activities. You want to focus on modules, classes and namespaces instead (yes, I know about the class browser, but somehow Visual Studio keeps pulling me back to the solution explorer). And then that annoyance among annoyances: the much-hated Add Reference dialog. In 2010 they made it asynchronous, so we can now pretend that it is not as slow and painful as amputating your arm with a toothpick, except that it’s worse! At least before, you knew when you could start searching for a reference. Now you’re left to wonder: is my reference not there, or is it still loading in the background?

What rocks and Visual Studio have in common

Both SharpDevelop and MonoDevelop seem to focus on conforming to Visual Studio. I guess the argument is that this makes it easier for people to migrate from Visual Studio. Wake up, people: if it looks like Visual Studio, nobody is going to bother anyway.

I don’t want to talk down to the developers of these projects. I’m sure they’re passionate about what they’re doing. But let’s face it – they’re not going to be able to compete on features, as long as Microsoft is throwing money at Visual Studio. On marketing alone, they’re overwhelmed. As far as I can see there is no buzz surrounding these projects at all. Nobody particularly likes or hates them. They’re boring. Because Visual Studio is. It’s just there. Like a rock. A boring, grey rock.

A call to arms

So, dear SharpDevelop or MonoDevelop developer, or whoever is looking to develop a new software project: here’s my call to arms for you.

Dare to innovate! Forget about Visual Studio. It’s there, and it’s going to stay, but give us a ray of hope. Give us something to look forward to. Give us something that we can love, or hate: something that at least provokes some feelings.

Need some ideas?

Stop bothering with trying to be compatible with Visual Studio. Sure, the assemblies need to be compatible with .NET, but who really cares about MSBuild and project files? Why even duplicate Visual Studio’s solution-and-project-file organization; is there a better way? Can you finally free us from the tyranny of the file system and let us focus on the abstractions provided by the language? It’s called software development, not file shuffling.

Can we make better, incremental compilers for C# or VB or F#? Note that I am talking about the compiler here, not the build system. The Eclipse Java compiler proves that incremental compilation can feel pretty much like no compilation at all (as in dynamic languages).

Source control: can we go beyond superficial integration with popular SCMs and really integrate change tracking into the language and IDE? With the disk space currently available, is there even a use for the distinction between in-memory undo/redo, saving a file to disk, and committing a change to source control? Can’t I just type in my code and expect it to be saved, and undoable, and in source control? In addition, can we make “diff” tools language-aware, please? It’s silly that conflict resolution is line-based, while pretty much every coherent change to code spans multiple files and multiple lines. I’ve never heard of a line-based programming language. Let’s talk about modules and functions and objects and interfaces all the way through, please.

Have a look at Smalltalk IDEs. They take a browser-oriented perspective on development, and have had a different view on code organization and source control for ages now. There are lots of good ideas there, and it’s about time they were finally acknowledged and brought to the mainstream. The Hopscotch browser for Newspeak looks interesting too.

As far as UI goes, Code Bubbles (built on Eclipse for Java) seems to have some very well thought out UI ideas on browsing, cooperating and debugging.

I believe that any project that does just some of the above has the potential to become huge in the .NET space. Even compared with mainstream Java IDEs, which are nowhere near revolutionary, the .NET community seems to be behind the curve IDE-wise. Developers can see how computing devices all around them are becoming smarter, better integrated and user-oriented. Mainstream IDEs are lagging way behind.

Time for a revolution?
