29 November 2008

aspnet_compiler and missing .compiled files


This week, I've been facing a rather annoying issue. A developer told me that some global.asax events no longer fired after he precompiled his ASP.NET application with aspnet_compiler on his machine. He tracked the behavior down to a missing App_global_asax.dll.compiled file in the bin directory of the precompiled output. Strangely, when he published the web site using the Visual Studio "Publish Web Site" command, everything went well; it was only when aspnet_compiler was invoked through our NAnt build script that the .compiled files went missing.

I spent a few hours on this one…I discovered that the behavior was reproducible on my computer too, but not consistently. Starting aspnet_compiler directly from a command prompt, or invoking MSBuild on the solution file containing the web site, did sometimes produce the required .compiled files, but rarely. This was driving me crazy. How can a compiler be inconsistent and "forget" about some files without even producing any kind of error? I monitored the entire compilation process using Process Monitor, and I did not notice anything suspect. When everything went well, aspnet_compiler simply created and copied the .compiled files; otherwise, it seemed to not even attempt to do so. No access denied whatsoever, not a single file system error…

After a few hours, a colleague and I had a stupid idea (well, it turned out not to be so stupid after all): we killed all the antivirus processes using Task Manager. And guess what? All of a sudden, aspnet_compiler systematically created the required .compiled files, at every build and with whatever method we used to launch it (VS publish, MSBuild, NAnt, command line)…We did the same test on the developer's computer (which of course ran the same antivirus), and it worked like a charm.

Lesson learned: antivirus programs are not the developer's friends.

We had already experienced some annoying AppDomain restarts while debugging ASP.NET applications, due to the antivirus touching web.config and other monitored files. And now it even interferes with a compiler, without causing any actual build failure! I admit I do not understand what really happens at the filesystem level: why do I not see anything in Process Monitor (not even antivirus activity on the .compiled files)? Do these antivirus programs sit even lower in the OS architecture than Process Monitor, so that they can "swallow" events? Maybe…Anyway, whenever I run into strange behavior from my development tools in the future, one of the first things I'll do is turn off the antivirus. And, by the way, I'm very happy that no AV is running on our build server, otherwise…

Last thing: our machines were running the CA eTrust antivirus. On my second laptop, where I have the free edition of AVG, I never encountered any issue of this kind.

24 September 2008

Are you certified?

Today I passed the Microsoft exam 70-536 (Application Development Foundation). I got a fairly reasonable score, but I was a little disappointed, so to speak. The last time I took such an exam was back in 2004, and I had hoped that the skills assessed would by now better reflect someone's proficiency in a software engineering job. Unfortunately, the questions were still in the same style as four years ago…The key skill you need to pass the exam is knowing the BCL API more or less by heart! What a funny thing! That is not at all what I expect from a decent software engineer. Such an exam should somehow measure the candidate's real understanding of the platform, and their ability to apply that understanding to solve new problems. If I don't know the exact syntax and parameter order/meaning of a method, I have these little things called IntelliSense, MSDN help and F1 in my IDE, no?

Anyway, I'm going to continue my certification path towards the MCPD: Enterprise Developer, and see whether the next exams on WCF, ASP.NET and Windows Forms are like this one. Frankly, I don't expect much difference. But in the end I'll have my "marketing" certification, at least, because the value I see in such a certification from a technical viewpoint is, well, close to zero…The only Microsoft certification that I think would really reflect that you are indeed smart and skilled at your job is the Microsoft Certified Architect program. But that's a $10,000 story…

31 July 2008

It’s all about coupling…

When designing a framework or reusable components, a basic design principle is to keep an eye on each component's dependencies. Dependencies on other components should always be minimized. Having external dependencies forces the user of the component to depend on them as well, which is generally undesirable for many reasons: sensitivity to change, release cycles and versioning, conflicts with other components… This is the well-known high cohesion/loose coupling story.

I was recently (today, in fact) confronted with a component whose design does not quite follow that principle: the Enterprise Library Exception Handling Application Block. While I find that the block does what it has to do rather well, I don't appreciate its dependencies at all. My idea was to write an exception handling aspect using PostSharp and the Exception Handling block. Rather simple, no? Well, no, not so simple, as I have a number of constraints, to name two: my architecture already uses Dependency Injection, and I don't want any instrumentation for now. These constraints make the block, in its current design, useless for me, as it has direct dependencies on ObjectBuilder2, Unity and instrumentation primitives. In other words, it is tightly coupled to its surrounding runtime environment. Of course, my DI container is not Unity (I use Spring.NET), and I do not want to depend on Unity in any way!

I think that when you just want to implement a consistent exception handling across an application (using a component that handles this task well), that should not force you to use DI, and certainly not a specific DI container. This goes against some good design principles.

What I could accept is a dependency on an abstraction assembly that expresses the block's need for a DI infrastructure (e.g. through attributes), but not a direct dependency of the block on a particular implementation of such an important infrastructure component as a DI container. Now I'm stuck with two options: modify the block's code to extract the dependent parts into their own separate assembly (let's hope they are well isolated, I'll have to check), or redevelop my own implementation of the block (which would be sad, because I like how the block handles its core concern…).
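
To make the idea more concrete, here is a minimal sketch of the decoupling I have in mind. The interface and class names are mine, not part of any existing API: my PostSharp aspect would only see a small abstraction, and an adapter living in its own assembly (the only one referencing the Enterprise Library, and thus Unity/ObjectBuilder2 and the instrumentation machinery) would bridge it to the block's static entry point, which, as far as I recall, is ExceptionPolicy.HandleException.

using System;

// Hypothetical abstraction: the aspect (and the rest of the application)
// depends on this interface only.
public interface IExceptionPolicy
{
    // Returns true if the original exception should be rethrown.
    bool Handle(Exception exception, string policyName);
}

// Hypothetical adapter, isolated in a separate assembly together with the
// Enterprise Library references. Swapping the block out only affects this
// one assembly, not the aspect or the application code.
public class EntLibExceptionPolicy : IExceptionPolicy
{
    public bool Handle(Exception exception, string policyName)
    {
        return Microsoft.Practices.EnterpriseLibrary.ExceptionHandling
            .ExceptionPolicy.HandleException(exception, policyName);
    }
}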

And by the way, how do I get rid of those instrumentation calls? It's not that I don't recognize there is value in that stuff, but I don't need it in a (unit) test environment, and I don't want to install perf counters or configure WMI on my laptop just to handle exceptions. In previous versions of the Enterprise Library, we could recompile it and exclude the instrumentation through conditional compilation, but that no longer seems to be possible.
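
For the record, the conditional compilation pattern I'm referring to looks roughly like this; this is a generic sketch of the technique, not the actual Enterprise Library source:

using System.Diagnostics;

public static class Instrumentation
{
    // With [Conditional], every call site to this method is removed by the
    // compiler when the INSTRUMENTATION symbol is not defined, so a build
    // without the symbol carries no perf counter or WMI baggage.
    [Conditional("INSTRUMENTATION")]
    public static void FireExceptionHandled(string policyName)
    {
        // increment performance counters, raise WMI events, ...
    }
}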

Conclusion: always actively manage (and minimize) your dependencies, and when designing a framework or a reusable component, remember that its users should only pay for what they actually use.

I would be curious to run NDepend on the whole Enterprise Library and see what the results are. Maybe if I find some spare time ;-)…

12 February 2008

About IDisposable, Close(), Streams and StreamReader-Writer

Last week, we discovered a bug in a SOAP extension our team wrote. An ObjectDisposedException occurred during the extension invocation on the client side.

The call stack ended like this:

System.ObjectDisposedException: Cannot access a closed Stream.

   at System.IO.__Error.StreamIsClosed()
   at System.IO.MemoryStream.set_Position(Int64 value)
   …(further up the call chain)

Oops, we're trying to use a closed Stream…How can that be? Well, the code that actually uses the Stream is the following:


public void ProcessMessage()
{
    // ...
    CopyStream(_oldStream, _newStream);
    _newStream.Position = 0; // <= this is where the exception occurs
    // ...
}

private static void CopyStream(Stream source, Stream target)
{
    using (TextReader reader = new StreamReader(source))
    using (TextWriter writer = new StreamWriter(target))
    {
        writer.Write(reader.ReadToEnd());
    }
}


It looks like _newStream is being closed inside CopyStream, and the only place in that method where this could happen is the Dispose call on the StreamWriter when leaving the using block. So the StreamWriter effectively takes ownership of the underlying Stream it writes to! Let's look at what really happens in StreamWriter.Dispose, thanks to our friend Reflector.


public abstract class TextWriter : MarshalByRefObject, IDisposable
{
    public virtual void Close()
    {
        this.Dispose(true);
        GC.SuppressFinalize(this);
    }

    public void Dispose()
    {
        this.Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
    }

    // Other members...
}

public class StreamWriter : TextWriter
{
    private Stream stream;
    private bool closable;

    internal bool Closable
    {
        get { return this.closable; }
    }

    public override void Close()
    {
        this.Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected override void Dispose(bool disposing)
    {
        try
        {
            // some code omitted here...
        }
        finally
        {
            if (this.Closable && (this.stream != null))
            {
                try
                {
                    if (disposing)
                    {
                        this.stream.Close();
                    }
                }
                finally
                {
                    this.stream = null;
                    // setting other members to null here…
                    base.Dispose(disposing);
                }
            }
        }
    }
}

Ah, OK, I understand. Indeed, when the StreamWriter is disposed, it calls Stream.Close. But I noticed this Closable property which, when set to false, would prevent the Stream from being closed. Unfortunately, it's internal, and it happens to be used only by the framework itself (for the Console streams, apparently). Frankly, I don't like this design. I think it would have been better to make the StreamWriter's ownership of the Stream optional, and that is exactly what the Closable property seems to be for, so why isn't there a way to set it? The only StreamWriter constructor that allows setting it is also internal. Moreover, why does TextWriter.Dispose call GC.SuppressFinalize(this) when the class defines no finalizer? And then there is the override of the Close method, which just…repeats the base implementation (I'll come back to this implementation when discussing Stream).

Hmm, I'm not happy with what I discovered in the framework, but maybe I would not have had this bug if I had read the documentation for the StreamWriter constructor? So I read it, and guess what? Nowhere does it mention that the lifetime of my Stream object is now tied to the StreamWriter. Not a single word about it. But there is a nice code sample illustrating the usage of the different StreamWriter constructors. Here it is:


public void CreateTextFile(string fileName, string textToAdd)
{
    string logFile = DateTime.Now.ToShortDateString()
        .Replace(@"/", @"-").Replace(@"\", @"-") + ".log";

    FileStream fs = new FileStream(fileName,
        FileMode.CreateNew, FileAccess.Write, FileShare.None);

    StreamWriter swFromFile = new StreamWriter(logFile);
    swFromFile.Write(textToAdd);
    swFromFile.Flush();
    swFromFile.Close();

    StreamWriter swFromFileStream = new StreamWriter(fs); // <= look at this
    swFromFileStream.Write(textToAdd);
    swFromFileStream.Flush();
    swFromFileStream.Close(); // <= and also at this…

    StreamWriter swFromFileStreamDefaultEnc =
        new System.IO.StreamWriter(fs, // <= and finally at this…
            System.Text.Encoding.Default);
    swFromFileStreamDefaultEnc.Write(textToAdd);
    swFromFileStreamDefaultEnc.Flush();
    swFromFileStreamDefaultEnc.Close();

    // more code omitted…
}

If you try to run the sample, it will of course fail miserably…because closing swFromFileStream also closes the underlying FileStream, which therefore cannot be reused by the next StreamWriter a few lines further down.

But that's not all…What I also found a little strange when looking at the decompiled code above is that StreamWriter.Dispose calls Stream.Close, not Stream.Dispose. Why? Shouldn't owned IDisposable objects be disposed of during Dispose? Well, it turns out that for Stream-derived classes, calling Dispose or Close has strictly the same effect. Let's look at the code:


public abstract class Stream : MarshalByRefObject, IDisposable
{
    public void Dispose()
    {
        this.Close();
    }

    public virtual void Close()
    {
        this.Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing && (this._asyncActiveEvent != null))
        {
            this._CloseAsyncActiveEvent(Interlocked.Decrement(ref this._asyncActiveCount));
        }
    }

    // Other members...
}

public class MemoryStream : Stream
{
    protected override void Dispose(bool disposing)
    {
        try
        {
            if (disposing)
            {
                this._isOpen = false;
                this._writable = false;
                this._expandable = false;
            }
        }
        finally
        {
            base.Dispose(disposing);
        }
    }

    // Other members...
}

So Dispose calls Close, which calls Dispose(true) in the Stream base class, and MemoryStream.Dispose(bool) does nothing interesting besides calling the base implementation. Wow, what's that? Again, I don't like that design. Why have both a Dispose and a Close when they do the same thing? IMO, something is flawed in Stream (and StreamWriter/Reader…). There is a semantic difference between disposing of an object and closing it: in the former case, I don't need it anymore and it's gone, while in the latter, I might want to re-open it and use it again. Think of a magazine: when I've read it in its entirety, I put it in the recycle bin (that's the Dispose); when I've only read some articles and plan to read more later, I close it and leave it on my desk.

To Close or to Dispose?

I think having Close call Dispose brings some confusion. Why have the two? Anyway, the contract for the user of a class that implements IDisposable is that calling Dispose after use is mandatory. Any call to Close is then superfluous. If Close had allowed me to temporarily release some resource and reuse it later through a call to some Open method, then it would make sense. There is one place in the 2.0 framework where this Dispose()/Dispose(bool)/Finalize()/Open()/Close() business is IMO almost correctly implemented: System.Data.SqlClient.SqlConnection. I say almost, because trying to re-Open a disposed connection yields an InvalidOperationException and not an ObjectDisposedException, as I would expect. Apart from that, it is quite possible to call Open and Close multiple times without any problem.
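
To illustrate what I mean, here is a small sketch of that behavior (the connection string is just a placeholder):

using System.Data.SqlClient;

class ConnectionLifetimeDemo
{
    static void Main()
    {
        SqlConnection connection = new SqlConnection("Data Source=.;Integrated Security=SSPI");

        connection.Open();
        connection.Close();   // Close really means "I may come back later"
        connection.Open();    // ...and indeed I can
        connection.Close();

        connection.Dispose();
        connection.Open();    // throws InvalidOperationException
                              // (I would have expected ObjectDisposedException)
    }
}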

A simple solution to the Stream ownership issue

The solution I found to the above issue (I'm actually talking about the Stream being closed by the StreamWriter) is rather simple (and I guess I'm not the first one to implement it): use a Decorator around the Stream to prevent it from being closed or disposed. That way, the Stream object is still usable even after the StreamWriter/Reader has been closed.


public class NonClosingStreamDecorator : Stream
{
    private readonly Stream _innerStream;

    public NonClosingStreamDecorator(Stream stream)
    {
        if (stream == null)
            throw new ArgumentNullException("stream");
        _innerStream = stream;
    }

    public override void Close()
    {
        // do not delegate !!
        //_innerStream.Close();
    }

    protected override void Dispose(bool disposing)
    {
        // do not delegate !!
        //_innerStream.Dispose();
    }

    // All the other overridden members simply delegate to the inner stream.
    public override bool CanRead { get { return _innerStream.CanRead; } }
    public override bool CanSeek { get { return _innerStream.CanSeek; } }
    public override bool CanWrite { get { return _innerStream.CanWrite; } }
    public override long Length { get { return _innerStream.Length; } }
    public override long Position
    {
        get { return _innerStream.Position; }
        set { _innerStream.Position = value; }
    }
    public override void Flush() { _innerStream.Flush(); }
    public override int Read(byte[] buffer, int offset, int count)
    {
        return _innerStream.Read(buffer, offset, count);
    }
    public override long Seek(long offset, SeekOrigin origin)
    {
        return _innerStream.Seek(offset, origin);
    }
    public override void SetLength(long value) { _innerStream.SetLength(value); }
    public override void Write(byte[] buffer, int offset, int count)
    {
        _innerStream.Write(buffer, offset, count);
    }
}

Using the Decorator is straightforward:


[Test]
public void TestClosingStreamWriterWithNonClosingStream()
{
    TextWriter writer = new StreamWriter(new NonClosingStreamDecorator(_stream));
    writer.WriteLine("I like this");
    writer.Close();
    // enjoy using the Stream further
    _stream.Position = 0;
}


18 January 2008

Verifying mock expectations in TearDown

I'm currently in a unit testing-oriented period. I've had various discussions about the subject with different project teams and colleagues. One of the things I noticed in a piece of unit test code was that some people verify mock expectations in the fixture's TearDown. I also found some examples of this practice on the net. The test code looked like this:


   1:      [TestFixture]
   2:      public class SomeTestFixture
   3:      {
   4:          private MockRepository _mocks;
   5:   
   6:          [SetUp]
   7:          public void Setup()
   8:          {
   9:              _mocks = new MockRepository();
  10:          }
  11:   
  12:          [TearDown]
  13:          public void TearDown()
  14:          {
  15:              _mocks.VerifyAll();
  16:          }
  17:   
  18:          [Test]
  19:          public void SomeTest()
  20:          {
  21:              ISomeInterface interfaceMock = _mocks.CreateMock<ISomeInterface>();
  22:              interfaceMock.SomeMethod();
  23:              _mocks.ReplayAll();
  24:   
  25:              TestedObject sut = new TestedObject();
  26:              sut.SomeInterface = interfaceMock;
  27:              sut.InvokeSomeMethod();
  28:          }
  29:      }

Personally, I don't find this a good practice, and I would not recommend it, for several reasons.

When we look at the basics of automated unit testing, each test case execution happens in four distinct phases, as described in xUnit Patterns: 1. Setup, 2. Exercise, 3. Verify, 4. Tear down. The unit testing framework clearly provides a place for the code of each of these phases: the SetUp, TearDown and Test methods. As you might have guessed, the test method should contain phases 2 and 3. When you don't follow that convention, you are breaking both the organization the framework brings to your test code and the semantics of the TearDown method (which is only supposed to contain phase 4 code).

Also, when we look at the test code (lines 21-27), it is not obvious what we are actually trying to verify. Did the developer forget to call VerifyAll? Did he forget some assertions? Having to look in TearDown to find the end of the test case logic is not intuitive.

But that's not all. Suppose you have many test cases in a fixture, and one of the tests breaks because of unmet expectations. The stack trace would look like this:

Rhino.Mocks.Exceptions.ExpectationViolationException: ISomeInterface.SomeMethod(); Expected #1, Actual #0.

at Rhino.Mocks.MockRepository.VerifyAll()
at Be.Aprico.Test.SomeTestFixture.TearDown() in SomeTestFixture.cs:line 24


The problem here is that it is not obvious from the stack trace which test case failed. Of course, the test runner will tell you which test failed, but I still prefer a stack trace whose top points to where the problem is actually located, not to some unrelated place.

Finally, what if your fixture has real teardown code that must be executed? Should you in this case verify the expectations before or after the teardown code?


// Should I do this?
[TearDown]
public void TearDown()
{
    _mocks.VerifyAll();

    Cleanup();
}

// Or this?
[TearDown]
public void TearDown()
{
    Cleanup();

    _mocks.VerifyAll();
}

In the first case, you run the risk that, when expectations are not met in one of the test cases, the Cleanup call never executes because of the thrown exception. This can have bad consequences (mainly interacting, erratic tests: unrelated tests start failing because of the leftovers of the previously failed one). You can fix this by wrapping the VerifyAll call in a try…finally block, as sketched below, but in my opinion that makes the teardown logic more complicated than it should be.
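
Just to show what I mean, the try…finally variant of the first option would look something like this (Cleanup being whatever real teardown code the fixture needs):

[TearDown]
public void TearDown()
{
    try
    {
        _mocks.VerifyAll();
    }
    finally
    {
        // Always runs, even if expectation verification throws,
        // so a failed test does not leave leftovers behind for the next ones.
        Cleanup();
    }
}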

In the second case, what we are really doing is swapping phases 3 and 4 of our four-phase test. This goes against well-known standard practice, and it doesn't help the reader understand the test code.

To conclude, I would recommend always keeping the verification code close to the exercise code. That way, the test case stays readable and complete. My personal preference goes to the following syntax (using Rhino Mocks):


[Test]
public void SomeTest()
{
    ISomeInterface interfaceMock = _mocks.CreateMock<ISomeInterface>();
    using (_mocks.Record())
    {
        interfaceMock.SomeMethod();
    }
    using (_mocks.Playback())
    {
        TestedObject sut = new TestedObject();
        sut.SomeInterface = interfaceMock;
        sut.InvokeSomeMethod();
    }
}

I find the using blocks a very elegant, clear way to delimit the definition of expectations from the actual exercise phase. It makes the test code very readable. But this is only a matter of personal preference; there are other equally valid syntaxes!

13 January 2008

Disposable mocks

I was writing some unit tests for a class that had a dependency on an interface inheriting from IDisposable. I'm using Rhino Mocks (one of the best mock object libraries I know) to mock the dependencies of the class being tested.

The class itself also implemented IDisposable, in order to dispose of its resources correctly. The code for the class looked approximately like this:


using System;

namespace DisposableMock
{
    public interface ISomeInterface : IDisposable
    {
        void DoStuff();
    }

    public class SomeDisposableClass : IDisposable
    {
        private ISomeInterface _someInterface;

        public ISomeInterface SomeInterface
        {
            set { _someInterface = value; }
        }

        public void DoSomeOtherStuff()
        {
            _someInterface.DoStuff();
        }

        #region IDisposable Members

        public void Dispose()
        {
            _someInterface.Dispose();
        }

        #endregion
    }
}

The test fixture looked like this:


using NUnit.Framework;
using Rhino.Mocks;

namespace DisposableMock.Test
{
    /// <summary>
    /// A test fixture for <see cref="SomeDisposableClass"/>
    /// </summary>
    [TestFixture]
    public class SomeDisposableClassFixture
    {
        private MockRepository _mocks;
        private SomeDisposableClass _testee;
        private ISomeInterface _someInterface;

        [SetUp]
        public void Setup()
        {
            _mocks = new MockRepository();
            _testee = new SomeDisposableClass();
            _someInterface = _mocks.DynamicMock<ISomeInterface>();
        }

        [TearDown]
        public void TearDown()
        {
            if (_testee != null)
                _testee.Dispose();
        }

        [Test]
        public void Test()
        {
            // Expectations setup
            _someInterface.DoStuff();
            _mocks.ReplayAll();

            // Replay
            _testee.SomeInterface = _someInterface;
            _testee.DoSomeOtherStuff();

            // Expectations verification
            _mocks.VerifyAll();
        }
    }
}

What's interesting here is the TearDown method. As a good .NET citizen, the test fixture cleans up after itself and calls the Dispose method of the object under test. Unfortunately, this makes the test fail miserably with an exception in TearDown.

The stack trace looks like this:

System.InvalidOperationException: This action is invalid when the mock object is in verified state.

   at Rhino.Mocks.Impl.VerifiedMockState.MethodCall(IInvocation invocation, MethodInfo method, Object[] args)
   at Rhino.Mocks.MockRepository.MethodCall(IInvocation invocation, Object proxy, MethodInfo method, Object[] args)
   at Rhino.Mocks.Impl.RhinoInterceptor.Intercept(IInvocation invocation)
   at Castle.DynamicProxy.AbstractInvocation.Proceed()
   at ISomeInterfaceProxy71b077222e754186ac7f599256a49a48.Dispose()
   at DisposableMock.SomeDisposableClass.Dispose() in SomeDisposableClass.cs:line 29
   at DisposableMock.Test.SomeDisposableClassFixture.TearDown() in SomeDisposableClassFixture.cs:line 28

Indeed, calling Dispose on the tested object in TearDown is the problem: the mock's expectations have already been verified (that happens in the test method body), and disposing the testee in turn calls Dispose on the mocked interface. In Rhino Mocks, you cannot use a mock after its expectations have been verified, even if the method you are calling is not part of your expectations. So, I started thinking about how to solve this…

I must call Dispose on the tested object, and TearDown seemed like the right place to do it. I thought about injecting a null reference into _testee.SomeInterface, but that is really not a good idea as it could have side effects on _testee. Then again, maybe TearDown is not the right place to dispose of _testee after all. So I came up with this solution:

[Test]
public void Test()
{
    // Expectations setup
    _someInterface.DoStuff();
    _mocks.ReplayAll();

    // Replay
    using (_testee)
    {
        _testee.SomeInterface = _someInterface;
        _testee.DoSomeOtherStuff();
    }
    _mocks.VerifyAll();
}

But frankly, I don't like that. Dispose is really part of the cleanup and should logically go in TearDown, not pollute every test method. Looking at the MockRepository class, I noticed a method that sounds like it could help me: BackToRecordAll. Let's give it a try…


[TearDown]
public void TearDown()
{
    _mocks.BackToRecordAll();
    if (_testee != null)
        _testee.Dispose();
}

[Test]
public void Test()
{
    // Expectations setup
    _someInterface.DoStuff();
    _mocks.ReplayAll();

    // Replay
    _testee.SomeInterface = _someInterface;
    _testee.DoSomeOtherStuff();
    _mocks.VerifyAll();
}


Aaah, much better! This time I can call Dispose on the mocks again without any exception being thrown. Of course, going back to record mode in TearDown is not that intuitive, but I still find it better than having to write cleanup code in my test cases.

05 January 2008

Visual Studio 2008 multi-targeting: using C# 3.0 for your .NET framework 2.0 apps

A friend recently told me he was curious whether the C# 3.0 compiler could be used to compile applications targeting the .NET 2.0 framework while still leveraging the new C# 3.0 features. Indeed, there is no reason why the 2.0 framework and runtime would have any trouble running C# 3.0 applications, as long as they reference only assemblies from the 2.0 framework. This is because the new C# 3.0 language features were all implemented at the compiler level and require no change to the runtime (there is no runtime newer than 2.0; .NET 3.0 and 3.5 still run on the 2.0 CLR).


So I did a quick test with VS 2008, and I was pleased to see that this indeed works like a charm. At first, I thought that multi-targeting also meant using the compiler corresponding to the target framework, but apparently VS 2008 always compiles your source files with the C# 3.0 compiler, even when you're targeting a framework other than 3.5.


A quick example: the following code


using System;
using System.Collections.Generic;

namespace ConsoleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            var foo = 2;
            Console.WriteLine(foo);
            List<int> fooList = new List<int>() { foo, foo++, foo++ };
            foreach (var x in fooList)
                Console.WriteLine(x);
            Console.ReadLine();
        }
    }
}


gets compiled into IL, and Reflector shows the equivalent C# 2.0 syntax of the Main method:



private static void Main(string[] args)
{
    int foo = 2;
    Console.WriteLine(foo);
    List<int> <>g__initLocal0 = new List<int>();
    <>g__initLocal0.Add(foo);
    <>g__initLocal0.Add(foo++);
    <>g__initLocal0.Add(foo++);
    List<int> fooList = <>g__initLocal0;
    foreach (int x in fooList)
    {
        Console.WriteLine(x);
    }
    Console.ReadLine();
}


Of course, the above example only uses a couple of the new C# 3.0 features (local variable type inference and collection initializers). Using LINQ will require adding references to .NET Framework 3.5 assemblies (System.Core.dll and System.Data.Linq.dll). Even if you decide to recreate LINQ from scratch yourself, which is perfectly possible, you'll need at least a reference to System.Core.dll, because extension methods require the System.Runtime.CompilerServices.ExtensionAttribute.
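
To make that last point concrete: a hypothetical extension method like the one below (the name is mine) makes the compiler emit the ExtensionAttribute on the method, and since that attribute type ships in System.Core.dll, the code won't build in a project that doesn't reference it.

using System;

// Hypothetical extension method: the 'this' modifier makes the compiler look
// for System.Runtime.CompilerServices.ExtensionAttribute, which lives in
// System.Core.dll, so this won't compile without that reference.
public static class StringExtensions
{
    public static bool IsBlank(this string value)
    {
        return value == null || value.Trim().Length == 0;
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine("   ".IsBlank()); // True
    }
}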


Here's a summary table showing which of the new C# 3.0 features can actually be used in .NET 2.0 applications.


C# 3.0 Feature                    Supported on .NET Framework 2.0?
Local Variable Type Inference     Yes
Object Initializers               Yes
Collection Initializers           Yes
Anonymous Types                   Yes
Auto-Implemented Properties       Yes
Extension Methods                 No (needs ExtensionAttribute in System.Core.dll)
Query Expressions                 No (needs Extension Methods)
Expression Trees                  No (needs the expression tree types in System.Core.dll)
Implicitly-Typed Arrays           Yes
Lambda Expressions                Yes
Partial Methods                   Yes
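
As a quick illustration of a few of the "Yes" rows, something like the following should compile fine in a VS 2008 project targeting .NET 2.0, since none of it needs anything beyond the 2.0 assemblies (the classes are just made-up examples):

using System;

namespace CSharp3OnNet2Demo
{
    // Auto-implemented properties: pure compiler sugar, no 3.5 assembly needed.
    class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Object initializer + local variable type inference.
            var person = new Person { Name = "Foo", Age = 30 };

            // Implicitly-typed array + anonymous type.
            var tags = new[] { "a", "b", "c" };
            var summary = new { person.Name, TagCount = tags.Length };

            Console.WriteLine(summary.Name + ": " + summary.TagCount);
        }
    }
}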



As a conclusion, even if you're still stuck with .NET 2.0 as your runtime environment, you can already leverage some C# 3.0 features in your code. And VS 2008 supports it!

Visual Studio 2008 Project Creation Failed

If you're playing with Guidance Automation eXtensions and you run across this annoying issue where you can no longer create any project in VS 2008, check out this post. That solved the problem on my machine.

03 January 2008

Is ObjectSpaces not quite dead yet?

This morning I was browsing through the .NET Framework 2.0 directory (C:\Windows\Microsoft.NET\Framework\v2.0.50727) on my XP SP2 laptop. And I noticed a very surprising file there…

Coming back home this evening, I checked my second laptop that runs Vista, and yes, it is there too…

Here's what I saw:

[screenshot of the .NET 2.0 framework directory showing the file in question]
It seems that some ObjectSpaces bits finally made it to RTM ;-)

Oh, and by the way, I don't know what caused this file to appear, because it is not present on my XP SP2 desktop…Some hotfix, maybe?

01 January 2008

My first post…

So, that's it: as this new year starts, I've decided to blog a little. The idea had been in the air for a few months, but I always thought I wouldn't have enough time for it.


Some changes in my personal life now make me think that I will probably have more free time in 2008 than before, so maybe I can make my own very modest contribution to the community.


I expect to be mostly blogging about software engineering topics, but who knows, there could be other subjects also from time to time.


So let's keep things short this time; I have to prepare my next post, which will hopefully be more interesting than this one!