Tag Archives: .Net

ANTS Memory Profiler

Well, I’m speaking about a commercial product again. This time it’s ANTS Memory Profiler (by Red Gate Software, the makers of .Net Reflector). This product will help you identify any .Net memory leak you might have.

The truth is, .Net itself never leaks, but you can sometimes make stupid design mistakes, like forgetting to remove references to some objects (which may hold heavy references themselves).
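
A classic example of this kind of mistake (a sketch, with illustrative names) is subscribing to a static event and never unsubscribing :

// The static event keeps a reference to every subscriber,
// so the GC can never collect them : a managed "leak".
public static class Bus {
	public static event EventHandler Tick;
}

public class Subscriber {
	public Subscriber() {
		Bus.Tick += OnTick; // never removed...
	}
	void OnTick( object sender, EventArgs e ) { }
	// ...so every Subscriber instance stays reachable forever
}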

The tool allows you to take snapshots of your running application and compare them. You can see the difference in memory consumption per object type or the difference in class instance counts.

I’ve used it at work to solve a Sharepoint memory leak. And I finally discovered that the memory leak was actually coming from the .Net Sharepoint object model. I’ll talk about this later. Sharepoint is amazing both ways.

Loading plugins assemblies in .Net

(first post of the year)

This might seem like quite a complex thing to do, but it’s in fact very simple. Thank you .Net for being so well built.

Note : With .Net 3.5, there is a much more advanced mechanism called the Add-In framework (System.AddIn). But it’s also much more complex. You should only use it on long-term projects where the plugin API will evolve (and the plugins themselves cannot be changed). I’ve used it on a project and it really made us lose a lot of time.

So here is the code for a simple plugin class loading system :

// Requires the System, System.Collections.Generic, System.IO,
// System.Linq and System.Reflection namespaces
public List<Type> GetPlugins<T>( string folder ) {
	var files = Directory.GetFiles( folder, "*.dll" );
	var tList = new List<Type>();
	foreach ( string file in files ) {
		try {
			var assembly = Assembly.LoadFile( file );
			foreach ( Type type in assembly.GetTypes() ) {
				// Only public classes can be plugins
				if ( !type.IsClass || type.IsNotPublic )
					continue;
				if ( type.GetInterfaces().Contains( typeof( T ) ) )
					tList.Add( type );
			}
		}
		catch ( Exception ex ) {
			Logger.LogException( ex );
		}
	}
	return tList;
}
 
public List<T> InstantiatePlugins<T>( List<Type> types, object[] args ) where T : class {
	var list = new List<T>();
 
	foreach ( var t in types )
		if ( t.GetInterfaces().Contains( typeof( T ) ) )
			list.Add( Activator.CreateInstance( t, args ) as T );
 
	return list;
}

Our project can be organized like this :

  • Project.Project : The application that will load the plugins
  • Project.Common.Plugins : The common types used by the core and the plugins
  • Project.Plugin.Test1 : One stupid test plugin

In Project.Common.Plugins, we will declare an interface :

namespace Project.Common.Plugins {
	public interface IPlugin {
		String Name { get; }
		void DoSomeStuff();
	}
}

In Project.Plugin.Test1, we will declare a class :

using Project.Common.Plugins;
 
namespace Project.Plugin.Test1 {
	// The class must implement IPlugin to be picked up by GetPlugins<IPlugin>
	public class PluginTest1 : IPlugin {
		public String Name { get { return "PluginTest1"; } }
		public void DoSomeStuff() {
			for( int i = 0; i < 100; i++ )
				Console.WriteLine( "I only count ({0})", i );
		}
	}
}

This assembly has to be generated in a “plugins” directory.

Then, in your project, you just have to use the methods given at the beginning and do something like this :

var types = GetPlugins<IPlugin>( "plugins" );
// T cannot be inferred from the arguments here, so it has to be given explicitly
var pluginInstances = InstantiatePlugins<IPlugin>( types, null );
 
Console.WriteLine( "Plugins are :" );
foreach( var pi in pluginInstances )
	Console.WriteLine( "* {0}", pi.Name );

If you’re worried about being stuck with these created objects, you should take a look at AppDomains (I think I will talk about them pretty soon). They allow you to load .Net assemblies and types and then unload them whenever you want. Since this code can easily be adapted to use them, you should start without AppDomains and add them when you feel your application could benefit from it.
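
As a teaser, here is a minimal sketch of the idea, assuming the plugin class inherits MarshalByRefObject (required to use the instance across the domain boundary); the path and type name are just illustrative :

var domain = AppDomain.CreateDomain( "PluginsDomain" );
try {
	var plugin = (IPlugin) domain.CreateInstanceFromAndUnwrap(
		@"plugins\Project.Plugin.Test1.dll",
		"Project.Plugin.Test1.PluginTest1" );
	plugin.DoSomeStuff();
}
finally {
	// Unloading the domain unloads every assembly it loaded
	AppDomain.Unload( domain );
}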

NDepend

Patrick Smacchia gave me a professional license of NDepend v2.12 so that I could write some stuff about it if I liked it. As it was a gift (yes it is), I decided to force myself to look into this software. And after having looked at most of its functionality, I kind of like it. It’s not THE ultimate tool you have to use, but it’s a little bit like the others (ReSharper, Reflector, etc.) : it gives you a better insight into what you have in hand.

It’s a little bit like ReSharper in that it helps you see what you might have done wrong. Except that ReSharper tells you while you’re editing the code, whereas NDepend helps you make a full review of the code.

Everything in this software revolves around the Code Query Language (CQL). I thought this was some sort of commercial concept to sell more, but it turns out you can do a lot of things with it. It’s a sensible idea : it’s SQL for your code. And the CQL textbox’s support for auto-completion makes writing CQL pretty easy.
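
For instance, if I remember the syntax correctly, a query along these lines lists your most complex methods (treat it as a sketch) :

SELECT TOP 10 METHODS WHERE CyclomaticComplexity > 15 ORDER BY CyclomaticComplexity DESC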

Don’t make it your judgment tool
I guess the riskiest thing to do would be to give NDepend to dumb people (like code quality reviewers who don’t understand much of what they do). They would end up getting on everybody’s nerves, because I don’t think all the code triggering query warnings can be changed. I’ve taken this simple, stripped-down method from the StreamRipper app :

private static Boolean ParseArgs( String[] args, Parameters parameters ) {
	try {
		for ( int i = 0; i < args.Length; i++ ) {
			switch ( args[ i ] ) {
				case "--url":
				case "-u":
					parameters.Urls.Add( args[ ++i ] );
					break;
 
				case "--user-agent":
				case "-a":
					parameters.UserAgent = args[ ++i ];
					break;
 
				case "--reconnect-attempts":
				case "-r":
					parameters.MaxReconnectionTries = int.Parse( args[ ++i ] );
					break;
			}
		}
		return true;
	}
	catch ( Exception ) {
		Console.Error.WriteLine( "Arguments error ! " );
		return false;
	}
}

This code will trigger a cyclomatic complexity warning (as it would in ReSharper 5.0 [I should talk about this someday] with the cyclomatic complexity addin). So, for me, it’s clear you have to accept having a lot of warnings. It’s not like ReSharper, where you can solve all the warnings by accepting every little thing it tells you to do (sometimes I’m ReSharper’s slave, but I like it).

BUT, if you really want to make it your judgment call tool, you should really stick to this diagram :

You can see that in this project some of the projects are in the zone of pain and the zone of uselessness. Don’t worry. In the zone of uselessness we have the WCF contracts; it’s quite normal that they are totally abstract. And close to the zone of pain, we have low-level libraries. So, nothing to worry about.

If you WANT a quick and dirty rule : If your business logic core code is in one of these two red zones, you have a problem.

What I think it does well
I think it gives you a quick view of the software you’re working on : what the main projects, classes and methods are, and what quality of work you should expect from it. The graphical view isn’t pretty, but it gives you a good simplified view :

And anytime you have a doubt, you just double-click on the method and it opens it in Visual Studio.

You have the dependency graphical view :

It doesn’t look very useful like this. But within Visual NDepend, it displays the parent and child assemblies when you move your mouse over a project :

Evolution of your project
NDepend also saves the results of all previous analyses and allows you to show the evolution of your product. You can easily see what methods have been modified / added / deleted from one analysis to another. Each analysis you run is saved and can be compared with any other later. This can even be done on a plain .Net assembly, which lets you see what has changed between two versions of it. And with the help of Reflector, you can see precisely what has been fixed/improved.

You can see a pretty good example by Patrick Smacchia : Comparison of .Net Beta 1 and 2 with NDepend.

My thoughts
I think it gives you a quick and simplified view of the organization and size of a project. It’s a great .Net tool that I would recommend to any company having big .Net projects. But you shouldn’t spend too much time trying to comply with all its default CQL query checks, as they are a little bit constrictive. If you do, you might want to increase the threshold values. And please take extra care before making judgment calls (it might be tempting).

Inside Sharepoint

I recently took the time to take a look inside the Microsoft.Sharepoint.dll using reflector. I’m not sure I have the right to do that. And I’m pretty sure I don’t have the right to publish any code extracted from it, so I won’t show any.

Using SPSite(s) and SPWeb(s)
If you do some timing on the SPWeb creation call (SPSite.OpenWeb), you will find out that it’s freaking fast (less than 1 ms on my server). The reason is that the heaviest object, the SPRequest instance, is shared among the SPWebs of an SPSite. The Dispose call only “invalidates” the SPWeb, and if this SPWeb is the owner of the SPRequest (which is the case for the SPContext.Current.Web object most of the time), it releases it.
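
A rough sketch of the timing I mean (the numbers will obviously vary from one farm to another; Stopwatch comes from System.Diagnostics) :

// OpenWeb is cheap because the heavy SPRequest is shared within the SPSite
var sw = Stopwatch.StartNew();
using ( SPWeb web = SPContext.Current.Site.OpenWeb() ) {
	sw.Stop();
	Console.WriteLine( "OpenWeb took {0} ms", sw.ElapsedMilliseconds );
}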

Personally, I like to have something like this when I use a particular SPWeb throughout my code in a WebPart :

private SPWeb _rootWeb;
public SPWeb RootWeb {
    get {
        if ( _rootWeb == null ) {
            _rootWeb = SPContext.Current.Site.RootWeb;
            // Never dispose of the context's own web
            if ( _rootWeb != SPContext.Current.Web )
                _toDispose.Add( _rootWeb );
        }
        return _rootWeb;
    }
}
 
private List<IDisposable> _toDispose = new List<IDisposable>();
protected override void OnUnload( EventArgs e ) {
    base.OnUnload( e );
    foreach( var disp in _toDispose )
        disp.Dispose();
}

But the code above won’t gain you even 1 ms compared to this code (which is shorter and potentially safer, since you don’t need to dispose of everything) :

var rootWeb = SPContext.Current.Site.RootWeb;
try {
    // Your code
}
finally {
     if ( rootWeb != SPContext.Current.Web )
         rootWeb.Dispose();
}

If you have to access some more indirect objects, you should definitely keep the code shown earlier. For instance, to use a SuperToken Web :

private SPSite _stSite;
SPSite STSite {
    get {
        if ( _stSite == null ) {
            _stSite = new SPSite( SPContext.Current.Site.Url, SPContext.Current.Site.SystemAccount.UserToken );
            // This SPSite is a brand new object, so we are responsible for disposing of it
            _toDispose.Add( _stSite );
        }
        return _stSite;
    }
}
 
private SPWeb _stWeb;
SPWeb STWeb {
    get {
        if ( _stWeb == null ) {
            // OpenWeb expects a server-relative URL
            _stWeb = STSite.OpenWeb( SPContext.Current.Web.ServerRelativeUrl );
            _toDispose.Add( _stWeb );
        }
        return _stWeb;
    }
}
 
private List<IDisposable> _toDispose = new List<IDisposable>();
protected override void OnUnload( EventArgs e ) {
    base.OnUnload( e );
    foreach( var disp in _toDispose )
        disp.Dispose();
}

Here, opening these new SPSite and SPWeb objects takes 200 ms on my server. Making sure this only happens once per webpart (or better, once per page) can really boost your performance.

The SPRequest object
Well… I was very disappointed : the SPRequest object references an SPRequestInternalClass from the Microsoft.Sharepoint.Library assembly which only uses COM+ interop methods. So, it’s basically wrapping COM+ methods. The SPRequest does a lot of exception handling, and it keeps track of where it has been created (with the stacktrace) and of the size of the “unmanaged stack”.

Optimization
I like how they did their code. It’s pretty optimized (they even use a bunch of gotos). But sometimes there are some weird things, and I don’t know if it’s the compiler’s or the developer’s fault. For example, the SPRequestManager.SPRequestsPerThreadWarning property reads from the Registry (at “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\HeapSettings\LocalSPRequestWarnCount”) the maximum number of SPRequest objects that can be opened before it starts logging (defaulting to 8 if the entry doesn’t exist), yet it takes twice the amount of code it would normally require.
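
For comparison, here is roughly all the code that property needs (a sketch, assuming the registry path above; requires Microsoft.Win32) :

// Minimal version : read the warning threshold, default to 8 if absent
static int GetSPRequestsPerThreadWarning() {
	using ( var key = Registry.LocalMachine.OpenSubKey(
			@"SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\HeapSettings" ) ) {
		object raw = ( key != null ) ? key.GetValue( "LocalSPRequestWarnCount" ) : null;
		return ( raw is int ) ? (int) raw : 8;
	}
}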

The limit of disassembling
The most frustrating part is that there are tons of really important methods that are obfuscated (and that .Net Reflector doesn’t disassemble; it could at least give the IL code in comments). I just get :

private static void xxx() {
    // This item is obfuscated and can not be translated.
}

For instance, I really would have liked to see how the SPList.Update() method works, but it’s also obfuscated.

You should explore it too
In the Microsoft.Office.Server.Search assembly, you will also find some pretty interesting things. If you look at how the standard search webparts work, well, you will be pretty disappointed. The advanced search uses an internal (in the C# meaning) shared object.

Before writing your own webpart, you should take a look at the overridden methods of some Sharepoint webparts. They can show you, for instance, how to create your own toolparts. In my last project, I used a stupid text property instead of a DropDownList in a ToolPart; I was very disappointed with myself when I discovered this.

Mono Tools for Visual Studio : I have tested it !

Yes, I have tested MonoVS version 0.2.2641 (on both client and server). I installed OpenSuse 11.1, added the MonoVS software repository, and everything worked ! I would have preferred to get it from SVN in order to use it on my Debian hosts, but the Mono development team seems to have removed it from their SVN repository.

So, the Mono Remote Debugger for Visual Studio works, but there are still some bugs. Deployment is super fast and it copies all the required DLLs.

Remote debugging can be used (launching it is really fast too), but it has some bugs; here are the ones I could find :
– On the classical mouse-hover debugging popup, the expandable element “Non-public members” is always empty when available. When it isn’t available, every private member is displayed like any other public member variable.
– In my first tests, the “watch” window wouldn’t allow any variable to be expanded.
– If you have a variable not yet declared (but available) in the watch window and try to expand it, debugging just stops without any warning.
– Sometimes, when I stop debugging, I get a message saying something went wrong and the debugger might be unstable.
– Once, after pausing and then stopping, it totally crashed Visual Studio (but it only happened once).

And this is not really a bug, but unhandled exceptions are displayed in a dirty popup. It isn’t pretty.

If you do “Run remotely in Mono”, the Console output is displayed in the server’s console. If you do “Debug remotely in Mono”, the Console output is redirected to the Debug output window.

This tool is still in private beta (I guess anyone has a good chance of being accepted like I was), but it can already help a lot of people. Even if you just use the remote running (which includes deploying the assembly), this tool is still worth using.

.Net Reflector + File Disassembler

.Net Reflector is a really good tool. You can see the content of any assembly very easily. But it’s not really easy to see the full content of a class or a library with it.

The File Disassembler add-in is totally crazy stuff. You can take any assembly and completely disassemble it. It even creates the .csproj so that you just have to open the project in Visual Studio. But don’t get too excited : if the code is obfuscated, you will get some “empty” methods with just this comment :

// This item is obfuscated and can not be translated.

By the way, as you can see in the .Net Reflector video, you can register it in the context menu of any DLL and EXE assembly by executing it with the /register parameter.
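
Which should just be something like this (assuming the executable is named Reflector.exe) :

# Reflector.exe /register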

Mono Tools for Visual Studio

Just a little post for all those people who seem to think Mono is just another short-term open-source project.

I’ve used it for quite some time with a “real time” network server in production, which has been running for something like 6 months now, and it performs very well. I do everything on my Windows host and then copy and launch the final app on the Linux host. But there are still two problems :

  • Not all .Net classes are supported. WCF (the most powerful way to do two-way async/sync communication) isn’t one of them.
  • You can’t use the powerful Visual Studio debugger, and you can’t take advantage of the PDB files (as they are not compatible with Mono).

Well, the Mono team has solved this second problem with their Mono Tools for Visual Studio. I have already applied twice and haven’t received any invitation for the private Mono Tools tryout. But I guess it will be released to the public pretty soon (within 6 months). The Mono guys are working really fast (but not as fast as the Microsoft .Net development team).

Sometimes people should just consider using Mono for their (web) applications. In my opinion, an ASP.Net + database Linux server is faster to manage than an equivalent Windows server. It doesn’t slow down with uptime, it doesn’t have dozens of useless services, it doesn’t require a restart for updates, and real problems are way easier to diagnose.
The real limitation for me is the super Microsoft APIs and tools like WCF, LinQ and SQL Server 2008 (with its Integration and Analysis services) that you can only run on Windows.

Debugging on Sharepoint 2007

Sharepoint debugging isn’t fully automated, so you should really know how to debug and diagnose your assemblies in any given situation.

1. Attaching to the process
It only applies to a debugging environment.

This is the one everybody knows (or should, at least). You deploy your DLL in the GAC, restart your application pool, access your Sharepoint web application in order to load the application pool and the DLL, and then attach to the right w3wp.exe process (or every w3wp.exe process if you don’t really know which one to choose).

2. Displaying where the exception happens
It should be used everywhere.

Just after deploying your DLL into the GAC, you should deploy the PDB file with it. In your exception management code, you then have the exact line where the exception was thrown. Whether your users report it (with the exact line number), you see it in the logs, or you have an automatic reporting system, the point is : you will know exactly where it fails.
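
As a sketch of what this gives you (DoSomeWork and the event source name are just examples) :

try {
	DoSomeWork(); // hypothetical method that might throw
}
catch ( Exception ex ) {
	// With the PDB deployed next to the DLL, ex.ToString() includes
	// the file and line of every frame, e.g.
	// "at MyCorp.MyApp.MyLib.Foo() in C:\...\Foo.cs:line 42"
	EventLog.WriteEntry( "MyApp", ex.ToString(), EventLogEntryType.Error );
	throw;
}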

If you have a WSP deployment method, you will have :

rem This WSP File contains the MyCorp.MyApp.MyLib library with the 0x123456789 public key token
stsadm -o addsolution -filename %WSPFILE%

If you have a DLL deployment method, you will have :

gacutil /if GAC\MyCorp.MyApp.MyLib.dll

Either way, you need to add the PDB with this command :

subst X: c:\windows\assembly\gac_msil
copy GAC\MyCorp.MyApp.MyLib.pdb X:\MyCorp.MyApp.MyLib\1.0.0.0_123456789\

If you’re not willing to give away your PDB file (it exposes a lot about your source code and consumes space), you can find out exactly where your app failed just from the offset in the stacktrace reported by Sharepoint (with CustomError=”Off” and StackTrace=”true” in the web.config). Some people explain how to do it here. Answer “3” allows you to get the IL offset like ASP.Net does in its (non-customized) error page.
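
For reference, dumping the IL offsets yourself is short (a sketch; you then map each offset back to the code with Reflector or ildasm) :

// Works even without a PDB on the server
var trace = new StackTrace( ex );
foreach ( StackFrame frame in trace.GetFrames() ) {
	Console.WriteLine( "{0} @ IL_{1:x4}", frame.GetMethod(), frame.GetILOffset() );
}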

3. Launching the debugger from the code
This is very useful for feature installation/activation/deactivation/uninstallation code.

You just have to add this line at the point where you want to ask for a debugger to attach to the process.

Debugger.Launch();
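
For instance, in a feature receiver (a sketch; remember to remove it before shipping) :

public override void FeatureActivated( SPFeatureReceiverProperties properties ) {
	// Pops up the "attach a debugger" dialog when the feature is activated
	Debugger.Launch();
 
	// ... the activation code you want to debug ...
}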

4. Other options
This article focuses on hardcore problems : problems that occur inside Sharepoint or weird problems that only appear on your production servers.

The WinDBG method seems a little bit overkill to me, mostly because you still can’t analyze the state of the local variables with our current tools (but I hope this will be made available in the near future).

Sharepoint : SPWebConfigModification

I’ve seen lots of Sharepoint software having an installation manual of at least 20 pages (sometimes 60). Most of the operations they describe could be fully automated. And this software was made by freaking big companies. They should be ashamed of themselves. Maybe they just forgot that computer science is all about saving time (and not only making money).

One good example is MOSS Faceted Search 2.5 (I haven’t tested the 3.0). It takes at least 40 minutes to uninstall this crap. Why isn’t it just ONE WSP, or at least one BAT file launching the WSP installation and the other steps ? Is there any real reason for that ?

The SPWebConfigModification class solves this web.config modification problem. It’s a pretty interesting feature of Sharepoint : you can edit the web.config file without any complex XML parsing, and it manages your add/modify/delete operations easily. The only restriction is that you have to add your first configuration elements using SPWebConfigModification; you cannot modify pre-existing elements this way.

// Source : http://sharethelearning.blogspot.com/2008/01/adding-bindingredirect-to-webconfig.html
public static void AddBindingRedirect( SPWebApplication webApp, string libraryName, string libraryPublicToken, string oldVersion, string newVersion ) {
	var ownerName = String.Format( "BindingRedirect.{0}", libraryName );
 
	{ // We delete the previous bindingRedirect entries we own
 
		var toRemove = new List<SPWebConfigModification>();
		foreach ( SPWebConfigModification mod in webApp.WebConfigModifications ) {
			if ( mod.Owner == ownerName )
				toRemove.Add( mod );
		}
 
		foreach ( var mod in toRemove ) {
			LoggerCommon.LogVerbose( String.Format( "Deleting: \"{0}\"", mod.Value ) );
			webApp.WebConfigModifications.Remove( mod );
		}
	}
 
	{ // We add our redirection
		String path = "configuration/runtime/*[namespace-uri()='urn:schemas-microsoft-com:asm.v1' and local-name()='assemblyBinding']";
		String name = String.Format( "*[namespace-uri()='urn:schemas-microsoft-com:asm.v1' and local-name()='dependentAssembly']/*[namespace-uri()='urn:schemas-microsoft-com:asm.v1' and local-name()='assemblyIdentity'][@name='{0}']/parent::*", libraryName );
		String webConfigValue = String.Format( @"
	<dependentAssembly>
		<!-- Added automatically at {4} -->
		<assemblyIdentity name='{0}' publicKeyToken='{1}' culture='neutral' />
		<bindingRedirect oldVersion='{2}' newVersion='{3}' />
	</dependentAssembly>
", libraryName, libraryPublicToken, oldVersion, newVersion, DateTime.Now );
 
		SPWebConfigModification mod = new SPWebConfigModification( name, path );
		mod.Value = webConfigValue;
		mod.Owner = ownerName;
		mod.Sequence = 0;
		mod.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
 
		webApp.WebConfigModifications.Add( mod );
 
	}
 
	{ // We save our changes
		webApp.Update();
		SPFarm.Local.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
	}
}

If you do a binding redirect from 1.0.0.0 to 1.0.1.0 and your .webpart file references the 1.0.0.0 version, Sharepoint will store your webpart as referencing the 1.0.1.0 assembly (and not 1.0.0.0 as you told it). So if you then choose to change the binding redirect from 1.0.0.0 to 1.0.2.0, without also redirecting 1.0.1.0 to 1.0.2.0, your webpart will still use the 1.0.1.0 version.

I haven’t tested this for event receivers, but given the way they are registered (Sharepoint doesn’t check the assembly you add to the event receivers of a list), I would guess Sharepoint doesn’t change the assembly version.

To solve this webpart updating problem, you can use a ranged binding redirect (.Net rules) :

var site = new SPSite("http://localhost");
AddBindingRedirect( site.WebApplication, "MyCorp.MyApp.MyLib", "0x123456789", "1.0.0.0-1.0.3.5", "1.0.3.5" );

That means that any webpart using a previous version of the “MyCorp.MyApp.MyLib” assembly between 1.0.0.0 and 1.0.3.5 will be redirected to the 1.0.3.5 version.

If your assembly contains page code-behind classes, you should take care of updating the aspx files as well.


GAC Download Cache

There’s one little feature of the .Net framework that you must have totally forgotten about, but it is great.

We can tell our apps to automatically download some DLLs that we would expect to be in the GAC but are not. This is one freaking great feature. Instead of forcing your users to install the libraries in their GAC, or shipping the libraries with your applications, you can specify the URL(s) of the DLL(s) your application depends on. When you launch your program, the .Net framework will download them automatically if they’re not already in the GAC download cache.

You just have to add something like this in the file “yourapp.exe.config” in the same directory as your “yourapp.exe” application.

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity
          name="Lib"
          publicKeyToken="9b52b2ba78ecf379"
          culture="" />
        <codeBase version="1.0.0.0" href="http://www.yourserver.com/dw-assemblies/Lib.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

This avoids having to :

  • Install something in the GAC
  • Package the required assembly with your software
  • Download or copy the required assemblies for each of your applications
  • Clean up your old assemblies once you don’t require them anymore

You can see the content of your download cache by typing :

# gacutil.exe /ldl

And you can clear your download cache by typing :

# gacutil.exe /cdl