
Inside Sharepoint

I recently took the time to look inside Microsoft.SharePoint.dll using Reflector. I'm not sure I have the right to do that, and I'm pretty sure I don't have the right to publish any code extracted from it, so I won't show any.

Using SPSite(s) and SPWeb(s)
If you time the SPWeb creation call (SPSite.OpenWeb), you will find out that it's blazingly fast (less than 1 ms on my server). The reason is that the heaviest object, the SPRequest instance, is shared among the SPWebs of an SPSite. The Dispose call only "invalidates" the SPWeb, and if this SPWeb is the owner of the SPRequest (which is the SPContext.Current.Web object in most cases), it releases it.

Personally, I like to have something like this when I use a particular SPWeb throughout my code in a WebPart:

private SPWeb _rootWeb;
public SPWeb RootWeb {
    get {
        if ( _rootWeb == null ) {
            _rootWeb = SPContext.Current.Site.RootWeb;
            if ( _rootWeb != SPContext.Current.Web )
                _toDispose.Add( _rootWeb );  
        }
        return _rootWeb;
    }
}
 
private List<IDisposable> _toDispose = new List<IDisposable>();
protected override void OnUnload( EventArgs e ) {
    foreach( var disp in _toDispose )
        disp.Dispose();
    base.OnUnload( e );
}

But the code above won't gain you even a millisecond compared to this code (which is shorter and potentially safer, since you don't need to dispose of everything):

var rootWeb = SPContext.Current.Site.RootWeb;
try {
    // Your code
}
finally {
     if ( rootWeb != SPContext.Current.Web )
         rootWeb.Dispose();
}

If you have to access more indirect objects, you should definitely keep the code shown earlier. For instance, to use a web opened with the system account's SuperToken:

private SPSite _stSite;
SPSite STSite {
    get {
        if ( _stSite == null ) {
            _stSite = new SPSite( SPContext.Current.Site.Url, SPContext.Current.Site.SystemAccount.UserToken );
            if ( _stSite != SPContext.Current.Site )
                _toDispose.Add( _stSite );
        }
        return _stSite;
    }
}
 
private SPWeb _stWeb;
SPWeb STWeb {
    get {
        if ( _stWeb == null ) {
            _stWeb = STSite.OpenWeb( SPContext.Current.Web.Url );
            if ( _stWeb != SPContext.Current.Web )
                _toDispose.Add( _stWeb );
        }
        return _stWeb;
    }
}
 
private List<IDisposable> _toDispose = new List<IDisposable>();
protected override void OnUnload( EventArgs e ) {
    foreach( var disp in _toDispose )
        disp.Dispose();
    base.OnUnload( e );
}

Here, opening these new SPSite and SPWeb objects takes 200 ms on my server. Making sure this only happens once per webpart (or better, once per page) can really boost your performance.

The SPRequest object
Well… I was very disappointed: the SPRequest object references an SPRequestInternalClass from the Microsoft.SharePoint.Library assembly, which only uses COM interop methods. So it's basically wrapping COM+ calls. SPRequest does a lot of exception handling, and it keeps track of where it was created (with the stack trace) and of the size of the "unmanaged stack".

Optimization
I like how they did their code. It's pretty optimized (they even use a bunch of gotos). But sometimes there are weird things, and I don't know whether it's the compiler's or the developer's fault. For example, the SPRequestManager.SPRequestsPerThreadWarning property, which reads from the registry (at "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\HeapSettings\LocalSPRequestWarnCount") the maximum number of SPRequest objects that can be opened before logging a warning (or 8 if the entry doesn't exist), takes twice the amount of code it would normally require.
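If you want to raise that threshold on a development box (to silence the SPRequest warnings in the logs), the registry value can be created by hand. The path and value name come straight from the disassembled code; the data (16 here) is just an example:

```shell
rem Raises the per-thread SPRequest warning threshold to 16 (the default is 8)
reg add "HKLM\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\HeapSettings" /v LocalSPRequestWarnCount /t REG_DWORD /d 16 /f
```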

The limit of disassembling
The most frustrating part is that there are tons of really important methods that are obfuscated (and that .NET Reflector doesn't disassemble; it could at least give the IL code in comments). All I get is:

private static void xxx() {
    // This item is obfuscated and can not be translated.
}

For instance, I really would have liked to see how the SPList.Update() method works, but it's obfuscated too.

You should explore it too
In the Microsoft.Office.Server.Search assembly, you will also find some pretty interesting things. If you look at how the standard search webparts work, well, you will be pretty disappointed. The advanced search uses an internal (in the C# sense) shared object.

Before writing your own webpart, you should take a look at the overridden methods of some of SharePoint's webparts. They can show you, for instance, how to create your own toolparts. In my last project, I used a plain text property instead of a DropDownList in a ToolPart; I was very disappointed with myself when I discovered this.

Sharepoint : Using BaseFieldControl

What for ?
SharePoint's API provides standard form controls to render each column. These are the controls used to render the standard add and edit forms, and they all inherit from the BaseFieldControl class.

In other words: in any SPList you have SPField fields, and each of these SPFields has the super power to create a BaseFieldControl. Each BaseFieldControl is a CompositeControl containing ASP.NET controls.

For a single line of text field, you will just get a wrapped TextBox. But for more advanced fields, like a multi-select lookup field (SPFieldLookup) or a rich text field, it can generate some "complex" controls and their related JavaScript code.

The BaseFieldControl can be directly connected to your SPListItem, and BaseFieldControl.Value will match the format required to fill the SPListItem.

You can create your own BaseFieldControl controls, and you can directly mess with the BaseFieldControl.Controls property if you like (that can be useful). I personally had to create a variation of the MultipleLookupField (that's the BaseFieldControl used for an SPFieldLookup with the AllowMultipleValues property enabled) to render a CheckBoxList of selectable items (that's what the client wanted).

How to use them to display data ?
You can get one by accessing the SPField.FieldRenderingControl property. It gives you everything you need. Each BaseFieldControl has a ControlMode property (an SPControlMode enum) which can be set to New, Edit, Display or Invalid (I don't know why this last one exists).

Here is a stripped down version of the necessary code :

protected override void CreateChildControls() {
 
     // The web used
     var web = SPContext.Current.Web;
 
     // This is the code you could have easily guessed:
     var list = web.Lists["MyList"];
     var item = list.GetItemById( 1 );
     var field = list.Fields["DateOfEvent"];
     var bfc = field.FieldRenderingControl;
 
     // This is when it becomes tricky:
     var renderContext = SPContext.GetContext( this.Context, item.ID, list.ID, web );
 
     bfc.ListId = list.ID;
     bfc.FieldName = field.InternalName;
     bfc.ID = field.InternalName;
     bfc.ControlMode = SPControlMode.Edit;
     bfc.RenderContext = renderContext;
     bfc.ItemContext = renderContext;
     bfc.EnableViewState = true;
     bfc.Visible = true;
 
     Controls.Add( bfc );
}

And if you want to use it as an input form, you have to replace the code that fetches an existing SPListItem:

var item = list.GetItemById( 1 );
var field = list.Fields["DateOfEvent"];

With this:

var item = list.Items.Add();
var field = list.Fields["DateOfEvent"];

And then, later, add this reflection code to force the render context to take the temporary item into account:

var fiContextItem = ( typeof( SPContext ) ).GetField( "m_item", BindingFlags.Instance | BindingFlags.NonPublic );
fiContextItem.SetValue( renderContext, item );

If you use multiple SPWeb…
In your SharePoint code, you might need to use more than one SPWeb. In that case, you absolutely must take care to create the right SPContext (using the SPWeb of the SPList of the source SPField) for each BaseFieldControl used.

And if the SPWeb used doesn't have the same language as your current site, you can change it easily with something like this:

web.Locale = SPContext.Current.Web.Locale;

If you create your own BaseFieldControl…
If you create your own BaseFieldControl, you should really take a look on the disassembled code of the default BaseFieldControls. It could make you save a lot of time and efforts. And please note that the LookupField (which directly inherits BaseFieldControl) isn’t sealed. Inheriting from it might be a good way to do your own custom lookup BaseFieldControl.

If you use ControlMode = SPControlMode.Display
If you want to use your BFC with the Display ControlMode and want to set a value, you must take care to set the Value BEFORE setting the ControlMode. If you don't, it will work on the first render but fail on the first PostBack: your value will not be used (or displayed) by the BFC.

If you check the value of the BFC
If you check the value of the BFC, you have to remember that each BFC creates its value according to the SPField it was created from. For instance, empty values can be returned as String.Empty, null, or even the integer 0.
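Since every field type has its own idea of "empty", it can help to centralize the check in a small helper. This is just a sketch with a hypothetical name; the set of empty representations below (null, String.Empty, the integer 0) is only the list mentioned above, not an exhaustive one:

```csharp
// Hypothetical helper: treats the "empty" BFC value representations uniformly.
public static bool IsEmptyBfcValue( object value ) {
    if ( value == null )
        return true;
    if ( value is String )
        return ((String) value).Length == 0;
    if ( value is int )
        return ((int) value) == 0;
    return false;
}
```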

Debugging on Sharepoint 2007

SharePoint debugging isn't fully automated, so you should really know how to debug and diagnose your assemblies in any given situation.

1. Attaching to the process
This only applies to a debugging environment.

This is the one that everybody knows (or should, at least). You deploy your DLL to the GAC, restart your application pool, access your SharePoint web application in order to load the application pool and the DLL, and then attach to the right w3wp.exe process (or every w3wp.exe process, if you don't really know which one to choose).

2. Displaying where the exception happens
It should be used everywhere.

Just after deploying your DLL into the GAC, you should deploy the PDB file with it. Your exception management code will then have the exact line where the exception was thrown. Whether your users report it (with the exact line number), you see it in the logs, or you have an automatic reporting system, the point is: you will know exactly where it fails.

If you have a WSP deployment method, you will have :

rem This WSP File contains the MyCorp.MyApp.MyLib library with the 0x123456789 public key token
stsadm -o addsolution -filename %WSPFILE%

If you have a DLL deployment method, you will have :

gacutil /if GAC\MyCorp.MyApp.MyLib.dll

Either way, you need to add the PDB with this command :

subst X: c:\windows\assembly\gac_msil
copy GAC\MyCorp.MyApp.MyLib.pdb X:\MyCorp.MyApp.MyLib\1.0.0.0_123456789\

If you're not willing to give away your PDB file (it contains your complete source code and consumes space), you can still find out exactly where your app failed just from the offset in the stack trace reported by SharePoint (with customErrors mode="Off" and SafeMode CallStack="true" in the web.config). Some people explain how to do it here. Answer "3" allows you to get the IL offset like ASP.NET does in its (non-customized) error page.

3. Launching the debugger from the code
This is very useful for feature activation/deactivation/installation/uninstallation code.

You just have to add this line at the point where you want to be prompted to attach a debugger:

Debugger.Launch();

4. Other options
This article focuses on hardcore problems: problems that occur inside SharePoint, or weird problems that only appear on your production servers.

The WinDbg method seems a little bit overkill to me, mostly because you still can't analyze the state of the local variables with our current tools (but I hope that will become possible in the near future).

Sharepoint : SPWebConfigModification

I've seen lots of SharePoint products with installation manuals of at least 20 pages (sometimes 60). Most of the operations they describe could be fully automated. And this software was made by freaking big companies. They should be ashamed of themselves. Maybe they just forgot that computer science is all about saving time (and not only making money).

One good example is MOSS Faceted Search 2.5 (I haven't tested the 3.0). It takes at least 40 minutes to uninstall this crap. Why isn't it just ONE WSP, or at least one BAT file launching the WSP installation and the other steps? Is there any real reason for that?

The SPWebConfigModification class solves this web.config modification problem. It's a pretty interesting feature of SharePoint: you can edit the web.config file without any complex XML parsing. It doesn't even matter whether the XML you describe already exists or not; the SPWebConfigModification class manages your add/modify/delete operations easily. The only restriction is that you have to add your configuration elements through SPWebConfigModification in the first place: you cannot modify pre-existing elements this way.

// Source : http://sharethelearning.blogspot.com/2008/01/adding-bindingredirect-to-webconfig.html
public static void AddBindingRedirect( SPWebApplication webApp, string libraryName, string libraryPublicToken, string oldVersion, string newVersion ) {
	var ownerName = String.Format( "BindingRedirect.{0}", libraryName );
 
	{ // We delete last bindingRedirect
 
		var list = new List<SPWebConfigModification>();
		foreach ( SPWebConfigModification mod in webApp.WebConfigModifications ) {
			list.Add( mod );
		}
 
		foreach ( var mod in list ) {
			if ( mod.Owner == ownerName ) {
				LoggerCommon.LogVerbose( String.Format( "Deleting: \"{0}\"", mod.Value ) );
				webApp.WebConfigModifications.Remove( mod );
			}
		}
	}
 
	{ // We add our redirection
		String path = "configuration/runtime/*[namespace-uri()='urn:schemas-microsoft-com:asm.v1' and local-name()='assemblyBinding']";
		String name = String.Format( "*[namespace-uri()='urn:schemas-microsoft-com:asm.v1' and local-name()='dependentAssembly']/*[namespace-uri()='urn:schemas-microsoft-com:asm.v1' and local-name()='assemblyIdentity'][@name='{0}']/parent::*", libraryName );
		String webConfigValue = String.Format( @"
	<dependentAssembly>
		<!-- Added automatically at {4} -->
		<assemblyIdentity name='{0}' publicKeyToken='{1}' culture='neutral' />
		<bindingRedirect oldVersion='{2}' newVersion='{3}' />
	</dependentAssembly>
", libraryName, libraryPublicToken, oldVersion, newVersion, DateTime.Now );
 
		SPWebConfigModification mod = new SPWebConfigModification( name, path );
		mod.Value = webConfigValue;
		mod.Owner = ownerName;
		mod.Sequence = 0;
		mod.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
 
		webApp.WebConfigModifications.Add( mod );
 
	}
 
	{ // We save our changes
		webApp.Update();
		SPFarm.Local.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
	}
}

If you set up a binding redirect from 1.0.0.0 to 1.0.1.0 and your .webpart file references the 1.0.0.0 version, SharePoint will store your webpart as referencing the 1.0.1.0 assembly (and not 1.0.0.0 as you told it). So if you then choose to change the binding redirect from 1.0.0.0 to 1.0.2.0, without also redirecting 1.0.1.0 to 1.0.2.0, your webpart will still use the 1.0.1.0 version.

I haven't tested this with event receivers, but given the way they are registered (SharePoint doesn't check the assembly you add to the event receivers of a list), I would guess SharePoint doesn't change the assembly version.

To solve this webpart updating problem, you can use a ranged binding redirect (.NET rules):

var site = new SPSite("http://localhost");
AddBindingRedirect( site.WebApplication, "MyCorp.MyApp.MyLib", "0x123456789", "1.0.0.0-1.0.3.5", "1.0.3.5" );

That means any webpart using a previous version of the "MyCorp.MyApp.MyLib" assembly, between 1.0.0.0 and 1.0.3.5, will be redirected to the 1.0.3.5 version.
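For reference, here is roughly what the generated dependentAssembly entry looks like in the web.config once the code above has run (the assembly name, token and version range are the placeholder values from this example):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="MyCorp.MyApp.MyLib" publicKeyToken="0x123456789" culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0-1.0.3.5" newVersion="1.0.3.5" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```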

If your assembly contains page code-behind classes, you should take care of updating the aspx files as well.


My little Sharepoint

I recently bought a new laptop. I chose a P8600 processor to make sure I had virtualization support and a low TDP (Thermal Design Power), because I don't really like fan noise. And it has 4 GB of RAM for those little virtual hosts.

So today, I decided to have a little SharePoint 2007 of my own. I installed Windows Server 2008 on a VMware host, activated Remote Desktop, added WSS 3.0 and then MOSS 2007. I chose the x86 version of WS2008 because I wanted to limit RAM usage. With only 1 GB of RAM, it worked like a charm and it is really fast.

Installing MOSS 2007 wasn't as simple as you might expect. I had to "patch" the MOSS 2007 install CD with MOSS 2007 SP1, following this post. You don't have to follow exactly what it says: you can just take the content of the x86 (or x64, if you have an x64 arch) directory, put it somewhere (like C:\mossinstall), and then slipstream SP1 by typing "officeserver2007sp1-kb936984-x86-fullfile-en-us.exe /extract:c:\mossinstall\updates".

By the way, you don't need to do the same thing with WSS, as you can find an already updated version of WSS 3.0 that is compatible with Windows Server 2008. I guess you could also find (or, more likely, buy) an updated version of MOSS 2007, but I didn't.

I've developed a little bit on SharePoint but had never installed one. What is pretty astonishing, from my point of view, is that the installer sets up the embedded SQL Server instance and the IIS + ASP.NET server, and then configures everything automatically. It is as easy to install as any other software.

SPGridView : Filtering

I wanted to use a little SPGridView in a product and hit a little problem. First of all, in order to use filtering on an SPGridView, you have to provide an ObjectDataSource through its control ID. For anything else, you can use the DataSource property directly.

The best sample code I could find on the net was this one: Creating the SPGridView with sorting and adding filtering.

This example is great because it only shows the basic requirements to set up an SPGridView, it gives you a clean way to build the ObjectDataSource, and it explains step by step why you have to do things this way (in SharePoint, that's very important).

The problem, and this is the subject of this post, came when I activated the filtering functionality. The menu displayed and then got stuck on "Loading…", with an alert message saying it got a NullReferenceException.

I finally found the origin of my problem. In my first version, I was giving the DataSource to my SPGridView during the "PreRender" stage. For the AJAX callback, you never reach that stage; you have to provide the DataSource sooner.

Once I got that right, I just cached the last DataSource in the ViewState so it could be provided in the CreateChildControls method, and it worked. The menu finally displayed fine.

I can't give you my exact code (because it's not legally mine), but here is the basic idea:

In real-world usage, you will assign your DataTable to a property so that it can be displayed later by your SPGridView. So you can basically copy/paste the ASPGridView class given by Erik Burger and just replace the "SelectData" method with this one:

public DataTable SelectData() {
    return DataTable;
}
 
public DataTable DataTable {
    get {
        return ViewState["DataTable"] as DataTable;
    }
    set {
        ViewState["DataTable"] = value;
    }
}

Then, in your WebPart's OnPreRender method, or in any button's event handler, you can build your DataSource.

MOSS 2007 : Managing search properties

In Microsoft Office SharePoint Server, you have a powerful search service. I can't say I like everything in SharePoint, but the MOSS search engine is really amazing. It enables full-text search on everything, and also lets you filter your results precisely by searching on columns at the same time.

But the MOSS search engine isn't as easy to use as querying directly in CAML. You have to prepare managed properties (from your list columns) to be able to search on them.

All the code below is simplified, more understandable code for what you need to build to suit your needs. It will need some rework before real use.

How to map columns to managed properties

So, to deploy a (WSP) solution which maps the right columns, you need to:

  • (0. Have an SSP for your site collection)
  • 1. Create the list programmatically
  • 2. Add the site collection to the content sources
  • 3. Put some data in the list (you have to fill every column with some rubbish data)
  • 4. Start a crawl and wait for it to finish (you need to loop on the CrawlStatus of the ContentSource object)
  • 5. Create all the mapped columns from the crawled columns in the "SharePoint" category (not really hard). The best way to do that is to use the fields of your list, so that when you later change some fields in your list, you won't have to update your column mapping code.

If you still haven't grasped it, data in the MOSS search engine is organized like this: data (a "Data" field, for instance) –> crawled property ("ows_Data") –> managed property ("owsData", "Data", or any name you want). It is done like this because you might want to map multiple crawled properties to one managed property.
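Since the naming convention above ("Data" -> "ows_Data" -> "owsData") is purely mechanical, it is handy to centralize it in two tiny helpers (these method names are my own, not part of the SharePoint API):

```csharp
// Hypothetical helpers for the crawled/managed property naming convention described above.
public static String CrawledPropertyName( String internalName ) {
	return String.Format( "ows_{0}", internalName );
}

public static String ManagedPropertyName( String internalName ) {
	return String.Format( "ows{0}", internalName );
}
```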

Step 1 is really easy. If you don’t know how to do that, you should stop right here.

Step 2: creating a content source programmatically and adding a 15-minute incremental schedule to it:

// With a given web SPWeb...
var site = web.Site;
var sspSchema = new Schema( SearchContext.GetContext( site ) );
var sspContent = new Content( SearchContext.GetContext( site ) );
 
// We choose a "unique" name for this content source; we wouldn't want to add it twice
var name = String.Format( "Web - {0}", web.Url );
 
{ // We check that the content source doesn't exist yet
	foreach ( ContentSource cs in sspContent.ContentSources ) {
		if ( cs.Name == name )
			return;
	}
}
 
{ // We add the content source
	var cs = sspContent.ContentSources.Create(
		typeof( SharePointContentSource ),
		name
	);
	cs.StartAddresses.Add( new Uri( web.Url ) );
 
	var schedule = new DailySchedule( SearchContext.GetContext( site ) ) {
		RepeatDuration = 1440,
		RepeatInterval = 15,
		StartHour = 8,
		StartMinute = 00
	};
 
	cs.IncrementalCrawlSchedule = schedule;
	cs.Update();
}

Step 3 is pretty easy too; I don't have to show you much. The only difficulty you might have is that, for document libraries, inserting data is done differently. Here is how you can manage it:

// for a given list SPList 
 
var rand = new Random();
 
SPListItem item;
 
// If it's a document library
if ( list.BaseTemplate == SPListTemplateType.DocumentLibrary ) {
	var fileUrl = String.Format( "{0}/Data_{1}.txt", list.RootFolder.Name, rand.Next() );
 
	var content =  UTF8Encoding.UTF8.GetBytes( String.Format( "Random data file ! {0}", rand.Next() ) );
 
        // New "items" are inserted by adding new files
	var file = web.Files.Add( fileUrl, content );
 
	file.Update();
 
	item = file.Item;
 
} else {
	item = list.Items.Add();
}
 
// from here, the item SPListItem can be handled the same way between these two lists

Step 4: to start a crawl on every content source and wait for it to finish:

// For a defined site SPSite...
var sspContent = new Content( SearchContext.GetContext( site ) );
 
foreach ( ContentSource cs in sspContent.ContentSources ) {
 
	// Not doing this may be considered as altering the enumeration (and throws an Exception)
	ContentSource cs2 = cs;
	cs2.StartIncrementalCrawl();
}
 
{ // We wait until it ends
	Boolean indexFinished;
 
	while ( true ) {
		indexFinished = true;
 
		// We check if each content-source is still crawling something
		foreach ( ContentSource cs in sspContent.ContentSources ) {
			if (
				cs.CrawlStatus == CrawlStatus.CrawlingFull ||
				cs.CrawlStatus == CrawlStatus.CrawlingIncremental )
				indexFinished = false;
		}
 
		if ( indexFinished )
			break;
		else {
			Thread.Sleep( 1000 );
		}
	}
}

You might wonder why we wait for the crawl to end. The reason is that you can't use crawled properties before they have been created by the crawler detecting the new columns in the list. That's why this has to be done step by step.

And step 5: make sure that every column of the list has its managed property. This is why you might prefer to do some automatic mapping.
Here, to make things easier, we will say that we want each managed property to be named like "owsInternalName". You will see why later.

Category sharepoint = null;
 
// We take the "SharePoint" category
foreach ( Category cat in _sspSchema.AllCategories )
	if ( cat.Name == "SharePoint" )
		sharepoint = cat;
 
// We only select fields that are in the defaultView 
// (you might want to change this behaviour)
var fields = new List<SPField>();
foreach ( String fieldName in list.DefaultView.ViewFields )
	fields.Add( list.Fields.GetFieldByInternalName( fieldName ) );
 
 
// for every one of these fields...
foreach ( SPField field in fields ) {
 
	var owsNameUnderscore = String.Format( "ows_{0}", field.InternalName );
	var owsName = String.Format( "ows{0}", field.InternalName );
 
	CrawledProperty cp;
 
	// We check if the crawled property exists
	if ( ( cp = CrawledPropertyExists( sharepoint.GetAllCrawledProperties(), owsNameUnderscore ) ) != null ) {
 
		// We then try to get the linked managed property
		ManagedProperty mp = ManagedPropertyExists( cp.GetMappedManagedProperties(), owsName );
 
		// If it doesn't exist
		if ( mp == null ) {
 
			// We create this mapped property
			try {
				mp = _sspSchema.AllManagedProperties.Create( owsName, CrawledPropertyTypeToManagedPropertyType( cp ) );
			} catch ( SqlException ) {
				// If the mapped property already exists
				// it means that it isn't mapped with our crawled property, so we get it from the 
				// global Managed property store
				mp = ManagedPropertyExists( _sspSchema.AllManagedProperties, owsName );
			}
 
			// And we finally map it with the crawled property
			var mappingColl = mp.GetMappings();
 
			mappingColl.Add(
				new Mapping(
					cp.Propset,
					cp.Name,
					cp.VariantType,
					mp.PID
				)
			);
 
			mp.SetMappings( mappingColl );
 
			mp.Update();
		}
	} else {
		// Crawled property doesn't exist. You have to put some data in it or crawl it.
	}
}

To make this code work, you need the following helper methods:

public static ManagedDataType CrawledPropertyTypeToManagedPropertyType( CrawledProperty cp ) {
	switch ( (VariantType) cp.VariantType ) {
		case VariantType.Array:
		case VariantType.UserDefinedType:
		case VariantType.Object:
		case VariantType.Error:
		case VariantType.Variant:
		case VariantType.DataObject:
		case VariantType.Empty:
		case VariantType.Null:
		case VariantType.Currency:
			return ManagedDataType.Unsupported;
		case VariantType.Single:
		case VariantType.Double:
		case VariantType.Decimal:
			return ManagedDataType.Decimal;
		case VariantType.Boolean:
			return ManagedDataType.YesNo;
		case VariantType.Byte:
		case VariantType.Long:
		case VariantType.Short:
		case VariantType.Integer:
			return ManagedDataType.Integer;
		case VariantType.Char:
		case VariantType.String:
			return ManagedDataType.Text;
		case VariantType.Date:
		case VariantType.Date2:
			return ManagedDataType.DateTime;
		default:
			return ManagedDataType.Text;
	}
}
 
public enum VariantType {
	Empty = 0x0000,
	Null = 0x0001,
	Short = 0x0002,
	Integer = 0x0003,
	Single = 0x0004,
	Double = 0x0005,
	Currency = 0x0006,
	Date = 0x0007,
	Date2 = 0x0040,
	String = 0x0008,
	Object = 0x0009,
	Error = 0x000A,
	Boolean = 0x000B,
	Variant = 0x000C,
	DataObject = 0x000D,
	Decimal = 0x000E,
	Byte = 0x0011,
	Char = 0x0012,
	Long = 0x0014,
	UserDefinedType = 0x0024,
	Array = 0x2000
};
 
public static CrawledProperty CrawledPropertyExists( IEnumerable enu, String name ) {
	foreach ( CrawledProperty cp in enu ) {
		if ( cp.Name == name )
			return cp;
	}
 
	return null;
}
 
public static ManagedProperty ManagedPropertyExists( ManagedPropertyCollection coll, String name ) {
	foreach ( ManagedProperty mp in coll ) {
		if ( mp.Name == name )
			return mp;
	}
 
	return null;
}
 
public static ManagedProperty ManagedPropertyExists( IEnumerable enu, String name ) {
	foreach ( ManagedProperty mp in enu ) {
		if ( mp.Name == name )
			return mp;
	}
 
	return null;
}

Automatically added columns

There is also another way around, and this is the subject of this post: you can tell the "SharePoint" category of crawled properties to automatically map new crawled columns to managed properties. You just have to enable this option in the SharePoint UI for the "SharePoint" category, or do it programmatically like this:

// For a defined site SPSite...
var sspSchema = new Schema( SearchContext.GetContext( site ) );
 
foreach ( Category cat in sspSchema.AllCategories ) {
        if ( cat.Name == "SharePoint" ) {
		cat.AutoCreateNewManagedProperties = true;
		cat.MapToContents = true;
		cat.DiscoverNewProperties = true;
		cat.Update();
        }
}

You should only apply this to the "SharePoint" category, because other categories add new crawled properties and then stop requiring them. That would leave some trashy crawled (and managed) properties that you would have to clean up later.

These new crawled columns are indexed by their internal name ("ows_ReleaseDate", for instance) and the mapped property will get a similar name ("owsReleaseDate").

By the way, with this option you don't have to wait for the crawl to end within your solution deployment, because new columns will be mapped automatically. But you should still do it, just in case…

The thing is, this only applies to newly crawled columns. For columns that have already been crawled, you will have no choice but to map them to managed properties yourself (step 5).

Delete crawled properties

You might want to delete useless crawled properties to clean your site collection, or to allow future crawled properties with the same name to be automatically mapped. To do this, you need to:

  • Make sure they are not mapped to any managed property
  • Disable their mapping to content (the "contains indexed values of the columns" checkbox in the SharePoint UI)
  • Delete all unmapped properties (by code or through the UI) in the "SharePoint" category options

Programmatically, it looks like this:

// For a defined site SPSite...
var sspSchema = new Schema( SearchContext.GetContext( site ) );
foreach ( Category cat in sspSchema.AllCategories ) {
 
	// We don't need to mess with any other category.
	// Doing this with the whole "SharePoint" category is brutal enough...
	if ( cat.Name != "SharePoint")
		continue;
 
	foreach ( CrawledProperty pr in cat.GetAllCrawledProperties() ) {
 
		// If there's no data indexed by this crawled property...
		if ( ! pr.GetSamples( 1 ).MoveNext() ) {
			// This is necessary to be able to delete this unmapped property
			pr.IsMappedToContents = false;
			pr.Update();
		}
	}
 
	// Let's do it !
	cat.DeleteUnmappedProperties();
	cat.Update();
}

Search database reset

If your crawled properties have been deleted, they won't come back when you add a list with the same columns. What you need to do is reset the search database and launch a new crawl:

var sspContent = new Content( SearchContext.GetContext( site ) );
sspContent.SearchDatabaseCleanup( false );

The scope problem

If you want to use your managed property in a scope, you have to enable it for scoping:

// For a given site SPSite...
var sspSchema = new Schema( SearchContext.GetContext( site ) );
foreach( ManagedProperty mp in sspSchema.AllManagedProperties ) {
	if ( ! mp.EnabledForScoping ) {
		mp.EnabledForScoping = true;
		mp.Update();
	}
}

A note on crawling: I think you should always do an incremental crawl instead of a full crawl; it's really fast (it can take less than 20 seconds) and does all the work. If your search database has been reset, or if you added a new content source, SharePoint will do a full crawl when you ask for an incremental one.

A second note on crawling: the crawler doesn't index data the moment new items are added, but you can make it do so. You just have to create an event receiver which starts a crawl when the ItemAdded / ItemUpdated event is triggered.
If you have frequent list updates, you might get modifications while MOSS is performing a crawl. In that case, you have to create an asynchronous mechanism that enables "recrawling" just after the current crawl has finished (in a threaded object stored in the ASP.NET Application object store, for instance).
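A minimal sketch of such a coalescing mechanism, with the actual crawl start abstracted behind a delegate (the RecrawlScheduler class and its member names are hypothetical; in a real webapp you would store an instance in the ASP.NET Application dictionary and call OnCrawlFinished from whatever polls the CrawlStatus):

```csharp
// Coalesces crawl requests: a request received while a crawl is running
// results in exactly one extra crawl once the current one finishes.
public class RecrawlScheduler {
	private const int Idle = 0, Crawling = 1, CrawlingWithPending = 2;

	private readonly Action _startCrawl; // e.g. wraps ContentSource.StartIncrementalCrawl()
	private int _state = Idle;

	public RecrawlScheduler( Action startCrawl ) {
		_startCrawl = startCrawl;
	}

	// Called by the ItemAdded / ItemUpdated event receivers
	public void Request() {
		if ( Interlocked.CompareExchange( ref _state, Crawling, Idle ) == Idle )
			_startCrawl();
		else
			Interlocked.Exchange( ref _state, CrawlingWithPending );
	}

	// Called when the crawl goes back to idle
	public void OnCrawlFinished() {
		if ( Interlocked.Exchange( ref _state, Idle ) == CrawlingWithPending )
			Request();
	}
}
```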

Sharepoint : The 0x80020009 (DISP_E_EXCEPTION) error

If you're faced with this error, you should know that it doesn't mean anything, except that you have a problem. Lots of people have written posts about it, but most of their explanations are wrong. It just means that SharePoint didn't like what you did somewhere in your masterpage, your page, your webpart, your user control, or anything else.

For me, it occurred because I used the SPWeb.GetListFromUrl method to test whether the user actually had access to a list. If they didn't, I just caught the exception thrown.

The problem is, SharePoint never behaves like it should: as soon as it had thrown the exception, I couldn't save any of the webpart's settings.

By the way, I fixed my webpart by checking the available lists through the SPWeb.Lists property instead.