GAC Download Cache

There's one little feature of the .Net framework that you have probably forgotten about, but it is great.

We can tell our apps to automatically download DLLs they expect to find in the GAC when they are not there. This is one freaking great feature. Instead of forcing your users to install the libraries in their GAC, or shipping the libraries with your application, you can specify the URL(s) of the DLL(s) your application depends on. When you launch your program, the .Net framework downloads them automatically if they're not already in the GAC download cache.

You just have to add something like this to the file "yourapp.exe.config", in the same directory as your "yourapp.exe" application.

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity
          name="Lib"
          publicKeyToken="9b52b2ba78ecf379"
          culture="" />
        <codeBase version="1.0.0.0" href="http://www.yourserver.com/dw-assemblies/Lib.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

This saves you from having to:

  • Install something in the GAC
  • Package the required assembly with your software
  • Download or copy the required assemblies for each of your applications
  • Clean up old assemblies once you don't need them anymore

You can see the content of your download cache by typing :

# gacutil.exe /ldl

And you can clear your download cache by typing :

# gacutil.exe /cdl

Access your Google Latitude position from PHP

The code to access your Google Latitude position is even simpler in PHP than it is in .Net :

<?php
header('Content-Type: text/plain');

$userId = '5616045291659744796';

if ( isset( $_GET['user'] ) ) {
	if ( is_numeric( $_GET['user'] ) )
		$userId = $_GET['user'];
	else
		exit('This isn\'t a valid user id.');
}
 
$url = 'http://www.google.com/latitude/apps/badge/api?user='.$userId.'&type=json';
 
// We get the content
$content = file_get_contents( $url );
 
// We convert the JSON to an object
$json = json_decode( $content );
 
$coord = $json->features[0]->geometry->coordinates;
$timeStamp = $json->features[0]->properties->timeStamp;
 
if ( ! $coord ) 
	exit('This user doesn\'t exist.');
 
$date = date( 'd/m/Y H:i:s', $timeStamp );
$lat = $coord[1];
$lon = $coord[0];
 
echo $date.' : '.$lat.' x '.$lon;
?>

This program is available for testing here. It requires PHP 5.2.0 or later for the json_decode function.

I think this is the power of PHP: you can write some powerful code in no time. The drawback is that it's really slow, and it gets even slower once you start using heavy objects (and objects are often heavy). And I personally think it's much easier and safer to debug and maintain complex .Net programs than complex PHP programs.

By the way, you can generate the Google badge user id that you can then use for the API here (thank you Neavilag for this comment).

Access your Google Latitude position from a .Net app

When I saw that Google Latitude now enables you to access your data by a JSON feed, I decided to make it communicate with a little GPS tracking project of mine.

I'm really fond of all the ways we now have to make anything communicate with anything. You can easily build interfaces from any system to any other system.

This code enables you to automatically get your GPS position (or the position of a friend) from your JSON Latitude feed. To be able to do that, you have to enable your KML/JSON feed.
It requires .Net 3.5’s System.Web.Extensions

using System;
using System.Collections.Generic;
using System.Text;
using System.Web.Script.Serialization;
using System.Net;
using System.IO;
 
namespace LatitudeReader {
	class Program {
		static void Main() {
 
			Console.WriteLine( "What is your user id ?" );
 
			var userId = Console.ReadLine();
 
			if ( userId == String.Empty )
				userId = "5616045291659744796";
 
			// Url of the JSON Latitude feed
			var url = String.Format( "http://www.google.com/latitude/apps/badge/api?user={0}&type=json", userId );
 
			// We download the file
			var ms = new MemoryStream();
			Download( url, ms );
 
			// JSON in text format
			var textContent = UTF8Encoding.UTF8.GetString( ms.ToArray() );
 
			// We convert the JSON text to an object graph
			// (DeserializeObject returns nested Dictionary<string, object> and object[] instances)
			var jss = new JavaScriptSerializer();
			var jsonContent = jss.DeserializeObject( textContent ) as Dictionary<String, Object>;
 
			// We get the data
			var features = ( jsonContent[ "features" ] as object[] )[ 0 ] as Dictionary<string, object>;
			var geometry = features[ "geometry" ] as Dictionary<string, object>;
			var coordinates = geometry[ "coordinates" ] as object[];
			var lon = coordinates[ 0 ] as decimal?;
			var lat = coordinates[ 1 ] as decimal?;
 
			// And then the timestamp
			var properties = features[ "properties" ] as Dictionary<string, object>;
			var date = ConvertFromUnixTimestamp( (double) (int) properties[ "timeStamp" ] );
 
			// We convert the UTC date to local time
			date = date.ToLocalTime();
 
			Console.WriteLine( "{0} : {1} x {2}", date, lat, lon );
		}
 
		public static DateTime ConvertFromUnixTimestamp( double timestamp ) {
			DateTime origin = new DateTime( 1970, 1, 1, 0, 0, 0, 0 );
			return origin.AddSeconds( timestamp );
		}
 
		private const int BUFFER_SIZE = 1024;
 
		private static void Download( string url, Stream writeStream ) {
			var request = (HttpWebRequest) WebRequest.Create( url );
			var response = request.GetResponse();
 
			var readStream = response.GetResponseStream();
 
			var data = new Byte[ BUFFER_SIZE ];
 
			int n;
			do {
				n = readStream.Read( data, 0, BUFFER_SIZE );
				writeStream.Write( data, 0, n );
			} while ( n > 0 );
 
			writeStream.Flush();
			readStream.Close();
		}
	}
}

The only references you need are : System and System.Web.Extensions
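
If you don't need the manual buffering, a WebClient (also in System.dll) can replace the Download()/MemoryStream pair. This is only a minimal sketch of the same program, assuming the same feed URL as above:

using System;
using System.Collections.Generic;
using System.Net;
using System.Web.Script.Serialization;

class WebClientVariant {
	static void Main() {
		var url = "http://www.google.com/latitude/apps/badge/api?user=5616045291659744796&type=json";

		// WebClient downloads the feed directly as a string,
		// replacing the manual MemoryStream + Download() pair
		string textContent;
		using ( var client = new WebClient() ) {
			textContent = client.DownloadString( url );
		}

		// Same navigation of the JSON tree as in the full example above
		var jss = new JavaScriptSerializer();
		var jsonContent = jss.DeserializeObject( textContent ) as Dictionary<string, object>;
		var features = ( jsonContent[ "features" ] as object[] )[ 0 ] as Dictionary<string, object>;
		var geometry = features[ "geometry" ] as Dictionary<string, object>;
		var coordinates = geometry[ "coordinates" ] as object[];

		Console.WriteLine( "{0} x {1}", coordinates[ 1 ], coordinates[ 0 ] );
	}
}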

SPGridView : Filtering

I wanted to use a little SPGridView in an application and ran into a problem. First of all, in order to use filtering on an SPGridView, you have to supply an ObjectDataSource by its control ID. For everything else, you can use the DataSource property directly.

The best sample code I could find on the net was this one: Creating the SPGridView with sorting and adding filtering.

This example is great because it only shows the basic requirements to set up an SPGridView, gives you a clean way to build the ObjectDataSource, and explains step by step why you have to do things this way (in Sharepoint, that's very important).

The problem, and this is the subject of this post, came when I activated the filtering functionality. The menu displayed and then got stuck on "Loading…", with an alert message saying it got a NullReferenceException.

I finally found the origin of my problem. In my first version, I was assigning the DataSource to my SPGridView in the "PreRender" step. During the filtering AJAX call, that step is never reached, so you have to supply the DataSource earlier.

Once I got that right, I simply cached the last DataSource in the ViewState so it could be supplied in the CreateChildControls method, and it worked. The menu finally displayed fine.

I can't give you my exact code (because it isn't legally mine), but here is the basic idea:

In real-world usage, you will assign your DataTable to a property so that it can be displayed later by your SPGridView. So you can basically copy/paste the ASPGridView class given by Erik Burger and just replace the "SelectData" method with this one:

public DataTable SelectData() {
    return DataTable;
}
 
public DataTable DataTable {
    get {
        return ViewState["DataTable"] as DataTable;
    }
    set {
        ViewState["DataTable"] = value;
    }
}

And anywhere in your WebPart's OnPreRender method, or in any button's event handler, you can build your DataSource, as sketched below.
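
Here is a minimal sketch of that idea. It assumes the grid wrapper from Erik Burger's sample is stored in a hypothetical _grid field exposing the DataTable property shown above; the columns and rows are made up for the example:

using System;
using System.Data;

// Hypothetical field in the web part class:
// private ASPGridView _grid;    // the wrapper built in CreateChildControls

protected override void OnPreRender( EventArgs e ) {
	base.OnPreRender( e );

	// Build the data to display...
	var table = new DataTable();
	table.Columns.Add( "Title", typeof( String ) );
	table.Rows.Add( "First item" );
	table.Rows.Add( "Second item" );

	// ...and cache it in the ViewState-backed property, so that SelectData()
	// can also return it during the filtering AJAX callback.
	_grid.DataTable = table;
}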

NetEventServer

I talked some time ago about a library I made to take advantage of the kernel's network events. I'm now releasing it and explaining how to use it. It can help people write small network applications without knowing where to start.

I built it for network servers that communicate with remotely connected embedded chips. I wanted to be able to stay in touch with a huge number of chips at all times without any real cost. So my very personal goal was to build a server network layer for massive M2M applications.
I also made a little web server with it, supporting Keep-Alive and partial file download (with the "Range" header), and another little library to send serialized objects.

I made this little network library to accomplish two main goals :

  • Simplify network server development
  • Be able to support a lot of connections

It is actually able to support a lot of connections: on a little Linux server running Mono (with 512 MB of memory and swap deactivated), I easily managed to hold 60,000 simultaneous connections without consuming more than 40% of the server's memory.

And it lets you create network servers in only a few lines of code:

using System;
using System.Collections.Generic;
using System.Text;
using SoftIngenia.NetEventServer;
 
namespace TestServer {
 
	/// <summary>
	/// My little server
	/// </summary>
	class MyServer {
 
		public static String BytesToString( Byte[] data ) {
			var sb = new StringBuilder();
			sb.Append( String.Format( "[ {0} ] {{ ", data.Length ) );
			for ( int i = 0; i < data.Length; ++i )
				sb.Append( String.Format( " 0x{0:X02}", data[ i ] ) );
			sb.Append( " }" );
			return sb.ToString();
		}
 
		/// <summary>
		/// My server view of the client
		/// </summary>
		class MyClient {
			public MyClient( uint id ) {
				Id = id;
			}
 
			public uint Id { get; private set; }
 
			public int NbMessagesReceived { get; set; }
 
			public void Treat( byte[] data ) {
				Console.WriteLine( "{0}.Treat( {1} );", this, BytesToString( data ) );
				NbMessagesReceived++;
			}
 
			public override string ToString() {
				return String.Format( "Client{{Id={0}}}", Id );
			}
		}
 
		private readonly TcpEventServer _server;
		private readonly Dictionary<uint, MyClient> _clients = new Dictionary<uint, MyClient>();
 
		public MyServer( int portNumber ) {
			_server = new TcpEventServer( portNumber );
			_server.ClientConnected += server_ClientConnected;
			_server.ClientDisconnected += server_ClientDisconnected;
			_server.BinaryDataReceivedFromClient += server_BinaryDataReceivedFromClient;
		}
 
		public void StartListening() {
			_server.StartListening();
		}
 
		void server_BinaryDataReceivedFromClient( uint clientId, byte[] data ) {
			_clients[ clientId ].Treat( data );
		}
 
		void server_ClientDisconnected( uint clientId ) {
			Console.WriteLine( "Client {0} disconnected !", clientId );
			_clients.Remove( clientId );
		}
 
		void server_ClientConnected( uint clientId ) {
			Console.WriteLine( "Client {0} connected from {1} !", clientId, _server.RemoteEndPoint( clientId ) );
			_clients.Add( clientId, new MyClient( clientId ) );
		}
 
 
	}
 
	class Program {
		static void Main() {
			var myserver = new MyServer( 3000 );
			myserver.StartListening();
 
			Console.WriteLine( "Listening..." );
			Console.ReadLine();
		}
	}
}

Once launched, this app gives you something like this:

Listening...
Client 1 connected from 127.0.0.1:53792 !
Client{Id=1}.Treat( [ 5 ] { 0x68 0x65 0x6C 0x6C 0x6F } ); // "hello"
Client{Id=1}.Treat( [ 2 ] { 0x0D 0x0A } ); // '<CR>' '<LF>'
Client 1 disconnected !

The library also enables you to receive data as text. You just have to subscribe to the “ReceivedLine” event. There’s no performance cost if you don’t subscribe to the event.

For a network server, you still need to do some frame recognition. I usually instantiate a frame-parsing class for every client on the server side, as sketched below.
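
To give an idea of what I mean by frame recognition, here is a minimal, hypothetical line-based parser (it is not part of the library): each client feeds it the raw byte chunks it receives, and it raises an event once per complete CR/LF-terminated frame.

using System;
using System.Collections.Generic;
using System.Text;

class LineFrameParser {
	private readonly List<byte> _buffer = new List<byte>();

	// Raised once per complete line, without the trailing CR/LF
	public event Action<string> FrameReceived;

	public void Feed( byte[] data ) {
		foreach ( var b in data ) {
			if ( b != (byte) '\n' ) {
				_buffer.Add( b );
				continue;
			}

			// End of frame: strip an optional trailing CR and emit the line
			var bytes = _buffer.ToArray();
			var length = bytes.Length;
			if ( length > 0 && bytes[ length - 1 ] == (byte) '\r' )
				length--;

			_buffer.Clear();

			if ( FrameReceived != null )
				FrameReceived( Encoding.UTF8.GetString( bytes, 0, length ) );
		}
	}
}

In the MyClient.Treat method above, you would call something like parser.Feed( data ) instead of treating every chunk as a complete message.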

You can download the NetEventServer library with its XML and PDB files.

C# 4.0

I tested it in VS2010, which is beautiful by the way; I like the WPF rendering. And I'm happy they kept the possibility to generate .Net 2.0 assemblies. I wanted to test the new historical debugger functionality, which enables you to see past states of your variables, but I didn't install the Team System edition (and I'm too lazy to download it again).

Named and optional arguments
I'm so happy Microsoft has added optional and named arguments for method calls. It avoids a lot of stupid method overloads. And I just can't wait to use them in my code.
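
For ordinary (non-COM) code, it looks like this; the Connect method and its parameters are just made up for the example:

using System;

class OptionalArgsDemo {
	// One method with optional parameters replaces a pile of overloads
	static void Connect( string host, int port = 80, bool useSsl = false, int timeoutMs = 30000 ) {
		Console.WriteLine( "{0}:{1} ssl={2} timeout={3}", host, port, useSsl, timeoutMs );
	}

	static void Main() {
		Connect( "example.com" );                                  // all defaults
		Connect( "example.com", useSsl: true );                    // named argument
		Connect( "example.com", timeoutMs: 5000, useSsl: true );   // any order
	}
}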

This is also good news for people using Interop methods, like the ones for Word 2007. I've seen code using them; it looks like this:

object nullObj = null;
object name = "doc.docx";
officeClass.SaveAsWord2007Document( ref name, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj, ref nullObj );

With C# 4.0, it can be reduced to something like this:

officeClass.SaveAsWord2007Document( "doc.docx" );

The C# 4.0 compiler will convert it internally to exactly the same IL code (creating a ref null object for you).

But that particular trick is specific to COM interop; you can't use it on Mono/Linux, for instance. So I don't think we can consider it a real language feature.

DLR
I tested the DLR through the "dynamic" type, and I think it should only be used when we have no other choice (when we want to communicate with some scripting language, for instance). One reason is that it breaks the VS auto-completion; it's like typing your code in Notepad. The other reason is that, in my test, dynamic calls were about 3 times slower to execute than compiled calls:

static void Main() {
	int nb = 100000000;
 
	Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime;
 
	TimeSpan without, with;
 
	{ // Without dynamics
		Console.WriteLine( "Without dynamics..." );
 
		var c = new Clacla();
 
		var start = DateTime.Now;
		for ( int i = 0; i < nb; ++i ) {
			c.DoSomething();
		}
		without = DateTime.Now - start;
		Console.WriteLine( "Elapsed time : {0}", without );
	}
 
	{ // With dynamics
		Console.WriteLine( "With dynamics..." );
 
		dynamic d = new Clacla();
 
		var start = DateTime.Now;
		for ( int i = 0; i < nb; ++i ) {
			d.DoSomething();
		}
 
		with = DateTime.Now - start;
		Console.WriteLine( "Elapsed time : {0}", with );
	}
 
	Console.WriteLine( "Dynamics call are {0} times slower.", Math.Round( (double) with.Ticks / without.Ticks, 2 ) );
 
	Console.ReadLine();
}

The Clacla class is a totally useless class :

public class Clacla {
	private string _s;
	private int _x, _y;
	private dynamic _d;
 
	public Clacla( int x = 1, int y = 2, string s = "", dynamic d = null ) {
		_x = x;
		_y = y;
		_s = s;
		_d = d;
	}
 
	public int DoSomething() {
		return (int) Math.Pow( X, 2 ) + Y;
	}
 
	public int X { get { return _x; } }
 
	public int Y { get { return _y; } }
 
	public string S { get { return _s; } }
 
	public dynamic D { get { return _d; } }
 
	public override string ToString() {
		return String.Format( "X={0}, Y={1}, S={2}, D={3}", _x, _y, _s, _d );
	}
}

The result is :

Without dynamics...
Elapsed time : 00:00:14.7285156
With dynamics...
Elapsed time : 00:00:39.8613281
Dynamics call are 2,71 times slower.
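
Note that DateTime.Now only has a resolution of about 15 ms; with 100 million iterations that doesn't change the conclusion, but Stopwatch is the more reliable timer for this kind of micro-benchmark. A minimal sketch of the same measurement, reusing the Clacla class above:

using System;
using System.Diagnostics;

class StopwatchTiming {
	static void Main() {
		const int nb = 100000000;
		var c = new Clacla();

		// Stopwatch uses the high-resolution performance counter
		// instead of the system clock
		var sw = Stopwatch.StartNew();
		for ( int i = 0; i < nb; ++i )
			c.DoSomething();
		sw.Stop();

		Console.WriteLine( "Elapsed time : {0}", sw.Elapsed );
	}
}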

Covariance / Contravariance

It will simplify things for sure but I’m a little disappointed by the remaining restrictions.

The best explanation I could find on it so far is here.
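
A minimal example of what C# 4.0 now allows (the collections and delegates here are made up for the illustration):

using System;
using System.Collections.Generic;

class VarianceDemo {
	static void Main() {
		// Covariance: IEnumerable<out T> lets a sequence of a derived type
		// be used where a sequence of a base type is expected
		IEnumerable<string> strings = new List<string> { "a", "b" };
		IEnumerable<object> objects = strings;

		// Contravariance: Action<in T> lets a delegate taking a base type
		// be used where a delegate taking a derived type is expected
		Action<object> printAny = o => Console.WriteLine( o );
		Action<string> printString = printAny;

		foreach ( var s in strings )
			printString( s );
	}
}

The main remaining restriction is that variance only applies to generic interfaces and delegates, and only with reference types.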

C# + IL
I think Microsoft really had a great idea when they created their "Intermediate Language" to split the language from the managed library. If Sun had done the same thing, you would be able to use enums (which are just ints internally) on an "old" 1.4 JRE.
With C# 4.0, you can still write code for "old" .Net 2.0 environments.

MOSS 2007 : Managing search properties

In Microsoft Office Sharepoint Server, you have a powerful search service. I can't say I like everything in Sharepoint, but the MOSS search engine is really amazing. It enables you to search full-text on everything and also to filter your results precisely by searching on columns at the same time.

But the MOSS search engine isn't as easy to use as querying directly in CAML. You have to prepare managed properties (from your list columns) to be able to search on them.

All the code below is simplified, more readable code showing what you need to build to suit your needs. It will need some rework before you can use it.

How to map columns to managed properties

So, to deploy a (WSP) solution which maps the right columns, you need to:

  • (0. Have an SSP for your site collection)
  • 1. Create the list programmatically
  • 2. Add the site collection to the content sources
  • 3. Put some data in the list (you have to fill every column with some rubbish data)
  • 4. Start a crawl and wait for it to finish (you need to loop on the CrawlStatus of the ContentSource objects)
  • 5. Create all the managed properties from the crawled properties in the "SharePoint" category (not really hard). The best way to do that is to use the fields of your list, so that when you change some fields in your list later, you won't have to update your column-mapping code.

If you still haven't got it, the way data is organized in the MOSS search engine is: data (field "Data", for instance) -> crawled property ("ows_Data") -> managed property ("owsData", "Data", or any name you want). It is done this way because you might want to map multiple crawled properties to one managed property.

Step 1 is really easy. If you don’t know how to do that, you should stop right here.

Step 2: creating a content source programmatically and adding a 15-minute incremental crawl schedule to it:

// With a given web SPWeb...
var site = web.Site;
var sspSchema = new Schema( SearchContext.GetContext( site ) );
var sspContent = new Content( SearchContext.GetContext( site ) );
 
// We choose an "unique" name for this content source, we wouldn't want to add it twice
var name = String.Format( "Web - {0}", web.Url );
 
{ // We check that the content source doesn't exist yet
	foreach ( ContentSource cs in sspContent.ContentSources ) {
		if ( cs.Name == name )
			return;
	}
}
 
{ // We add the content site
	var cs = sspContent.ContentSources.Create(
		typeof( SharePointContentSource ),
		name
	);
	cs.StartAddresses.Add( new Uri( web.Url ) );
 
	var schedule = new DailySchedule( SearchContext.GetContext( site ) ) {
		RepeatDuration = 1440,
		RepeatInterval = 15,
		StartHour = 8,
		StartMinute = 00
	};
 
	cs.IncrementalCrawlSchedule = schedule;
	cs.Update();
}

Step 3 is pretty easy too; I don't really need to show you anything. The only difficulty you might have is that for document libraries, inserting data is done differently. Here is how you can handle it:

// for a given list SPList 
 
var rand = new Random();
 
SPListItem item;
 
// If it's a document library
if ( list.BaseTemplate == SPListTemplateType.DocumentLibrary ) {
	var fileUrl = String.Format( "{0}/Data_{1}.txt", list.RootFolder.Name, rand.Next() );
 
	var content =  UTF8Encoding.UTF8.GetBytes( String.Format( "Random data file ! {0}", rand.Next() ) );
 
        // New "items" are inserted by adding new files
	var file = web.Files.Add( fileUrl, content );
 
	file.Update();
 
	item = file.Item;
 
} else {
	item = list.Items.Add();
}
 
// from here, the item SPListItem can be handled the same way between these two lists

Step 4: to start a crawl on every content source and wait for it to finish:

// For a defined site SPSite...
var sspContent = new Content( SearchContext.GetContext( site ) );
 
foreach ( ContentSource cs in sspContent.ContentSources ) {
 
	// Not doing this may be considered as altering the enumeration (and throws an Exception)
	ContentSource cs2 = cs;
	cs2.StartIncrementalCrawl();
}
 
{ // We wait until it ends
	Boolean indexFinished;
 
	while ( true ) {
		indexFinished = true;
 
		// We check if each content-source is still crawling something
		foreach ( ContentSource cs in sspContent.ContentSources ) {
			if (
				cs.CrawlStatus == CrawlStatus.CrawlingFull ||
				cs.CrawlStatus == CrawlStatus.CrawlingIncremental )
				indexFinished = false;
		}
 
		if ( indexFinished )
			break;
		else {
			Thread.Sleep( 1000 );
		}
	}
}

You might wonder why we wait for the crawl to end. The reason is that you can't use crawled properties before they have been created, and they only get created when the crawl detects the new columns in the list. This is why you have to do it step by step.

And Step 5: make sure that every column of the list has its managed property. Well, this is why you might prefer to do some automatic mapping.
Here, to make things easier, we will say that we want the managed properties to be named like "owsInternalName". You will see why later.

Category sharepoint = null;
 
// We take the "SharePoint" category
foreach ( Category cat in _sspSchema.AllCategories )
	if ( cat.Name == "SharePoint" )
		sharepoint = cat;
 
// We only select fields that are in the defaultView 
// (you might want to change this behaviour)
var fields = new List<SPField>();
foreach ( String fieldName in list.DefaultView.ViewFields )
	fields.Add( list.Fields.GetFieldByInternalName( fieldName ) );
 
 
// for every one of these fields...
foreach ( SPField field in fields ) {
 
	var owsNameUnderscore = String.Format( "ows_{0}", field.InternalName );
	var owsName = String.Format( "ows{0}", field.InternalName );
 
	CrawledProperty cp;
 
	// We check if the crawled property exists
	if ( ( cp = CrawledPropertyExists( sharepoint.GetAllCrawledProperties(), owsNameUnderscore ) ) != null ) {
 
		// We then try to get the linked managed property
		ManagedProperty mp = ManagedPropertyExists( cp.GetMappedManagedProperties(), owsName );
 
		// If it doesn't exist
		if ( mp == null ) {
 
			// We create this mapped property
			try {
				mp = _sspSchema.AllManagedProperties.Create( owsName, CrawledPropertyTypeToManagedPropertyType( cp ) );
			} catch ( SqlException ) {
				// If the mapped property already exists
				// it means that it isn't mapped with our crawled property, so we get it from the 
				// global Managed property store
				mp = ManagedPropertyExists( _sspSchema.AllManagedProperties, owsName );
			}
 
			// And we finally map it with the crawled property
			var mappingColl = mp.GetMappings();
 
			mappingColl.Add(
				new Mapping(
					cp.Propset,
					cp.Name,
					cp.VariantType,
					mp.PID
				)
			);
 
			mp.SetMappings( mappingColl );
 
			mp.Update();
		}
	} else {
		// Crawled property doesn't exist. You have to put some data in it or crawl it.
	}
}

To make this code work, you need the following helper methods (and the VariantType enum) in your code:

public static ManagedDataType CrawledPropertyTypeToManagedPropertyType( CrawledProperty cp ) {
	switch ( (VariantType) cp.VariantType ) {
		case VariantType.Array:
		case VariantType.UserDefinedType:
		case VariantType.Object:
		case VariantType.Error:
		case VariantType.Variant:
		case VariantType.DataObject:
		case VariantType.Empty:
		case VariantType.Null:
		case VariantType.Currency:
			return ManagedDataType.Unsupported;
		case VariantType.Single:
		case VariantType.Double:
		case VariantType.Decimal:
			return ManagedDataType.Decimal;
		case VariantType.Boolean:
			return ManagedDataType.YesNo;
		case VariantType.Byte:
		case VariantType.Long:
		case VariantType.Short:
		case VariantType.Integer:
			return ManagedDataType.Integer;
		case VariantType.Char:
		case VariantType.String:
			return ManagedDataType.Text;
		case VariantType.Date:
		case VariantType.Date2:
			return ManagedDataType.DateTime;
		default:
			return ManagedDataType.Text;
	}
}
 
public enum VariantType {
	Empty = 0x0000,
	Null = 0x0001,
	Short = 0x0002,
	Integer = 0x0003,
	Single = 0x0004,
	Double = 0x0005,
	Currency = 0x0006,
	Date = 0x0007,
	Date2 = 0x0040,
	String = 0x0008,
	Object = 0x0009,
	Error = 0x000A,
	Boolean = 0x000B,
	Variant = 0x000C,
	DataObject = 0x000D,
	Decimal = 0x000E,
	Byte = 0x0011,
	Char = 0x0012,
	Long = 0x0014,
	UserDefinedType = 0x0024,
	Array = 0x2000
};
 
public static CrawledProperty CrawledPropertyExists( IEnumerable enu, String name ) {
	foreach ( CrawledProperty cp in enu ) {
		if ( cp.Name == name )
			return cp;
	}
 
	return null;
}
 
public static ManagedProperty ManagedPropertyExists( ManagedPropertyCollection coll, String name ) {
	foreach ( ManagedProperty mp in coll ) {
		if ( mp.Name == name )
			return mp;
	}
 
	return null;
}
 
public static ManagedProperty ManagedPropertyExists( IEnumerable enu, String name ) {
	foreach ( ManagedProperty mp in enu ) {
		if ( mp.Name == name )
			return mp;
	}
 
	return null;
}

Automatically added columns

There is also another way around this, and it is the subject of this post: you can tell the "SharePoint" category of crawled properties to automatically map new crawled properties to managed properties. You just have to enable this option in the Sharepoint UI for the "SharePoint" category, or do it programmatically like this:

// For a defined site SPSite...
var sspSchema = new Schema( SearchContext.GetContext( site ) );
 
foreach ( Category cat in sspSchema.AllCategories ) {
	if ( cat.Name == "SharePoint" ) {
		cat.AutoCreateNewManagedProperties = true;
		cat.MapToContents = true;
		cat.DiscoverNewProperties = true;
		cat.Update();
	}
}

You should only apply this to the "SharePoint" category, because other categories keep adding new crawled properties and then stop requiring them. That would leave some junk crawled (and managed) properties that you would have to clean up later.

These new crawled columns are indexed by their internal name (“ows_ReleaseDate” for instance) and the mapped property will have a similar name (“owsReleaseDate”).

By the way, with this option you don't have to wait for the crawl to end within your solution deployment, because the new columns will be mapped automatically. But you should still do it, just in case…

The thing is, it only applies to newly crawled columns. For columns that have already been crawled, you will have no other choice but to map them to managed properties yourself (step 5).

Delete crawled properties

You might want to delete useless crawled properties to clean up your site collection, or to allow future crawled properties with the same name to be mapped automatically. To do this, you need to:

  • Make sure they are not mapped to any managed property
  • Disable their mapping to contents, controlled by the "contains indexed values of the columns" checkbox in the Sharepoint UI
  • Delete all unmapped properties (by code or in the UI) in the "SharePoint" category options

Programmatically, it looks like this:

// For a defined site SPSite...
var sspSchema = new Schema( SearchContext.GetContext( site ) );
foreach ( Category cat in sspSchema.AllCategories ) {
 
	// We don't need to mess with any other category.
	// Doing this with the whole "SharePoint" category is brutal enough...
	if ( cat.Name != "SharePoint")
		continue;
 
	foreach ( CrawledProperty pr in cat.GetAllCrawledProperties() ) {
 
		// If no data is indexed by this crawled property...
		if ( ! pr.GetSamples( 1 ).MoveNext() ) {
			// ...we clear this flag, which is necessary to be able to delete the unmapped property
			pr.IsMappedToContents = false;
			pr.Update();
		}
	}
 
	// Let's do it !
	cat.DeleteUnmappedProperties();
	cat.Update();
}

Search database reset

If your crawled properties have been deleted, they won't come back when you add a list with the same columns. What you need to do is reset the search database and launch a new crawl:

var sspContent = new Content( SearchContext.GetContext( site ) );
sspContent.SearchDatabaseCleanup( false );

The scope problem

If you want to use your managed properties in a scope, you have to enable them for scoping:

// For a defined site SPSite...
var sspSchema = new Schema( SearchContext.GetContext( site ) );
foreach ( ManagedProperty mp in sspSchema.AllManagedProperties ) {
	if ( ! mp.EnabledForScoping ) {
		mp.EnabledForScoping = true;
		mp.Update();
	}
}

Note on crawling: I think you should always run incremental crawls instead of full crawls; they're really fast (they can take less than 20 seconds) and do all the work. If your search database has been reset or if you added a new content source, Sharepoint will do a full crawl when you ask for an incremental one.

Note 2 on crawling: a crawl doesn't index data the moment new items are added, but you can make it do so. You just have to create an event receiver that starts a crawl when the ItemAdded / ItemUpdated event is triggered, as sketched below.
If you have frequent list updates, you might get modifications while MOSS is already performing a crawl. In that case, you have to create an asynchronous mechanism that triggers a "recrawl" just after the current crawl has finished (in a threaded object stored in the ASP.Net Application object store, for instance).
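
A minimal synchronous sketch of that idea; the receiver class name is hypothetical, and the error handling and asynchronous "recrawl" mechanism are left out:

using System;
using Microsoft.SharePoint;
using Microsoft.Office.Server.Search.Administration;

public class CrawlOnChangeReceiver : SPItemEventReceiver {
	public override void ItemAdded( SPItemEventProperties properties ) {
		StartIncrementalCrawl( properties );
	}

	public override void ItemUpdated( SPItemEventProperties properties ) {
		StartIncrementalCrawl( properties );
	}

	private static void StartIncrementalCrawl( SPItemEventProperties properties ) {
		using ( var site = new SPSite( properties.SiteId ) ) {
			var sspContent = new Content( SearchContext.GetContext( site ) );

			foreach ( ContentSource cs in sspContent.ContentSources ) {
				// Don't start a crawl if one is already running
				if ( cs.CrawlStatus == CrawlStatus.Idle )
					cs.StartIncrementalCrawl();
			}
		}
	}
}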

SMSOTAP 1.2

I made a few changes to the SMSOTAP program for the TC65:

  • I removed the time limit; it's stable enough not to force you to update it frequently.
  • It now uses class 1, PID 7d messages instead of the class 0, PID 00 compatibility mode (this doesn't change anything in practice).
  • It generates OTAP SMS with only the parameters you specify, using as few SMS as possible. Most of the time, you can use a single OTAP SMS.
  • It will prevent you from sending SMS above 140 characters in 8-bit mode or 160 characters in 7-bit mode.
  • I fixed a little bug: I replaced "APORNUM:" with "APNORNUM:".

Configuration files from the previous v1.1 version will still work with this one.

[ Download SMSOTAP 1.2 ]

Thanks to Martijn for his comments on the program. You can send me comments if you would like to see new features added or some bugs corrected.

Sharepoint : The 0x80020009 (DISP_E_EXCEPTION) error

If you're faced with this error, you should know that it doesn't mean anything except that you have a problem. Lots of people have written posts about it, but most of their explanations are wrong. It just means that Sharepoint didn't like something you did somewhere in your masterpage, your page, your webpart, your user control or anything else.

For me, it happened because I used the SPWeb.GetListFromUrl method to test whether the user actually had access to a list. If they didn't, I just caught the exception that was thrown.

The problem is, Sharepoint never behaves the way it should. As soon as it had thrown that exception, I couldn't save any of the webpart's parameters anymore.

By the way, I fixed my webpart by checking the available lists through the SPWeb.Lists property, along the lines of the sketch below.
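
Something along these lines (the helper name is hypothetical; the point is simply to enumerate SPWeb.Lists instead of calling GetListFromUrl and catching its exception):

using System;
using Microsoft.SharePoint;

static class ListLookup {
	// Returns the list with the given title, or null if it isn't available
	public static SPList FindListByTitle( SPWeb web, string title ) {
		foreach ( SPList list in web.Lists ) {
			if ( String.Equals( list.Title, title, StringComparison.OrdinalIgnoreCase ) )
				return list;
		}
		return null;
	}
}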