Performance Issues on Customized SharePoint Applications

By: Abin Jaik Antony   |   Related: > SharePoint Administration


Problem

The scope of this tip is to explain ways to handle performance issues in a customized SharePoint application. These kinds of performance issues usually stem from the way the SharePoint API has been used in the code, or from application architecture decisions made to accommodate various business needs.

Solution

Performance degradation occurs mainly for the following reasons.

  1. Memory leakage.
  2. User concurrency that causes several threads to execute the same SharePoint API functionalities and method calls.
  3. Abnormal size increase of the content database transaction log file.
  4. Accessing data from large SharePoint lists.

The above factors cause deadlocks and thread execution timeouts, which can leave parts of the application malfunctioning.

Let's look at several ways to handle these issues.

1. Memory Leakage:

We have to use the SharePoint API carefully, otherwise it can cause severe memory bloat. SPWeb and SPSite objects cause memory leaks if they are not handled and disposed of properly. One way to analyze code for non-disposed SharePoint objects is to check the entire solution with a tool named SPDisposeCheck. Run as a Visual Studio add-in, it flags, on compilation of the SharePoint solution, the lines of code where objects need to be disposed of after the execution point. The best way to use SharePoint API objects is to keep their scope inside using { } blocks.

Correct usage sample:

using (SPSite site = new SPSite("URL"))
{
    using (SPWeb web = site.OpenWeb())
    {
        // implementation
    } // web is disposed here, even if an exception is thrown
} // site is disposed here

Incorrect usage sample:

// Neither object is disposed, so the unmanaged SPRequest
// resources they hold are leaked:
SPSite site = new SPSite("URL");
SPWeb web = site.OpenWeb();

Every developer wonders at some point why the .NET garbage collector does not take care of this kind of object disposal. The answer is quite simple: every SPWeb/SPSite object holds an internal reference to an SPRequest object, which uses unmanaged COM resources to connect to the backend SQL Server. The garbage collector cannot reclaim those unmanaged resources, so we need to dispose of these objects explicitly.

There is a very nice MSDN blog by Roger Lamb regarding SharePoint Object disposal patterns.
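
Not every leak is as obvious as the incorrect sample above. One subtle pattern covered in that guidance is web enumeration: every SPWeb returned by SPSite.AllWebs is a separate object that the caller is responsible for disposing of. Here is a minimal sketch of the pattern (the site URL is a placeholder):

// requires a reference to Microsoft.SharePoint.dll
// and: using Microsoft.SharePoint;
using (SPSite site = new SPSite("http://server/sites/demo"))
{
    foreach (SPWeb web in site.AllWebs)
    {
        try
        {
            // each SPWeb yielded by AllWebs holds its own SPRequest
            Console.WriteLine(web.Title);
        }
        finally
        {
            web.Dispose(); // without this, every iteration leaks
        }
    }
}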

2. User concurrency that causes several threads to execute SharePoint API functionalities and method calls at the same time:

The SharePoint OM (Object Model) has several heavyweight API calls, such as SPWeb.BreakRoleInheritance() and SPWeb.Webs.Add(). High user concurrency on these API calls may cause deadlocks on the database. To stop multiple threads from executing the same API call at the same time, wrap the lines of code containing these API calls in a lock { } statement. Keep in mind that a lock statement only serializes threads inside a single worker process; it does not serialize calls coming from other servers in the farm.

// objlock must be a shared synchronization object, for example:
// private static readonly object objlock = new object();
lock (objlock)
{
    spweb.BreakRoleInheritance(true);
}

3. Abnormal size increase of content database transaction log file:

The transaction log file usually grows abnormally large either because a long-running transaction occurs while SharePoint pushes a very large amount of data into the content database, or because the content database's recovery model is set to "Full". With the "Full" recovery model, SQL Server retains the transaction log records until they are backed up.

To handle this transaction log growth, we need to plan according to the environment (production/staging/testing) where the issue occurs. If it happens in an environment other than production, it is better to set the database recovery model to "Simple", because on staging/testing environments we do not need every transaction to be logged and retained until the next backup. In a production environment, however, it is better to keep the recovery model set to "Full" and schedule database maintenance plans/jobs at regular intervals to back up the transaction log, which truncates the inactive log records in the .LDF file.
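
For illustration, here is a minimal T-SQL sketch of both options; the database name WSS_Content and the backup path are placeholders for your environment:

-- staging/testing: stop retaining log records between backups
ALTER DATABASE [WSS_Content] SET RECOVERY SIMPLE;

-- production: keep FULL recovery and back up the log on a schedule;
-- each log backup lets the inactive portion of the .LDF be reused
ALTER DATABASE [WSS_Content] SET RECOVERY FULL;
BACKUP LOG [WSS_Content] TO DISK = N'E:\SQLBackups\WSS_Content_log.trn';

Note that a log backup truncates committed records but does not shrink the physical file; the point of the schedule is to keep the log from ever needing to grow abnormally in the first place.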

4. Accessing data from large SharePoint lists:

How data is stored inside SharePoint lists or document libraries should be planned in detail. Microsoft provides straightforward capacity planning documentation, and we have to design the architecture and storage pattern according to the recommended limits. For example, the guidance advises that a SharePoint list go no higher than 2,000 items per container/view. Going above that limit may degrade performance when accessing the data inside the list.

We should also take extra care when accessing items from a SharePoint list through the Object Model. For instance, SPList.Items.Count retrieves every item in the list just to count them, whereas the SPList.ItemCount property returns a cached count without fetching any items.
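
As a sketch of both techniques (the list name "Orders" and the view field are illustrative), the count below comes from the cached ItemCount property, and the items are retrieved with a row-limited SPQuery instead of walking the full SPList.Items collection:

using (SPSite site = new SPSite("http://server/sites/demo"))
{
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["Orders"];

        // cheap: ItemCount is a cached property on the list
        Console.WriteLine("Total items: " + list.ItemCount);

        // fetch only the rows and fields actually needed
        SPQuery query = new SPQuery();
        query.RowLimit = 100;
        query.ViewFields = "<FieldRef Name='Title' />";

        foreach (SPListItem item in list.GetItems(query))
        {
            Console.WriteLine(item.Title);
        }
    }
}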


About the author
MSSQLTips author Abin Jaik Antony




