I used to be a C# SharePoint Developer – Part 4 – Performance

JavaScript. Not an interpreted language, as the name might suggest. Really it isn’t a scripting language at all; for all intents and purposes it is compiled, by what is known as a Just In Time (JIT) compiler.

For reference, Microsoft’s JavaScript engine (used in Edge) is Chakra.

Compilation of a program is performed in several steps. First of all, a lexer is used; this turns the code into a stream of tokens separated according to the syntax rules of the language. This is then turned into an Abstract Syntax Tree (AST), a tree representation of the program’s commands. At this point any syntax errors are found and thrown. From here, the implementation is engine specific. A mixture of compilation methods is used to transform the JavaScript into machine code. Most often this means mapping commands to prewritten templates, which are then linked together to generate machine output. Template-based compilation saves time; its main flaw is that the resulting instructions are often not optimised in the best possible manner for the intent of the code.

This is the problem with JIT compilation: optimisation steps do not happen; it is often just a direct mapping to a set of machine instructions that performs the task the language specification describes. As computing power increased, JIT compilers became less “just in time” and did as much pre-compilation as possible. Modern browsers compile the source code of a JavaScript application as soon as they have free threads to do so. This allows for more advanced techniques, such as those we see in Microsoft’s Edge browser.

Microsoft Edge has a two-step approach to JavaScript compilation. First it performs a quick code-mapping pass to compile the source. This is done as early as possible to make the JavaScript available to the page. Then, when a free thread comes along, it performs optimisation passes on the code. This two-pass process gives the end user better perceived performance. That does not necessarily mean the JavaScript application will run faster; quite the opposite, in fact. While many optimisations are included in Microsoft Edge’s compilation process, the browser still falls short of the performance offered by WebKit- and Mozilla-based browsers.

What we are looking for in optimisation is to reduce the number of instructions and to use the most optimised routines we can. The best example of this is loops. Loops are the hardest and most complicated structure to optimise at the basic instruction level. When you run a loop you are checking a value and performing a jump based on the outcome. The data involved might not be local either; it might be a reference value, in which case another part of the program has to be called to resolve the reference. This can also be seen in C#, most notably when using interfaces with structures such as lists, which are enumerable. The problem is that on every iteration you are walking through several objects in memory in order to build up the object you require. These reference types do not use contiguous memory; they are heavily fragmented, so resolving the objects takes extra time. The opposite is a native type such as a string, an integer or even a struct, which supplies a single allocated memory block for its data storage.
So the basic principle of loops in any language is to reduce the number of instructions, and to remove all reference types where possible.

JavaScript isn’t so simple though. Objects use delegation instead of inheritance. This means that when you access a member of an object that sits in a chain, the engine has to reference that member through the chain. If you are familiar with the concept of a linked list, where an object holds the reference to another object inside of itself, but a child object has no external reference unless stored, then this is exactly what we are dealing with in JavaScript. This reference-based delegation goes both ways as well. If you create an object based on another object you have made, which is itself based on a third object, then a property defined on that original object is not found on your current object; it is resolved up the chain. To put it concretely, say object c is based on object b, which is itself based on object a. When we access a property of c, the engine asks c’s parent how to access the property, which in turn has to ask its own parent, causing multiple references up the chain of objects before the property of object c is returned.
What does this delegation, linked-list style model gain us? Massively reduced memory usage. When you use reference types in C# it is quite easy to run into hundreds of megabytes of allocated memory. In JavaScript the same structure will use only a fraction of that space, because the defined structures aren’t copied in memory; delegation tells the engine how to access properties and methods.
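A minimal sketch of that chain (the object names are mine), using Object.create to set up delegation:

```javascript
var a = { greet: function () { return "hello"; } };
var b = Object.create(a); // b delegates to a
var c = Object.create(b); // c delegates to b, which delegates to a

// c has no 'greet' of its own; the lookup walks c -> b -> a
console.log(c.hasOwnProperty("greet")); // false
console.log(c.greet());                 // "hello"

// Nothing is copied: all three share the single function defined on a
console.log(c.greet === a.greet);       // true
```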

We can however override this with the prototypal pattern, and copy members onto objects at creation. This gains us no performance, as method calls still go through prototypal delegation.

Another factor in how we optimise loops is which type of loop and which object-referencing approach we use. The closer our thinking about the loop is to assembly, the faster it will perform. All browsers perform some instruction trimming and dereferencing of objects for us, but these optimisations vary wildly from browser to browser. Some loop patterns naturally perform better, as they reduce the need for many of the more complex checks and effectively simulate how such loops would be written in assembly.

Recently browsers have added a close-to-the-metal set of methods which allow direct access to stack operations. By placing what you need to loop through on a stack and using the pop command with a while loop, we can reduce the number of required instructions compared to a loop that uses a tuple of instructions to perform the loop calculation, changes the stack several times during those operations, and deals with reference types on top. This has taken loop speeds from several thousand loops per second, for a standard for loop checking a reference type, to hundreds of millions of loops per second, depending on the browser involved.
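The pop pattern described above can be sketched as follows (the speed figures are the article’s claims, not something this snippet measures):

```javascript
// Copy the source first: pop mutates the array it works on
var work = [1, 2, 3, 4, 5].slice();
var total = 0;
var item;

// One stack operation and one comparison per iteration
while ((item = work.pop()) !== undefined) {
    total += item;
}

console.log(total);       // 15
console.log(work.length); // 0 - the stack has been consumed
```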

This shows that optimisations and new approaches to age-old problems do exist for JavaScript and the variety of compilers available. This is indeed not a new problem. We have several compilers for C#, from Microsoft and also Mono. We have numerous compilers for C++, and many students have written their own during study. The important point here is that JavaScript is just another programming language, with exactly the same kinds of compiler problems as any other language.

This dive into the workings of a compiler sheds light on other optimisations we can achieve. We know that JavaScript uses memory references and non-contiguous memory. It is also important to be aware of how object delegation works when writing your code, and to remember that JavaScript is just another programming language: all those best practices for coding that you follow in C# or any other language apply here too, so do not forget them.

This is a huge problem with JavaScript development: most of the time it is seen as “just a scripting language”, when it isn’t. Deep nesting of code, high cyclomatic complexity and embedded delegate methods all drastically reduce code performance. We don’t do these things in C#, so don’t do them in JavaScript.

The next limiter to JavaScript is lack of knowledge of the specification. Misunderstanding how things work, what the default values of some methods are, and how variants work can frustrate anyone. JavaScript has its foundation heavily in computer science theory. Its functional approach can confuse object-oriented programmers, and lead them to tackle programming challenges in the wrong manner. An incorrect approach to a problem costs you development time, and that costs businesses money.

The next big thing isn’t actually JavaScript. It is understanding browser rendering and timings, and how the browser handles invalidating the DOM, causing a redraw of the page. Certain modifications of the DOM cause a redraw to happen, and this is costly, so the plan is to cause as few of these as possible. If you require many changes, for an animation or another reason, you can use the built-in commands for hooking into the frame redraw interval to find a window in which to push new DOM elements into the tree.
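The built-in command in question is requestAnimationFrame. A browser-only sketch (the function name is mine) of batching DOM insertions into a single frame:

```javascript
// Browser-only sketch: collect new elements into a DocumentFragment
// and append them in one requestAnimationFrame callback, so the
// DOM is invalidated once rather than once per element.
function batchAppend(container, elements) {
    requestAnimationFrame(function () {
        var fragment = document.createDocumentFragment();
        for (var i = 0; i < elements.length; i++) {
            fragment.appendChild(elements[i]);
        }
        container.appendChild(fragment); // a single DOM mutation
    });
}
```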

One of my favoured methods is to use what is known as a Shadow DOM. The Shadow DOM is part of the Web Components specification. It reduces the amount of code required to manipulate the DOM, reduces rendering latency, and increases overall performance by, again, reducing the number of operations that have to be performed. This is, in its own way, a batching process, which of course is a highly optimal scenario to be in.
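A browser-only sketch (the function name is mine; attachShadow is the current Web Components API) of rendering into a shadow root:

```javascript
// Browser-only sketch: write a widget's markup into a shadow root.
// Updates inside the shadow tree are scoped to the host element,
// keeping manipulation of the widget cheap and batched.
function renderWidget(host, html) {
    var root = host.shadowRoot || host.attachShadow({ mode: "open" });
    root.innerHTML = html; // one batched write inside the shadow tree
}
```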


While loops are significantly faster than for loops, and just as easy to infer meaning from. Further, decrementing a variable in JavaScript is one CPU operation cheaper than incrementing one. Referring to objects during a loop is a slow operation, due to JavaScript’s delegation model. Going “closer to the metal” will also massively improve performance: using native methods to iterate over objects and arrays is significantly faster than controlling a loop yourself.

We can take away 3 basic optimisations from this:

1. Use a while loop
2. Decrement where possible
3. Create a direct reference to any values that won’t change during the loop, yet are required
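All three rules together give a loop like this (the function and variable names are mine):

```javascript
var values = [2, 4, 6, 8];

function sum(list) {
    // Rule 3: a direct local reference taken once, outside the loop
    var i = list.length,
        total = 0;

    // Rules 1 and 2: a decrementing while loop; i-- stays truthy
    // until it reaches 0, so no separate comparison is needed
    while (i--) {
        total += list[i];
    }
    return total;
}

console.log(sum(values)); // 20
```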

The other optimisations you can perform are all browser dependent, and in most cases completely unnecessary.

Because loops are a complex topic in themselves, I will do a separate post on them soon.

Asynchronous methods

Asynchronous methods don’t block other operations in the browser, so they perform extremely quickly compared to inline code. You can utilise this power using either setTimeout or, preferably, promises and web workers.
Promises are also useful for sequentially chaining commands (monadic programming), and for Ajax-based calls. This is important and useful when using any of the SharePoint or Office REST APIs, as it allows you to call deferred components of the returned JSON objects.

setTimeout and setInterval

Before now I have mentioned setTimeout as a good way of running a method asynchronously. However, I haven’t mentioned setInterval. The reason is that it comes with a warning: don’t use it. It can be a dangerous method, and setTimeout handles the same problem in a much better way. Why is setInterval so bad? You set a method to run every x milliseconds, and that method takes y milliseconds to complete, but you have no idea whether the previous run has completed. What this can mean is that it catches up with itself. If it is sharing data, or running an Ajax call, then the overlapping call may eventually fail.

Now we know which to use for our calls, I have to point out another danger of these two methods. They don’t only take functions as their first parameter; they can also take a string. This is a security concern, and should be dealt with appropriately in our code. Allowing these methods to parse code allows a third-party application to inject code where you don’t want it, bypassing security restrictions in the browser. Under ECMAScript 5 strict mode, however, all evaluated-code methods are blocked, which going forward presses another reason to use strict mode.

Below I have put together a secure implementation of the setTimeout method that can be used for single calls, and one that will loop a call once the previous iteration has completed.

Code Extract:

// Runs a method once, and throws an error if
// someone tries to use it with evaluated code
AsyncMethod: function (method, waitTime, args) {
    if (typeof method == "string")
        throw "Possible Cross Site Scripting attack detected";
    setTimeout(method, waitTime, args);
},
// Loops a method indefinitely, unless false is returned
AsyncIntervalMethod: function (method, waitTime, args) {
    if (typeof method == "string") {
        throw "Possible Cross Site Scripting attack detected";
    }
    (function rencoreInternalLoop() {
        if (, args) === false) {
            return;
        }
        setTimeout(rencoreInternalLoop, waitTime);
    }());
}


We have already looked at asynchronous methods using setTimeout, but it just isn’t very flexible. We can also use event-driven code, such as adding an event listener to a container and looking for a change. Again, this isn’t very flexible.

Promises give us true asynchronous success or failure. You can also look in at the status of the operation, giving feedback. This gives promises the flexibility we require for our code.

Before you get too excited, with great power comes a limit: no Internet Explorer support, even in IE11, and none in Opera Mini. Edge, however, does currently support promises. So where does this leave us in Microsoft land? The answer is in frameworks and shims where required; however, we don’t want to load an entire framework such as jQuery to use just one part of it. If you are already using jQuery, and using it heavily, then you can use jQuery Deferred (a mechanism that can shim a promise until full support is achieved). However it isn’t a real promise and only provides partial support, which in some cases makes it redundant if your intent is to provide a graceful fallback.

Paul Miller has an interesting project that provides a shim for ECMAScript 6, the standard that includes promises. It is much smaller than jQuery and supports many more ES6 features. However, if you want to go down this path I would recommend taking out the modules you won’t be using from the es6-shim. As long as the dependency chain isn’t broken you can do this.

With the negative side of promises out of the way, I will continue with usage. A promise in the ES6 standard has four states:

fulfilled – has succeeded
rejected – has failed
pending – hasn’t been fulfilled or rejected
settled – has been fulfilled or rejected

A promise is thenable, which means that once it settles it calls the corresponding callback provided for either success or failure. Any object that implements a then method is promise-like. The object’s then method implements either the fulfil callback, the reject callback, or both, in the desired places.
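A sketch of a promise-like object: anything with a then method will be assimilated by Promise.resolve into a real promise (the value 42 is arbitrary):

```javascript
// A hand-rolled thenable: promise-like purely because it has .then
var thenable = {
    then: function (fulfil, reject) {
        fulfil(42);
    }
};

// Promise.resolve assimilates it and the chain continues as normal
Promise.resolve(thenable).then(function (value) {
    console.log(value); // 42
});
```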

This process of “do this, then” allows us to build up a story in our code, ensuring the previous task is complete before starting the next. However, this isn’t all promises give us. We also have:

Promise.resolve(promise) – Returns the promise itself if it is a true promise
Promise.resolve(thenable) – Assimilates a promise-like object (one with a then method) into a real promise
Promise.resolve(object) – A promise that fulfils to an object
Promise.reject(object) – A promise that rejects to an object
Promise.all(array) – Fulfils when all items in the array are fulfilled, and rejects as soon as any item rejects. The results keep the order of the input array, regardless of completion order.
Promise.race(array) – The first item in the array to settle, settles the promise.
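A small sketch of all and race (the timings and values are mine):

```javascript
var fast = Promise.resolve("fast");
var slow = new Promise(function (resolve) {
    setTimeout(function () { resolve("slow"); }, 50);
});

// all: the results array keeps input order, regardless of which
// promise actually settled first
Promise.all([slow, fast]).then(function (results) {
    console.log(results); // ["slow", "fast"]
});

// race: settles with whichever promise settles first
Promise.race([slow, fast]).then(function (winner) {
    console.log(winner); // "fast"
});
```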

The callbacks for the resolve/reject behaviour are passed into the constructor, taking the form:

new Promise(function (resolve, reject) {});
new Promise(executor);

Then we have two methods which handle the outcome of our promise.

  • myPromise.then(onFulfilled, onRejected) – As stated before, this is the method that tells the promise what to call should it fulfil or reject. It is called after passing through Promise.resolve, receiving either the resulting fulfilment value or an object with the reason it was rejected.
  • myPromise.catch(onRejected) – This actually just calls myPromise.then(undefined, onRejected), but provides a nicer way to read the code.

Promises are a simple concept, as we see here; however, using them can quickly get complicated. We can see that then and catch themselves return promises. This means promises can be chained, allowing us to build complex statements asynchronously.

// Simple example
function asyncMethodPromise(text) {
    // Create a new promise
    return new Promise(function (resolve) {
        // Prove the promise by using an asynchronous method call
        setTimeout(function () {
            resolve(text);
        }, 5000);
    });
}

// Use the promise, logging the text after 5 seconds has passed
asyncMethodPromise("Hello world").then(function (s) {
    console.log(s);
});

// More complex XHR example
function loadFile(url) {
    return new Promise(function (resolve, reject) {
        var request = new XMLHttpRequest();'GET', url);
        request.responseType = 'text';
        request.onload = function () {
            if (request.status == 200) {
                // Successful response so resolve
            } else {
                // Failed response pass back an error object
                reject(Error('File failed to load: ' + request.statusText));
            }
        };
        request.onerror = function () {
            // Failed response pass back an error object
            reject(Error('There was a network error.'));
        };
        request.send();
    });
}

// Passing both fulfilled and rejected as methods
    function (fileContext) {
        // Do something with the file
    },
    function (errObject) {
        // Do something with the error object
    });

// Semantic code improvement using catch
    .then(function (fileContext) {
        // Do something with the file
    })
    .catch(function (errObject) {
        // Do something with the error object
    });

The ability to write this sort of cleaner, asynchronous code allows us to write more efficient applications, and where possible this approach should be used.


AJAX gets a special mention due to its heavy use in SharePoint. AJAX isn’t a single technology; it is short for Asynchronous JavaScript + XML. Despite the name, XML isn’t often used; we tend to prefer JSON, since it is lighter in payload and processing. Knowing that we are dealing with a group of technologies rather than one, we can draw separation between each part. When creating an AJAX call:

  • POST requires 2 requests, GET requires 1; therefore prefer GET in most scenarios
  • The maximum URL length in IE is 2,083 characters, at which point switch to POST
  • REST, even with POST, is still faster due to caching (read below)
  • “Accept-Encoding: deflate, gzip” for REST requests does work; however I would avoid this, as bandwidth is cheap and it increases read time by over 6x and server-side write time by 4x
  • Never have more than 5 Ajax requests in flight to a single server at a time; create a handler to enforce this, either scheduling when 5 is reached or using a queue. Better still, design the page so this cannot happen
  • Only use verbose in REST calls when required; some calls need much less data (requires a later SharePoint 2013 patch for on-premises)
  • Ensure Ajax requests don’t saturate the server: 10,000 people making 5 requests a second is 50,000 requests per second to the server, on top of the other components and requests the server is handling
  • Provide a fall-back mechanism should there be a temporary loss of network
  • Use SP.RequestExecutor in SharePoint context and for all cross-domain calls; jQuery.ajax is recommended for apps outside of SP context
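The five-request cap from the list above can be enforced with a small queue (a sketch; the names are mine):

```javascript
// A queue that caps how many requests run at once.
// start(task) runs task(done) immediately if a slot is free,
// otherwise holds it until a running task calls done().
function RequestQueue(limit) {
    this.limit = limit;
    this.running = 0;
    this.pending = [];
}

RequestQueue.prototype.start = function (task) {
    if (this.running < this.limit) {
        var self = this;
        this.running++;
        task(function done() {
            self.running--;
            if (self.pending.length > 0) {
                self.start(self.pending.shift());
            }
        });
    } else {
        this.pending.push(task);
    }
};
```

An Ajax call would invoke done() in both its success and error handlers; for the rule above the queue would be constructed with a limit of 5.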

REST vs JSOM. The short and hard rule is to avoid POSTs, as a POST requires 2 requests; therefore avoid JSOM. However, if a method is only available in JSOM then you require it. The OData standard uses a mechanism called HTTP ETags. An ETag ensures that only requests for new data are actually sent and received, allowing for efficient caching in the browser, further reducing actual requests.

The long and short of it: you should see a significant improvement in performance using REST over JSOM for any type of data request.

The other side of the coin is batching in JSOM. If your batch sizes are significant enough (4 operations or more in a batch) then JSOM will indeed come out on top. However, a single error in the batch will cause the whole batch to fail.

The code sample for this (which can be improved by using promises) can be found at the bottom of the complete Performance module example below.

Example Performance Module snippet for use with JSOM in SharePoint on Premises:

// Registered namespace for the module
// Global init method, for the Performance module,
// allowing for auto MDS Garbage collector registration
(function $_global_rencore_performance() {
    rencore = (function (rencoreAB) {
        // A field to track the number of Ajax calls we have made
        rencoreAB._private = rencoreAB._private || {};
        rencoreAB._private["ajax"] = 0;
        rencoreAB.Performance = {
            // Runs a method once, and throws an error if
            // someone tries to use it with evaluated code
            AsyncMethod: function (method, waitTime, args) {
                if (typeof method == "string")
                    throw "Possible XSS attack detected";
                setTimeout(method, waitTime, args);
            },
            // Loops a method, unless false is returned
            AsyncIntervalMethod: function (method, waitTime, args) {
                if (typeof method == "string") {
                    throw "Possible XSS attack detected";
                }
                (function rencoreInternalLoop() {
                    if (, args) === false) {
                        return;
                    }
                    setTimeout(rencoreInternalLoop, waitTime);
                }());
            },
            // For loops in C# are optimised into while loops,
            // JIT doesn't have time for this optimisation.
            // Fast pattern for collection iteration
            FastLoop: function (params) {
                var inc = 0,
                    max = params.Collection.length,
                    method = params.Method,
                    collection = params.Collection;
                while (inc < max) {
          , inc, collection);
                    inc++;
                }
            },
            // Avoid the extra comparison step:
            // we can decrement from max, and compare directly
            // without strict comparisons
            FastestLoop: function (params) {
                var method = params.Method,
                    collection = params.Collection,
                    inc = collection.length;
                while (inc--) {
          , inc, collection);
                }
            },
            // Example Ajax implementation for Cross Domain calls
            // and automated canary checks
            // This method also automatically handles
            // POST or GET, and loads
            // SP.RequestExecutor if required.
            GetData: function (params) {
                rencore.GetScript("sp.requestexecutor.js", "SP.RequestExecutor", function () {
                    var url = params.url;
                    var executor = new SP.RequestExecutor(_spPageContextInfo.webAbsoluteUrl);
                    executor.executeAsync({
                        url: url,
                        method: url.indexOf("@") > -1 ? "POST" : "GET",
                        headers: { "Accept": "application/json; odata=verbose" },
                        success: function (data) {
                            rencore._private["ajax"]++;
                            params.success({
                                'data': JSON.parse(data.body)
                            });
                        },
                        error: function (data) {
                            rencore.alert("Error communicating with server");
                        }
                    });
                }, true);
            }
        };
        return rencoreAB;
    }(rencore || {}));
}());

Summary and what’s next?

I’ve eased into performance here, and there are some more complex singular examples I will go into when I hit the deep dive part of this series.

I also haven’t covered SPFx but the principles here are quite general, and will give you an idea of how the TypeScript output will behave, as TypeScript is mostly just JavaScript with extra syntactic sugar.

Next post I will write about each design pattern that is commonly used in JavaScript.

© Hugh Wood 1980-Present