Create SharePoint Document Set (and set metadata) using REST

A quick post today to augment what’s out there in the “Googleverse”.  I needed to create a Document Set in client side code, and went out to find the appropriate calls to make that happen.  To update the metadata on the folder you create (which is all a Document Set really is under the covers), you simply make an “almost” normal list item update call.  So the following are the various functions you need and how to string them together to do this task.  As you read through, I’ll point out in the code where older posts on this topic steer you wrong.

WARNING, this code is not optimized for best practices but is generalized for reuse. As sample code, it may not work in all scenarios without modification.
NOTE: this code requires jQuery to execute the AJAX calls and to handle the promises.
NOTE: The use of odata=verbose is no longer required, and best practice suggests that it should not be used in production. See this post from my partner Marc Anderson for more information.

This first function is used to create the Document Set folder. It uses the folderName parameter as the title of the Document Set.
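The original snippet isn’t reproduced here, so below is a minimal sketch of the commonly documented approach: POST to the library through listdata.svc with a Slug header made up of the new folder’s path and the Document Set content type ID. The parameter names and the exact body shape are my assumptions; adjust them to your environment.

var createDocSet = function (listName, folderName, contentTypeId) {
    // Server-relative path to the library; listName here is assumed to double as the listdata.svc entity name
    var listPath = _spPageContextInfo.webServerRelativeUrl + "/" + listName;
    return $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl + "/_vti_bin/listdata.svc/" + listName,
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ "Title": folderName, "Path": listPath }),
        headers: {
            "Accept": "application/json",
            // "<server-relative folder path>|<Document Set content type ID>"
            "Slug": listPath + "/" + folderName + "|" + contentTypeId
        }
    });
};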

The following code is a generic update function; we’ll use it to update our Document Set’s metadata after it’s been created. In other posts out there, you’ll see the url of the AJAX call set to the folder.__metadata.uri. Unfortunately, that uri is no longer valid as a way to update the metadata and the call will fail. Also, when updating list items there’s a standard “type” that defines the object you’re updating; for our Document Set this type is different from that of a generic list item, so I’m passing it in from our calling function. It can partially be retrieved from the folder creation response’s metadata, but it’s not exactly correct and the call will fail.
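Again, the original function isn’t shown, so here is a sketch of a generic item update against the _api endpoint. Note it uses a verbose payload so the item “type” can be passed in as described above; the parameter names are illustrative.

var updateListItem = function (listTitle, itemId, itemType, metadata, eTag) {
    // Merge the passed-in metadata with the item type the endpoint expects
    var item = $.extend({ "__metadata": { "type": itemType } }, metadata);
    return $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl + "/_api/web/lists/getbytitle('" + listTitle + "')/items(" + itemId + ")",
        type: "POST",
        contentType: "application/json;odata=verbose",
        data: JSON.stringify(item),
        headers: {
            "Accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val(),
            "X-HTTP-Method": "MERGE",
            "IF-MATCH": eTag || "*"
        }
    });
};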

NOTE: the list’s display name in this case has no spaces or odd characters; if yours does, you will need to escape those characters when creating the list item type. For example, for a list name containing an “_” you would use the following code: "SP.Data." + list.replace('_', '_x005f_') + "ListItem"

So now that we have functions that do the work for us, we just need to call them. In this case I’m showing the code encapsulated in a function that does the calls but returns a promise to the calling function, so that the caller can be notified when the Document Set has been created completely.

The call to createDocSet includes the Document Set’s content type ID, which can be retrieved from the URL of the Content Type definition page. Also note in this code that you need to do a bit of manipulation of the eTag if you’re going to pass it. You could technically use a wildcard instead of extracting the eTag, but for completeness I’ve included it.
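Here’s a sketch of what that orchestration might look like, wired to the two sketches above; the content type ID, item type string, and metadata fields below are placeholders for your own values.

var provisionDocSet = function (listName, folderName) {
    var deferred = $.Deferred();
    var contentTypeId = "0x0120D520...";        // your Document Set content type ID, taken from the content type definition page URL
    var itemType = "SP.Data.DocumentsItem";     // placeholder: the item type for your library/Document Set (see the notes above)

    createDocSet(listName, folderName, contentTypeId)
        .done(function (data) {
            var folder = data.d;
            // e.g. 'W/"1"' becomes '"1"'; alternatively just pass "*"
            var eTag = folder.__metadata.etag.replace("W/", "");
            updateListItem(listName, folder.Id, itemType, { "Title": folderName }, eTag)
                .done(deferred.resolve)
                .fail(deferred.reject);
        })
        .fail(deferred.reject);

    return deferred.promise();
};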

Code Creep – SharePoint “CDN”

Centralizing your SharePoint client side code

“Code Creep”… no it’s not the latest thriller movie out of Hollywood, although it probably could be. I’m referring to the sprawl of client side code files that are stored when implementing client side web parts or “widgets” in SharePoint. A common solution for implementing “widgets” in SharePoint is to store the files in a document library, linking to them with a CEWP that will then run and render your “widget”. This is an effective way to implement customization when you don’t have administrator access, or you’re running in SharePoint online, or you just prefer the flexibility of a client side development paradigm; as some of my colleagues in the SharePoint community like to say, “It isn’t code, it’s content.” However, depending on the complexities of your environment and your development staff, this kind of end run can cause maintenance issues at best, horror stories at worst.

There are many ways to solve the code creep problem, from simple to incredibly complicated, and of course, as with everything there is no one-size-fits-all answer. Some guidance from my perspective centers on where your code will be implemented and how big your farm/tenant(s) are.

I’ve created a matrix below that outlines my thoughts on the subject.

[Image: solution matrix]

The solution I’m going to focus on in this post is the “Store code in a site collection specifically for your client side code”, or basically creating a private CDN (Content Delivery Network) within your own tenant/farm.  In my opinion this is a fairly good solution to balance code maintenance/deployment without going all the way to the cost and complexities of implementing a full blown commercial style CDN.

The scenario is that you have developed or are developing client side “widgets” that you’re going to use in multiple site collections within a farm or tenant. My solution is to build a site collection specifically for storing the code needed to render those widgets.  And by code I mean all the html, js, and css files.  Any third party libraries that are already hosted on a CDN could be referenced separately and do not need to be added to your internal CDN; however, my rule of thumb is that if your SharePoint farm is behind a firewall and people access it from an internal network, you should consider downloading copies of the libraries you need and hosting them locally.  No reason why your solutions shouldn’t work if the internet goes down.

So let’s say I create a new site collection and I call it CDN so that my URL is http(s)://mysharepointurl/sites/CDN

I can disable most site collection and site features, leaving enabled at a minimum:

  • SharePoint Server Standard Site Collection features
  • SharePoint Server Standard Site features

Everything else is optional depending on what you want to do in your CDN, create approval workflows, etc…

The key to the solution is getting the permissions right. We want to make sure that everyone who needs access to the code, now or in the future, can get it; otherwise the “widgets” won’t work for them. But we’d also like the ability to version and “lightly” test that code without affecting them. So to that end we’re going to give “All authenticated users”/”Everyone” read permission to our CDN site by adding them to our CDN Visitors group. We can then add our developers to the CDN Members group, and our CDN managers or administrators to the CDN Owners group. Now, by default, unless we break inheritance, all our code “libraries” will be readable by everyone and managed by our developers.

With permissions taken care of, we can create a library or libraries in the site to hold our code. There are a lot of ways this could be organized and you should certainly take some time to think it through. Maybe you want different groups of developers to have contribute rights to different code bases, etc… The keys are to make sure you don’t remove the visitors’ read rights from any of your code libraries, and to modify the versioning settings of your library as follows, with Draft Item Security set to “Only users who can edit items”.

 

This allows you to “publish” major versions of the code files; until you do, users will continue to get the last published version.  Now you can do some “light” testing on the modifications to make sure everything is working before you “publish” it to the users.  I do not encourage you to use this method as a full-blown ALM solution, but as a lightweight one it can work well.  You could also in theory create approval workflows that would “publish” the content for you, but that’s a different post.

[Image: library versioning settings]

[Image: the Code library showing files at minor (draft) versions]

So here’s an example of how you might use this.  I’ve uploaded some files into my “Code” library; note that they’re all minor versions of the file.  I’ve added myself as a CDN Member so I have the ability to “Contribute” to this library.

Now I need to insert the widget on the page, and to do that I need to be a tad fancy.  I cannot use a simple CEWP and point to the URL of an html file in my CDN Code library, because a CEWP cannot reference a file in another site collection.  To get around this you can either write your own binding function or utilize the Widget Wrangler to bootstrap your code simply into the page.  Below is an example of using a SEWP for that purpose with the Widget Wrangler to implement an AngularJS 1.x application:
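The original screenshot of that snippet isn’t reproduced here, so this is a representative sketch only; the CDN file paths, module name (myWidgetApp), and controller are placeholders, and the ww-appScripts entries follow the Widget Wrangler attribute format described later in this post.

<div>
  <div ng-controller="mainController as vm">{{vm.message}}</div>
  <script type="text/javascript" src="https://mysharepointurl/sites/CDN/Code/pnp-ww.js"
      ww-appName="myWidgetApp"
      ww-appType="Angular"
      ww-appScripts='[{"src": "https://mysharepointurl/sites/CDN/Code/angular.min.js", "priority": 0},
                      {"src": "https://mysharepointurl/sites/CDN/Code/myWidgetApp.js", "priority": 1}]'>
  </script>
</div>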

The key here is that the code embedded on the page is benign. Other than referencing the files that implement the solution, it really doesn’t do anything, and therefore it won’t need to be changed in order to modify the widget’s UX.

Now if we save the page and view it, we’ll see our widget. But because our files have not been published, my end users see nothing.

[Image: my view of the page while the files are at minor versions]

If I then publish all the files for this “widget” you can see that the end user now sees the same thing I do.

[Image: the end user’s view once the files are published]

So, as you can see, there are real ways to help avoid the dreaded “code creep”, from simply storing all your code in a library in the site collection to utilizing a commercial CDN.  The moral of the story is there is no one-size-fits-all answer, so you need to assess your needs and centralize your client side code in a way that makes the most sense for your environment, allowing you to manage your solutions from one location.

Sympraxis Development Process – part 1

[Image: development process]

Marc and I discussed in our August Sympraxis Newsletter starting a blog series to share what we’re learning while implementing a SharePoint client side development process.  So this is my first post on the topic, and here’s a link to his first post… it’s interesting to see how different our perspectives on the process were.

In all my previous experience I’ve either been in a team or in a regulated industry or both.  All of these scenarios dictate that you have at least some process in place, and in the case of the regulated pharma industry, rigorous processes.

I’m an organized soul in general and grew up with a mother who should have been a professional organizer and is probably a tad OCD.  I remember her doing the accounting for our family business: she had a color-coding system of pens (red, green, blue) for checking off cleared checks, deposits, and other issues in the checkbook register, and her desk was always immaculate (and still is), with her black pen, red pen, and mechanical pencil diagonally aligned across the top right corner of her blotter (which she really didn’t need, as the thing was and is pristine).  Don’t even get me started on how she “cleaned” the labels right off the knobs on the stove.

So to say that joining Marc’s rather haphazard method of source control was a shock is potentially an understatement.  But what was fabulous was that he was happy, and I might even speculate a bit excited, to have something at least a little more organized.  And further, with two of us sometimes working for the same client, and sometimes on the same project, it just really needed to happen.

Ok, so first we had to agree on source control.  We knew we were going to the cloud.  As a two-person team who work out of our homes, we don’t want to have a server footprint.  I grant you we could have spun up some Azure space and built servers, but seriously, why would we do that when there are great cloud choices, and as my friends know… I don’t do infrastructure!

Given Marc wanted absolutely NOTHING to do with Visual Studio proper as an IDE, I felt that somewhat ruled out TFS Online.  I should point out that TFS Online can be configured to use GitHub so that you can have the best of both worlds.  TFS has some other tools for managing the project and tasks in addition to source control, so if you’re working with a larger team or in a more regulated environment this may be a good choice for you.  You can find out more about the integration here.

Now that we had chosen GitHub as our repository, and I had made the switch from Visual Studio proper to Visual Studio Code for most of my development, we decided to start with a small GitHub plan.  I created a few private repos, one of which was for clients.  Within a few weeks we realized the error of our ways.  The clients repo, although nicely organized, was cumbersome to sync with since there was so much in there.  Luckily we hadn’t gotten that far and we were at that point only working on one client together.  So we upped our GitHub plan, created a repo per client, shuffled our code around, and are back on track.

The next thing we had to tackle was the absolutely, horrendously cumbersome task of modifying files and testing them in SharePoint.  As Marc explains in his first post on this topic, his process was to literally edit in place by opening the library where the files were with “Open with Explorer”, which, while he may have been fine with it, I literally couldn’t even get myself to do.  I think I may have even blacked out temporarily when I saw him do it.

However, for all this looseness in process, I did really like that he stored his files in the site collection’s master page gallery.  As he explains in his post, everyone has read access to the location, but very few should have access to actually wander into the library.  So in this, I ended up picking up Marc’s process, but that meant that instead of being able to drag and drop my file changes into the browser window I had to manually upload them… I thought I was going to lose my mind.

We started researching various ways to get the files into SharePoint using gulp.  Luckily there were some options out there; two come to mind.  One is by our respected colleague Wictor Wilen – gulp-spsync.  I think it would have been a great solution, but it requires tenant admin access, and in our experience we are almost never granted that level of access to our clients’ tenants, so we needed something else.  If, however, you’re working on your own tenant and have that level of access, it’s probably worth a look.  We then found spsave, which we found works pretty well for uploading files to SharePoint Online and SharePoint 2013 on premises, and have implemented it along with gulp-cache so we only upload files that have changed.
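As a rough illustration only (not our actual gulpfile), something along these lines is possible with the spsave node module and a gulp watch task; the credentials, folder, and option names are assumptions you should check against the documentation for the version you install.

// gulpfile.js – illustrative sketch, not our exact implementation
var gulp = require("gulp");
var spsave = require("spsave").spsave;

var creds = { username: "me@tenant.onmicrosoft.com", password: "..." };
var coreOptions = { siteUrl: "https://tenant.sharepoint.com/sites/client", notification: true };

gulp.task("watch", function () {
    // Push a file up to the master page gallery folder whenever it changes locally
    gulp.watch("src/**/*.*", function (event) {
        spsave(coreOptions, creds, { glob: event.path, folder: "_catalogs/masterpage/_client" });
    });
});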

So at this point we have a pretty streamlined process for getting the files into SharePoint as we work.  In the future we need to add more validation of the code we’re writing, such as linting and various other things… more to come as we implement.

Meanwhile, if you have specific questions, please feel free to add them in the comments and we'll attempt to cover them.

The man with the “bacon covered donut” could not be ignored

[Image: maple bacon donuts]

I recently attended a multi-day event at the Microsoft campus in Redmond, WA.  Early in the morning and blurry-eyed from battling the time change, I found my way into the event room to see Marc D. Anderson in an aisle seat plugging away at his computer with a mouthwatering bacon covered donut sitting daintily on a paper napkin at his elbow… that, my friends, is a conversation starter!

Sometimes in life, things are just obvious really quickly.  I had met Marc years ago through our mutual friend Sadie Van Buren but really didn’t get to know him very well; as is the case with many developer types, I’m way more comfortable talking to my computer than I am talking to people… and I do, sadly, literally talk to my computer, as many of my current and former colleagues can attest.

So there, at that Microsoft event over a bacon donut, we started a new conversation and got reacquainted.  In this rather short time, it became glaringly obvious that I needed to make the move and join Sympraxis Consulting.  I have had a wonderful journey at BlueMetal and I cannot say enough good things about the organization as a whole.  They have super talented people who do amazing work, more cutting edge consulting than any group I’ve ever worked with and I learned so much through that association.  But it was time for me to take on a new challenge, and spread my wings a bit more. 

So I’m off, and I think the future looks amazingly bright.  With our combined skills Sympraxis has a ton to offer organizations looking to implement, improve, and expand their SharePoint platform be it on premises or in the cloud through Office 365 and Azure.  I’m really excited to be joining Marc and know not only are we going to do great work, but we’re going to have an absolute blast doing it.

Widget Wrangler Webcast and New Release


(Cross posted at Bob German's blog, Bob German's Vantage Point)

Here’s a quick update on the Widget Wrangler – the light-weight JavaScript framework that helps you build flexible widgets that can be used in SharePoint content editor web parts, add-in parts, or really pretty much everywhere.

[Image: Widget Wrangler webcast on Channel 9]

The Widget Wrangler was featured in a webcast on Channel 9 today. The Office team’s Vesa Juvonen interviewed WW creators Julie Turner and Bob German, who explained the framework and demonstrated how to use it with AngularJS, jQuery, and plain old JavaScript. Please check it out here!

Also today we’re pleased to announce the release of Widget Wrangler version 1.0.1. This new version is backward compatible with the old one; the new release includes:

  • CSS Support – Allows packaging CSS references from within your widget; the Widget Wrangler will efficiently load each CSS file once on each page, even if it’s referenced by multiple widgets
  • Multi-module support – Allows bootstrapping multiple AngularJS modules within a widget (thanks to Peter Wasonga for the feature suggestion; Peter writes widgets in Kenya)
  • A new TypeScript sample; the Widget Wrangler works the same with TypeScript or JavaScript; this is mainly useful to show how to develop an AngularJS widget in TypeScript
  • Improved/reorganized documentation

You can get the new release on our Github repo at https://github.com/Widget-Wrangler/ww. The Widget Wrangler is also a part of the Microsoft OfficeDev Patterns and Practices library, and will be updated there in the next PnP release.

Thanks to everyone for your interest and support, and happy widget writing!

Flexible SharePoint Development with Widget Wrangler

(Cross posted at Bob German's blog, Bob German's Vantage Point)

What’s a widget, and why should I care?

In economics, a widget is a name for a generic gadget or manufactured good; on the web, a widget is a generic piece of web functionality running on a page. What makes widgets special is that, unlike controls in ASP.NET or directives in AngularJS, widgets are generally released separately from the web page that hosts them, and are often deployed by end users.

If you’re reading this blog, you probably know something about Microsoft SharePoint, and this might sound familiar. A widget is a lot like a web part, only much lighter weight. In fact, widgets can easily be hosted in content editor web parts, on a list form, in a SharePoint add-in, or outside of SharePoint. If you're careful, you can reuse the same widget in all those contexts!

This work comes out of projects that Bob German and I have done at BlueMetal; for example, I used widgets when I developed the web parts on BlueMetal's Office 365 intranet. The approach was to apply light branding with widgets, each widget running in a content editor web part.

[Image: Widgets on the BlueMetal intranet]

The widgets in the screen shot are:

  1. News feed (driven by SharePoint content)
  2. My Clients and Projects (shows links to the user's current consulting projects)
  3. Tabbed Calendar, Community, and Discussions (driven by SharePoint content)
  4. Tabbed New Hires and Anniversary carousel (driven by SharePoint content)
  5. Twitter feed

They're all written in HTML and JavaScript, and work equally well on premises or in Office 365. Each widget is an AngularJS application that can be versioned independently and dropped on any page in SharePoint. But, unlike Add-in parts, there are no IFrames. The widgets don't have to run in content editor web parts – they can run on any web page, so they're much more flexible.

So the answer to the question, "Why should I care?" is because widgets give you:

  • Flexibility: Widgets can be versioned independently and moved around freely on web pages in and out of SharePoint
  • Reusability: Widgets allow one code set to run in a web part, on a SharePoint form or page, or outside of SharePoint
  • Maintainability: Widgets written in an MV* framework like Angular or Knockout are easier to test and maintain

Any snippet of HTML with JavaScript can be considered a widget, however good widgets have additional attributes:

  • They're isolated so they won't interfere with the web page that hosts them, or with other widgets on the page. Ideally multiple copies of a widget can run on a page with no interference.

  • They load efficiently so users don't have to wait a long time for them to render on the page.

  • They're self contained so they can be reused easily. A widget that depends on various script tags, CSS files, and other elements on a page is more brittle and harder to reuse than a widget that is contained within a single HTML element.

  • They're developed using the power of modern JavaScript frameworks such as AngularJS for supportability and testability. (This is purely optional, however, and this article will also explore widgets written in jQuery or plain JavaScript.)

Introducing Widget Wrangler

Today my colleague Bob German and I are pleased to announce a new, light-weight JavaScript library for managing widgets called the Widget Wrangler. It's available now on Github for your widget wrangling pleasure. It's also part of the new JavaScript Core in the January 2015 release of Microsoft's OfficeDev Patterns and Practices library (hence the file name pnp-ww.js).

Widget Wrangler:

  • Helps avoid interference with the hosting page and other widgets
  • Loads scripts efficiently across all widgets on the page
  • Allows widgets written with MV* frameworks such as AngularJS and KnockOut, as well as plain old javascript.
  • Helps isolate your code and UI for easy portability to multiple platforms and environments

A widget consists of a single HTML element (the widget root – usually a <div>) that contains HTML for the widget, and a script tag that references the Widget Wrangler. The script tag includes custom attributes that tell Widget Wrangler what JavaScript to load and how to "boot" the widget.

For example:
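The original example isn’t included here; the following is a representative sketch based on the attribute descriptions below, where myWidget, the controller binding, and the script paths are placeholders.

<div>
  <div ng-controller="main as vm">{{vm.message}}</div>
  <script type="text/javascript" src="pnp-ww.js"
      ww-appName="myWidget"
      ww-appType="Angular"
      ww-appScripts='[{"src": "angular.min.js", "priority": 0},
                      {"src": "script.js", "priority": 1}]'>
  </script>
</div>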

The Widget Wrangler (pnp-ww.js) will load in-line, and will take care of loading the scripts the widget needs (in this case Angular and script.js) and bootstrapping the AngularJS application. The custom attributes tell Widget Wrangler how to load the widget:

  • ww-appname (required) – Used to create a name for the app. In the case of an Angular widget, this is the module that will be passed to the angular.bootstrap function when starting the widget.
  • ww-apptype (optional) – Currently "Angular" is the only supported framework that will auto-bind upon load completion.
  • ww-appbind (optional) – The function to be executed when all the script files have completed loading.
  • ww-appscripts (required) – A JSON object that defines the JavaScript files the widget needs in order to run.

NOTE: It is necessary to specify ww-apptype (for an Angular widget) OR ww-appbind (to do the binding yourself).

The ww-appscripts element contains a JSON object that tells Widget Wrangler what scripts to load before bootstrapping the widget. This is a collection of objects in which each object contains properties as follows:

  • src (required) – URL of the script to be loaded; this can be absolute, relative to the page, or, using a tilde prefix, relative to the pnp-ww.js script (for example, src=~/myscript.js).
  • priority (required) – An integer indicating the order in which the script should be loaded: priority 0 scripts are loaded first, then priority 1, and so on. Priorities must begin at 0 and not skip any numbers, and scripts in the collection are expected to be in priority order. Multiple scripts can be declared at the same priority level in order to load them concurrently.

A widget can either run as an AngularJS application, which is bound to the widget root, or use a custom binding function specified in the ww-appbind attribute. In the latter case, the widget root DOM element is passed to the binding function so the widget can access the browser DOM relative to the widget root instead of having to find it on the page. This helps to isolate the widget. For example, it's common practice to hard-code an HTML element ID and then find it with jQuery; this works fine for one widget, but breaks as soon as more than one widget on the page uses the same ID.

Widget Wrangler has no dependencies on SharePoint or other script libraries, and works with the same browsers as AngularJS. IE8, which is only supported by a special build of AngularJS 1.3/1.4, is not currently supported – ergo it will not work with SharePoint 2010 which forces the pages to run in IE8 emulation mode. Widget Wrangler works with the same browsers as SharePoint 2013.

Widget Wrangler tries to load the scripts needed by each widget as efficiently as possible, and will only load each script once even if it's used in multiple widgets. (NOTE: The current implementation determines what scripts are the same using their URL; a future version may be smart enough to identify multiple versions of common libraries at different URL's.) Use the "priority" property in the ww-appscripts attribute to control parallel script loading; for example all priority 0 scripts will load in parallel, followed by priority 1 scripts, etc. Priority numbers must begin at 0 and must be contiguous (i.e. 0, 1, 2…) In the example above, script.js depends on AngularJS, so AngularJS is given priority 0 (and loads first), and script.js is loaded only when Angular (and any other priority 0) scripts are loaded.

The main repository for the Widget Wrangler is here; it's also a part of the OfficeDev Patterns and Practices Library here. Please use the main repository for access to the Widget Wrangler tester and for pull requests.

Widgets and JavaScript Frameworks

Widgets can be written using any number of JavaScript frameworks; this section will explore some of the most popular.

AngularJS Widgets

AngularJS is a favorite framework to use with widgets, mainly because of its MV* design pattern and rich selection of services and directives. However AngularJS was really designed for single-page applications (SPAs) that take over an entire web page. A typical AngularJS application is "auto-bootstrapped" using the ng-app directive; while this is fine for SPAs, the documentation clearly states that you can only have one ng-app directive on a page.

To get around this limitation and allow many widgets on a page, the Widget Wrangler uses the angular.bootstrap() function; there is no hard limit on the number of Angular apps that can run on a page using this method.

(NOTE: If you want to use Widget Wrangler in a page that already uses AngularJS, ensure that the widget doesn't overlap the existing Angular application – i.e. it can't be inside the element that is decorated with ng-app. Also ensure the versions of Angular are the same or similar enough that both the SPA and widget(s) will work with either one.)

You can find a simple AngularJS widget at http://bit.ly/ww-ng1. This sample uses Plunker so you can run and experiment with the code right in your web browser. In this sample you'll see two instances of a Hello World widget which vary only in their view so one of them says goodbye instead of hello. This shows how to embed the view right into the widget so you can make each instance render differently.

A more advanced example can be found at http://bit.ly/ww-ng2. This example shows a weather forecast, and demonstrates how to pass configuration information – in this case the location of the weather forecast – into the application via the ng-init directive in the view. It also shows how to use ng-include to place the view in an HTML template so it's shared by all instances of the widget.

[Image: Weather Widgets]

Here is the markup for one of the weather widgets:
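The markup itself isn’t reproduced here; a sketch along the lines described above might look like the following, where the controller, template file, and location value are placeholders.

<div>
  <div ng-controller="weatherController as vm"
       ng-init="vm.init('Boston, MA')"
       ng-include="'weatherView.html'"></div>
  <script type="text/javascript" src="pnp-ww.js"
      ww-appName="weatherWidget"
      ww-appType="Angular"
      ww-appScripts='[{"src": "angular.min.js", "priority": 0},
                      {"src": "weather.js", "priority": 1}]'>
  </script>
</div>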

The Angular controller includes a function to fetch the weather forecast as soon as Angular processes the ng-init binding:
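And a sketch of the controller side, with the module name, forecast service URL, and response shape all assumed for illustration:

angular.module("weatherWidget", [])
    .controller("weatherController", ["$http", function ($http) {
        var vm = this;
        vm.forecast = null;
        // Called from ng-init in the view, so the fetch starts as soon as the binding is processed
        vm.init = function (location) {
            $http.get("https://example.com/api/forecast", { params: { q: location } })
                .then(function (response) {
                    vm.forecast = response.data;
                });
        };
    }]);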

A third example at http://bit.ly/ww-ng3 shows how to connect two Angular widgets. This is accomplished via a service that relays messages in the form of JavaScript objects from senders to receivers over named channels.

If you look at the code you may notice that this service communicates via a shared object that hangs off the window object. Normally in Angular a service could store such an object locally, and the service (declared as a factory) would be shared by all who reference it. But that doesn't work here since each widget is a completely separate Angular application. Modules, services, etc. with the same names are all isolated completely within each widget, and Angular does a great job keeping them separate. In the sample, each sender and receiver widget gets its own service instance, so information is shared outside of Angular in the window object.

Knockout Widgets

KnockoutJS is another great example of an MVVM style JavaScript library. There's an example of simple Knockout widgets at http://bit.ly/ww-ko1. There are two instances of the widget on the page to demonstrate isolation; here is one of the widgets:
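Again as a sketch rather than the sample’s exact markup (the view-model property and script names are placeholders):

<div>
  <div>Hello, <span data-bind="text: name"></span>!</div>
  <script type="text/javascript" src="pnp-ww.js"
      ww-appName="myWidget"
      ww-appBind="myWidget.Load"
      ww-appScripts='[{"src": "knockout.js", "priority": 0},
                      {"src": "script.js", "priority": 1}]'>
  </script>
</div>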

Notice that this time the ww-appBind attribute is specified; this contains the binding function myWidget.Load. script.js contains this function:
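A sketch of what that binding function could look like; Widget Wrangler passes the widget root element in, and the view model here is illustrative.

var myWidget = myWidget || {};

myWidget.ViewModel = function () {
    this.name = ko.observable("world");
};

myWidget.Load = function (element) {
    // Bind a brand-new view model to this widget's root element only
    ko.applyBindings(new myWidget.ViewModel(), element);
};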

Notice how the binding function uses the new keyword to make a new ViewModel object for each widget; without this, isolation would be lost and all the widgets would share the same ViewModel and data.

jQuery Widgets

Here's an example that not only shows a jQuery widget, but demonstrates how to take existing jQuery code and make it into a Widget. In this case, it's based on this jQuery UI example of a color picker. The original sample includes several references to specific element ID's, so the code would need to be modified to handle more than one color picker on a page.

[Image: jQueryUI sample made into a widget, now supporting multiple instances on a page]

You can see the widget version at http://bit.ly/ww-jq1. As you can see, there are two instances of the widget on the page; all the code is shared yet they work independently. To make this work, the following code changes were needed:

  • Change the element ID's to classes, so it's legal to have more than one
  • Add a bootstrap function similar to the Knockout example, that creates a new "controller" for each widget instance
  • When the widget bootstraps, pass the element into the jQuery code and reference the elements relative to the element. For example, $('#red') becomes $(element).find('.red')
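To make the last point concrete, here is a sketch of that kind of binding function (the widget object and class names are illustrative): the root element is passed in, and every selector is run relative to it rather than against the whole page.

var colorWidget = colorWidget || {};

colorWidget.Load = function (element) {
    var $root = $(element);
    // Original sample: $('#red'); widget version: scope the lookup to this instance only
    var $red = $root.find(".red");
    var $green = $root.find(".green");
    var $blue = $root.find(".blue");
    // ...initialize the jQuery UI sliders and swatch against $red, $green, and $blue...
};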

Plain JavaScript Widgets

Sometimes less is more, and plain JavaScript is better and faster than using even a light-weight library like jQuery. If you want to use Widget Wrangler on its own, without any other libraries, check out the example at http://bit.ly/ww-js1. This is a widget that Ford Prefect would love!

Notice that it uses the new keyword in the binding function to create a new object for each widget instance. It also generates a unique index for each instance that's used in a button click attribute. This index is passed into the click event handler to allow it to find the correct instance when the event fires.

Widgets in SharePoint

The Patterns and Practices library includes an example that shows how to use widgets in various kinds of SharePoint projects. The example is a Microsurvey that asks a single question, then shows a simple graph of all the responses to that question.

[Image: Microsurvey Widget – Question and Results Views]

The example can be packaged and deployed three ways:

  • As a SharePoint Hosted Add-in
  • Directly in a SharePoint site using drag-and-drop deployment by an end user
  • Directly in a SharePoint site using PowerShell deployment from a central site, so a single copy of the solution can be used in many sites. This has the advantage that the solution can be updated in one place and the change will be immediately available in all sites.

The solution includes a web part and custom new, edit, and display forms for managing the list of questions. It's also smart enough to deploy its own list storage using JavaScript, so the questions and answers lists are generated the first time the solution is used.

Widgets allow a high degree of reuse in this example. For example, the code to display a question is written as a widget; it appears in the web part (or add-in part), and in the New and Edit forms. Thus one copy of the widget is used in 3 places, reducing code duplication and allowing all of them to be updated by editing the common code.

For a deep dive on the Microsurvey sample, including a quick introduction to AngularJS, check out Bob's Collab365 talk, Building Flexible SharePoint Solutions with AngularJS. This will show you various ways of using and deploying widgets in SharePoint; however, it uses the precursor to Widget Wrangler, which was called InitUI.js. The sample code in GitHub has since been updated to use Widget Wrangler.

The Widget Wrangler Manifesto

The Widget Wrangler is open source, and we welcome suggestions and pull requests at https://github.com/Widget-Wrangler/ww. (Please submit pull requests against the dev branch!) If you're thinking of contributing, please keep these points in mind. Widget Wrangler:

  1. Has no dependencies on any other scripts or frameworks
  2. Is easy to use
  3. Minimizes impact on the overall page when several instances are present
  4. Matches AngularJS 1.x browser support
  5. Is tested and works well with SharePoint Online and SharePoint 2013 or greater, however it in no way depends on SharePoint

Widget Wrangler Tester

The Widget Wrangler main repo includes a test program that makes it easy to exercise the library with a large number of widgets on a page.

[Image: Widget Wrangler Tester]

The test program is written in ASP.NET, and it dynamically generates test scripts and Angular applications that check to ensure that dependencies are loaded, and that track the elapsed time during the test. To run it, start the WWBase project in Visual Studio on the Test/TestPage.aspx page.

Enter your scenario in the text box on the left side of the page. Each line in the scenario is a widget entered in the form:

In this example the tester will fabricate two scripts, and set up the Widget Wrangler to first load Script1, then Script2, and then bootstrap the application called AppName. Here's the widget the test program would generate for this line in a scenario:

You can test parallel script loading by using parenthesis; for example:

will generate a widget that loads scripts S1 and S2 in parallel, then loads S3 when both of those have loaded.

The test program shows an index for each widget to demonstrate that each one is isolated, and a blinking asterisk to show that the data binding continues to work after all the widgets are loaded. On the right of the screen, you can see a log of scripts loaded and the timings.

Backlog

Here are some of the enhancement ideas on our backlog; please comment and help us set our priorities!

  • Smarter detection of duplicate or already loaded scripts (e.g. AngularJS loaded from two different URL's)
  • Version number checking for libraries such as Angular and jQuery, so a widget can declare the range of versions it supports; possible co-existence of multiple library versions (See this proof of concept)
  • Angular 2.0 support
  • Diagnostic widget you can add to a page to show load sequence, timings, and exceptions
  • IE 8 support (to have parity with SharePoint 2013 browser support)

 

JSLink Validation – from Basic to Advanced

Custom field validation using JSLink is an extremely powerful beast. In this post I’m going to make an effort to demystify the different levels of validation you can put into your custom template and how to put it all together. Everything I’m about to cover has been covered before, in different ways and in different combinations. My hope is that this will help separate out what’s needed and what’s not depending on your scenario… so to that end I’ll cover three scenarios. Basic: OOB validation that is custom applied; by that I mean you want to optionally make a field required just like SharePoint does, but you want to control when it’s required.  Custom: a custom validation function that renders its error message just like OOB validation error messages are rendered.  And finally, Advanced: not only do you want to write a custom validation, you also want to control how the error state is communicated to the user.

So let’s start at the beginning and we’ll build on the solution from there. First I want to establish the framework for the solution:
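The framework itself isn’t reproduced here, so below is a sketch of the kind of template override shell the rest of this post assumes; the field name (TaskOwner) comes from the examples later on, and everything else is illustrative.

(function () {
    function editTaskOwner(ctx) {
        // ...build the field's HTML and register validators here (shown below)...
        return "<div id='TaskOwnerDiv'></div>";
    }

    var overrideCtx = {
        Templates: {
            Fields: {
                "TaskOwner": {
                    "NewForm": editTaskOwner,
                    "EditForm": editTaskOwner
                }
            }
        }
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx);
})();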

Basic

Basic validation is fairly straightforward. You would simply add this code inside your custom field rendering function (editTaskOwner).

First set up the form context and then create a new “ValidatorSet”:
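The snippet isn’t included above, so here’s a sketch of those two lines:

var formCtx = SPClientTemplates.Utility.GetFormContextForCurrentField(ctx);
var validators = new SPClientForms.ClientValidation.ValidatorSet();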

In the next line we add the new validator to the validation set:
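For the basic scenario that’s the out-of-the-box required-field validator:

validators.RegisterValidator(new SPClientForms.ClientValidation.RequiredValidator());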

And then lastly, we attach the validation set to the field. In the case of this example I’m using formCtx.fieldName… but this could obviously also be a simple string. I bring this up, because there are limitations on what types of fields you can customize using Custom Templates, namely Taxonomy fields… this is a way to add validation to them from somewhere else in your code.
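Something like:

formCtx.registerClientValidator(formCtx.fieldName, validators);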

Note: If you’ve noticed I skipped line 4, more on that later.

The Result

Custom

If you want to write your own validation then you need to do a few extra steps.

Create the custom validation function. This function goes within your custom template file but outside of the field’s custom render function (see the framework at the top).
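Here’s a sketch of what such a validator could look like; the specific check (requiring a value for the Task Owner people picker) is illustrative.

function taskOwnerValidator() {
    this.Validate = function (value) {
        var hasError = (!value || value.length === 0);
        var errorMessage = hasError ? "You must specify a Task Owner." : "";
        // ValidationResult(hasError, errorMessage) is what the validation framework expects back
        return new SPClientForms.ClientValidation.ValidationResult(hasError, errorMessage);
    };
}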

Modify the RegisterValidator call
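In other words, register an instance of the custom validator instead of (or alongside) the OOB one:

validators.RegisterValidator(new taskOwnerValidator());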

 (Optional) Depending on how you render the field you may have to add the following code. What I mean by that is: if you use one of the OOB field rendering functions you do not need this line; if you develop your own layout then you will need it to “attach” the error message to the right object in the DOM. In this example my custom people picker field renders html wrapped with <div id=”TaskOwnerDiv”></div>, so I need to reference the div’s ID in the SPFormControl_AppendValidationErrorMessage call.
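As a sketch, that wiring might look like this, with the OOB error message appended to the TaskOwnerDiv element:

formCtx.registerValidationErrorCallback(formCtx.fieldName, function (errorResult) {
    // Attach the standard error rendering to our own wrapper element
    SPFormControl_AppendValidationErrorMessage("TaskOwnerDiv", errorResult);
});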

The Result

Advanced

So, if that didn’t seem advanced enough for you, the last scenario is that you may want to customize how the “error” is displayed to the user. Maybe you want to display an image, or collect all the validation messages into one area. That’s possible by doing the following:

Write custom error rendering code. This code needs to be completely outside of the custom rendering template code. Here’s a really basic example.
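For instance, a sketch that writes the message into the span rendered next to the field elsewhere in this series (treat the selector and the error object's property name as illustrative):

function taskOwnerErrorRender(error) {
    var errorSpan = document.querySelector("#TaskOwnerDiv + .etmRequiredField");
    // Show the message while in error, clear it otherwise
    errorSpan.innerHTML = error.errorMessage ? "<span role='alert'>" + error.errorMessage + "</span>" : "";
}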

Modify the registration of the error callback, which causes your custom function to be fired if the isError flag is true.
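Something along these lines:

formCtx.registerValidationErrorCallback(formCtx.fieldName, taskOwnerErrorRender);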

The Result

So, as you can see custom form validation is extraordinarily powerful with Custom Templates and can allow you to really take SharePoint to the next level.

JSLink Custom User Field Schema

I had the requirement of setting the default value of a person field to the current user.  After looking around in the great wide internet I found a very helpful article by Glenn Reian which got me started.

Where I ran into a problem was that my user field had customized settings that weren't being pulled through into the custom implementation of my people picker.  As it turns out the issue was with the schema that is passed to the SPClientPeoplePicker_InitStandaloneControlWrapper function.  In Glenn's example (and every other example I found out there) this schema is hard coded, which is perfectly acceptable in most cases.  However, I needed some values to be slightly different to adhere to my column settings.  

As it turns out there are two solutions.  The first, obvious one is to adjust the schema manually in the code, and again this may be a fine solution.  But like Glenn, I had separated my concerns and created what I hoped was a fairly reusable version of initializePeoplePicker.  So now I needed to enhance that function to pass through addendums to the schema, or maybe its own schema.

What I found was something I wasn't quite expecting.  The schema I needed was actually right there in the context variable in JSLink.  So, using Glenn's implementation and extending it slightly, I just modified initializePeoplePicker to the following:

var initUserDefaultPeoplePicker = function (ctx, peoplePickerElementId, ppSchema) {
    if (ppSchema === null) {
        ppSchema = {};
        ppSchema['PrincipalAccountType'] = 'User';
        ppSchema['ShowUserPresence'] = true;
        ppSchema['SearchPrincipalSource'] = 15;
        ppSchema['ResolvePrincipalSource'] = 15;
        ppSchema['AllowMultipleValues'] = false;
        ppSchema['MaximumEntitySuggestions'] = 50;
        ppSchema['Width'] = '280px';
    }
    var uri = _spPageContextInfo.webAbsoluteUrl + "/_api/SP.UserProfiles.PeopleManager/GetMyProperties";
    // getAjax is assumed to be a simple GET helper (e.g. a jQuery $.ajax wrapper) that returns a promise
    getAjax(uri).done(function (user) {
        // Set the default user by building an array with one user object
        var users = new Array(1);
        var currentUser = new Object();
        currentUser.AutoFillDisplayText = user.DisplayName;
        currentUser.AutoFillKey = user.AccountName;
        currentUser.Description = user.Email;
        currentUser.DisplayText = user.DisplayName;
        currentUser.EntityType = "User";
        currentUser.IsResolved = true;
        currentUser.Key = user.AccountName;
        currentUser.Resolved = true;
        users[0] = currentUser;
        // Render and initialize the picker
        SPClientPeoplePicker_InitStandaloneControlWrapper(peoplePickerElementId, users, ppSchema);
    });
    });
};
and then from the custom rendering function for the user field I passed the schema associated with the field through:
function efTaskOwner(ctx) {
    var retVal = '<div id="TaskOwnerDiv"></div><span class="etmRequiredField"></span>';
    // Kick off the picker initialization, passing the field's own schema (ctx.CurrentFieldSchema) through
    initUserDefaultPeoplePicker(ctx, 'TaskOwnerDiv', ctx.CurrentFieldSchema);
    return retVal;
}

SharePoint 2013 JSLink – All Fields Rendered

While creating a custom Client Template using JSLink, I came up against the issue of knowing when all the fields were rendered on the form.  To explain where the issue arises, let me first take just a moment to explain that when building a custom template for this type of form, where you want to manipulate the fields, you have available to you both a Pre and a Post Render function.  These fire the function attached to them either before or after each custom field rendering is executed.

The reason I bring this up is that there could be some misconception that they fire before field rendering starts and after all field rendering is complete, but that’s not the case. So if your form has 10 fields, these functions will each fire 10 times.  I also found document.ready to be unreliable, as it often fired before all the fields were rendered; further, if I needed to make decisions based on the context of the form, I would no longer have access to that information.

So, the solution does in the end involve the OnPostRender function of the Template Override, but what you do there is what counts. So just to put everything in context, and for brevity, here is the shell of the custom Client Template file.  Note the declaration of the postfields variable inside of My.CustomTemplate.
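The shell isn’t included here, so below is a sketch of the general shape; My.CustomTemplate and postfields come from the post, while the rest is a representative reconstruction.

var My = My || {};

My.CustomTemplate = (function () {
    // Counts how many fields have completed rendering on the form
    var postfields = 0;

    function onPostRenderTemplate(ctx) {
        // filled in below
    }

    var overrideCtx = {
        Templates: {
            Fields: { /* ...custom field rendering functions... */ }
        },
        OnPostRender: onPostRenderTemplate
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx);
})();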

Ok, now we need to fill in the onPostRenderTemplate function.  Primarily, we need to know when we’ve gotten through all the fields on the form. This is accomplished by incrementing the "global" postfields variable within the onPostRenderTemplate function.  The question is what are we testing it against to know when we've rendered all the fields.

The answer is the JavaScript Object.keys() method, which seems to be fairly well supported.

The Object.keys() method returns an array of a given object’s own enumerable properties, in the same order as that provided by a for...in loop (the difference being that a for-in loop enumerates properties in the prototype chain as well).

Ergo, if you take Object.keys(ctx.Template.Fields) and get its length, that gives you the number of fields on the form that will be “rendered”, and provides you a way of telling when the last field has been rendered.
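Putting that together, the post-render check could look something like this sketch (allFieldsRendered is a placeholder for whatever you want to run once everything is on the page):

function onPostRenderTemplate(ctx) {
    postfields++;
    // Compare against the number of fields this template will render
    if (postfields === Object.keys(ctx.Template.Fields).length) {
        allFieldsRendered(ctx);
    }
}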

So now I can execute some fancy functions to do things like:

Hide Fields
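For example, a sketch of hiding a field’s row on a classic form by its label (the label text is illustrative):

$("nobr:contains('Task Owner')").closest("tr").hide();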

Modifying the field’s label to make it look like it is required
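Or, as a sketch, appending the same indicator SharePoint uses for required fields to the label:

$("nobr:contains('Task Owner')").append("<span class='ms-accentText' title='This is a required field.'> *</span>");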

or some other post-rendering customization based, as I stated, on values in the ctx variable.