Creating a property pane for editing items in your SPFx web parts

It’s a great privilege and great fun to work with the exceptional team at Shire that’s building a bleeding-edge intranet to support their now 24,000 employees and growing. The team is exploring some very new territory and learning a lot along the way. During last week’s webinar, Microsoft’s Mark Kashman promised we’d post some of the lessons learned in the project. It’s my pleasure to share with the greater world a tidbit of that knowledge: how to create a property pane for an individual item, not just for the web part itself, using the SharePoint Framework (SPFx). Bob German has also done several blog posts on the lessons learned; you can start reading them at Bob German’s Vantage Point.

In the custom web parts being built, the UX team decided upon a configuration that includes both a basic and an advanced mode. The advanced mode sources the items to be displayed in the web part from a list. I’m not going to talk about that here; what I’m going to address is the idea of configuring all the items in the web part itself by adding a property pane specifically designed to add or edit one of those items. This is separate from the web part property pane in which you would configure overarching properties of the web part, such as layout or title.

I’ve created a simple example (lacking in everything but the necessary functionality) to illustrate this concept: a web part that displays a set of links. It doesn’t really matter how I render those links (buttons, an unordered list, etc.); the point is that I have an array of link items that are curated through the property pane, not an external list. SharePoint’s own modern hero web part does this, so it shouldn’t be that hard, right?! It took our team member Mike Tolly a good amount of reverse engineering to figure it out… and now his pain is our gain! Sorry Mike!!!

Within our web part we build a React component that has a set of properties. Those properties include things like linkItems, which is the array of items I want to show, and functions for working on that array like editItem, deleteItem, rearrangeItems, etc.

Below is the code from this simple example. Inside the class definition for my web part I’ve added a property for the activeIndex of the item being edited, updated the render function to create my SpfxItemPropPane component, and created two separate property pane configurations in getWebPartPropertyPaneConfiguration and getItemPropertyPaneConfiguration. The real meat of the solution is in the protected getPropertyPaneConfiguration function, where I decide to render the item property pane if the property pane is being opened from code rather than by the web part edit button. Obviously, if you wanted even more item property panes you could add additional logic and properties to determine which property pane to show.
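To make the decision logic concrete, here is a minimal, framework-free sketch of the idea. Only activeIndex and the two configuration function names come from the description above; the config shape and openItemPane are my own simplified stand-ins, not the real SPFx types.

```typescript
// Sketch of the pane-switching idea (not the full SPFx web part class).
// An activeIndex of -1 means no item is being edited, so the normal web
// part pane is shown; anything else means the item pane was opened from code.

interface IPropertyPaneConfig {
  pages: { header: { description: string } }[];
}

class LinksWebPartSketch {
  // Index of the link item currently being edited; -1 = none.
  private activeIndex = -1;

  // Called from the React component when the user clicks "edit" on an item.
  // In a real web part you would then call this.context.propertyPane.open().
  public openItemPane(index: number): void {
    this.activeIndex = index;
  }

  // Mirrors the decision made in getPropertyPaneConfiguration().
  protected getPropertyPaneConfiguration(): IPropertyPaneConfig {
    return this.activeIndex >= 0
      ? this.getItemPropertyPaneConfiguration()
      : this.getWebPartPropertyPaneConfiguration();
  }

  private getWebPartPropertyPaneConfiguration(): IPropertyPaneConfig {
    return { pages: [{ header: { description: "Web part settings" } }] };
  }

  private getItemPropertyPaneConfiguration(): IPropertyPaneConfig {
    return { pages: [{ header: { description: "Edit link item" } }] };
  }
}
```

In the real web part the same if/else lives inside the protected getPropertyPaneConfiguration override, and the item pane’s fields are bound to linkItems[activeIndex].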

To complete the picture, my SpfxItemPropPane component tsx looks like this:

To get the complete solution, please visit my GitHub repo.

Hopefully, even though this is a very simplified example, it will get you started if you’re looking to create multiple property panes in your SPFx web part. Happy Coding!

SharePoint + Flow (+ Azure Functions): Launching a Microsoft Flow from Client-Side Code

The requirement seemed deceptively simple… and it was, somewhat… simple. I’ll start by showing you how very simple it is to launch a Microsoft Flow (“flow”) from your client-side code hosted, well… wherever. I will give you this caveat: launching a flow this way requires no authentication. The URL is entirely obscure, but if you’re concerned that the flow you’re starting does something you only want to allow authenticated users in your organization to do, then you may want to rethink this. As with all security issues, you need to assess and balance security with risk. I suppose that’s true of life too.

Creating a Microsoft Flow that can be launched from the client

One of the most common uses of workflow, at least for me and my clients, is to send email notifications. Microsoft Flow is excellent at this, with the caveat that the email cannot be sent on behalf of the user running the workflow unless the account that owns the send-email step in the flow has permission to send on that user’s behalf. That is to say, there isn’t a way to send email from the authenticated user with the Outlook connector without the appropriate permissions.

Ok, so let’s say you’d like to send an email notification from your client-side application running in SharePoint. The idea is that you would hand over to flow the information about how to compose the email, and then it would do the rest. As my 5-year-old likes to say, easy peasy lemon squeezy. There’s even a walk-through of doing just this from Irina Gorbach on the Microsoft Flow Blog: Calling Microsoft Flow from your application.

To add to that post just a bit: the Request connector has a section for advanced parameters. The “Method” is a “POST” by default; you can certainly specify this explicitly if you want. If you’re not passing parameters in your scenario, you also have the option of using a GET. Also, depending on your application, there is a second parameter called “Relative path”. That’s used to specify your parameter on the path while using the “GET” method, which could be used for advanced routing in SPA applications. A more in-depth post in the Azure Logic Apps documentation can help you understand this scenario better: Call, trigger, or nest workflows with HTTP endpoints in logic apps.

Also, you may want to consider adding a “Response” action to your flow, also outlined in the aforementioned Azure Logic Apps documentation, to tell your client-side code what happened. If you don’t, the flow will return a status of 202 (Accepted) by default.
Just to reiterate: once you have the flow done, you simply add an ajax/$http/<your favorite implementation of XMLHttpRequest> request to your client-side code, like you would for any other REST call. Unlike with SharePoint calls, though, you will not need to add tokens to the header to make a POST call. Using the AngularJS $http provider, the call would look like:
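A hedged sketch of what that call might look like. The trigger URL below is a placeholder you would copy from your flow’s Request trigger, and the payload fields are my own examples; the helper just builds the config object in the shape the $http provider expects.

```typescript
// Placeholder -- copy the real URL from the Request trigger in the flow designer.
const flowUrl =
  "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?<query>";

// Builds a request config in the shape AngularJS's $http provider accepts.
// No auth tokens are needed; the obscure URL is the only gate (see caveat above).
function buildFlowRequest(url: string, payload: object) {
  return {
    method: "POST",
    url: url,
    headers: { "Content-Type": "application/json" },
    data: payload
  };
}

// Usage inside an AngularJS controller/service where $http is injected:
//   $http(buildFlowRequest(flowUrl, { to: "user@contoso.com", subject: "Hi" }))
//     .then(function (response) {
//       // 202 Accepted unless the flow has a Response action
//       console.log(response.status);
//     });
```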

User Context for Microsoft Flow – The new elevated privileges

To keep with my lemon theme: if the lemons are that you cannot execute actions as the current user from a flow, the lemonade is that flow provides us developers with the perfect vehicle for executing work with elevated privileges. Given how much we can do through client-side code, all as the currently authenticated user, I’m personally quite happy to make this trade, especially with the addition of launching Azure Functions as part of my flow. In my next scenario, let’s discuss the idea of adding a Help Desk Request widget to the home page of every site collection in SharePoint. This reusable bit of code would be an excellent candidate for an SPFx web part, but to keep the complexity level down I’m going to discuss it from the perspective of creating a basic client-side web part using the standard methods I often discuss, which is to say using a SEWP/CEWP to put a bit of HTML/CSS/JavaScript on the page. The solution is basically a form with a button that allows the user to enter an issue and submit it to a Help Desk list in another site collection that is secured to those who run the help desk.

When the user clicks submit, we’d like to launch a flow that inserts an item into the Help Desk Request list, where the user submitting the issue doesn’t even have read rights. To do this I’ve decided to create another O365 user called “Help Desk” that will act on behalf of the help desk. That user has been given contribute rights to the Help Desk Request list. Yes, I’m absolutely aware that taking this action will require another monthly fee for that user, and I have to say I really wish there were a “service” account-level user we could add that could access an email box, get access to SharePoint, etc., and would either be free or available at a significantly discounted monthly rate… sadly there is not.
Note: Although there is the concept of an unlicensed user that is a service account per se, the level of privileges that user would then have is way too high. Further, flow will not recognize it as a valid connection.
You could also do this with any other user that has access to the Help Desk Request list. However, please keep in mind that if that user ever leaves, or their account is removed/deactivated for whatever reason, your flow will stop working. At the very least you will want to make sure you share your flow with one or more other users, so that if something happens there will be at least one other person with rights to the flow who can change the context of the actions.

First is the request trigger connection. I set this up with the following JSON payload where user is the user’s login name.
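As a rough sketch of such a payload (the field names other than user are my own examples, not necessarily those in the original flow), expressed as a TypeScript interface plus a sample object:

```typescript
// Hypothetical shape of the Request trigger payload described above.
// `user` carries the submitting user's login name; the other fields are
// illustrative examples of what a help desk form might collect.
interface IHelpDeskRequest {
  user: string;        // login name of the user submitting the issue
  title: string;
  description: string;
  category: string;    // a choice field -- the kind the connector struggles with
}

const samplePayload: IHelpDeskRequest = {
  user: "i:0#.f|membership|jane@contoso.onmicrosoft.com",
  title: "Projector not working",
  description: "Conference room 4B projector shows no signal.",
  category: "Hardware"
};

// This object is what the client-side form serializes and POSTs to the
// flow's Request trigger URL.
const body = JSON.stringify(samplePayload);
```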

Next I added the SharePoint “Create item” action and set the values of the item with variables from my request trigger body. Note that I’ve made sure the action runs under the “Help Desk” user; this way the connection has permission to add the item to the list.

If you’re looking at the above images and wondering why there are more fields in the form/JSON payload than in the flow’s “Create item” step… your eyes are not deceiving you. Read on…

Wait!? WHAT! Microsoft Flow can’t do that???? Azure Functions to the rescue

This section is dedicated to my biggest pet peeve (at the moment), which is what I would consider basic missing features of the Microsoft Flow SharePoint connectors. The fact that they lack support for all the basic list and library field types (i.e. person, choice, managed metadata/taxonomy, etc.) makes them somewhat less than an “MVP” (minimum viable product), but, well, who am I, right? So, in lieu of a working product I’ll just share the workaround. My griping aside, this section will hopefully become more and more obsolete with every passing month, so I suppose that’s at least something.

My good friend Bob German (partially due to my relentless prodding) just posted an excellent series on creating Azure Functions that “talk” to SharePoint. You can read it here: Calling SharePoint CSOM from Azure Functions. I used this method to write a customized Azure Function that updates the SharePoint list item created in the flow with the remaining information the flow could not set. I’m certainly not going to rehash what he expertly explained, but I will share a tidbit that Bob also tracked down (and the reason this post was delayed a couple of weeks): the API key for adding a custom connection to your Azure Function from flow.
To create the connection you need a swagger file/URL, which you can get by going to your Azure Function and checking out the API Definition tab (in preview as of this post). I had tried to use the “Export to PowerApps and Flow” tool there but couldn’t get it to work, though it may well be working by the time you read this. Also, as of this post you’re going to need to do a little tweaking to the “Definition” section; for some reason it doesn’t really get what it needs from the swagger. Here’s what it looked like for me; your mileage may vary.

In all my efforts to get this to work properly, at some point I switched from pointing at the API Definition URL to trying to build my own swagger file; in hindsight I think the URL worked just fine.

More kudos to Bob on helping me through the security part. He figured out that the API key parameter label needs to be “code”.

And again, during Bob’s talk “Going with the Flow: Rationalizing the Workflow Options in SharePoint Online” he explained that to get the flow connector to understand my payload I needed to use “Import from sample”, which gives you a little flyout where you can specify how your REST call needs to look. Since I’m using the body and not query string parameters, my Request section is now all set.

Finally, my completed flow which I call exactly like the other simpler flow from the beginning of this post.

Hopefully a few of these scenarios will help you think through how you might make Microsoft Flow part of your SharePoint online solutions. Happy Coding!

Utilizing ngOfficeUIFabric People Picker in SharePoint

One of the great joys of developing custom forms in SharePoint is developing the controls for some of the more complicated field types, specifically the Taxonomy Picker and the People Picker. If you’re sensing sarcasm, you would be correct. There are brave souls out there who recreated these components for us, utilizing no fewer than five (and sometimes more) Microsoft JavaScript libraries. The reality is, for the People Picker, which is what I’m going to be discussing today, you’re really looking for a type-ahead input field that filters a list of people retrieved from SharePoint. Sounds easy, right… *sigh* if only.

If you’ve started using the new SharePoint Framework (SPFx), there is a big push to use the OfficeUIFabric. This framework provides not only styling but components that mimic what we’re given out of the box. Unfortunately, if you’re not a React framework user, the investment by Microsoft seems significantly lacking. I suppose this makes sense. Regardless, a team of non-Microsoft people embarked on a community project to create an AngularJS version of this library, ngOfficeUIFabric. Although it has some issues, overall it seems to work pretty well, but given that it took me a bit to figure out what does and doesn’t work, and exactly the best way to wire it up, I thought I’d share my findings. This is where you can find the online demo for the people picker.

The idea is to provide a function that either returns a list of people matching the query string (or not; there really is no requirement) or a promise to return such a list. It supports Angular’s ng-model directive as well as ng-disabled. Obvious missing pieces are the ability to specify whether the field should be multiselect or not (it’s always multiselect), and the ability to trigger any functions on selection or selection change; I believe this is potentially a bug. Never fear, dear reader, we can get around these limitations for the most part, and the ability to avoid seventeen million additional libraries is a huge plus. Further, the architecture will certainly work in modern… so for all you brave SPFx coders out there, you can take the same principles and apply them to the React component, or even this Angular component, depending on which framework you want to use.

Ok, let’s get down to brass tacks…

Data Source

The people picker component wants an array of objects with the following attributes; we’ll call that object “Person”.

Attribute: Description
initials: Used in lieu of a user’s picture, shown with a background color.
primaryText: The primary display text identifier of the user/group.
secondaryText: Some secondary information you want to highlight about the user/group.
presence: Text value; available options are available, busy, away, blocked, dnd, offline.
group: The results group you want to display the person in when the results of the search are displayed. This takes the form of an object with the following properties: { name: “Results”, order: 0 }.
color: If there is no image, this will be the background color for the user’s “initials” block. Available options are lightBlue, blue, darkBlue, teal, lightGreen, green, darkGreen, lightPink, pink, magenta, purple, black, orange, red, darkRed.
icon: The URL to a thumbnail image of the user; if provided, this is used in lieu of the initials and color attributes.
id: Ideally the id of the user in your site collection, but in lieu of that the user’s account name or fully qualified domain name is a good option.
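To make the shape concrete, here is the attribute list above expressed as a TypeScript interface with a sample object. All values are illustrative.

```typescript
// "Person" object shape expected by the ngOfficeUIFabric people picker,
// per the attribute list above.
interface IPerson {
  initials: string;
  primaryText: string;
  secondaryText: string;
  presence: "available" | "busy" | "away" | "blocked" | "dnd" | "offline";
  group: { name: string; order: number };
  color: string;    // e.g. "blue" -- used behind the initials if no icon
  icon?: string;    // thumbnail URL; used instead of initials/color if present
  id: string;       // site collection user id, or account name as a fallback
}

const samplePerson: IPerson = {
  initials: "JD",
  primaryText: "Jane Doe",
  secondaryText: "jane.doe@contoso.com",
  presence: "available",
  group: { name: "Results", order: 0 },
  color: "blue",
  id: "i:0#.f|membership|jane.doe@contoso.com"
};
```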

Now that we understand what structure the data needs to take, we need to go get it. To my mind there are two ways to solve this problem; the first, and probably the easiest, is to use Search, by which I mean the Search REST endpoint in SharePoint. That’s the direction I’ll take for this post. That said, you could always leverage my previous posts on utilizing the Microsoft Graph API inside SharePoint and use that to get the results. That would have the distinct advantage of being able to provide significantly more interesting information about the user if you needed it, like the user’s manager, assuming your user data is complete and up to date <insert plug for Hyperfish here>.

Note: NGFPP.currentSite is the URL for the current site collection

Now, as you can see from the Search query, I’m not going to be able to get the user’s id, which would be needed to set the value of a person field. In this scenario I’m specifically attempting to get all the available users from our directory, not only the ones that have actually logged into our SharePoint site. So, before I can create/update an item in a list, I’ll need to convert that person’s id (account name) into an actual id from the hidden users list in the site collection. For that we can use the ensureuser REST endpoint, which is analogous to its CSOM cousin.
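A minimal sketch of that call. The helper just builds the request config; the actual POST (with a request digest) would be issued by $http or your HTTP client of choice, and NGFPP.currentSite supplies the site URL as noted above.

```typescript
// Sketch: resolving a login name to a site collection user id via the
// ensureuser REST endpoint. Returns a request config; a real call also
// needs the X-RequestDigest token for the POST.
function buildEnsureUserRequest(siteUrl: string, loginName: string) {
  return {
    method: "POST",
    url: siteUrl + "/_api/web/ensureuser",
    headers: {
      "Accept": "application/json; odata=verbose",
      "Content-Type": "application/json; odata=verbose"
    },
    data: { logonName: loginName }
  };
}

// The response's d.Id property is the integer id you can then write into
// the person field of your list item.
```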


Now that we have all the back-end pieces put together, we can get started with the user interface. The one bug that I still haven’t completely solved is a wrapping issue; you can see it in the screenshot below. I suspect the issue is related to various SharePoint CSS attributes and I just haven’t found the right one to override. But it’s trivial, so I’ve decided to move on without solving it for now. Eventually I’m going to get it!

Since I’m on the topic, let’s start with a couple of minor CSS overrides you’ll want to include within your form. I’d strongly suggest scoping them to your form, as you wouldn’t want to upset SharePoint styling elsewhere.

Now onto the HTML. Here’s a small snippet of a table that I created for the “form”; I’m only showing the row for the People Picker. Of note, uif-people is linked to a variable on my controller that is assigned the function, and the ng-model is linked to an empty array that receives the selected “Person” items from the source array. In my example I’ve decided to use the compact type and include a search delay… these are options, and I’d encourage you to look over the demos to decide what’s right for you.


Single User Hack

One of the missing pieces of the solution is the ability to limit the user to selecting only one value. The workaround I came up with, which I completely admit is a hack, was to implement a $watchCollection. This way, when I see the model change I can determine whether more than one item is selected and, if so, replace the originally selected item with the newly selected item. I’ve found that in practice this looks very smooth to the user, so I’m happy with it as a workaround. To add this to the above controller you would do the following:
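A hedged sketch of the hack. The core "keep only the newest selection" logic is framework-free; the $watchCollection wiring shown in the comment assumes an AngularJS controller where vm.selectedPeople is the array bound to ng-model.

```typescript
// Core of the single-select hack: when the watched collection grows past
// one entry, drop everything but the newest selection. Mutates the array
// in place so the ng-model binding stays intact.
function enforceSingleSelection<T>(selected: T[]): T[] {
  if (selected.length > 1) {
    // Replace the originally selected item(s) with the newly selected one.
    selected.splice(0, selected.length - 1);
  }
  return selected;
}

// AngularJS wiring (sketch) -- inside the controller:
//   $scope.$watchCollection(function () { return vm.selectedPeople; },
//     function () {
//       enforceSingleSelection(vm.selectedPeople);
//     });
```

Because the splice happens synchronously inside the watch, the user only ever sees the new selection appear, which is why it looks smooth in practice.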


So even with its flaws I’m moving forward with the implementation, simply because it feels like a significantly cleaner solution. If someone out there takes this on and solves the CSS issue before I do, PLEASE let me know and I’ll update the post with your solution. Hopefully I’ll figure it out soon!


The complete code is available on my GitHub repo if you’re interested in looking at the complete solution.

SharePoint time, is not your time, is not their time.

If you develop client-side solutions for SharePoint, you’ve either run into the following scenario or you will. SharePoint stores all its date/time fields in UTC. The site collections, sites, and users can each have their own time zone settings. If you’re using SharePoint out of the box, all the content is rendered on the server and pushed to the client with all the date/time translation done for you. This makes wonderful sense, except when you try to write JavaScript against those same data points. The REST endpoints return the date string in a format specific to the regional settings of the person asking for it. Sadly, this doesn’t translate as well to JavaScript as you might like. I’ve set up a scenario to illustrate the point, with a couple of manipulations you can make depending on your desired goals.


I have two PCs (ok, one is virtual 😊). I set my virtual machine’s time zone to Pacific Daylight Time (PDT) and my main machine is set to Eastern Daylight Time (EDT). Then I have a SharePoint site collection whose regional settings are set for Eastern Time (UTC-5:00, currently observing EDT). I created a list with a title field and two date fields, one showing date/time and one showing just the date. The date-only field illustrates that the problem exists regardless of whether the user intentionally sets the time. I created an item in the list from my computer set to Eastern time… then I went to my computer set to Pacific time and created a second item. I set the dates and times for both items the same from their respective UIs. Again, this is to illustrate that the local time of the computer has no bearing on what SharePoint sees the date/time as. Regardless of who entered the item, the dates are displayed based on the regional settings in effect on the site.

I’ve written some code that I’m going to expose using a CEWP… the code does the following things:

  1. Read the regional settings of the site.
  2. Get all the items in my SPDateTime list and loop through them; for each item…
  3. Get the item’s Date field, create a JavaScript Date object, and display the object and the string used to create it.
  4. Get the item’s Date No Time field, create a JavaScript Date object, and display the object and the string used to create it.
  5. Adjust the item’s Date field into the time zone of the regional settings in effect on the server and display it.
  6. Adjust the item’s Date field into UTC time and display it.
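Steps 5 and 6 are where the interesting manipulation happens. Here is a minimal, framework-free sketch of that math, assuming Bias and DaylightBias (both in minutes, positive meaning behind UTC) come back from the regional settings call in step 1; the function names are my own.

```typescript
// Step 6: shift a parsed Date so its *displayed* wall-clock values match the
// stored UTC values, regardless of the local machine's time zone.
function toUtcWallClock(d: Date): Date {
  return new Date(d.getTime() + d.getTimezoneOffset() * 60000);
}

// Step 5: shift a parsed Date so its displayed wall-clock values match the
// server's (web's) regional settings. For US Eastern during daylight saving,
// Bias = 300 and DaylightBias = -60, so the effective offset is 240 minutes.
function toServerWallClock(d: Date, bias: number, daylightBias: number): Date {
  const serverOffsetMinutes = bias + daylightBias;
  return new Date(toUtcWallClock(d).getTime() - serverOffsetMinutes * 60000);
}
```

Note these shifted Dates are for display and comparison only; their internal timestamps no longer represent the original instant.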

Ok, so let’s start with the computer in EDT and take a look at what our client side code does:

What you’re probably noticing right away is that everything looks great. It’s just what you’d expect. So, what’s the problem? Well… if you’re developing client-side code and all the time zone settings for all of your users and their computers are in the same time zone… absolutely no problem at all.

The tricky part begins when we look at the computer where the time zone of the computer is set to PDT.

Ok, so what happened here is that when the date strings were passed into JavaScript’s Date() constructor, the browser converted each date into the local time of the computer. So 4/15/2017 12:00 am becomes 4/14/2017 9:00 pm (3 hours earlier). Again, this makes perfect sense, but if you want the user to experience dates independent of time zone, you’re in trouble. This often matters when you’re building SharePoint “applications” that use date/times as fixed points in time for comparisons.

Ok, so let’s look at a couple of workarounds; depending on your scenario, you’ll have to decide whether either of them works for you. I’m not going to go into how those regional/personal settings work, but I will provide a link to where Gregory Zelfond gives a nice explanation: Setting proper SharePoint Time Zone for users.

Adjust date to time zone of “server”

The first manipulation I made was to adjust the date field to the time zone of the “server”; when I say server, I mean whatever regional setting is in effect for that “page”. I personally can’t come up with a ton of scenarios where this is useful, with the exception of making comparisons. In our PDT example this changes 4/15/2017 12:00 am to 4/15/2017 3:00 am, which would be midnight PDT. I readily admit this is an odd scenario, but you may need it (I actually have).

Adjust to UTC time zone

The second, which I think is entirely more useful, is converting to UTC, which basically means we’re going to ignore the time zone entirely. So, for our scenario this means 4/15/2017 12:00 am shows up as 4/15/2017 12:00 am.

The Code

For this solution, we’re going to need to make two REST calls. The first is to get the regional time zone of the web we’re working in. To do that you make a GET request to:


The response for this call is the following, where we will use the Bias and DaylightBias to calculate the offset the server is operating with, so we can mimic the values the server displays:
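A hedged sketch of that first call and the offset math. The endpoint path is the standard RegionalSettings REST endpoint; the helper names and request-config shape are my own.

```typescript
// Sketch: the regional settings call described above. The response's
// Information object carries Bias and DaylightBias, both in minutes.
function buildTimeZoneRequest(webUrl: string) {
  return {
    method: "GET",
    url: webUrl + "/_api/web/RegionalSettings/TimeZone",
    headers: { "Accept": "application/json; odata=verbose" }
  };
}

// Effective server offset from UTC in minutes (positive = behind UTC).
// For US Eastern under daylight saving: 300 + (-60) = 240.
function serverOffsetMinutes(info: { Bias: number; DaylightBias: number }): number {
  return info.Bias + info.DaylightBias;
}
```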

The second call gets all the items in our test list. Below is the code to generate the various date/time values I outlined above. Keep in mind, this is only a small code snippet from inside the loop traversing the items returned from our aforementioned list.
*Assume that data is an array of responses

For the full code sample, you can go to my GitHub repo and look in the SPDateTime folder.

For completeness’ sake, I should mention that if you’re going to be doing a lot of date/time manipulation, it might make sense to utilize the moment.js library, which makes a bunch of this stuff significantly simpler. I tend to be a minimalist when it comes to libraries, only adding one when I have a real use for it. But if it makes you more efficient, by all means don’t be a martyr and reinvent the wheel.

Hope this can help a few people out there struggling with date/times in SharePoint client side solutions.

Happy Coding.

A Big, Thank You!

Yesterday was a huge day for me professionally, as I was awarded my first (and hopefully not last) Microsoft MVP Award. It was such a huge honor, and I felt it appropriate to give a shout out here to the Microsoft community for their support. There have been so many great mentors in my life over the years, many of whom I still collaborate with on a regular (if not daily) basis. Everyone that I interact with has been so supportive in helping me find my way, whether it be technically or “socially”, and I hope I have often been able to reciprocate in kind. So, thank you to everyone I interact with in the community for being so open and generous with your knowledge and time!

It’s been a whirlwind year for me, as March 1 also marks a year since I first started talking to Marc about the possibility of joining him at Sympraxis Consulting. What felt like a huge decision was probably the best one I’ve ever made. He’s been a great mentor as I embarked on taking what I always did quietly behind the scenes out into the community. This last year has seen me do things many who’ve known me were somewhat shocked by… helping organize the Granite State SharePoint User Group, blogging more, speaking at conferences (this was the big one), and then connecting with various colleagues of Marc’s (and now of mine) in the MVP community. So, also, a special thank you to him, as I absolutely couldn’t have done it without him.

I hope this marks only the beginning of this new-ish phase of my professional life, thanks again!

Greetings from New Hampshire, Where I’m Co-Authoring a Document

As Marc said in his post on this experience, which of course he published before me, he and I needed to do some work on a Word document together today. We started by emailing it, but almost immediately realized we should just share it on our OneDrive. I was in the browser at the time, but have since continued to co-author it in Word on my desktop. I’m not going to lie, I’ve had problems with the co-authoring experience in the past, but I was keeping an open mind since Marc and I recently “upgraded” (not sure that’s the right word) to the “First Release for Current Channel (Office Insider Slow)” version of Office 2016. I’ve had challenges with authentication since that happened, but otherwise I’m liking some of the new features. Anyway, cutting to the chase, I wanted to add some more insight to Marc’s post regarding some differences I saw when using the desktop version of Word in this co-authoring experience.

When I opened the document from my OneDrive sync’d folder on my desktop, it immediately asked me if I wanted to automatically share changes as they happened. I of course said “yes”, not sure why I wouldn’t.

Note that the UI shows Marc’s smiling face, along with a “Skype” icon… make note that it is linked to Skype for Business, so that’s what pops up when you click on it, unlike the browser version, where the chat window is integrated into the UI. I think this makes good sense for the desktop version, but it is different.

There’s then an icon to let you share with other people, and the “Activity” button. This one is interesting because it’s a much different experience than working in the browser. When you enable the desktop activity panel, you see a listing of “save” activities, and if you click on one of the historical activities it shows you that version of the document. You can then “Compare” or “Restore” it.

If you click on “Compare” you see a UI that shows you what revisions were made, as well as three “views” of the document. Word has had this feature for a while when comparing historical versions, but it bears pointing out how much more robust the experience is on the desktop.

In the browser, you get a list of activities, but not the same level of functionality. Again, this makes sense but it’s worth noting.

We further did a little test with this blog article, by my sharing it with him via the “Share” button in Word (desktop) while he was on his iPhone in Oslo with a Wi-Fi connection, and we had a pretty darn good experience. We both had to “save/sync” the document before he showed up as “editing”, but I could see him editing in real time from his phone. You can see from the image below that it showed he was working on this section, and when I tried to change a word in this section Word told me it was locked.

For reference, Marc provided these screen shots of his experience on his phone, pretty compelling I think.

As Marc said, I think this is a feature we’ll find we use more and more given that we work remotely from each other as a normal course of business. As that model becomes more pervasive in corporate America and small businesses everywhere need to collaborate more I can see it expanding. What may not be as good of an experience is co-authoring on a document when the users are not part of the same subscription. In a recent episode of The Microsoft Cloud Show, Andrew Connell related some very bad experiences he was having trying to collaborate with external users. Hopefully this is an area where Microsoft can focus on improving soon.

Extending SharePoint with ADAL and the Microsoft Graph API – Part 3 (The Execution)


In Part 1, I discussed the background and setup information you need to successfully embark on a client-side widget for SharePoint that accesses the Microsoft Graph API (MSGraphAPI). In Part 2, we went in depth into the various ways of utilizing the adal.js and adal-angular.js libraries for authentication. Now, here in Part 3, we’re going to get right into the nuts and bolts of a real solution that does the following:

  • Utilizes a third-party JavaScript library to create an Excel file
  • Uses the MSGraphAPI to upload the file into a SharePoint document library
  • Manipulates the file using the Excel endpoints that are part of the MSGraphAPI

To be fair, the third-party library we’re using can manipulate the Excel file, but I want to leverage the API built by Microsoft, which does more. Plus, it’s just a fun demo.


In Part 2, I gave three different examples of configuring and utilizing the ADAL library. The first step is to create your “solution” and configure ADAL appropriately, depending on whether you’re going to use AngularJS with ngRoute, Components, or something else. I’m assuming you know how to implement one of those patterns, so I will only include the code for the functions themselves and not the overall project. Keep in mind I wrote my code using AngularJS, so if you see a reference to “vm.” in the code, that’s a reference to a UI binding property.

The MSGraphAPI root URL for the SharePoint library requires a couple of components that you’re going to have to gather together. The first is the site collection id, which is a GUID you can get by pasting “https://<your tenant><your site collection>/_api/site/id” in a browser. The second is the GUID of the library you want to access. You can get that most easily by navigating to the settings page of the library and decoding it from the URL.

UPDATE 5/2017
Due to a change in the SharePoint beta endpoints as a result of the sites endpoint going to v1.0 you will also need the web id, also a GUID you can get by pasting “https://<your tenant><your site collection>/_api/site/rootweb/id” in a browser. Or, if the site you’re referencing is a sub site you will need to reference that instead and get the GUID.
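Putting those pieces together, the base URL can be composed from the hostname, the site collection id, and the web id. Here is a minimal sketch; the function name and the final URL shape are my own illustration, not the post’s original listing:

```javascript
// Compose the Graph beta URL for a SharePoint library from the parts gathered
// above. The {site-id} segment is the <hostname>,<site-guid>,<web-guid> triplet.
function buildLibraryEndpoint(hostname, siteId, webId, listId) {
  var siteSegment = [hostname, siteId, webId].join(',');
  return 'https://graph.microsoft.com/beta/sites/' + siteSegment +
         '/lists/' + listId;
}
```

For example, buildLibraryEndpoint('contoso.sharepoint.com', siteGuid, webGuid, libraryGuid) yields the base URL the later calls are made against.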

On the page, we have a button that executes the “createFile” function. I’ve used promise chaining here so that we can make sure we execute the asynchronous calls to do the various pieces of work in the right order. Here are the steps with a brief description and some highlights (if applicable) and then the actual code.

  1. createXlsx – Utilizes the SheetJS/xlsx library to create an empty Excel file. Returns a JavaScript arraybuffer that can be uploaded to SharePoint/OneDrive.
  2. saveXlsx – Utilizes the MSGraphAPI to upload the file to the specified SharePoint library. Returns the id of the file and a temporary URL that can be used to download it. (The URL is more applicable to OneDrive, but can be handy if you want to put it into the page after you complete your operations.)
  3. getWorksheets – Utilizes the MSGraphAPI Excel endpoint to get the list of worksheets in the Excel file.
  4. updateCell – Utilizes the MSGraphAPI Excel endpoint to change the value of a cell.

There is obviously a huge number of other things you could do with an Excel file, including adding and retrieving charts, tables, etc.
Some “global” variables I’ll reference in some of the functions:
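The original listing of those globals isn’t reproduced here; below is a hedged sketch of what they might look like, based purely on how _CONFIG.SP_EP and tempID are used later in the post. The placeholder values are assumptions.

```javascript
// Illustrative globals; property names follow the post's usage, values are placeholders
var _CONFIG = {
  // Base Graph URL for the target SharePoint library (see the Setup notes above)
  SP_EP: 'https://graph.microsoft.com/beta/sites/<hostname>,<site-guid>,<web-guid>/lists/<list-guid>',
  FILE_NAME: 'demo.xlsx'
};
var tempID = null; // id of the uploaded file, set by saveXlsx
```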

UPDATE 5/2017
The URL for the beta endpoint changed slightly, and the {site-id} the documentation refers to is really a triplet that includes the <hostname>,<spsite-guid>,<spweb-guid>

The createFile function is executed by the user clicking a button/link.
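A sketch of what that chain might look like, assuming each step function returns a promise (or thenable). The step functions are dependency-injected here purely to keep the sketch self-contained; the post’s original listing is wired differently.

```javascript
// Chain the four asynchronous steps in order
function createFile(createXlsx, saveXlsx, getWorksheets, updateCell) {
  return createXlsx()                                        // 1. build the file
    .then(function (buffer) { return saveXlsx(buffer); })    // 2. upload it
    .then(function (fileId) {                                // 3. list worksheets
      return getWorksheets(fileId).then(function () { return fileId; });
    })
    .then(function (fileId) { return updateCell(fileId); }); // 4. update a cell
}
```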

Creating the Excel File

As I said earlier, we’re going to utilize a third-party library to create the Excel file. To me this seems like obvious missing functionality from the MSGraphAPI, but there may be reasons for this of which I’m unaware. So until it’s added, we can use SheetJS/js-xlsx. The documentation provides a nice simple example for creating a valid xlsx document.
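Here is a sketch of createXlsx along the lines of the SheetJS documented example. It assumes the XLSX global from the js-xlsx script is loaded on the page; the binary-string-to-arraybuffer conversion (s2ab) comes straight from that documentation, and the sheet contents are placeholders.

```javascript
// Convert a binary string from XLSX.write into an arraybuffer for upload
function s2ab(s) {
  var buf = new ArrayBuffer(s.length);
  var view = new Uint8Array(buf);
  for (var i = 0; i !== s.length; ++i) view[i] = s.charCodeAt(i) & 0xff;
  return buf;
}

// Create a minimal one-sheet workbook and return it as an arraybuffer
function createXlsx() {
  var wb = XLSX.utils.book_new();
  var ws = XLSX.utils.aoa_to_sheet([['Hello'], ['World']]);
  XLSX.utils.book_append_sheet(wb, ws, 'Sheet1');
  return s2ab(XLSX.write(wb, { bookType: 'xlsx', type: 'binary' }));
}
```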

Saving the Excel File to a SharePoint Document Library

The saveXlsx function utilizes the new beta endpoints that access SharePoint through the MSGraphAPI rather than the SharePoint REST endpoints. So, to save the file to the SharePoint library we use the base URL defined by the _CONFIG.SP_EP variable. See the Setup section for details on putting this URL together.

Because we are using adal-angular.js we can create a function that will execute the $http request and will append the authentication token to the header all without having to do anything extra.

We could, alternatively, use the SharePoint REST endpoints to get the file into place; using the MSGraphAPI to upload the file is certainly not a requirement. However, since we are using the MSGraphAPI, the return payload includes an id that we will use later. We’re going to save that value in a variable called tempID.
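A sketch of saveXlsx follows. The $http service and the config are passed in here purely to keep the sketch self-contained, the endpoint path is an assumption based on the beta drive API, and the returned id is the value the post stores in tempID.

```javascript
// Upload the arraybuffer to the library's drive and hand back the file id
function saveXlsx($http, config, buffer) {
  return $http({
    method: 'PUT',
    url: config.SP_EP + '/drive/root:/' + config.FILE_NAME + ':/content',
    headers: { 'Content-Type': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' },
    data: buffer,
    transformRequest: [] // keep the raw arraybuffer; skip Angular's JSON transform
  }).then(function (response) {
    return response.data.id; // the post keeps this in tempID for the later calls
  });
}
```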

At this point, our new, empty Excel file is sitting in our document library. In and of itself, this is pretty darn cool. Ok, let’s move on.

Manipulating the Excel File

I’ve included a simple read method and a very basic update method here, just to give the general idea. First, the read method gets the array of worksheets in the Excel file. If you recall, in the createXlsx function we only put one sheet in the file, so the result is an array with one item. We then assign the array to a binding variable and display it in the UI.
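Here is a sketch of that read step, assuming the workbook worksheets endpoint and the adal-angular patched $http (injected here to keep the sketch self-contained); vm.worksheets is an illustrative binding name.

```javascript
// Fetch the worksheet collection for the uploaded file
function getWorksheets($http, config, fileId) {
  return $http({
    method: 'GET',
    url: config.SP_EP + '/drive/items/' + fileId + '/workbook/worksheets'
  }).then(function (response) {
    return response.data.value; // array of worksheets, e.g. bound to vm.worksheets
  });
}
```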

Second, we’ll update a cell in the worksheet. To do this we’ll have to provide a payload of data and then identify the range we want to update. I’ve hardcoded the range here, but obviously you can make it dynamic.
Here’s the payload that we set up in the createXlsx function…

…and then passed to the updateCell function in the data payload of the $http call.
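A sketch of updateCell follows; the hardcoded sheet name and range address are illustrative, and the payload shape ({ values: [[…]] }) follows the workbook range PATCH.

```javascript
// PATCH a hardcoded range on the first worksheet with the given payload
function updateCell($http, config, fileId, payload) {
  return $http({
    method: 'PATCH',
    url: config.SP_EP + '/drive/items/' + fileId +
         "/workbook/worksheets('Sheet1')/range(address='A1')",
    data: payload // e.g. { values: [['New value']] }
  });
}
```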


I’m excited to be able to provide this coverage of the process of utilizing the MSGraphAPI from a client side solution in SharePoint. I really hope that it helps someone somewhere get up to speed quicker and create some awesome solutions of their own. I’m providing a link to my GitHub repo where you can download this AngularJS sample in its entirety. You’ll need to provide your own tenant id, client id, site collection id, and library id, but otherwise it should work as described here.
Please feel free to comment or reach out to me on Twitter (@jfj1997) if you have any questions.


OAuth Flows

Andrew Connell – Looking at the Different OAuth2 Flows Supported in AzureAD for Office 365 APIs
Microsoft – Integrating applications with Azure Active Directory
Matt Velloso – Troubleshooting common Azure Active Directory Errors
Microsoft – Should I use the v2.0 endpoint?


ADAL

GitHub – Azure Active Directory Library for JS
Cloud Identity – Introducing ADAL JS v1
Cloud Identity – ADAL JavaScript and AngularJS – Deep Dive
Cloud Identity – Getting Acquainted with AuthenticationResult
Cloud Identity – Getting Acquainted with ADAL’s Token Cache
Microsoft – Call the Microsoft Graph API using OAuth from your web part

Microsoft Graph API

Microsoft – Microsoft Graph permission scopes
Microsoft – App authentication with Microsoft Graph

Extending SharePoint with ADAL and the Microsoft Graph API – Part 2 (The Authorization)


In Part 1 of this series I covered all the setup needed to start your Microsoft Graph API (MSGraphAPI) client side widget. In Part 2, we’re going to dive into the many ways to use adal.js and its counterpart adal-angular.js. I’ve included the same resources I included in Part 1; under the section for ADAL you’ll find a lot of references to the Cloud Identity blog by Vittorio Bertocci, a Principal Program Manager at Microsoft, who has blogged extensively on the library, explaining its technical workings in depth. I encourage you to read the posts I’ve included below to get a complete understanding of the library. Also included in the references is a post about utilizing ADAL in the SharePoint Framework (SPFx). As is, ADAL was never meant to be used as part of a widget architecture: ADAL isn’t a singleton, so if you have multiple web parts on your page all referencing ADAL you’re going to have issues. The post “Call the Microsoft Graph API using OAuth from your web part” gives you an extension that will help isolate ADAL so that you can utilize it as part of a more strongly developed widget pattern. Since my demo is just that – a demo – and my solution will be the only one on the page using the ADAL library, I’m not going to address those modifications here. But I encourage you to do so if that is part of your use case.

The ADAL library for JavaScript

Finally, we get to the part where we talk about writing some code. ADAL stands for “Active Directory Authentication Library”. Based on the client you’re using and which authentication endpoint you’re using, there are a multitude of different examples and SDKs available, as you can see on the MSGraphAPI Getting Started page. Because we’re going to write client side code (aka JavaScript, either transpiled from TypeScript or native) and access the MSGraphAPI via Implicit Flow, we’ll use the adal.js library. It comes in two parts: adal.js and adal-angular.js. If you’re going to use the AngularJS framework, you’ll want both pieces. If not, you can include just adal.js, but there will be more work to do to authenticate and get a token. You can find the source in the ADAL GitHub repo.

User Authentication

One of the things that bothered me was the idea that the user would have to “log in” manually every time the ADAL library needed to authenticate them. In my mind, I envisioned a pop-up that would prompt them for credentials. In a scenario where you’re running this code on your on-premises server in a hybrid environment and haven’t set up federated sign-in to your O365 tenant, that would be valid. However, in the most likely scenarios I can envision, the code would be running in your SharePoint site in your O365 tenant… therefore asking the user to log in again would be annoying at best. Well, sure enough, that’s not what happens: the library uses a hidden iframe on the page to make the call to get the user authenticated. Since they are technically already authenticated to O365, this is just a matter of “confirming” it, for lack of a better term. The page does flicker, but otherwise this is unnoticeable to the user.

*Note: Thanks to Wictor Wilen for bringing up the issue with using adal.js in IE with a trusted site. Please check out this issue, from the GitHub repo.

ADAL Config

A big part of utilizing the adal.js libraries is getting all the configuration settings correct. I want to highlight some of the configuration properties that I reviewed and found useful. You’ll see how to put it all together and pass it to adal.init() later. The definitions here come straight from the documentation in the adal.js file itself.

  • tenant: string – Your target tenant.
  • clientID: string – Client ID assigned to your app by Azure Active Directory.
  • endpoints: array – Collection of {Endpoint-ResourceId} used for automatically attaching tokens in webApi calls.
  • popUp: boolean – Set this to true to enable login in a popup window instead of a full redirect. Defaults to false.
  • cacheLocation: string – Sets browser storage to either ‘localStorage’ or ‘sessionStorage’. Defaults to ‘sessionStorage’.
  • anonymousEndpoints: array – Array of keywords or URIs. ADAL will not attach a token to outgoing requests that contain these keywords or URIs. Defaults to ‘null’.
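Putting those properties together, a configuration object might look like the following; the tenant, clientId, and endpoint values are placeholders, not values from the original post.

```javascript
// Illustrative adal.js configuration using the properties described above
var adalConfig = {
  tenant: 'yourtenant.onmicrosoft.com',
  clientId: '00000000-0000-0000-0000-000000000000',
  // map of endpoint URL -> resource id used for automatic token attachment
  endpoints: { 'https://graph.microsoft.com': 'https://graph.microsoft.com' },
  cacheLocation: 'localStorage' // the default is sessionStorage
};
// No framework: var authContext = new AuthenticationContext(adalConfig);
// AngularJS:    adalProvider.init(adalConfig, $httpProvider);
```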

Using ADAL.js with No Framework

The most tedious coding scenario with ADAL is utilizing it without the AngularJS add-on. I found a blog article on how to do it, but unfortunately for me, although it worked initially, when it came time to renew the token the ADAL library was throwing errors. After quite a bit of time reviewing the adal-angular.js file and various other blog posts, I managed to work out a scenario that seems to work reliably.

For simplicity’s sake, I’m showing an entire html file including the JavaScript in one code snippet. I commented the code extensively but in a nutshell, we’ll do the following:

  • 1. For simplicity, code is executed on page load using jQuery’s document.ready function. The goal of that bit of code is to determine if AAD is doing a callback and, if so, let the adal.js library handle it.
    • a. If not a callback, check if the user is authenticated, if not, call the ADAL login function
    • b. If not a callback, and user is authenticated, then execute any initialization code we want to run.
  • 2. When a call needs to be made against the MSGraphAPI, e.g., the sympraxis.getGraphData function, first get the token by calling the sympraxis.getAuthToken function (which returns a promise, since it may need to make an asynchronous call to AAD, and if so we need to wait until that completes).
    • a. If the token is in the cache, return it by resolving the promise.
    • b. If the token is not in the cache, acquire a new one and then resolve the promise with the new token.
  • 3. Make the REST call to the MSGraphAPI and include the token in the header.
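The steps above can be sketched as follows, assuming the adal.js AuthenticationContext API. The sympraxis namespace mirrors the post’s naming, and the page hash is passed in as a parameter here (rather than read from window.location) purely so the flow is explicit and self-contained.

```javascript
var sympraxis = {};

// 1. On page load: let ADAL handle the AAD callback, otherwise log in or init
sympraxis.init = function (authContext, hash, onReady) {
  if (authContext.isCallback(hash)) {
    authContext.handleWindowCallback(); // AAD redirect: let adal.js process it
    return;
  }
  if (!authContext.getCachedUser()) {
    authContext.login();                // 1a. not authenticated yet
  } else {
    onReady();                          // 1b. authenticated: run our init code
  }
};

// 2. Resolve a token for a resource, from cache (2a) or by acquiring one (2b)
sympraxis.getAuthToken = function (authContext, resource) {
  return new Promise(function (resolve, reject) {
    var cached = authContext.getCachedToken(resource);
    if (cached) { resolve(cached); return; }
    authContext.acquireToken(resource, function (error, token) {
      if (error || !token) { reject(error); } else { resolve(token); }
    });
  });
};
```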

Using Angular 1.x framework with ngRoute

If you’re a fan of the AngularJS framework, then the adal-angular.js library does all the heavy lifting for you. It extends both AngularJS’s $http provider and the ngRoute directive. It adds the bearer token that was retrieved using the adal.js library to the $httpProvider in your REST calls for you. In addition, it accepts an additional configuration setting on each of your routes, which determines whether AD login should be required. If set to true, when you navigate to the particular route, the adal-angular.js library makes sure the user is logged in, and then also makes sure the $httpProvider appends the token. If it’s not set – or set to false – then the token will not be appended to the $http calls. Also, note here that I’ve utilized html5Mode on the $locationProvider. I did that because of a recommendation in the documentation that indicated that having it on fixes issues with endless callbacks. I found this to be an issue too, but only when bypassing ngRoute. For safety, I put it in both examples, but I’ll leave it to you to test whether it’s necessary in your solution or not.

So, at this point I’m sure you can see that this scenario is significantly simplified from our “No Framework” version above. Other than the changes to the .config, no other changes are necessary. You just go about your business making $http calls and the adal-angular.js library does the rest.
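Here is a sketch of those .config changes. It is wrapped in a function (registering against an injected angular) purely so the sketch stays self-contained; the module name, route paths, template URLs, and ADAL values are placeholders.

```javascript
// Register routes and the adal-angular configuration
function configureApp(angular) {
  return angular.module('demoApp', ['ngRoute', 'AdalAngular'])
    .config(['$routeProvider', '$locationProvider', '$httpProvider',
             'adalAuthenticationServiceProvider',
    function ($routeProvider, $locationProvider, $httpProvider, adalProvider) {
      $routeProvider
        .when('/home', {
          templateUrl: 'views/home.html',
          requireADLogin: true // adal-angular ensures login for this route
        })
        .otherwise({ redirectTo: '/home' });

      // html5Mode helps avoid the endless-callback issue mentioned above
      $locationProvider.html5Mode(true);

      adalProvider.init({
        tenant: 'yourtenant.onmicrosoft.com',
        clientId: '00000000-0000-0000-0000-000000000000',
        endpoints: { 'https://graph.microsoft.com': 'https://graph.microsoft.com' }
      }, $httpProvider);
    }]);
}
```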

Angular 1.5+ using Components

Angular version 1.5 introduced a new concept called “Components”, which was widely viewed as a superior architectural strategy for building Angular applications – so much so that a very similar scheme was adopted for Angular 2. With components, you generally do not use ngRoute. Further, with many widget solutions, routing is overkill. So, we need to consider another strategy for managing when the $http provider should include the token, and because ngRoute was making sure the user is authenticated for us (as I noted in the previous section), we’re going to need to handle that as well.

For authentication, we’ll reuse the concepts we discussed in the “No Framework” section by making sure on page load we trap the callback and allow the ADAL.js library to handle it. Because this is a component there is the handy $onInit() function. That will work perfectly for our needs.

Now to handle server calls that are not meant to have the token amended… enter, anonymousEndpoints. In this scenario, our configuration would not include the $routeProvider. Instead we would include relative URLs we want to ignore when making $http calls. In this case I included two anonymous endpoints, one for the location of my component templates, and the other is the SharePoint REST APIs.

I specified relative URLs in the anonymousEndpoints array because, if you review the code that decides whether the $http call should append the bearer token, you can see that if the URL includes http or https it will try to find a matching endpoint; if it does not find one, it will utilize the token that was used for the login resource. So if you try to make a call against the SharePoint REST API using an absolute URL, ADAL is going to append the bearer token and the call will subsequently fail. Also, note that I only included the root of the URLs I want ADAL to ignore; that is because the test for anonymous endpoints uses a “contains” check.

The controller for the component we create would then define an $onInit() function that would handle login for those components that need it. There are certainly other ways architecturally to handle this, but I wanted to keep things simple so I wouldn’t lose the point in the elegance of the architecture. At a baseline this is what it would look like. We’re going to expand on this, and explain the SP_EP url in the _CONFIG in Part 3.
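A baseline sketch under those assumptions follows: the config swaps the $routeProvider wiring for anonymousEndpoints, and the component controller logs in from $onInit. The names, paths, and placeholder values are illustrative, not the post’s original listing.

```javascript
// Config sketch: ignore template and SharePoint REST URLs when attaching tokens
function buildComponentAdalConfig() {
  return {
    tenant: 'yourtenant.onmicrosoft.com',
    clientId: '00000000-0000-0000-0000-000000000000',
    endpoints: { 'https://graph.microsoft.com': 'https://graph.microsoft.com' },
    // relative roots only; ADAL's check is a "contains" match
    anonymousEndpoints: ['/scripts/templates/', '/_api/']
  };
}

// Component controller sketch: ensure login during $onInit
function LinksController(adalService) {
  var vm = this;
  vm.$onInit = function () {
    if (!adalService.userInfo || !adalService.userInfo.isAuthenticated) {
      adalService.login();
    }
  };
}
```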


Now that we’ve completed Part 2, you should have everything you need to go off and start making calls to the MSGraphAPI. But if you’re interested, Part 3 will bring all of this together and show you how to create an Excel spreadsheet from scratch, add it to a SharePoint document library, and then manipulate it with the Excel APIs. Please stay tuned…


OAuth Flows

Andrew Connell – Looking at the Different OAuth2 Flows Supported in AzureAD for Office 365 APIs
Microsoft – Integrating applications with Azure Active Directory
Matt Velloso – Troubleshooting common Azure Active Directory Errors
Microsoft – Should I use the v2.0 endpoint?


ADAL

GitHub – Azure Active Directory Library for JS
Cloud Identity – Introducing ADAL JS v1
Cloud Identity – ADAL JavaScript and AngularJS – Deep Dive
Cloud Identity – Getting Acquainted with AuthenticationResult
Cloud Identity – Getting Acquainted with ADAL’s Token Cache
Microsoft – Call the Microsoft Graph API using OAuth from your web part

Microsoft Graph API

Microsoft – Microsoft Graph permission scopes
Microsoft – App authentication with Microsoft Graph

Extending SharePoint with ADAL and the Microsoft Graph API – Part 1 (The Setup)

When Marc and I were at Ignite this past September, #SharePoint was the most tweeted hashtag. We heard a lot about the new SharePoint Framework (SPFx), which was clearly the focus for developers. But another oft-discussed technology topic centered on the expansion of the Microsoft Graph API (MSGraphAPI). It’s clearly going to be the API of choice going forward to access all Office 365 content, but its maturity is still early days. At Ignite, Microsoft announced the beta endpoints for accessing SharePoint through the Microsoft Graph API.

Overall I think this is a good thing, as the API has significantly better adherence to the OData standard compared to the SharePoint REST services. That said, as users of the SharePoint REST services we’re very used to the simplicity of those calls, and we literally pay no attention to authentication if we’re operating on SharePoint pages. The tokens we need are already made available right on the page – we just pluck them out – so there’s little effort involved. As the features and functionality of the MSGraphAPI leap ahead and we try to extend the SharePoint UI to take advantage of all those new features, we’re going to have to become comfortable dealing with authentication so we can leverage all that power.

As I worked to understand all the ways I could utilize the MSGraphAPI I realized that I was collecting a rather lengthy list of resources and reaching out to the various experts I know in the community to get clarification on what I was finding. It seemed appropriate to consolidate that information into a series of blog posts. Part 1 will cover all the background information on Azure Active Directory, authentication methods and flows. Part 2 will go into the SDK library for getting an authorization token. And Part 3 will bring it all together in a demo application that runs as a widget on a SharePoint page, but accesses the MSGraphAPI to create and manipulate an Excel document in a SharePoint library. As we move forward with other solutions based on the MSGraphAPI, I may do additional posts to demonstrate useful techniques.

So, let’s begin. Our goal is to access a SharePoint document library and use the Excel API (included in the MSGraphAPI) that will allow us to manipulate Excel files in code. An example use-case for this solution is to generate an “export” of the data you’re tracking on your site so that others can do analysis on it for a data analytics project. Before we write any code, we need to do the following:

  1. Select an authentication method
  2. Determine the type of flow (small “f”, not the Flow automation tool) you will use to get an access token that you can utilize to authenticate with a resource that trusts Azure Active Directory.
  3. Register your application with Azure Active Directory to define your set up and the permissions it needs.
  4. Select the SDK library that is right for your project based on the operating system or access application (e.g., web browser) and development language.

Once that’s done, you can write your application – this is almost the easy part. But first I’ll provide some detail on the steps above.

Authentication Choices

There are two authentication choices when trying to access the MSGraphAPI from client side code. I’m going to focus here on JavaScript, and specifically on access for users who are already authenticated in SharePoint. The two authentication options the MSGraphAPI supports are:

  • To authenticate users with personal Microsoft accounts AND users with enterprise (that is, work or school) accounts, use the Azure Active Directory (Azure AD) v2.0 endpoint.
  • To authenticate users with enterprise (that is, work or school) accounts ONLY, use Azure AD.

The second one of these, “authenticating users with enterprise accounts”, is the one that is appropriate for our scenario. The “App Authentication with Microsoft Graph” documentation will walk you through a more extensive decision matrix about which endpoint is right for you, so if you have a more complicated scenario than what I’m focused on, e.g., authenticating users to an application that isn’t hosted in SharePoint or that uses personal accounts, please review that documentation. You’re also going to want to review “Should I use the v2.0 endpoint?” as well, as there are a significant number of restrictions that may affect you.

Implicit Flow (aka Implicit Grant Flow)

When you utilize one of the aforementioned authentication choices, you need to decide what type of “flow” you’re going to use. Your choices are “Implicit Grant Flow”, “Authorization Code Grant Flow”, or “Client Credentials Grant Flow”. Andrew Connell’s blog post on this subject can help you learn more about the three types that are supported. In this case, because of how we’re going to access the MSGraphAPI (via the browser) and the language we’ll use to do it (JavaScript), the decision has been made for us: the SDK we’re going to utilize forces you to use Implicit Grant Flow. The idea is to get an access token to impersonate a user. However, unlike an authorization code grant flow, instead of requesting an authorization code first, the client is issued the access token directly. The access token has a life of only one hour before it expires, and the user would need to request a new token to make additional requests.

Why the one-hour expiration? In basic terms, because we are operating in a browser, if the access token were valid forever it would become easier for any other application or user to “steal” said token and access the server without authorization. All the mucking around with tokens and authentication flows is a way to make sites more secure.

Registering your Application

Updated Guidance 2/9/2017 – Use the new portal instead of the classic portal to create your Application

This section has been re-written to use the newer portal. I was under the impression that by doing so I would be creating an application that was incompatible with ADAL.js… however, based on comments from John Liu (@johnnliu) as well as a conversation with Yina Arenas (@yina_arenas), Principal Program Manager Lead for the Microsoft Graph, it appears I was misguided. So, my error becomes your gain as I will attempt to completely document creating an application in the new portal that will work with ADAL.js, and has some added benefits to boot as many things are much simpler.

That said, one thing remains the same, you still need to have access to the Azure portal for your tenant. Ergo, you’re going to have to find the individual who does and bake them cookies. Maybe a lot of cookies.

After launching the portal, I navigated to Active Directory, then I clicked on the “App registrations” heading. You can see here that the application I created in the old portal is still there (“ADALTest”) and a new one I created for this test called “ADALTest2” has been added – I did so by clicking “Add” at the top.


Once your application is created you need to set the properties and permissions. This is where things are slightly different from creating an application in the old portal. As you can see below, you’ll get an “Application ID” assigned automatically. This takes the place of the client key from the old portal, which is confusing if you’ve done this there before; but honestly, given we’re using Implicit Flow it makes a whole lot more sense that you wouldn’t need a client secret, because you’re technically not using one. Next, you’ll need to give your application an App ID URI; I used the URL of my site collection. This URI can be used only once, so if I wanted to create a second app, I would need to give it a different URI. There’s a much bigger discussion to be had regarding governance and reuse of these applications which I’m not going to go into now, but rest assured I will at some point when I’ve solidified my position.

You will also need to set up a “Reply URL”. In this case, because we will access the application from SharePoint, this just needs to be your SharePoint host name. I will cover the “Required permissions” section below. There’s also an “Owners” section and a “Keys” section. That “Keys” section is what threw me off originally, because in the old portal we used the key it generated as the client id; but, as I said, in the new portal we use the Application ID.

In addition, you will need your Tenant ID (a GUID). In the old portal we got this from the URL, but in the new portal they’ve given us a nice little tool to get it. Go to the top right, click on the “?”, and then choose “Show diagnostics”. That will bring up a new page showing a JSON object that has a tenants section; you’re going to want the GUID for your domain’s tenant, although multiple other tenants may show up.


One of the things that can be confusing about setting up your application in Azure AD is configuring the permission scopes for the application itself. This article gives you the full details on setting up the proper permissions based on what you need to access in the MSGraphAPI. It also includes several scenarios. For our scenario, which you’ll see in more detail in Part 3, I only needed to grant the application the delegated permission “Have full access to all files user can access”. By default, the application has the “Sign in and read user profile” delegated permission for Windows Azure Active Directory. Since I do some testing by accessing the “me” endpoint, which gives me my user profile information, I’m leaving this in place; but feel free to remove it if you’re not reading the user’s profile.

So, you will first “add” the “Microsoft Graph” application to the “Required Permissions” section. Then click on it to see the available application and delegated permissions that can be assigned. The gotcha with permissions in the new portal is that after you select the permissions you want and “save” the changes, you then need to do an additional step and “grant” them. You do so by clicking on the “Grant Permissions” button from the “Required Permissions” page.

If after you’ve gotten through Part 2, you get the error “The user or administrator has not consented to use the application with ID….” in the browser console it most likely means that you forgot to do the “grant” step I outlined above.

Enabling Implicit Flow

In the new portal, there’s a nice easy way to modify the manifest for your application to allow Implicit Flow. Click on the “Manifest” button for your application. A window will appear that gives you the JSON object that is the application’s manifest.

Find the “oauth2AllowImplicitFlow” property and change its value to “true”. Then click “Save”.


I hope that this part helps others understand the various building blocks of setting up a client-side widget for SharePoint that accesses the MSGraphAPI. In Part 2, we’ll cover the ADAL library and its various configurations to actually get the authorization we need; then in Part 3, I’ll use everything we’ve covered in Parts 1 and 2 in a demo that provides a complete end-to-end solution for creating an Excel file (currently utilizing a third-party JavaScript library, as the functionality doesn’t exist yet in the MSGraphAPI), putting that file into a SharePoint library, and changing the data values in it.


OAuth Flows

Andrew Connell – Looking at the Different OAuth2 Flows Supported in AzureAD for Office 365 APIs
Microsoft – Integrating applications with Azure Active Directory
Matt Velloso – Troubleshooting common Azure Active Directory Errors
Microsoft – Should I use the v2.0 endpoint?


ADAL

GitHub – Azure Active Directory Library for JS
Cloud Identity – Introducing ADAL JS v1
Cloud Identity – ADAL JavaScript and AngularJS – Deep Dive
Cloud Identity – Getting Acquainted with AuthenticationResult
Cloud Identity – Getting Acquainted with ADAL’s Token Cache
Microsoft – Call the Microsoft Graph API using OAuth from your web part

Microsoft Graph API (MSGraphAPI)

Microsoft – Microsoft Graph permission scopes
Microsoft – App authentication with Microsoft Graph

Create SharePoint Document Set (and set metadata) using REST

A quick post today to augment what’s out there in the “Googleverse”. I needed to create a Document Set in client side code, and went out to find the appropriate calls to make that happen. To update the metadata on the folder you create (which is all a Document Set really is under the covers), you simply make an “almost” normal list item update call. So the following are the various functions you need and how to string them together to accomplish this task. As you read through, I’ll point out in the code where other, older posts on this topic steer you wrong.

WARNING, this code is not optimized for best practices but is generalized for reuse. As sample code, it may not work in all scenarios without modification.
NOTE: this code requires jQuery to execute the AJAX calls and to provide the promise implementation.
NOTE: The use of odata=verbose is no longer required, and better practices would suggest that it should not be used in production. See this post from my partner Marc Anderson for more information.

This first function is what is used to create the Document Set folder. The function uses the folderName parameter as the title of the Document Set.
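The original listing isn’t shown here; one commonly documented way to create a Document Set from client side code is a POST to listdata.svc with a Slug header naming the target folder path and the Document Set content type id, sketched below. The URLs and names are illustrative, and the post’s own listing may differ from this technique.

```javascript
// Create the Document Set folder; the Slug header carries the target path
// and the Document Set content type id.
function createDocSet(webUrl, listName, folderName, contentTypeId) {
  return jQuery.ajax({
    url: webUrl + '/_vti_bin/listdata.svc/' + listName,
    type: 'POST',
    contentType: 'application/json',
    headers: {
      Slug: webUrl + '/' + listName + '/' + folderName + '|' + contentTypeId
    },
    data: JSON.stringify({ Title: folderName, Path: webUrl + '/' + listName })
  });
}
```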

The following code is a generic update function; we’ll use it to update our Document Set’s metadata after it’s been created. In other posts out there, you’ll see the URL of the AJAX call set to folder.__metadata.uri. Unfortunately, that URI is no longer valid as a way to update the metadata, and the call will fail. Also, when updating list items there’s a standard “type” that defines the object you’re updating. With a Document Set, this type is different from that of a generic list item, so I’m passing it in from the calling function. It can partially be retrieved from the folder creation response’s metadata, but it’s not exactly correct and the call will fail.

NOTE: the list’s display name in this case has no spaces or odd characters; if yours does, you will need to escape those characters when creating the list type. For example, for a list name containing an “_” you would use the following code: "SP.Data." + list.replace('_', '_x005f_') + "ListItem"
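Here is a sketch of such a generic update function. The name updateListItem is hypothetical (the post doesn’t name it), and the Document Set’s item type is passed in by the caller as the post describes; the MERGE-over-POST pattern with an IF-MATCH eTag is the standard SharePoint REST update shape.

```javascript
// Generic list item update; itemType (e.g. the Document Set's type) is
// supplied by the caller, and eTag defaults to the '*' wildcard.
function updateListItem(webUrl, list, itemId, itemType, props, eTag) {
  var payload = jQuery.extend({ __metadata: { type: itemType } }, props);
  return jQuery.ajax({
    url: webUrl + "/_api/web/lists/getbytitle('" + list + "')/items(" + itemId + ")",
    type: 'POST',
    contentType: 'application/json;odata=verbose',
    headers: {
      'X-RequestDigest': jQuery('#__REQUESTDIGEST').val(),
      'IF-MATCH': eTag || '*',
      'X-HTTP-Method': 'MERGE'
    },
    data: JSON.stringify(payload)
  });
}
```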

So now that we have functions that do the work for us, we just need to call them. In this case I’m showing the code encapsulated in a function that does the calls but returns a promise to the calling function, so that the caller can be notified when the Document Set has been created completely.

The call to createDocSet includes the Document Set’s content type, this can be retrieved from the URL of the Content Type definition page. Also note in this code that you need to do a bit of manipulation of the eTag if you’re going to pass it. You technically could use a wildcard instead of extrapolating the eTag, but for completeness I’ve included it.
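Assuming the createDocSet function and a generic update helper like the ones described above (here hypothetically named updateListItem), the calling function might chain them like this, trimming the quotes from the creation response’s eTag along the way. The names and the eTag shape are assumptions for illustration.

```javascript
// Create the Document Set, then update its metadata; returns a promise
function provisionDocSet(webUrl, listName, folderName, contentTypeId, props) {
  return createDocSet(webUrl, listName, folderName, contentTypeId)
    .then(function (folder) {
      // eTag typically arrives quoted, e.g. "\"2\""; trim to the raw value,
      // or fall back to the '*' wildcard if it isn't present
      var eTag = (folder.__metadata && folder.__metadata.etag || '*').replace(/"/g, '');
      return updateListItem(webUrl, listName, folder.Id,
        'SP.Data.' + listName + 'ListItem', props, eTag);
    });
}
```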