Apple Music, iTunes Match, iCloud Music, yada, yada, yada

Apple’s recent roll out of Apple Music has generated a lot of confusion among consumers. I’ve lost count of the number of blog posts I’ve read that attempt to explain the nuances between Apple Music, iTunes Match, and iCloud Music Library. The following article is a good read…

http://www.imore.com/itunes-match

So, why am I adding to the list of blog posts on this subject? More for my own sanity, but also to provide my own perception of these Apple services and what they mean to consumers.

Apple Music

If you’ve used Spotify, Beats Music, Xbox Music, or any one of the myriad of streaming music services, Apple Music should come as no surprise. AM is a streaming music service that allows consumers to stream any music available in Apple’s iTunes music store to Apple devices. Apple will soon offer the service to Android consumers.

Similar to its competitors, AM is available to consumers for a monthly fee: $9.99 (in the US; other countries have different prices) for an individual account and $14.99 for a family plan.

The idea of AM is that you can listen to music anywhere you have an Internet connection, download music for offline listening, and create playlists in the iOS Music app and within iTunes on the Mac. Siri understands requests to play a particular genre, artist, track, album, or year of music, which my children love in the car. What makes AM appealing (to me at least) is the ability to listen to AM songs alongside my purchased songs in the same playlist on all my Apple devices. AM on its own makes perfect sense, but it’s the existence of other Apple music services that’s causing some confusion. Read on…

iTunes Match

iTunes Match was revolutionary when Apple first introduced it. Previously, service providers like Google offered the ability to upload your music to cloud servers to allow streaming on the go. The majority of us settled for carrying around iPods with large storage, or a subset of our music library on whatever storage we had available on a portable device. I remember the painful experience of keeping copies of my music library on multiple Apple devices and Windows PCs so I could listen to the same music in the office, at home, and on the bus. Google Music required I use its HTML5 player, and I didn’t like that. iTunes Match changed everything for me.

iTunes Match is a service costing $25 per year, allowing iTunes to scan your media library (on a Mac or PC) and match songs found in the iTunes Music Store. Matched songs are then available to consumers to play on any iOS device and within iTunes on the Mac and PC as long as you have an Internet connection. Even though an original song exists on your Mac at home, you can play the same song on your office Windows PC (using iTunes) or on your iPhone via the music app. What about those eclectic songs that you own that do not reside in the iTunes Music Store? Simple, iTunes uploads them to some private space in Apple’s cloud so you can download and play them on other devices.

iTunes Match is different to Apple Music in many ways, but predominantly:

  • iTunes Match only matches music you already own, whereas AM allows you access to all music in the iTunes Music store.
  • iTunes and the iOS music app download matched songs in full before allowing you to play them (at least that was the way it was before iTunes 12.2 and iOS 8.4).
  • Apple Music streams songs in the same way that Pandora and Spotify do.
  • Apple Music tracks are DRM encoded, iTunes Matched songs are not.

Now that Apple Music is here, do I need iTunes Match? This is a question asked by many, and the answer isn’t simply yes or no. It really depends on whether or not you intend to own your own music. If you’re paying for AM each month and do not plan on cancelling the service any time soon, there is no good reason to pay the yearly iTunes Match fee in addition. As long as you keep up with your AM subscription, all songs in your music library will remain there as long as they’re available in the iTunes Music Store. I cannot say for certain, but I have to believe that when my iTunes Match subscription ends, all those previously “matched” songs will either remain as such, or convert to “Apple Music” songs. We’ll find out soon as AM subscriptions gain longevity and iTunes Match subscriptions lapse.

BTW, it’s worth mentioning that signing up for Apple Music will not cancel your iTunes Match subscription. I had to cancel mine manually via my account page – see instructions here.

If you’re an iTunes Match subscriber who has decided to take advantage of Apple Music’s three-month free trial, and you’re not sure you plan on subscribing to AM full time, I recommend you do not let your iTunes Match subscription lapse. Assuming you have your original non-DRM files downloaded somewhere in iTunes, you can always go back to the yearly $25 model and continue to match the songs you own. The AM songs you don’t own will stop working because of Apple Music DRM. As long as your Match subscription is active, you should be able to continue listening to the music you do own on all your Apple devices. On the other hand, cancelling both AM and iTunes Match means you’ll lose all cloud music access and can only play songs you have stored locally and DRM-free.

Now, if you’re diligent (read: anal), like me, you’ll most likely have a tidy backup of all your original ripped music (from CDs you own, right?). In the event that both your Apple Music and iTunes Match subscriptions lapse, you should be able to go back to the originals.

Some took the brave step of deleting their originals after signing up with iTunes Match. Others haven’t paid much attention, and their library consists of both locally downloaded songs and cloud-only matched songs (especially if they haven’t played them recently). Those with multiple devices may have local music on one device and not another – it’s hard to tell. My recommendation is to back up any and all locally downloaded songs, via iTunes, while you’re still subscribed to iTunes Match. This way you’ll at least have the music you own, DRM-free, somewhere. Those AM tracks that you never purchased will disappear (and will not play even if you have a local DRM copy available).

iCloud Music Library

I left the best until last… if you’ve signed up to use Apple Music, iTunes probably gave you (or should have given you) the option to switch over to iCloud Music Library. Here is another helpful link. What’s this, a third service from Apple for music? Sort of…

The best way to get this mishmash of Apple music services straight in your head is to consider Apple Music and iTunes Match as “services” and iCloud Music Library as a freebie add-on for AM subscribers. After all, AM and iTunes Match are paid subscription services in their own right, which you can opt in to or out of. iCloud Music Library is an extra feature available to those signed up with Apple Music.

iCloud ML is exactly what the name says it is – your iTunes Music Library stored in the cloud. Long-time users of iTunes Match are probably screaming at this blog post, saying that is what they’ve been using all along, and they’re partially right. iCloud ML aims to replicate your iTunes Music Library across all Apple devices and include matched, non-matched, and Apple Music songs in all playlists. iTunes Match would not sync playlists that contained non-matched and non-uploaded songs. Personally, I think Apple took this feature from iTunes Match and made it available to AM subscribers so they could cancel iTunes Match without losing non-matched local music in the cloud.

Unfortunately, iCloud ML has gotten bad press since the roll out of Apple Music. If you look at the slew of complaints since the roll out of iOS 8.4 and iTunes 12.2, most are about iCloud ML and not the actual AM service. From what I can tell, people migrating from other music streaming services to AM carried on without issue (apart from recreating their favorite playlists in the new AM service). However, those that manage their own matched music libraries in iTunes were very upset when iCloud ML started monkeying with their music libraries. There were lots of complaints of missing songs, missing artwork, incorrect song metadata, changes to playlists not replicating to all devices… the list goes on. Apple recently pushed an update to iTunes – 12.2.1 – to address an issue in 12.2 where matched songs were classified as DRM-protected Apple Music songs.

To clarify – you do not need to switch over to iCloud Music Library if you’re an Apple Music subscriber. In fact, if you’re distrustful of Apple’s recent roll out, then I’d recommend not opting into iCloud ML. In this case, you’ll be able to listen to AM songs on all your devices and see AM playlists, but your local music will remain local. I still have time left on my iTunes Match subscription, so I cannot determine whether opting out of iCloud ML will eradicate my “matched” tracks once AM is on without an active iTunes Match subscription.

I took the plunge with iCloud ML and made sure I had a backup of my original MP3 and AAC files. I came from an iTunes Match subscription, for which I cancelled automated billing shortly after taking the plunge with AM. I’m curious to see what will happen to my “matched” songs once my iTunes Match subscription lapses – hopefully they’ll stay DRM-free, but I’m not too bothered, knowing I have my originals and plan on staying with AM for the immediate future.

Something I found out recently, and I’m not sure if Apple is addressing it, is that iCloud ML and Apple Music appear to impose request throttling. In non-techie terms – iTunes and iOS can make only a finite number of calls to the AM and iCloud ML servers within a period of time (I’m not sure how many requests or the window of time). This, like most web services, prevents denial-of-service attacks by malicious applications flooding a service with too many requests. The upshot is that I hit the throttle limit easily when making mass changes to my iTunes Music Library with iCloud ML enabled. I spent an hour “loving” tracks in my library so that AM would produce better curated playlists and recommendations, after which I’d lose connectivity to AM and iCloud ML. It was quite frustrating.

Summary

To summarize… Apple Music and iTunes Match are two different Apple cloud services. You do not necessarily need iTunes Match if you’re an AM subscriber, but you might want to go back to Match if you cancel your AM subscription – in which case, keep backups of your original and matched DRM-free downloads.

iCloud Music Library is a bit of a cluster-**** at the moment, and it appears that Apple is making strides to fix it. iCloud ML works for me (after hours of tinkering), but if you’re proud of the many hours invested in your iTunes Music Library, you may not want to let iCloud ML run rampant over it just yet.

SharePoint PowerShell Scripts

It’s time to give my blog a fresh injection of content….

I’ve spent the last several months working on a lot of PowerShell scripts for SharePoint (2010, 2013, and SharePoint Online). I figured it was about time that I put sanitized copies of my scripts up on my blog site for all to read.

I’ve added a new section on my blog, aptly named “Scripts”, which you can access via the top level navigation of this site. From there, I present an ongoing set of links to each script I develop and publish. At this time, the following is a list of the client-side scripts I’ve uploaded…

To access my SharePoint 2013 farm provisioning scripts, see here on GitHub.

Bulk Check In for SP2010 Files with No Version Info

SharePoint best practice is to disable “require check in” on document libraries before doing a large bulk import of documents. I received an email from a customer last week who had not followed this best practice and had over 35,000 documents checked out, which no one but him could see.

Unlike checked-out documents with previous checked-in versions, a newly uploaded document is not visible to anyone but the person who uploaded the file, even administrators. Fortunately, SharePoint provides a way for an admin to take ownership of these documents via “Manage checked out documents” in the library settings. However, when dealing with a document count that exceeds the default threshold of 10,000, SharePoint returns an error. Temporarily increasing the threshold gets around the error, but then the user interface becomes intolerably slow.
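
If you do need to bump the threshold temporarily, the following is a rough sketch of how I would do it from server-side PowerShell (the web application URL is a placeholder, and you should restore the original value when you’re done):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;

# Placeholder web application URL.
$wa = Get-SPWebApplication http://mysitecollection/;
$originalThreshold = $wa.MaxItemsPerThrottledOperation;

# Raise the threshold above the number of checked-out documents.
$wa.MaxItemsPerThrottledOperation = 50000;
$wa.Update();

# ...take ownership and check in the documents (see the script below)...

# Restore the original threshold when finished.
$wa.MaxItemsPerThrottledOperation = $originalThreshold;
$wa.Update();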

Even after taking ownership, there’s still the task of bulk check-in, which is again a slow process via the UI for document libraries with large item counts. What I wanted was a PowerShell script to both take ownership of the documents and then check them in. Below is the server-side script I created….

Note: I had to use server-side and not client-side PowerShell because CSOM does not expose the checked-out files collection. The script was tested on SharePoint 2010.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;

function BulkCheckIn {
    param([Microsoft.SharePoint.SPFolder]$folder);
    # Check in every file in this folder that is currently checked out.
    $folder.Files | ? { $_.CheckOutStatus -ine "None" } | % {
        Write-Host $_.ServerRelativeUrl;
        $_.CheckIn("Initial check in", [Microsoft.SharePoint.SPCheckinType]::MajorCheckIn);
    }
    # Recurse into subfolders.
    $folder.SubFolders | % { BulkCheckIn -folder $_; }
}

$site = Get-SPSite http://mysitecollection/;
$web = $site.OpenWeb('subsite/subsite');
$lib = $web.Lists['Documents'];
# Take ownership of every checked-out file that has no checked-in version.
$lib.CheckedOutFiles | % {
    Write-Host "Processing $($_.Url)";
    $_.TakeOverCheckOut();
}
BulkCheckIn -folder $lib.RootFolder;

SharePoint 2013 List Item Save Timeout

I received a report from a colleague today that he was getting timeout errors after clicking the save button on a list item edit form. Initial testing of web head performance showed no issues, and the ULS logs reported a timeout only during the save.

Taking a deeper look, we established that the list in question had a custom 2013 workflow attached – aha! At the same time, OOTB publishing workflows were taking longer than usual to complete.

Next, we checked the Workflow Manager log in the Event Viewer on each of the machines in our Workflow Manager farm. Lo and behold, we found a critical issue with connecting to the Service Bus on one of the servers. It turned out that all three Service Bus services were stopped. Checking the other two servers in the Workflow Manager quorum, they too showed stopped SB services.

My guess is that IT had rolled out a patch for Service Bus and not checked that the services restarted on each affected server. I believe Microsoft recently released a patch for Service Bus, which may or may not require a server reboot, and which could account for the services having stopped and not restarted after a reboot.

So, there you have it: if you come across these symptoms, check your Workflow Manager and Service Bus services.
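
For the record, here’s a quick sketch of the checks I now run from the Workflow Manager PowerShell console on one of the WFM hosts (cmdlet availability depends on your Service Bus and Workflow Manager versions, so treat this as a starting point):

# Shows each Service Bus host and whether its services are running or stopped.
Get-SBFarmStatus;

# Shows the status of the Workflow Manager services across the farm.
Get-WFFarmStatus;

# If a host shows stopped services, running Start-SBHost on that server should
# bring Service Bus back online.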

Install Workflow Manager with PowerShell

By default, a SharePoint 2013 on-premises installation includes only the legacy SharePoint 2010 workflow engine. To take advantage of 2013 workflows you must install Workflow Manager. You can achieve this by installing Workflow Manager with the Web Platform Installer and then configuring it with the Workflow Manager Configuration Wizard. I have recently cobbled together my own SharePoint 2013 configuration scripts to set up SharePoint 2013 on-prem – soup to nuts – with PowerShell. In keeping with the same theme, this blog post details the script required to install and configure Workflow Manager.

First things first, Workflow Manager is a separate installation from SharePoint, available on the web. I used the latest version of the Web Platform Installer to pull down the files. I had to install WPI first.

With WPI installed, open an elevated console and run the following command:

webpicmd /offline /Products:WorkflowManagerRefresh /Path:c:\WorkflowManagerFiles

We now have the Workflow Manager installation files in c:\WorkflowManagerFiles. Installing WFM is a simple case of running the following command:

WebpiCmd.exe /Install /Products:WorkflowManagerRefresh /XML:c:\WorkflowManagerFiles\feeds\latest\webproductlist.xml

Note: Make sure you install the ‘Refresh’ of WFM if installing on Windows Server 2012 R2. Installing the original 1.0 version causes issues when registering WFM with SharePoint 2013. I strongly recommend patching SharePoint 2013 to SP1.

After completing the installation, Workflow Manager launches the configuration wizard. You can use the wizard to configure WFM if you like and need do no more, but if you want to see the juicy script equivalent of what the wizard does, then read on.

The following is my complete script to configure a WFM farm – it’s a simple configuration. For those interested, Spencer Harbar has a series of good posts on configuring load balanced WFM: http://www.harbar.net/articles/wfm2.aspx.

function WFM-Configure {
    # Create new SB Farm
    $SBCertificateAutoGenerationKey = ConvertTo-SecureString -AsPlainText  -Force  -String $passphrase;
    $WFCertAutoGenerationKey = ConvertTo-SecureString -AsPlainText  -Force  -String $passphrase;
    $managementCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFMSB_Management;Integrated Security=True;Encrypt=False';
    $gatewayCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFMSB_Gateway;Integrated Security=True;Encrypt=False';
    $messageContCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFMSB_MessageContainer;Integrated Security=True;Encrypt=False';
    Write-Host -ForegroundColor White ' - Creating new Service Bus farm...' -NoNewline;
    try {
        $sbFarm = Get-SBFarm -SBFarmDBConnectionString $managementCS;
        Write-Host -ForegroundColor White 'Already Exists';
    }
    catch {
        New-SBFarm -SBFarmDBConnectionString $managementCS -InternalPortRangeStart 9000 -TcpPort 9354 -MessageBrokerPort 9356 -RunAsAccount $spServiceAcctName `
            -AdminGroup 'BUILTIN\Administrators' -GatewayDBConnectionString $gatewayCS -CertificateAutoGenerationKey $SBCertificateAutoGenerationKey `
            -MessageContainerDBConnectionString $messageContCS;
        Write-Host -ForegroundColor White 'Done';
    }
    # Create new WF Farm
    Write-Host -ForegroundColor white " - Creating new Workflow Farm..." -NoNewline;
    $wfManagementCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFM_Management;Integrated Security=True;Encrypt=False';
    $wfInstanceCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFM_InstanceManagement;Integrated Security=True;Encrypt=False';
    $wfResourceCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFM_ResourceManagement;Integrated Security=True;Encrypt=False';
    try {
        $wfFarm = Get-WFFarm -WFFarmDBConnectionString $wfManagementCS;
        Write-Host -ForegroundColor White 'Already Exists';
    }
    catch {
        New-WFFarm -WFFarmDBConnectionString $wfManagementCS -RunAsAccount $spServiceAcctName -AdminGroup 'BUILTIN\Administrators' -HttpsPort 12290 -HttpPort 12291 `
            -InstanceDBConnectionString $wfInstanceCS -ResourceDBConnectionString $wfResourceCS -CertificateAutoGenerationKey $WFCertAutoGenerationKey;
        Write-Host -ForegroundColor white 'Done';
    }
    # Add SB Host
    Write-Host -ForegroundColor white ' - Adding Service Bus host...' -NoNewline;
    try {
        $SBRunAsPassword = ConvertTo-SecureString -AsPlainText  -Force  -String $spServiceAcctPwd;
        Add-SBHost -SBFarmDBConnectionString $managementCS -RunAsPassword $SBRunAsPassword `
            -EnableFirewallRules $true -CertificateAutoGenerationKey $SBCertificateAutoGenerationKey;
        Write-Host -ForegroundColor white 'Done';
    } 
    catch {
        Write-Host -ForegroundColor white 'Already Exists';
    }
    Write-Host -ForegroundColor white ' - Creating Workflow Default Namespace...' -NoNewline;
    $sbNamespace = $dbPrefix + '-WorkflowNamespace';
    try {
        $defaultNS = Get-SBNamespace -Name $sbNamespace -ErrorAction SilentlyContinue;
        Write-Host -ForegroundColor white 'Already Exists';
    }
    catch {
        try {
            # Create new SB Namespace
            $currentUser = $env:userdomain + '\' + $env:username;
            New-SBNamespace -Name $sbNamespace -AddressingScheme 'Path' -ManageUsers $spServiceAcctName,$spAdminAcctName,$currentUser;
            Start-Sleep -s 90
            Write-Host -ForegroundColor white 'Done';
        }
        catch [system.InvalidOperationException] {
            throw;
        }
    }
    # Get SB Client Configuration
    $SBClientConfiguration = Get-SBClientConfiguration -Namespaces $sbNamespace;
    # Add WF Host
    try {
        $WFRunAsPassword = ConvertTo-SecureString -AsPlainText  -Force  -String $spServiceAcctPwd;
        Write-Host -ForegroundColor White ' - Adding Workflow Host...' -NoNewline;
        Add-WFHost -WFFarmDBConnectionString $wfManagementCS `
        -RunAsPassword $WFRunAsPassword -EnableFirewallRules $true -EnableHttpPort `
        -SBClientConfiguration $SBClientConfiguration -CertificateAutoGenerationKey $WFCertAutoGenerationKey;
        Write-Host -ForegroundColor White 'Done';
    }
    catch {
        Write-Host -ForegroundColor white "Already Exists";
    }
}

Let’s break the script down a little. I should mention that cutting and pasting my script into your PowerShell window won’t work at first, because the script assumes the existence of a few predefined variables, as follows (example definitions appear after the list):

  • $dbServer – SQL alias for my SQL server (best to use an alias, not the server name)
  • $dbPrefix – Prefix for all my database names
  • $spServiceAcctName – Name of my service account (used for Workflow Manager)
  • $spServiceAcctPwd – Password for my service account
  • $spAdminAcctName – Admin account for SharePoint
  • $passphrase – Passphrase used by WFM when joining new hosts to the farm
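
For illustration, the variable definitions might look something like this (all values below are placeholders – substitute your own environment details):

# Placeholder values only.
$dbServer          = 'SPSQL';                # SQL alias
$dbPrefix          = 'SP2013';               # database name prefix
$spServiceAcctName = 'DOMAIN\svc-workflow';  # WFM/Service Bus run-as account
$spServiceAcctPwd  = 'P@ssw0rd1';            # run-as account password
$spAdminAcctName   = 'DOMAIN\spadmin';       # SharePoint admin account
$passphrase        = 'FarmPassphrase1';      # passphrase for joining hosts to the farm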

WFM consists of two parts – the Service Bus and the Workflow Engine. To start with, my script creates a new Service Bus Farm…

$SBCertificateAutoGenerationKey = ConvertTo-SecureString -AsPlainText  -Force  -String $passphrase;
    $WFCertAutoGenerationKey = ConvertTo-SecureString -AsPlainText  -Force  -String $passphrase;
    $managementCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFMSB_Management;Integrated Security=True;Encrypt=False';
    $gatewayCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFMSB_Gateway;Integrated Security=True;Encrypt=False';
    $messageContCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFMSB_MessageContainer;Integrated Security=True;Encrypt=False';
    Write-Host -ForegroundColor White ' - Creating new Service Bus farm...' -NoNewline;
    try {
        $sbFarm = Get-SBFarm -SBFarmDBConnectionString $managementCS;
        Write-Host -ForegroundColor White 'Already Exists';
    }
    catch {
        New-SBFarm -SBFarmDBConnectionString $managementCS -InternalPortRangeStart 9000 -TcpPort 9354 -MessageBrokerPort 9356 -RunAsAccount $spServiceAcctName `
            -AdminGroup 'BUILTIN\Administrators' -GatewayDBConnectionString $gatewayCS -CertificateAutoGenerationKey $SBCertificateAutoGenerationKey `
            -MessageContainerDBConnectionString $messageContCS;
        Write-Host -ForegroundColor White 'Done';
    }

Notice that my script does some checking and error control so that I can run it multiple times without errors if the farm is already configured. OK, now on to the Workflow Farm…

Write-Host -ForegroundColor white " - Creating new Workflow Farm..." -NoNewline;
    $wfManagementCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFM_Management;Integrated Security=True;Encrypt=False';
    $wfInstanceCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFM_InstanceManagement;Integrated Security=True;Encrypt=False';
    $wfResourceCS = 'Data Source=' + $dbServer + ';Initial Catalog=' + $dbPrefix + '_WFM_ResourceManagement;Integrated Security=True;Encrypt=False';
    try {
        $wfFarm = Get-WFFarm -WFFarmDBConnectionString $wfManagementCS;
        Write-Host -ForegroundColor White 'Already Exists';
    }
    catch {
        New-WFFarm -WFFarmDBConnectionString $wfManagementCS -RunAsAccount $spServiceAcctName -AdminGroup 'BUILTIN\Administrators' -HttpsPort 12290 -HttpPort 12291 `
            -InstanceDBConnectionString $wfInstanceCS -ResourceDBConnectionString $wfResourceCS -CertificateAutoGenerationKey $WFCertAutoGenerationKey;
        Write-Host -ForegroundColor white 'Done';
    }

With both Service Bus and Workflow farms created, it’s time to create a Service Bus host and a Workflow Manager host. WFM adopts a similar architecture to ADFS and SharePoint in that a farm consists of one or many hosts. The more hosts you add, the better the availability of the service when combined with network load balancing.

Here’s the part of the script that creates a new Service Bus host…

Write-Host -ForegroundColor white ' - Adding Service Bus host...' -NoNewline;
    try {
        $SBRunAsPassword = ConvertTo-SecureString -AsPlainText  -Force  -String $spServiceAcctPwd;
        Add-SBHost -SBFarmDBConnectionString $managementCS -RunAsPassword $SBRunAsPassword `
            -EnableFirewallRules $true -CertificateAutoGenerationKey $SBCertificateAutoGenerationKey;
        Write-Host -ForegroundColor white 'Done';
    } 
    catch {
        Write-Host -ForegroundColor white 'Already Exists';
    }
    Write-Host -ForegroundColor white ' - Creating Workflow Default Namespace...' -NoNewline;
    $sbNamespace = $dbPrefix + '-WorkflowNamespace';
    try {
        $defaultNS = Get-SBNamespace -Name $sbNamespace -ErrorAction SilentlyContinue;
        Write-Host -ForegroundColor white 'Already Exists';
    }
    catch {
        try {
            # Create new SB Namespace
            $currentUser = $env:userdomain + '\' + $env:username;
            New-SBNamespace -Name $sbNamespace -AddressingScheme 'Path' -ManageUsers $spServiceAcctName,$spAdminAcctName,$currentUser;
            Start-Sleep -s 90
            Write-Host -ForegroundColor white 'Done';
        }
        catch [system.InvalidOperationException] {
            throw;
        }
    }

…and finally, we create the Workflow Manager host…

$SBClientConfiguration = Get-SBClientConfiguration -Namespaces $sbNamespace;
    # Add WF Host
    try {
        $WFRunAsPassword = ConvertTo-SecureString -AsPlainText  -Force  -String $spServiceAcctPwd;
        Write-Host -ForegroundColor White ' - Adding Workflow Host...' -NoNewline;
        Add-WFHost -WFFarmDBConnectionString $wfManagementCS `
        -RunAsPassword $WFRunAsPassword -EnableFirewallRules $true -EnableHttpPort `
        -SBClientConfiguration $SBClientConfiguration -CertificateAutoGenerationKey $WFCertAutoGenerationKey;
        Write-Host -ForegroundColor White 'Done';
    }
    catch {
        Write-Host -ForegroundColor white "Already Exists";
    }

At this point, we’ve completed configuration of Workflow Manager, but we’re not quite finished yet. We need to tell SharePoint about the existence of the WFM farm, which involves installing the WFM client on each SharePoint server (you don’t need to install the client on a server where you installed WFM itself). Ideally, your WFM farm is independent of SharePoint (servers and databases), but for test purposes you can install the whole lot on one server.
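
If you’re scripting the client installation too, something like the following should work with WebPI. I believe the product ID is WorkflowClient, but treat that as an assumption and verify it against the current WebPI feed:

# Run on each SharePoint server that is not itself a WFM host.
# 'WorkflowClient' is the WebPI product ID as I understand it - confirm with
# 'webpicmd /List /ListOption:Available' before relying on it.
webpicmd /Install /Products:WorkflowClient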

Let’s make sure WFM is working before we configure SharePoint. Open a browser to the following location…

http://WFM-server-name:12291

You should see some XML returned from the service (you may need to run IE as admin for this to work).

<ScopeInfo xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/workflow/2012/xaml/activities">
  <DefaultWorkflowConfiguration />
  <Description>Root Scope</Description>
  <LastModified>2014-05-12T23:17:49.47</LastModified>
  <LastRevised>2014-05-12T23:17:49.47</LastRevised>
  <Path>/</Path>
  <SecurityConfigurations>
    <ScopedSecurityConfiguration i:type="WindowsSecurityConfiguration">
      <Name>Microsoft.Workflow.Management.Security.WindowsSecurityConfiguration</Name>
      <WorkflowAdminGroupName>BUILTIN\Administrators</WorkflowAdminGroupName>
    </ScopedSecurityConfiguration>
  </SecurityConfigurations>
  <Status>Active</Status>
</ScopeInfo>

Our last task is to tell SharePoint 2013 about our new Workflow farm, which we accomplish with the following cmdlet (run it once from one of the SharePoint servers):

Register-SPWorkflowService -SPSite 'http://web-application-url' -WorkflowHostUri 'http://WFM-server-FQDN:12291' -AllowOAuthHttp;

All being well, you should have a Workflow Manager proxy listed under Manage Service Applications in Central Administration. Note: this proxy gets created whether the Register-SPWorkflowService cmdlet succeeds or fails, so remember to remove the proxy after an error. Selecting the proxy and clicking the Manage icon in the ribbon should give you a page that indicates SharePoint is connected to the WFM farm. You’re good to go and create 2013 workflows.

Some things to note…

My configuration uses HTTP between SharePoint and WFM. In a production scenario, I recommend using HTTPS, in which case you need to export the auto-generated certificate (from https://WFM-server-FQDN:12290) and import it into SharePoint Central Admin under Security -> Manage Trust. When using SSL, run the same Register-SPWorkflowService cmdlet, but change the WFM location to https://WFM-server-FQDN:12290 and drop the -AllowOAuthHttp switch.
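
If you’d rather script the certificate trust than click through Central Admin, a minimal sketch looks like this (the .cer path is a placeholder for the certificate you exported from the WFM HTTPS endpoint):

# Import the exported WFM certificate into SharePoint's trusted root authorities.
# C:\Certs\WFMRoot.cer is a placeholder path.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2('C:\Certs\WFMRoot.cer');
New-SPTrustedRootAuthority -Name 'Workflow Manager Root' -Certificate $cert;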

Some organizations rely heavily on workflow. As with SharePoint, ADFS, and other farm-based services, it’s a good idea to configure WFM with multiple hosts when high availability is important.

Filter Document Lib to Last Published Version

My customer brought up an interesting requirement to filter their document library to show just the last approved version of each document when content approval and major/minor versioning are enabled.

Any unpublished document – that is, a document where the major version is 0 – will not show up in the filtered view. Any document that has a major version greater than 0 and is in draft or pending status – e.g. a document at version 1.1 – will only show a link to the last published version.

Turns out the solution was quite easy and involved just adding some query string parameters…

?IncludeVersions=TRUE&FilterField1=_ModerationStatus&FilterValue1=0&FilterField2=_IsCurrentVersion&FilterValue2=1

The IncludeVersions parameter instructs the list view to show all versions. Then it’s a simple case of filtering to the most current version of each item that has an approved moderation status (_ModerationStatus of 0).
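
For example, appending the parameters to the library’s standard view URL (the site and library names below are placeholders) produces a view containing only the last approved major version of each document:

http://mysite/Shared%20Documents/Forms/AllItems.aspx?IncludeVersions=TRUE&FilterField1=_ModerationStatus&FilterValue1=0&FilterField2=_IsCurrentVersion&FilterValue2=1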

SharePoint Site Pages, What Are They?

SharePoint Foundation introduced Site Pages. Site Pages are pages created, edited, and customized by end users.  Site Pages are different to Application Pages, which have been around since WSS 3, live in the SharePoint filesystem (hive), and are responsible for back-end functionality (such as site settings etc.).

Site Pages are either un-customized (ghosted) or customized (un-ghosted). The state of a Site Page will determine where the page content resides – on the file system, in the content database or both, and this can sometimes be the topic of confusion.

Un-customized Site Pages

An un-customized (or ghosted) Site Page is one that resides on the file system. Typically, these files live in the TEMPLATE\SiteTemplates folder, or some other location within the TEMPLATE folder, in the SharePoint file system. An un-customized page is sometimes referred to as a Page Template.

An un-customized page also maintains a reference in the site collection content database. This reference points to the location of the page in the file system.

An un-customized Site Page may contain inline code because SharePoint assumes a developer, with console access to the SharePoint server, has vetted any inline code or script.

Customized Site Pages

A customized (un-ghosted) Site Page is one that contains edits made by end users or designers using SharePoint Designer, the SharePoint API, or the SharePoint UI. The edits reside in the content database for the SharePoint site collection.

Whereas an un-customized page maintains a reference to the template on the filesystem in the content database, a customized page retains both page content (the customized page content) as well as the reference to the original template.

Customized Site Pages may NOT include inline code because edits are not controlled by administrators with access to the server console. SharePoint controls this behavior by running all customized page content through a Page Parser, which strips out any inline code.
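
If you’re curious whether a given Site Page is customized, a quick way to check from server-side PowerShell is the SPFile.CustomizedPageStatus property. A rough sketch (the site and page URLs are placeholders):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
$web = Get-SPWeb http://mysitecollection/subsite;
$file = $web.GetFile('SitePages/Home.aspx');
# Returns None, Uncustomized (ghosted), or Customized (un-ghosted).
$file.CustomizedPageStatus;
# To discard customizations and re-ghost the page back to its file system template:
# $file.RevertContentStream();
$web.Dispose();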

Sandbox Solution Site Pages

Sandbox solutions do not allow deployment of files to the SharePoint file system; therefore, any Site Page deployed as a module in a sandbox solution deploys ONLY to the site collection content database. Users may customize these pages too, but there is no reference in the content database to a location on the file system.

Page Parsing

SharePoint parses ASPX (both application and site page) content in one of two modes, depending on the page – direct, or safe-mode. The first time a user requests an Application or Un-customized Site Page, SharePoint parses the page content in direct mode. In direct mode, the page content is parsed and compiled and placed into memory cache for faster subsequent requests for the same page.

Customized Site Pages reside in the content database and undergo a stricter parsing method, called safe-mode parsing. In safe-mode, the page content may not contain any inline server code, user and server controls must be registered as safe in the application web.config, and the page is not compiled. Safe-mode pages do not live in memory cache, so their use is a performance consideration.

Note: It is possible to override the behavior of the safe-mode parser by adding <PageParserPath> elements to the <SafeMode> element in the web.config, which enables you to select certain Site Pages that may contain inline server code. However, this is not recommended because it compromises the security of your site collection by allowing end users to include potentially dangerous code in page content.

Yammer Integrated with Office 365

Yammer has become a popular social network for the workplace. Yammer provides a discrete network for organizations looking to engage in social network activity without giving employee participants free rein to network with individuals outside their organization, such as with Twitter and Facebook.

Many organizations have moved their SharePoint farms to Office 365 – SharePoint Online. The cloud provides an attractive alternative to self-hosting expensive SharePoint infrastructure on premises. The latest wave of SharePoint Online – wave 15 – includes the Newsfeed and social networking capabilities, consistent with on-premises SharePoint 2013.

The baked-in social capabilities of SharePoint 2013/Wave 15 are pretty awesome, and with the proliferation of the SharePoint Newsfeed app for Windows Phone, Android, and iOS, SharePoint social networking is becoming as ubiquitous as Facebook and Twitter in the mobile-sphere. However, Microsoft has not ignored those organizations that went the Yammer route and use SharePoint Online, as Yammer now integrates with SharePoint Online.

If you log into your SharePoint Online administration portal within your Office 365 tenant and click the settings link, you should see the Yammer integration option at the top of the page. Toggling the Enterprise Social Collaboration setting from Newsfeed (the default) to Yammer takes about 30 minutes to take effect, after which time users of SharePoint Online see the Newsfeed link replaced with a link to Yammer in the top navigation.

Presently, the integration with Yammer is very loose. The Yammer link in the top navigation redirects users to the www.yammer.com home page, where users can sign-in. Your Organization’s Yammer feed is not yet integrated into your SharePoint Online My Site, and the default Newsfeed remains in place. However, this is just the first phase of roll-out, and Microsoft promises single-sign-on and Yammer feeds integrated into the SharePoint Online user interface in the coming months.

For those that cannot wait, there is a free app that will render Yammer feeds within the SharePoint Online UI, which administrators can download and install from the SharePoint App Store.

My organization – Planet Technologies – uses Yammer (we’re a social bunch), so I am quite excited for the next phase of Yammer integration, which will bring Yammer and SharePoint Online together seamlessly.

SharePoint Related Fields

I had an interesting request from a client I was working with this week. They wanted me to create a list for their store tracking business, which consisted of a large number of columns. No big deal! That is, until my client indicated that they wanted to associate notes with each column for each list item entered into the list.


[Image: the standard SharePoint new/edit form with a “notes” link beside each field]

The image attached to this post shows the standard SharePoint New and Edit form for a list item. Ignore the fact that this is SharePoint 2007, because my solution works just as well in SharePoint 2010 and 2013. Notice that each field has a notes link, which, when clicked, displays a text box for adding additional notes, as shown.

You may be thinking “just add additional multiple-lines-of-text columns to the list”, and I considered this approach, but my client wanted the ability to add a new column to their list without having to remember to add an additional column for the notes. Further, they wanted a way to associate the notes with the column automatically.

My next instinct was to use custom fields and custom field controls, which turned out to be the core of my solution. Custom fields and field controls can be a pain at times and do not always behave as predicted, but fortunately my client only used single line of text, yes/no, lookup, choice, and date-time columns, so I was able to derive custom versions of these controls to provide the behavior I was looking to achieve.

The standard SPField type, from which all custom fields derive, contains a property called “RelatedField”. This property contains the title of a field related to the current column. The SPField class also includes event handlers for the added and deleting events, which I used to automatically create notes fields whenever a new column is added to the list.

Let’s start with one of my custom SPField classes, which derives from SPFieldText to add custom logic to the stock single-line-of-text field type:

 
    public class CustomFieldText : SPFieldText
    {
        #region Fields

        private readonly SPFieldCollection _fields;

        #endregion Fields

        #region Construction

        public CustomFieldText(SPFieldCollection fields, string fieldName)
            : base(fields, fieldName)
        {
            _fields = fields;
        }

        public CustomFieldText(SPFieldCollection fields, string typeName, string displayName)
            : base(fields, typeName, displayName)
        {
            _fields = fields;
        }

        #endregion Construction

        #region Properties

        public override BaseFieldControl FieldRenderingControl
        {
            [SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
            get
            {
                BaseFieldControl fieldControl = new CustomFieldTextControl();
                fieldControl.FieldName = InternalName;
                return fieldControl;
            }
        }

        #endregion Properties

        #region Methods

        public override void OnAdded(SPAddFieldOptions op)
        {
            CustomFieldHelper.CreateSlaveField(this);
        }

        public override void OnDeleting()
        {
            CustomFieldHelper.DeleteSlaveField(_fields, this);
        }

        #endregion Methods
    }

Looking at the previous class, there really isn’t much to my implementation. My class implements the standard constructors for an SPField-derived class, overrides the FieldRenderingControl property because I wish to use my own control, and overrides the OnAdded and OnDeleting events, which enables me to detect when a column of my field type is created or deleted. The interesting logic exists in my helper class, as follows:

 
    static class CustomFieldHelper
    {
        private const string RenderFieldSuffix = "_Shaddow";

        public static void CreateSlaveField(SPField master)
        {
            if (null == master) throw new ArgumentNullException("master");
            // Create a shadow field to store the value we want to display
            // in list views.
            var list = master.ParentList;
            if (null == list) return;
            // We only need a shadow copy when associated with a list.
            var relatedFieldName = master.InternalName + RenderFieldSuffix;
            var relatedDisplayName = String.Format("{0} Notes", master.Title);
            var sb = new StringBuilder();
            sb.Append("<Field Type=\"Note\" ReadOnly=\"TRUE\" ");
            sb.AppendFormat("Name=\"{0}\" ", relatedFieldName);
            sb.AppendFormat("DisplayName=\"{0}\" ", relatedFieldName);
            sb.Append("Sortable=\"TRUE\" Filterable=\"TRUE\" ");
            sb.Append("EnableLookup=\"FALSE\" SourceID=\"http://schemas.microsoft.com/sharepoint/v3\">");
            sb.AppendFormat("<FieldRefs><FieldRef Name=\"{0}\" /></FieldRefs>", master.InternalName);
            sb.Append("<DisplayPattern><HTML><Column HTMLEncode=\"FALSE\"/></HTML></DisplayPattern>");
            sb.Append("</Field>");
            list.Fields.AddFieldAsXml(sb.ToString());
            var field = list.Fields[relatedFieldName];
            field.Title = relatedDisplayName;
            field.RelatedField = master.Title;
            field.Update(true);
        }

        public static void DeleteSlaveField(SPFieldCollection fields, SPField master)
        {
            if (null == fields) throw new ArgumentNullException("fields");
            if (null == master) throw new ArgumentNullException("master");
            var relatedFieldInternalName = master.InternalName + RenderFieldSuffix;
            if (!fields.ContainsField(relatedFieldInternalName)) return;
            var field = fields.GetFieldByInternalName(relatedFieldInternalName);
            field.ReadOnlyField = false;
            field.Hidden = false;
            field.Update();
            fields.Delete(relatedFieldInternalName);
        }

        public static void SaveValueToSlave(SPListItem item, string value, SPField master, bool callUpdate)
        {
            if (null == item) throw new ArgumentNullException("item");
            if (null == master) throw new ArgumentNullException("master");
            if (null == value) return;
            var relatedFieldInternalName = master.InternalName + RenderFieldSuffix;
            var list = item.ParentList;
            var field = !list.Fields.ContainsField(relatedFieldInternalName) ?
                list.Fields.Cast<SPField>().FirstOrDefault(f => f.RelatedField == master.Title) :
                list.Fields.GetFieldByInternalName(relatedFieldInternalName);
            if (null == field) return;
            item[field.Id] = value;
            if (callUpdate) item.SystemUpdate();
        }

        public static string GetValueFromSlave(SPListItem item, SPField master)
        {
            if (null == item) throw new ArgumentNullException("item");
            if (null == master) throw new ArgumentNullException("master");
            var relatedFieldInternalName = master.InternalName + RenderFieldSuffix;
            var list = item.ParentList;
            var field = !list.Fields.ContainsField(relatedFieldInternalName) ?
                list.Fields.Cast<SPField>().FirstOrDefault(f => f.RelatedField == master.Title) :
                list.Fields.GetFieldByInternalName(relatedFieldInternalName);
            if (null == field) return "";
            var obj = item[field.Id];
            return null == obj ? "" : obj.ToString();
        }

        public static Control GetNotesMarkUp(out TextBox notesCtrl, SPField field)
        {
            if (null == field) throw new ArgumentNullException("field");
            var ph = new PlaceHolder();
            // Does the shadow field have data?
            var ident = (null != SPContext.Current.ListItem)
                            ? GetValueFromSlave(SPContext.Current.ListItem, field)
                            : null;
            var jsStr = String.Format("document.getElementById('{0}').style.display='block';", field.Id);
            ph.Controls.Add(!String.IsNullOrEmpty(ident)
                                ? new LiteralControl(String.Format(
                                    "<a href='#' onclick=\"{0}\"><b>Notes</b> &bull;</a>", jsStr))
                                : new LiteralControl(String.Format("<a href='#' onclick=\"{0}\">Notes</a>", jsStr)));
            ph.Controls.Add(new LiteralControl(String.Format("<div id='{0}' style='display:none;'>", field.Id)));
            var table = new HtmlTable { Width = "100%" };
            ph.Controls.Add(table);
            ph.Controls.Add(new LiteralControl("</div>"));
            var row = new HtmlTableRow();
            var header = new HtmlTableCell();
            var headerText = new LiteralControl("<span class='ms-formlabel'><H3 class='ms-standardheader'>Enter notes below</H3></span>");
            header.Controls.Add(headerText);
            row.Controls.Add(header);
            table.Controls.Add(row);
            row = new HtmlTableRow();
            var cell = new HtmlTableCell();
            notesCtrl = new TextBox { TextMode = TextBoxMode.MultiLine, Rows = 6, Width = new Unit(100, UnitType.Percentage) };
            cell.Controls.Add(notesCtrl);
            row.Controls.Add(cell);
            table.Controls.Add(row);
            return ph;
        }
    }

In case you’re wondering why I didn’t just include my helper code in my custom field class, it’s because I created several custom field classes for my client and wanted to reuse the same code. Since I’m inheriting from SharePoint’s classes I cannot provide my own base class either, so a static helper class seemed like an easy approach.

The CreateSlaveField method is the most interesting part of the previously shown class. This method creates a new Note field and adds it to the list with which the master field is associated; it then populates the RelatedField property so I can find the association later.

The notes field is only created if the master field is associated with a list. In the case where a site owner creates a site column in a web, my method exits without creating the slave field, because it’s only pertinent to lists. Since list columns added from the web’s site columns gallery are new instances of the same field type, the OnAdded method is called again when the new column is added to a list, and this time my method creates the slave column.

The DeleteSlaveField method removes the notes slave field when the master field is removed from a list instance – this is just good housekeeping.

When creating the slave field, I set it as read-only so that curious users of the site cannot use the notes field for any purpose other than adding notes via my custom field control, which brings me to my next class:

    public class CustomFieldTextControl : TextField
    {
        #region Fields

        private TextBox _notesCtrl;

        #endregion Fields

        #region Methods

        protected override void OnInit(EventArgs e)
        {
            CanCacheRenderedFieldValue = false;
            base.OnInit(e);
        }

        protected override void CreateChildControls()
        {
            if (IsFieldValueCached)
            {
                base.CreateChildControls();
                return;
            }
            if (null == Field) return;
            base.CreateChildControls();
            // Add the notes if in edit mode.
            if (ControlMode == SPControlMode.Edit || ControlMode == SPControlMode.New)
                base.Controls.Add(CustomFieldHelper.GetNotesMarkUp(out _notesCtrl, Field));
            // Update the controls with the current value stored.
            if (null != _notesCtrl && null != Field)
                _notesCtrl.Text = CustomFieldHelper.GetValueFromSlave(SPContext.Current.ListItem, Field);
        }

        public override void UpdateFieldValueInItem()
        {
            Page.Validate();
            if (!Page.IsValid) return;
            base.UpdateFieldValueInItem();
            // do actions after save
            if (null == Field) return;
            CustomFieldHelper.SaveValueToSlave(SPContext.Current.ListItem, _notesCtrl.Text.Trim(), Field, false);
        }

        #endregion Methods
    }

The previous and last class is my custom field control, which does the work of rendering data from my custom field class. Again, this is a lightweight class, leaving the heavy lifting to the field helper class.

The CreateChildControls method is called for any ASP.NET UI class (from which custom field controls ultimately derive) to load any child control instances. In this method, I check that we’re not using a cached version of the control and that we’re in a new or edit form, since my client didn’t want notes to appear in display-only views. I then inject HTML for the display of the notes text box, which is populated from the contents of the associated slave field.

The overridden UpdateFieldValueInItem method ensures that the slave field in the list item receives any text changes applied to the notes field when saving the list item in the new/edit form.

That’s about it, except for the fldtypes_custom.xml file, which I deploy to the %HIVE%\TEMPLATE\XML folder to register my custom field type(s):

 
<FieldTypes>
  <FieldType>
    <Field Name="CAMLRendering">TRUE</Field>
    <Field Name="TypeName">CustomFieldText</Field>
    <Field Name="TypeDisplayName">CustomFieldText</Field>
    <Field Name="TypeShortDescription">Custom Single Line of Text (with Notes).</Field>
    <Field Name="ParentType">Text</Field>
    <Field Name="UserCreatable">TRUE</Field>
    <Field Name="AllowBaseTypeRendering">TRUE</Field>
    <Field Name="FieldTypeClass">CustomSharePoint.CustomFieldText, CustomSharePoint, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cd2be6bc9c119b34</Field>
  </FieldType>
</FieldTypes>
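
For a quick test on a development server (in practice, the WSP package handles deployment), copying the file into the hive and recycling IIS is enough to register the field types. This is a rough sketch only: the hive number (12/14/15) depends on your SharePoint version, and the assembly must already be in the GAC:

# Placeholder paths; adjust the hive number for your SharePoint version.
Copy-Item .\fldtypes_custom.xml "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\XML\";
iisreset;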