Tag Archives: Security

SharePoint Authentication and Session Management

What is authentication?

1. A security measure designed to protect a communications system against acceptance of a fraudulent transmission or simulation by establishing the validity of a transmission, message, or originator.
2. A means of identifying individuals and verifying their eligibility to receive specific categories of information.

Authentication is essentially the process of validating that users are who they say they are, such that they can gain access to a system – in this context, the system is SharePoint. Authentication is not authorization, which is the process of determining whether a known user is permitted access to certain data in the system, after successful authentication.

SharePoint, much like any content management system, relies on user authentication to provide user access to secured content. Before SharePoint 2010, SharePoint relied on NTLM, Kerberos, or basic/forms-based authentication (a discussion of these protocols is out of scope for this text). SharePoint 2010 introduced Claims-Based Authentication (CBA), also present in SharePoint 2013. CBA consists of authentication abstraction, using a Secure Token Service (STS), and identification of users with multiple attributes – claims – not just the traditional username and password pair.

A Secure Token Service implements open standards. A typical STS implementation communicates over HTTPS and packages user identity information (claim data) as signed and encrypted XML – Security Assertion Markup Language (SAML). Examples of STS implementations are the STS engine in SharePoint 2010/2013, ADFS, and third-party applications built using Windows Identity Foundation.

SharePoint Session Management

A user session in SharePoint 2010/2013 is the time in which a user is logged into SharePoint without needing to re-authenticate. SharePoint, like most secure systems, implements limited-lifespan sessions – i.e. users may authenticate with a SharePoint system, but they are not authenticated with the system indefinitely. The length of user sessions falls under the control of session management, configured for each SharePoint Web Application.

SharePoint handles session management differently, depending on the authentication method in play (Kerberos, NTLM, CBA, Forms, etc.). This article discusses how SharePoint works with Active Directory Federation Services (ADFS) – an STS – to maintain abstracted user authentication and user session lifetime. The numbered steps below trace the default authentication and session creation process in SharePoint 2010/2013 when using CBA with ADFS.

The authentication process proceeds as follows:

  1. A user requests a page in SharePoint from their browser – this might be the home page of the site.
  2. SharePoint captures the request and determines that no valid session exists, by the absence of the FEDAUTH cookie.
  3. SharePoint redirects the user to the internal STS – this is important because the internal STS handles all authentication requests for SharePoint and is the core of the CBA implementation in SharePoint 2010/2013.
  4. Since we have configured SharePoint to use ADFS as a trusted login provider, the internal STS redirects the user to the ADFS login page.
  5. ADFS acquires credentials and authenticates the user.
  6. ADFS creates a SAML token, containing the user’s claims, as encrypted and signed.
  7. ADFS posts the SAML token to the internal SharePoint STS.
  8. The Internal STS saves the SAML token in the SAML Token Cache.
  9. SharePoint creates the FEDAUTH cookie, which contains a reference to the SAML token in the cache.
  10. The internal STS redirects the user back to SharePoint, and then to the originally requested page.

Session Lifetime

The lifetime of a SharePoint session, when using ADFS, is the topic of much confusion. Ultimately, SharePoint determines whether a user has a current session by the presence of the FEDAUTH cookie. The default behavior of SharePoint is to store this persistent cookie on the user’s disk, with a fixed expiration date. Before sending a new FEDAUTH cookie back to the user’s browser, SharePoint calculates the expiration of the cookie with the following formula:

SAML Token Lifetime – Logon Token Cache Expiration Window

The above values are important since they govern the overall lifetime of the FEDAUTH cookie, and hence the session lifetime. The following describes each value and its source.

SAML Token Lifetime

This value, in minutes, is provided by the token issuer – ADFS. Each Relying Party configuration in ADFS (one for each SharePoint farm) carries this value as part of its configuration. By default, SharePoint sets the session lifetime equal to the SAML token lifetime.

You can change this value using PowerShell and the ADFS cmdlet Set-AdfsRelyingPartyTrust. For example:

Add-PSSnapin Microsoft.ADFS.PowerShell
Set-AdfsRelyingPartyTrust -TargetName "Relying Party Name" -TokenLifetime 10

Logon Token Cache Expiration Window

This value, in minutes, is set on the SharePoint STS and governs how long the SAML token remains active in the cache, and therefore how long the associated user session remains alive. For example, if ADFS sets the SAML Token Lifetime to 10 minutes and this value is set in the STS as 2 minutes, then the overall SharePoint session lifespan is 8 minutes.

For example:

$ap = Get-SPSecurityTokenServiceConfig
$ap.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 2)
$ap.Update()
iisreset

Sliding Session

A sliding session is one where the session expiration time changes as a user interacts with the system. By default, SharePoint 2010/2013 does not offer sliding sessions. Each new session expires at a fixed time, based on the formula given earlier in this text.

Use of a sliding session does not mean that we must compromise security. Should a user become inactive, a sliding session will time out just as a fixed session does; the main difference is that a user can extend a sliding session with continued use of the SharePoint system.

Creation of a sliding session requires configuration of the Relying Party in ADFS and the SharePoint Logon Token Cache Expiration. The following PowerShell sets the Relying Party token lifetime to 60 minutes, which is the absolute maximum time that a session remains active should the user become inactive:

Add-PSSnapin Microsoft.ADFS.PowerShell
Set-AdfsRelyingPartyTrust -TargetName "Relying Party Name" -TokenLifetime 60

The following PowerShell sets the Logon Token Cache Expiration in the SharePoint STS, which forces the sliding session lifetime to 20 minutes:

$ap = Get-SPSecurityTokenServiceConfig
$ap.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 40)
$ap.Update()
iisreset

The above settings are only part of the solution. On their own, we have a fixed session duration of 20 minutes, determined by the formula mentioned earlier (the Relying Party token lifetime minus the logon token cache expiration window). To make sure the session renews with continued activity, we must refresh the session (and FEDAUTH cookie) with each request, which we can achieve with an HTTP module.
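A minimal sketch of such a module follows, based on the standard WIF sliding-session pattern (the SessionAuthenticationModule API from System.IdentityModel.Services in .NET 4.5). Treat it as illustrative: SharePoint’s own SPFederationAuthenticationModule may require a different hook, and the halfway-point refresh policy is an assumption.

using System;
using System.IdentityModel.Services;
using System.Web;

// Sketch: reissue the session cookie once the token is more than halfway to
// expiry, so that active users keep extending their session (sliding window).
public class SlidingSessionModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        FederatedAuthentication.SessionAuthenticationModule
            .SessionSecurityTokenReceived += OnSessionSecurityTokenReceived;
    }

    private static void OnSessionSecurityTokenReceived(
        object sender, SessionSecurityTokenReceivedEventArgs e)
    {
        DateTime now = DateTime.UtcNow;
        DateTime validFrom = e.SessionToken.ValidFrom;
        DateTime validTo = e.SessionToken.ValidTo;

        // Renew only after the halfway point, to avoid reissuing on every hit.
        if (now > validFrom.AddMinutes((validTo - validFrom).TotalMinutes / 2))
        {
            SessionAuthenticationModule sam = (SessionAuthenticationModule)sender;
            e.SessionToken = sam.CreateSessionSecurityToken(
                e.SessionToken.ClaimsPrincipal,
                e.SessionToken.Context,
                now,
                now.Add(validTo - validFrom),   // extend by the original lifetime
                e.SessionToken.IsPersistent);
            e.ReissueCookie = true;             // send a fresh session cookie
        }
    }

    public void Dispose() { }
}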

Persistent versus Session Cookies

By default, SharePoint stores the authentication/session (FEDAUTH) cookie as a persistent cookie on disk. This allows the user to close and reopen their browser and access SharePoint without having to re-authenticate. This behavior is not always desirable.

Fortunately, we can ask SharePoint to use in-memory cookies (session cookies) for the authentication (FEDAUTH) cookie, as follows:

$sts = Get-SPSecurityTokenServiceConfig
$sts.UseSessionCookies = $true
$sts.Update()
iisreset

Windows Vista UAC – Further Reading

Then and Now

Microsoft Windows XP™ initially creates all user accounts as local administrators. Administrators have full access to system-wide resources and can execute any privileged operation. Microsoft guidelines suggest that users run day-to-day tasks under a least privileged account (LUA), however many users prefer to operate at the administrator level for the following typical reasons:

  • Home users like administrator rights because applications are installed and available immediately, without configuration in a separate profile or execution restriction.
  • ActiveX controls are glorified COM controls deployable via the Internet and, like COM, require installation. LUA users typically do not receive installation rights, breaking badly designed ActiveX controls (controls requiring access to protected areas of the operating system).
  • Reduced dependency on helpdesk support – if users can install their own applications there is a reduced burden on the helpdesk and support group because there is no need for centralized deployment mechanisms (SMS, Group Policy) and/or system administrators to install applications manually.

Ensuring that users operate day-to-day tasks as LUA mitigates the impact of malware on critical areas of the operating system and installed applications. However, standard users find they cannot perform typical configuration tasks (change the system time zone or install a printer) without administration rights. Moreover, some applications will not operate on Windows XP without using the “run-as” option or logging on as an administrator, usually involving special permission changes for legacy applications and opening up security vulnerabilities. Windows 95 and 98 had no security model, so legacy applications initially developed for these platforms and migrated through subsequent versions may not consider security constraints.

UAC – Under the Hood

Windows Vista supports two types of user accounts – standard users and administrator users. Standard users behave much like the LUA user on Windows XP, in that protected resources on the platform are restricted without a prompt for administrator credentials. Unlike the least-privileged account type on Windows XP, standard users can make more configuration changes than before; only when standard users attempt to change a system-wide resource setting does Vista prompt for administrator credentials. Administrator accounts operate in one of two modes – filtered or elevated. Standard users receive a “filtered token,” denoting reduced permissions, upon logon, whereas administrators receive two tokens – the “filtered token” and a “full access token.” During normal operation, administrators use the filtered token; when they attempt to execute privileged operations, the Application Information Service (AIS) – a system service facilitating the elevation of user privilege – elevates the administrator to the full access token.
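As an aside, the two-token behavior is easy to observe from managed code. The following is a small sketch (an illustration, not from the original post): WindowsPrincipal reports membership of the Administrators role only when the process holds the full access token.

using System;
using System.Security.Principal;

// Prints False when running with the filtered token (even for an
// administrator account) and True when the process is elevated.
class TokenCheck
{
    static void Main()
    {
        WindowsPrincipal principal =
            new WindowsPrincipal(WindowsIdentity.GetCurrent());
        Console.WriteLine("Running elevated: {0}",
            principal.IsInRole(WindowsBuiltInRole.Administrator));
    }
}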

Application Manifest Files and Elevation

How does Vista know when to elevate? First, to dispel a myth: elevation cannot occur at an arbitrary point during the execution of a process. The AIS determines required elevation on a per-process basis, before the process starts – so how exactly does it do that?

The Application Information Service makes some assumptions about certain applications – applications named “setup.exe” or “update.exe,” and MSI files (plus a few other criteria), are treated as installation applications, and AIS requests administrator credentials or confirmation. All other application types execute using the filtered token, unless an accompanying manifest file stipulates otherwise.

What is a manifest file?

A manifest file is an XML file associated with an executable application (EXE), containing metadata about the application, and may include trust information for elevation. The following is an example manifest file:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="true"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>

In the above manifest file, the requestedExecutionLevel stipulates the required level and whether elevation is required. Possible levels of execution are:

  • asInvoker – The application executes at the same level as the standard user filtered token
  • highestAvailable – The application executes at the highest level of privilege the user can obtain
  • requireAdministrator – The application requires administrator full access token privilege

.NET EXE assemblies are associated with manifest files when the manifest has the same name as the executable with a “.manifest” extension. For example, the executable test.exe is associated with the manifest file test.exe.manifest. Embedding of the manifest as a resource is also possible.

WIN32 executables also use a manifest to request elevation, although, unlike managed assemblies, WIN32 manifest files must be embedded in the executable file. The following link details embedding of a WIN32 manifest file:

Link
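For example (a sketch; the file names are placeholders), the Windows SDK manifest tool can embed a manifest as resource ID 1 of an executable:

mt.exe -manifest app.exe.manifest -outputresource:app.exe;#1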

Default Behavior

The following is the default behavior for Vista installations:

  • UAC is enabled by default, so users may experience compatibility prompts with legacy applications
  • The first account created during Vista installation is an administrator account (with dual tokens); all subsequently created accounts are standard user accounts
  • The built-in administrator account is disabled by default
  • Elevation prompts are displayed on the secure desktop

The Shield Icon

Common practice is to display a “shield icon” on all controls that require elevation. Consider the date and time properties dialog – the standard user can make configuration changes; however, if they press the “Change Date and Time” button, AIS will prompt for administrator credentials or consent.

Wait a minute! How can an application prompt for elevation mid-process if AIS determines the execution level before execution?

Answer – Vista provides a clever mechanism called the “COM Elevation Moniker,” by which applications can execute code in an out-of-process WIN32 COM server with elevated execution privileges. Further documentation on developing for Vista UAC provides more in-depth detail on the COM Elevation Moniker.
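To make this concrete, here is a hedged sketch of activating a COM class elevated from managed code. The CLSID argument is a placeholder for a registered, elevation-enabled COM server; the interop follows the documented “Elevation:Administrator!new:” moniker pattern (CoGetObject with BIND_OPTS3).

using System;
using System.Runtime.InteropServices;

// Sketch: ask AIS to launch a COM local server elevated. The user sees the
// consent/credential prompt on the secure desktop before the object returns.
static class ElevationMoniker
{
    [StructLayout(LayoutKind.Sequential)]
    struct BIND_OPTS3
    {
        public uint cbStruct;
        public uint grfFlags;
        public uint grfMode;
        public uint dwTickCountDeadline;
        public uint dwTrackFlags;
        public uint dwClassContext;
        public uint locale;
        public IntPtr pServerInfo;
        public IntPtr hwnd;
    }

    [DllImport("ole32.dll", CharSet = CharSet.Unicode, PreserveSig = false)]
    [return: MarshalAs(UnmanagedType.Interface)]
    static extern object CoGetObject(string pszName, ref BIND_OPTS3 pBindOptions,
        [MarshalAs(UnmanagedType.LPStruct)] Guid riid);

    const uint CLSCTX_LOCAL_SERVER = 0x4;
    static readonly Guid IID_IUnknown = new Guid("00000000-0000-0000-C000-000000000046");

    public static object CreateElevatedInstance(Guid clsid, IntPtr ownerWindow)
    {
        BIND_OPTS3 bo = new BIND_OPTS3();
        bo.cbStruct = (uint)Marshal.SizeOf(typeof(BIND_OPTS3));
        bo.dwClassContext = CLSCTX_LOCAL_SERVER;
        bo.hwnd = ownerWindow;   // window that owns the elevation prompt

        string moniker = "Elevation:Administrator!new:" + clsid.ToString("B");
        return CoGetObject(moniker, ref bo, IID_IUnknown);
    }
}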

Windows Vista – User Account Control

Users of the Microsoft Windows™ operating system have had to face several challenges to secure the integrity of the data residing on their computers. Users have had to cope with a vast slew of malware, including viruses, spyware, and rootkits, which typically damage data and/or applications residing on the user’s desktop PC. As quickly as anti-virus vendors release tools to prevent the threat of virus attack or spyware installation, hackers and script-kiddies release newer and smarter versions to work around the safeguards. Microsoft is constantly battling to produce patches and updates that close security vulnerabilities in its operating systems and applications, and we now live in a time where third-party developers are required to embed security-aware code in their applications.

Prior to Windows XP Service Pack 2, the Windows platform did little to protect the user from malware. It was up to the initiative of individual users to install anti-virus and anti-spyware applications, and to keep up to date with the Windows patches and updates. Microsoft heard the cries of its customers, and in 2004 announced the release of Windows XP Service Pack 2. SP2 brought a number of security enhancements to the Windows platform in the form of an enhanced firewall, an Internet Explorer popup blocker, automatic updates, and security warnings about the execution of ActiveX controls from the web.

The existence of Windows XP SP2 was not enough to protect the end user; SP2 went further to alert the user to suspicious activity from malware, but did not protect users from their own mistakes. For example, many users fail to acknowledge the importance of the message contained in security prompts and blindly ignore the warnings to accomplish their task. Third-party applications and web browsers not taking advantage of SP2 security constraints are still able to download malware from the Internet without detection. In 2005, Sony BMG Music Entertainment installed rootkit software on their audio CDs to circumvent piracy and to provide Sony with music listener statistics – users running Windows under full administrator accounts were susceptible to the rootkit simply from inserting these audio CDs in their CD-ROM tray.

Most of the aforementioned problems with malware have one thing in common – they all operate on the assumption that the interactive user is running with full administrator privileges. By default, Windows XP installs a default “Administrator” account, and most users perform their day-to-day tasks under this account. Use of administrator accounts alleviates execution problems with poorly written applications (software that unnecessarily uses privileged areas of the operating system), provides the convenience of on-the-spot installation of applications without switching accounts (and sometimes a reboot), and gives the user total control over the operating system. The first step in the direction of securing the Windows platform is to restrict the everyday user to least user privilege – LUA.

Converting to LUA is only half of the battle – many applications (non-XP-certified) will not execute properly without administrative privileges. Services and third-party background processes still act as a security vulnerability because they execute in higher-privileged contexts and can provide a security hole for hackers to exploit. Microsoft has stepped up to the plate and provided a potential solution to lessen the security concerns of users of its Windows platform – enter Windows Vista and User Account Control.

Windows Vista – Providing a more secure environment

Security is not a process – it is a mentality, and must be considered from the initial development of software applications through to user execution. Developers writing software atop the .NET Framework can take advantage of Code Access Security – restrictions applied to code elements for different execution contexts – to protect the user at the application level, and now Microsoft has taken the next step and added enhanced security restriction at the operating system level in the form of User Account Control on the Vista platform.

What is UAC?

Regardless of whether a particular user has administrator rights, all users logging on to the Vista platform receive a “filtered token” at login time, which prevents access to security-sensitive operations. When the time comes to execute a privileged operation, the user must elevate to a higher level of operation.

What does this mean to the end user?

Users without administration rights attempting to execute a privileged operation see a request for administration credentials. This is akin to the “run-as” operation on Windows XP/2003, where a user can execute a process as another user, except that UAC enables elevation for particular privileged operations, not just for the execution of an application.

Microsoft refers to this process of elevation request as “over-the-shoulder” (OTS) credentials.

Users with administration rights also experience the effects of UAC. Since all users, administrators included, log in with a filtered token, UAC will prompt administrators with a consent dialog before promoting them to an elevated token for secure execution.

It is worth noting that Windows determines elevation requirements before a process is executed; if elevation is required, the entire process is elevated to the privileged level upon successful OTS credentials or administrator consent.

UAC consists of more than just elevation. Effectively, UAC does away with the “Power Users” group, which provided users with administrative privileges to perform basic system tasks while running applications. UAC now enables standard users to perform standard configuration tasks, and Windows will prompt for elevation for specific privileged operations.

UAC provides a short-term solution for legacy applications, operating in “XP compatibility” mode with a virtual file system and registry. When a legacy application requires write permission to a protected area of the file system or registry, the changes affect a virtual copy, allowing the legacy application to function without hurting the operating system. Microsoft intends this solution as short-term, while developers begin to author UAC-aware applications.

Windows prompts for elevation via a secure desktop to prevent malicious applications from tricking users into requesting elevation without their knowledge – whilst the consent/credential dialog is visible, the user is operating within a secure desktop, preventing any software application from interacting with the user interface.

Code Access Security – A Primer

Overview

This post serves as a primer for software developers interested in learning about Code Access Security (CAS) in .NET. The following information is not exhaustive of the subject matter and contains only a basic overview of Code Access Security. Those interested in this subject are encouraged to read further.

The following articles cover code security and are a good follow-up to this post.

http://www.codeproject.com/dotnet/UB_CAS_NET.asp

http://msdn.microsoft.com/msdnmag/issues/05/11/CodeAccessSecurity/default.aspx

http://msdn.microsoft.com/msdnmag/issues/05/11/HostingAddIns/

Shawn Farkas is one of many experts on Code Access Security; as well as authoring many magazine articles, he posts regularly on his weblog:

http://blogs.msdn.com/shawnfa/

 

What is Code Access Security?

Most computer users and security experts are accustomed to Role-Based Security (RBS), where particular users belong to specific groups, which are assigned permissions to protected resources. Windows XP/2003, SQL Server, IIS, and a host of server applications use Role-Based Security to provide access protection.

Code Access Security is different from Role-Based Security in that it restricts access to protected resources at the code level. Coming from a role-based way of thinking, Code Access Security can be a confusing concept because there is no user attempting access in the typical sense. Code Access Security defines a set of permissions, and the policy that governs assignment of those permissions, by evaluating the evidence belonging to the code requesting access.

 

Why should we care about Code Access Security?

Typically, software development and security roles are very distinct:

Software developers create software to run on workstations and servers, and security experts lock down access at the user level to these workstations and servers.

The above approach has been in place for as long as developers have been creating software and the software has been manipulating secured data; however, this methodology has a few flaws:

  • Deployment of software in the above scheme is troublesome – developers are used to writing and testing software with a full set of permissions. When software developed in this fashion is deployed in a locked-down environment, it often fails.
  • The best software developers are not always the best security experts, and vice versa. Software developers hate to work through security constraints and security experts often like to lock down systems to the point where they are sometimes unusable.

Code Access Security is a new way of thinking. Just as industry has learned that performance is not a last-minute consideration in the software development lifecycle, neither is security. Code Access Security prevents malicious code from penetrating secure systems by detecting insecure code before it executes, and it allows potential security holes to be pinpointed to the code modules that demand a higher permission set.

With Code Access Security, you can:

  • Restrict what code can do
  • Restrict who can call code
  • Identify code

Code Access Security works hand-in-hand with security design and threat modeling, in that any .NET assembly can be marked as “security transparent.” Security transparent assemblies contain code that does not access protected resources and is safe to operate in partial trust environments. More on security transparent assemblies later in this post.

Some environments in which custom code may execute are partial trust. Microsoft guidelines suggest that all ASP.NET installations hosting multiple applications be set at medium trust to guarantee autonomy. Developers writing code for hosted environments have no choice but to make sure their code runs at the ASP.NET medium trust level. The next version of SharePoint (Office 12 Server and WSS 3.0) operates at partial trust out of the box.

 

The Fundamentals

As mentioned in the previous section, Code Access Security does not use user or role identification, so how does Code Access Security in .NET work?

Before execution of verifiable code, the .NET platform determines whether the code has permission to complete its function successfully. This process involves collecting information about the code (its evidence) and determining the permissions required to complete execution by obtaining the current policy for the enterprise, machine, user, and application domain. The list below further documents the main constituents of Code Access Security:

  • Evidence is a set of attributes that belong to code. For example, certain .NET assemblies may be strong named and have a particular public key token. Other assemblies may have originated via “Click Once Deployment” at a certain web address, or reside within a particular directory on the file system.

  • Permissions represent access to a protected resource or the ability to perform a protected operation. The .NET Framework provides a number of classes that represent different permissions. For example, if some code needs access to files on disk then a FileIOPermission is required; a ReflectionPermission is required for any code that attempts to perform reflection; and so on.

  • Permission Set is a collection of permissions. The system defines several permission sets and different assemblies in a .NET application may fall into zero, one or more of these permission sets. The Framework defines a number of default permission sets, including “Full Trust” – a set that contains all permissions, and “Nothing” – a set that contains no permissions.

  • Code Group is a mapping of evidence to permission sets. Code groups combine to form a tree, where code must exhibit the desired evidence to satisfy membership of the group.
  • Security Policy is a configurable set of rules that the CLR follows when determining the permissions to grant to code. There are four independent policy levels:

  • Enterprise – All managed code in an enterprise setting
  • Machine – All managed code on a single computer
  • User – Managed code in all processes associated with the current user
  • Application Domain – Managed code in the host’s application domain
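As a quick illustration of evidence, the following sketch (an illustration; output varies with how the assembly is loaded) enumerates the evidence objects the CLR gathered for the executing assembly:

using System;
using System.Reflection;

// Prints the evidence types (Zone, Url, StrongName, Hash, ...) collected
// for the currently executing assembly.
class EvidenceDump
{
    static void Main()
    {
        foreach (object evidence in Assembly.GetExecutingAssembly().Evidence)
            Console.WriteLine(evidence.GetType().Name);
    }
}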

What about ASP.NET?

ASP.NET builds atop Code Access Security and provides five permission sets, each set depicted as a trust level:

  • Full
  • High
  • Medium
  • Low
  • Minimal

Each trust level above contains permissions, ranging from a complete set of permissions – “Full” trust – to limited permissions – “Minimal” trust.

A separate policy configuration file exists for each trust level, packaged with the ASP.NET installation. The trustLevel element (in the securityPolicy section of machine-level configuration) maps each named level to its policy file, and an ASP.NET application stipulates its level of trust with the trust element in its configuration file (web.config):

<trustLevel name="High" policyFile="web_hightrust.config"/>
<trust level="High"/>

Applications that operate in partial trust (not full trust) and require elevated permissions can run at a higher trust level, or define custom permissions in a new policy file. If an application only requires a handful of permissions not present at the current trust level, then it makes sense to define a custom policy and permission set; increasing the trust level may add many more permissions not required by the application, creating a security vulnerability.

 

Applying Code Access Security

Two different kinds of syntax are available when adding Code Access Security to code: declarative and imperative.

Declarative syntax involves applying attributes to methods, classes, or assemblies. The “Just-in-Time” (JIT) compiler reads meta-data generated from these attributes to evaluate these calls.

[FileIOPermission(SecurityAction.Demand, Unrestricted = true)]
public class Foo { … }

Imperative syntax involves the use of method calls to create instances of security classes at runtime.

public class Foo
{
    public void MethodOne()
    {
        new FileIOPermission(PermissionState.Unrestricted).Demand();
    }
}

Both of the examples above request unrestricted access to the file system. Most of the security permission classes in the .NET Framework provide properties to customize the level of access; the FileIOPermission includes properties to permit read/write access to particular files and directories in the file system. The example below permits all access to a particular file by changing the parameters passed to the constructor:

new FileIOPermission(FileIOPermissionAccess.AllAccess, @"C:\Test.txt").Demand();

So, what happens when code declares a security permission attribute or instantiates a new permission class imperatively?

All three examples above call a “demand” on the desired permission class. The demand instructs the CLR to walk the call stack of the current process, making sure that each method call has the desired permission. If one of the calling methods in the stack does not have the permission, the CLR throws a security exception.

Most of the classes in the .NET Framework demand (or link demand) permissions when accessing protected resources. If a developer writes code that uses one of the framework classes – say, to access a database or perform reflection – and the developer’s code is running in partial trust, then the developer’s code must have the desired permission, otherwise the CLR will throw a security exception.

By default, any code developed against the .NET Framework runs as “full trust,” except in the following cases:

  • The developer explicitly creates a sandbox application domain with partial trust
  • The application assemblies are configured as partial trust using the .NET Framework Configuration tool
  • The application code runs in ASP.NET at a trust level other than full
  • The code runs in some other host application preconfigured for partial trust
  • The code is executed across a network

When operating at “full trust,” all security demands made by classes in the framework (or by custom developer classes that are security aware) succeed; only during deployment to a partial trust environment is there a problem. Developers should get in the habit of developing under partial trust when writing code that accesses protected resources.
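The first case above – a sandboxed application domain – looks roughly like the following sketch (the simple-sandboxing CreateDomain overload; the grant choices and path are illustrative):

using System;
using System.Security;
using System.Security.Permissions;

// Sketch: create an AppDomain granted only execution rights plus read access
// to one directory. Code inside it runs in partial trust, so any stronger
// demand (e.g. a file write) fails with a SecurityException.
class SandboxHost
{
    static void Main()
    {
        PermissionSet grant = new PermissionSet(PermissionState.None);
        grant.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        grant.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, @"C:\Data"));

        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;

        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, grant);

        // Assemblies loaded and executed in 'sandbox' run with the reduced grant.
        Console.WriteLine("Created sandbox: {0}", sandbox.FriendlyName);
    }
}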

A permission demand is one of several actions applicable to permission classes; the other available actions are:

  • SecurityAction.Demand – All callers higher in the call stack must have the permission specified by the current permission object.

  • SecurityAction.LinkDemand – Only the immediate caller in the call stack must have the permission specified by the current permission object.

  • SecurityAction.InheritanceDemand – Derived classes or overriding methods must have the permission specified by the current permission object.

  • SecurityAction.Assert – If the calling code has the desired permission, the stack walk for the permission check stops. Use asserts only when encapsulating code that is known to be secure, because callers further up the stack running in partial trust may not be aware of a demand further down the chain. Code containing asserts without actually holding the asserted permission will simply allow permission checking to continue up the call stack.

  • SecurityAction.Deny – Callers cannot access the protected resource specified by the permission, even if they have permission to access the resource. If a method in the call stack specifies a deny action and a method further down the chain attempts to access the resource, the access fails regardless of whether the caller holds the permission.

  • SecurityAction.PermitOnly – Like a deny action, but inverted: a permit-only action specifies that the caller is denied access to all resources except those defined in the current permission object. Further definition of this action is beyond the scope of this post.

  • SecurityAction.RequestMinimum – Only used within the scope of an assembly, this action defines the set of minimum permissions required for the assembly to execute.

  • SecurityAction.RequestOptional – Only used within the scope of an assembly, this action defines a set of permissions that are optional for execution (used if granted, but not required).

  • SecurityAction.RequestRefuse – Only used within the scope of an assembly, this action defines a set of permissions that may be requested and misused, and should therefore never be granted, even if the current security policy allows it. Further definition of this action is beyond the scope of this post.
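For the three Request* actions, which apply at assembly scope, the declarations look like the following sketch (note that these assembly-level requests were later deprecated in .NET 4):

using System.Security.Permissions;

// Minimum: the assembly will not load without execution permission.
[assembly: SecurityPermission(SecurityAction.RequestMinimum, Execution = true)]
// Optional: file IO is used if granted; the assembly still loads without it.
[assembly: FileIOPermission(SecurityAction.RequestOptional, Unrestricted = true)]
// Refuse: never grant this assembly registry access, whatever policy allows.
[assembly: RegistryPermission(SecurityAction.RequestRefuse, Unrestricted = true)]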

Asserts deserve special consideration because they prevent permission demands from reaching callers higher in the call stack. Asserts are useful when a method must call code that demands a higher permission and the caller of the method is in partial trust. For example, a trusted custom assembly with elevated trust could call out to the file system using one of the framework API calls; the framework will demand a FileIOPermission, which must not propagate beyond the level of the custom assembly. Placing assert code around the call to the file system API ensures that the demand never leaves the scope of the method containing the assert. The custom assembly must itself have the FileIOPermission, otherwise the assert is ignored and demands continue up the stack to partially trusted code.

The following is an example of assertion code around a call that demands a security permission. Notice the revert call at the end: this cancels the assert. It is important to limit the scope of an assertion to avoid creating a security vulnerability, so place only the code that requires the security permission between the assert call and the revert call.

new FileIOPermission(PermissionState.Unrestricted).Assert();
// Do something that causes a FileIOPermission demand
CodeAccessPermission.RevertAssert();

 

Transparent Assemblies

Transparent assemblies are .NET assemblies that are free from security-critical code. The .NET Framework 2.0 enables developers to mark assemblies as transparent so that security audits can rule out these assemblies as potentially vulnerable. Transparent assemblies voluntarily give up the ability to elevate the permissions of the call stack, and the following rules apply:

  • Transparent code cannot assert permissions to stop a stack walk from continuing
  • Transparent code cannot satisfy a link demand
  • Unverifiable code is forbidden in transparent assemblies
  • Calls to P/Invoke or unmanaged code will cause a security permission demand

Security transparent assemblies run either at the permission level granted, or at the permission level of the caller, whichever is less.

By default, all assemblies are security critical – the opposite of security transparent – but an assembly is made transparent by adding the following attribute at the assembly level:

[assembly:SecurityTransparent]

The CLR throws a security exception if a transparent assembly attempts to elevate permissions. In cases where the developer wants to mark the entire assembly as transparent except for a few methods, use the following attribute:

[assembly:SecurityCritical]

The attribute named above is a little misleading in that it marks the entire assembly as transparent but allows security-critical code. Decorate methods that require elevated permissions as follows:

[SecurityCritical]
public void foo()
{
    new FileIOPermission(PermissionState.Unrestricted).Demand();
    // ...
}

 

Allowing Partially Trusted Callers

By default, strongly named, trusted assemblies receive an implicit link demand for full trust on every public method of every publicly available class within the assembly. The CLR performs this insertion to protect fully trusted assemblies from misuse by attackers. For example, a trusted assembly may have full access to load a file from disk; an attacker, realizing the assembly has not been security audited, could manipulate which file is loaded. The implicit link demand ensures that the attacker cannot execute the method when not running in full trust.

Assuming developers have security audited their code and want to allow partially trusted callers to call a fully trusted assembly, the “Allow Partially Trusted Callers Attribute” (APTCA) enables developers to suppress the implicit link demand:

[assembly: AllowPartiallyTrustedCallers]

Developers should take the utmost care when enabling partially trusted callers to call trusted assemblies.

Some APTCA assemblies may still demand or link demand explicit permissions, in which case the addition of the APTCA does not remove the explicit demands, and a security exception is still generated in partially trusted code.

LUA and Windows XP

For those of you who read my blog often (or talk to me directly), you’ll know that I am constantly advocating that users operate their computers under a least-privileged user account – LUA. Many have taken my advice of not running day-to-day operations under an administrator account, or an account with elevated privileges (yes, that includes accounts in the Power Users group on Windows). Most Mac and Linux users know this concept already, but there is still a staggering number of Windows users who insist that they need elevated privileges to operate their PC.

Well, you no longer need to take my word for it; Microsoft has recently published a white paper on the merits of operating as LUA, and the paper can be downloaded here. If you’re a Windows XP user, I strongly advise that you read this paper; it will open your eyes to how easy it can be to prevent spyware, viruses, and other malicious code on the Windows platform.

Many thanks to Robert Hurlbut for bringing this paper to my attention via his blog.

Sony is still advising users to install their rootkit…

Quoted from:  http://cp.sonybmg.com/xcp/english/howtouse.html

To install the software on this disc, you must have Administrator rights on your computer. Administrator rights are typically the default setting for home computers, however, in many work environments it is not the default setting. If you do not have Administrator rights, log out of your account and log in as an Administrator.

The above statement should cause a light bulb to illuminate in your head. 

Q: Why would a least privileged user (LUA) require administrative permissions to play an audio CD on a Windows/Mac computer?
A: Because it is trying to install something nasty on your computer.

Yet another reason why I am an advocate for LUA.

Avoiding Sony’s DRM Rootkit

It may not have escaped your attention that Sony has been featured in the news a lot recently, concerning since-proven allegations that Sony BMG installed DRM rootkits on Windows computers belonging to consumers:

http://news.bbc.co.uk/2/hi/technology/4400148.stm

Essentially, rootkits are malicious pieces of software installed in the lower levels of the Windows operating system, where they can hide from anti-spyware and anti-virus checkers. Sony claims it employed rootkits to install digital rights management software on Windows PCs to limit the damage to the corporation as a result of piracy. Consumers believe that Sony has gone too far in its efforts. The rootkit was originally discovered by Mark Russinovich after running “RootkitRevealer” – an application engineered by Sysinternals to find rootkits on a Windows platform – on his computer.

So, how do you avoid Sony’s rootkit, and any other rootkit that might be lurking in software?

Operate your PC under LUA. Rootkit installers need access to low-level OS functions, drivers, and possibly the kernel to operate – none of these areas are available when running as LUA.

LUA will not protect you from rootkits hidden in software that you actively install as an administrator, but it will prevent passive installers from burying rootkits in your Windows operating system without your knowledge. So, you will still need to be diligent when installing software (know where the software came from – is it reputable? is there any known press about the use of rootkits associated with the software vendor?), but you will not have to worry so much about hidden software being installed when you plop an audio CD or DVD into your computer.

Debugging ASP.NET as non-admin (LUA)

See my previous post about LUA, and why it is a good idea. Today I managed to get ASP.NET 2.0 to debug correctly, using Visual Studio .NET 2005 under LUA, thanks to Andrew Duthie’s post. I needed to tweak my system a little; here are my steps in digest (IIS 6 only):

1. Create a new user group (Control Panel, Administrative Tools, Computer Management, Local Users and Groups) called ASPNETDebug.
2. Add the LUA user to the ASPNETDebug group.
3. Add the LUA user to the IIS_WPG group.
4. Modify the following local account policies (Control Panel, Administrative Tools, Local Security Policy, Security Settings, Local Policies, User Rights Assignments):

  • “Adjust memory quotas for a process” – add the ASPNETDebug user group.
  • “Replace a process level token” – add the ASPNETDebug user group.

5. Modify the NTFS permissions on the following directories, adding the ASPNETDebug group with modify permissions (a scripted example follows these steps):

  • %windir%\Temp
  • %windir%\Microsoft.NET\Framework\<framework version>\Temporary ASP.NET Files

6. Create a new application pool in IIS 6 (not based on any other app pool).
7. Change the identity of the newly created application pool to the LUA account.
8. Change the app pool of the ASP.NET web application being debugged to the newly created app pool.
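As an aside, step 5 can be scripted with the XP/2003-era cacls tool; a sketch, assuming the framework version is v2.0.50727:

cacls "%windir%\Temp" /E /G ASPNETDebug:C
cacls "%windir%\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files" /E /G ASPNETDebug:C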

Running as Non-Admin

I have recently converted my developer workstations over to running as non-admin. I was inspired to move over to a least-privilege user account after sitting in on a talk by Randy Hayes – president of the CMAP (Central Maryland Association of .NET Professionals) user group. The principal theme of Randy’s talk was about better protecting your Windows machine from spyware and viruses by running as a non-admin.

Since being part of the audience for Randy’s talk I have been preaching the need to run as a least-privilege user account (LUA) to all my friends, family, and work colleagues, so I decided to write up a post on the subject.

I am not too proud to announce that Randy’s talk changed the way in which I think of security on the Windows platform, and this post is testament to his teaching. With a few exceptions, most of the details in this post are from Randy’s talk.

The Problem
Your Windows computer is under attack! If you take a fresh install of Windows XP, sans service pack and patches, and then connect it directly to the Internet, within seconds your machine will likely be compromised by a virus or spyware application. Installation of service packs, use of a firewall, and network address translation (using a router) can all help, but what about malicious code that gets downloaded to your PC by you?

Each web site that you visit from your computer has the potential to host malicious code, which is downloaded, installed, and executed without you even knowing about it. If you are not careful about opening email attachments from unknown senders, you could also be opening yourself up for attack.

I hear the same complaints when I speak to peers and family members – “My Windows machine is running slow and/or swamped with viruses.” Conversely, when I speak to Macintosh and Linux users, I do not hear quite as many complaints – why is that? The answer has nothing to do with Windows having a larger user base; it is more likely because Windows is easy to penetrate due to the default user account holding administrator privileges.

A Potential Solution
Industry has an answer to the mass amounts of spyware and virus applications that attack the Windows operating system, in the form of utilities that scan your computer and remove malicious code that has been detected.

There are many different anti-virus and anti-spyware utilities to choose from. Some are better than others, some are free, some are expensive, some require subscription, some do not, but they all suffer from one inherent problem – utilities are only effective in detecting known malicious code. So what about malicious code that we do not yet know about?

As fast as developers can write code to detect known viruses and spyware, newer breeds of malicious code are invented and released on the Internet. This leaves your machine open to attack while you wait for the next release of signatures or service pack.

A Better Solution
A better solution involves lowering the attack surface of your computer – running as LUA. When you perform your day-to-day tasks under an account with administrator privileges, the attack surface consists of:

•    Your operating system files
•    Your application files
•    Your machine registry
•    Your personal files
•    Your personal registry

Switching over to a LUA immediately restricts the attack surface to the following:

•    Your personal files
•    Your personal registry

This is because the LUA, by default, does not have write access to operating system and application files.

In an ideal world your personal files and personal registry would be protected from attack also; however, all is not as bad as it seems. Most spyware and virus applications are after attacking your operating system and applications – rendering your machine unusable. Personal files can (and should) be backed up in the event of machine failure or attack, as can the user registry. In a worst-case scenario, if a virus attacked your personal files and personal registry, all that is required is to delete your work files, delete the user profile, and create a new one. If your operating system or applications are affected, then you are looking at repaving your entire machine.

How to tell if you are an admin in Windows:

•    Right click the Start button; if you see “explore all users” you ARE an admin
•    Double click the clock in the system tray; if the date/time applet appears then you ARE an admin
•    Right click the “My Computer” icon on the desktop and click Computer Name; if you see a “change” box then you ARE an admin

How to run as LUA
•    Remove your user account from the “Administrators” group. If you are using the default “Administrator” account, then create another low-privileged user for your day-to-day tasks.
•    Never use the “Power Users” group – even though this group is not the “Administrators” group, users that belong to it still have administrative privileges across your machine.
•    If you are part of a corporate domain and the only administrative account on your machine is your day-to-day user account (many corporations disable the main “Administrator” account), then be sure to create a local admin account on your machine before revoking the administrative privileges of your day-to-day account. This will ensure that you have at least one administrative account on your machine, which can be used via the “run-as” command (example below).
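For example (the account name is a placeholder), an administrative command prompt can be started from the LUA session with:

runas /user:%COMPUTERNAME%\LocalAdmin cmd.exe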

NTFS is your friend
NTFS is the file system that, among other things, manages file security. If your file system is FAT/FAT32, you will need to convert to NTFS to take advantage of file security.

Chances are that you may not have messed with the default security permissions that were applied to operating system and application files when Windows was installed. This being the case, your operating system and application files will be protected from malicious code when running as LUA. If, however, you have made changes to NTFS security and wish to restore permissions to the default Windows installation settings, execute the following statement at a command prompt:

secedit /configure /cfg %windir%\repair\secsetup.inf /db secsetup.sdb /verbose /areas FILESTORE

Warning: The above command will reset all of the file security permissions on your operating system drive, so you will need to be running as an administrator; be aware that any changes made to file security permissions after you installed Windows will be lost.

Objections
•    “I do not want to be restricted”
    o    Neither will malicious code be restricted – it runs with your privileges
    o    You will spend all your time updating the signatures of your anti-spyware and anti-virus utilities

•    “Some of my applications do not work as non-admin”
    o    Find out why; some effort may be required to get apps to work as non-admin, but the peace-of-mind payoff is worth the effort
    o    Call the manufacturer and DEMAND that they make their application work under LUA
    o    Avoid software that does not carry the “Designed for Microsoft Windows XP” logo

•    “I hate logging out to install software or perform an administrative configuration”
    o    Get used to using the “run-as” option (right click shortcuts with the left Shift key held down)
    o    In commercial organizations it is common practice to log on as a domain admin to install and configure software, but office users do not all have the domain password.

•    “Some of my developed code does not execute under LUA”
    o    This is a good opportunity to take a look at your code and find out why it requires administrative rights to execute. If you can get your code to work as LUA then it will most likely deploy better, and require limited hands-on installation when moving it to a production environment.

Where can I find out more information?
•    Randy Hayes’s presentation slides can be downloaded from here
•    www.non-admin.com is a new web site being set up by Randy to educate non-technical readers on configuring their computer as LUA