Code Access Security – A Primer

Overview

This post serves as a primer for software developers interested in learning about Code Access Security (CAS) in .NET. The following information is not exhaustive; it provides a basic overview of Code Access Security. Those interested in this subject are encouraged to read further.

The following articles cover code security and are a good follow-up to this post.

http://www.codeproject.com/dotnet/UB_CAS_NET.asp

http://msdn.microsoft.com/msdnmag/issues/05/11/CodeAccessSecurity/default.aspx

http://msdn.microsoft.com/msdnmag/issues/05/11/HostingAddIns/

Shawn Farkas is one of many experts on Code Access Security; as well as authoring many magazine articles, he posts regularly on his weblog:

http://blogs.msdn.com/shawnfa/

 

What is Code Access Security?

Most computer users and security experts are accustomed to Role-Based Security (RBS), where particular users belong to specific groups, and groups are assigned permissions to protected resources. Windows XP/2003, SQL Server, IIS, and a host of server applications use Role-Based Security to provide access protection.

Code Access Security differs from Role-Based Security in that it restricts access to protected resources at the code level. Coming from a role-based way of thinking, Code Access Security can be a confusing concept because there is no user attempting access in the typical sense. Code Access Security defines a set of permissions, and a policy that assigns those permissions by evaluating the evidence belonging to the code requesting access.

 

Why should we care about Code Access Security?

Typically, software development and security roles are quite distinct:

Software developers create software to run on workstations and servers, and security experts lock down access at the user level to these workstations and servers.

The above approach has been in place for as long as developers have been creating software and the software has been manipulating secured data; however, this methodology has a few flaws:

  • Deployment of software in the above scheme is troublesome – developers are used to writing and testing software with a full set of permissions. When software developed in this fashion is deployed in a locked-down environment, it often fails.
  • The best software developers are not always the best security experts, and vice versa. Software developers hate to work through security constraints and security experts often like to lock down systems to the point where they are sometimes unusable.

Code Access Security is a new way of thinking. Just as industry has learned that performance is not a last-minute consideration in the software development lifecycle, neither is security. Code Access Security prevents malicious code from penetrating secure systems by detecting insecure code before it executes, and allows potential security holes to be pinpointed to the code modules that demand a higher permission set.

With Code Access Security, you can:

  • Restrict what code can do
  • Restrict who can call code
  • Identify code

Code Access Security works hand-in-hand with security design and threat modeling, in that any .NET assembly can be marked as “security transparent.” Security transparent assemblies contain code that does not access protected resources and are safe to operate in partial trust environments. More on security transparent assemblies later in this post.

Some environments in which custom code may execute are partial trust. Microsoft guidelines suggest that all ASP.NET installations hosting multiple applications be set at medium trust to isolate applications from one another. Developers writing code for hosted environments will have no choice but to ensure their code runs at the ASP.NET medium trust level. The next version of SharePoint (Office 12 Server and WSS 3.0) operates at partial trust out of the box.

 

The Fundamentals

As mentioned in the previous section, Code Access Security does not use user or role identification, so how does Code Access Security in .NET work?

Before execution of verifiable code, the .NET platform determines whether the code has permission to complete its function successfully. This process involves collecting information about the code (its evidence) and determining the permissions to grant by consulting the current policy at the enterprise, machine, user, and application-domain levels. The list below documents the main constituents of Code Access Security:

  • Evidence is a set of attributes that belong to code. For example, certain .NET assemblies may be strong named and have a particular public key token. Other assemblies may have originated via “Click Once Deployment” at a certain web address, or reside within a particular directory on the file system.

  • Permissions represent access to a protected resource or the ability to perform a protected operation. The .NET Framework provides a number of classes that represent different permissions. For example, if some code needs access to files on disk, then a FileIOPermission is required; a ReflectionPermission is required for any code that attempts to perform reflection; and so on.

  • Permission Set is a collection of permissions. The system defines several permission sets and different assemblies in a .NET application may fall into zero, one or more of these permission sets. The Framework defines a number of default permission sets, including “Full Trust” – a set that contains all permissions, and “Nothing” – a set that contains no permissions.

  • Code Group is a mapping of evidence to permission sets. Code groups combine to form a tree, where code must exhibit the desired evidence to satisfy membership of a group.
  • Security Policy is a configurable set of rules that the CLR follows when determining the permissions to grant to code. There exist four independent policy levels:

  • Enterprise – All managed code in an enterprise setting
  • Machine – All managed code on a single computer
  • User – Managed code in all processes associated with the current user
  • Application Domain – Managed code in the host’s application domain
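The permission and permission-set concepts above translate directly into framework classes. The sketch below (assuming the .NET Framework 2.0 CAS APIs; the directory path is illustrative) builds a permission set by hand – the same structure a code group grants to code whose evidence satisfies its membership condition:

```csharp
using System.Security;
using System.Security.Permissions;

public class PermissionSetSketch
{
    public static void Main()
    {
        // Start from an empty set (equivalent to the "Nothing" permission set).
        PermissionSet set = new PermissionSet(PermissionState.None);

        // Grant read access to one directory, and reflection over type members.
        set.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, @"C:\Data"));
        set.AddPermission(new ReflectionPermission(ReflectionPermissionFlag.MemberAccess));

        // The demand succeeds only if every caller on the stack has been
        // granted at least these permissions by the effective policy.
        set.Demand();
    }
}
```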

What about ASP.NET?

ASP.NET builds atop Code Access Security and provides five permission sets, each represented as a trust level:

  • Full
  • High
  • Medium
  • Low
  • Minimal

Each trust level above contains permissions, ranging from a complete set of permissions – “Full” trust – to limited permissions – “Minimal” trust.

A separate policy configuration file exists for each trust level and is packaged with the ASP.NET installation. The mapping between a trust level and its policy file appears in configuration (web.config):

<trustLevel name="High" policyFile="web_hightrust.config"/>

Applications that operate in partial trust (not full trust) and require elevated permissions can either run at a higher trust level or define custom permissions in a new policy file. If an application only requires a handful of permissions not present at the current trust level, then it makes sense to define a custom policy and permission set. Increasing the trust level may add many more permissions not required by the application, creating a security vulnerability.
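In configuration terms, a custom trust level is registered and then selected in two steps. The fragment below is a sketch: the “MediumPlus” name and policy file are hypothetical, not shipped with ASP.NET, and the `<securityPolicy>` mapping normally lives in a machine-level configuration file rather than the application’s web.config:

```xml
<system.web>
  <!-- Map the custom level name to its policy file (typically a copy of
       web_mediumtrust.config with the extra permissions added). -->
  <securityPolicy>
    <trustLevel name="MediumPlus" policyFile="web_mediumplustrust.config" />
  </securityPolicy>

  <!-- Select the custom level for this application. -->
  <trust level="MediumPlus" />
</system.web>
```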

 

Applying Code Access Security

Two different kinds of syntax are available when adding Code Access Security to code: declarative and imperative.

Declarative syntax involves applying attributes to methods, classes, or assemblies. The “Just-in-Time” (JIT) compiler reads the metadata generated from these attributes and evaluates the security calls.

[FileIOPermission(SecurityAction.Demand, Unrestricted=true)]
public class Foo { … }

Imperative syntax involves the use of method calls to create instances of security classes at runtime.

public class Foo
{
    public void MethodOne(..)
    {
        new FileIOPermission(PermissionState.Unrestricted).Demand();
    }
}

Both of the examples above are requesting unrestricted access to the file system. Most of the security permission classes in the .NET framework provide properties to customize the level of access; the FileIOPermission includes properties to permit read/write access to particular files and directories in the file system. The example below permits all access to a particular file by changing the parameters passed to the constructor:

new FileIOPermission(FileIOPermissionAccess.AllAccess, @"C:\Test.txt").Demand();

So, what happens when code declares a security permission attribute or instantiates a new permission class imperatively?

All three examples above call a “demand” on the desired permission class. The demand instructs the CLR to walk the current call stack, making sure that every caller has been granted the demanded permission. If one of the calling methods in the stack does not have the permission, then the CLR throws a security exception.
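The stack walk surfaces to the developer as a SecurityException. A minimal sketch, assuming the .NET Framework 2.0 CAS APIs (the method name is illustrative):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

public class StackWalkSketch
{
    // If any caller on the stack lacks unrestricted FileIOPermission,
    // the demand's stack walk throws a SecurityException before the
    // file system is ever touched.
    public static void TouchFileSystem()
    {
        new FileIOPermission(PermissionState.Unrestricted).Demand();
        // ...file access would go here...
    }

    public static void Main()
    {
        try
        {
            TouchFileSystem();
        }
        catch (SecurityException)
        {
            // Some caller on the stack was not granted the permission.
        }
    }
}
```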

Most of the classes in the .NET Framework demand (or link demand) permissions when accessing protected resources. If a developer writes code that uses one of the framework classes – say, to access a database or perform reflection – and that code runs in partial trust, then the code must have been granted the desired permission; otherwise, the CLR throws a security exception.

By default, any code developed against the .NET Framework runs as “full trust,” except in the following cases:

  • The developer explicitly creates a sandbox application domain with partial trust
  • The developer configures application assemblies as partial trust using the .NET Framework Configuration tool
  • The code runs in ASP.NET at a trust level other than full
  • The code runs in some other host application preconfigured for partial trust
  • The code executes across a network

When operating at “full trust,” all security demands made by classes in the framework (or by custom developer classes that are security aware) succeed. Only during deployment to a partial trust environment is there a problem. Developers should get in the habit of developing under partial trust when writing code that accesses protected resources.
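The first case above, an explicit sandbox application domain, can be sketched as follows (assuming the .NET Framework 2.0 AppDomain APIs; the domain name is arbitrary):

```csharp
using System;
using System.Security;
using System.Security.Policy;

public class SandboxSketch
{
    public static void Main()
    {
        // Give the new domain Internet-zone evidence, so policy grants
        // it only the limited Internet permission set.
        Evidence sandboxEvidence = new Evidence();
        sandboxEvidence.AddHost(new Zone(SecurityZone.Internet));

        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", sandboxEvidence);

        // Code run via sandbox.ExecuteAssembly(...) now executes in
        // partial trust and fails any demand beyond that permission set.
    }
}
```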

Permission demand is one of several actions applicable to permission classes; the available actions are:

  • SecurityAction.Demand – All callers higher in the call stack must have the permission specified by the current permission object.

  • SecurityAction.LinkDemand – Only the immediate caller in the call stack must have the permission specified by the current permission object.

  • SecurityAction.InheritanceDemand – Derived classes or overriding methods must have the permission specified by the current permission object.

  • SecurityAction.Assert – If the calling code has the desired permission, then the stack walk for the permission check stops. Use asserts only when encapsulating code that is known to be secure, because callers further up the stack running in partial trust may not be aware of a demand further down the chain. Code that asserts a permission it has not itself been granted has no effect: permission checking continues up the call stack.

  • SecurityAction.Deny – Callers cannot access the protected resource specified by the permission, even if they have been granted permission to access it. If a method specifies a deny action, then any method further down the chain that attempts to access the resource fails, regardless of the permissions it holds.

  • SecurityAction.PermitOnly – Like a deny action, a permit-only action restricts access: the caller is denied access to all resources except those defined in the current permission object. Further definition of this action is beyond the scope of this post.

  • SecurityAction.RequestMinimum – Only used within the scope of an assembly, this action defines the set of minimum permissions required for the assembly to execute.

  • SecurityAction.RequestOptional – Only used within the scope of an assembly, this action defines the set of permissions that are optional (not required) for execution.

  • SecurityAction.RequestRefuse – Only used within the scope of an assembly, this action defines a set of permissions that may be requested and misused, and should therefore never be granted, even if the current security policy allows it. Further definition of this action is beyond the scope of this post.
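The three assembly-scoped actions above can be sketched declaratively. These attributes go at the assembly level (for example, in AssemblyInfo.cs); the directory path is illustrative:

```csharp
using System.Security.Permissions;

// Refuse to load at all without read access to the data directory
// (path is hypothetical).
[assembly: FileIOPermission(SecurityAction.RequestMinimum, Read = @"C:\AppData")]

// Reflection is useful to this assembly, but not required.
[assembly: ReflectionPermission(SecurityAction.RequestOptional, Unrestricted = true)]

// Never accept the ability to call unmanaged code, even if policy grants it.
[assembly: SecurityPermission(SecurityAction.RequestRefuse, UnmanagedCode = true)]
```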

Asserts deserve special consideration because they prevent permission demands from reaching callers higher in the call stack. Asserts are useful when a method must call code that demands a higher permission and the caller of the method is in partial trust. For example, a trusted custom assembly with elevated trust could call out to the file system using one of the framework API calls; the framework will demand a FileIOPermission, which must not propagate beyond the level of the custom assembly. Placing assert code around the call to the file system API ensures that the demand never leaves the scope of the method containing the assert. The custom assembly must itself have the FileIOPermission; otherwise, the assert is ignored and demands continue up the stack to partially trusted code.

The following is an example of assertion code around a call to a method that demands a security permission. Notice the revert call at the end: the revert cancels the assert. It is important to limit the scope of an assertion to avoid creating a security vulnerability, so place only the code that requires the security permission between the assert call and the revert call.

new FileIOPermission(PermissionState.Unrestricted).Assert();
// Do something that causes a FileIOPermission demand
CodeAccessPermission.RevertAssert();

 

Transparent Assemblies

Transparent assemblies are .NET assemblies that are free of security critical code. The .NET Framework 2.0 enables developers to define assemblies as transparent so that security audits can rule them out as potential security vulnerabilities. Transparent assemblies voluntarily give up the ability to elevate the permissions of the call stack, and the following rules apply:

  • Transparent code cannot assert for permissions to stop a stack walk from continuing
  • Transparent code cannot satisfy a link demand
  • Unverifiable code is forbidden in transparent assemblies
  • Calls to P/Invoke or unmanaged code will cause a security permission demand

Security transparent assemblies run either at the permission level granted, or at the permission level of the caller, whichever is less.

By default, all assemblies are security critical – the opposite of security transparent – but an assembly can be made transparent by adding the following attribute at the assembly level:

[assembly:SecurityTransparent]

The CLR throws a security exception if a transparent assembly attempts to elevate permissions. In cases where the developer wants to mark the entire assembly as transparent except for a few methods, use the following attribute:

[assembly:SecurityCritical]

The attribute named above is a little misleading in that it marks the entire assembly as transparent but allows security critical code. Decorate methods that require elevated permissions as follows:

[SecurityCritical]
public void foo()
{
    new FileIOPermission(PermissionState.Unrestricted).Demand();
    …..
}

 

Allowing Partially Trusted Callers

By default, strongly named, trusted assemblies receive an implicit link demand for full trust on every public method of every publicly available class within the assembly. The CLR performs this insertion to protect fully trusted assemblies from misuse by attackers. For example, a trusted assembly may have full access to load a file from disk; an attacker who realizes that the assembly has not been security audited could manipulate which file is loaded. The implicit link demand ensures that the attacker cannot execute the method when not running in full trust.

Assuming developers have security audited their code and want to allow partially trusted callers to call a fully trusted assembly, the “Allow Partially Trusted Callers Attribute” (APTCA) enables developers to suppress the implicit link demand:

[assembly: AllowPartiallyTrustedCallers]

Developers should take the utmost care when enabling partially trusted callers to call trusted assemblies.

Some APTCA assemblies may still demand or link demand explicit permissions, in which case the addition of the APTCA does not remove the explicit demands, and a security exception is still thrown when called from partially trusted code.
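Putting the pieces together, here is a sketch of a hypothetical APTCA library (the class and method names are illustrative): the assembly-level attribute suppresses the implicit full-trust link demand, while the explicit demand inside the method still runs.

```csharp
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallers]

public class AuditedFileReader
{
    public static string ReadAll(string path)
    {
        // The explicit demand remains: partially trusted callers without
        // read FileIOPermission for this path get a SecurityException.
        new FileIOPermission(FileIOPermissionAccess.Read, path).Demand();
        return System.IO.File.ReadAllText(path);
    }
}
```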
