From K5Wiki
Revision as of 08:37, 23 May 2017 by Simo (talk | contribs)

This is an early-stage project for MIT Kerberos. It is being fleshed out by its proponents; feel free to help develop the details. Once the project is ready, it will be presented for review and approval.

NOTE: The actual project is now hosted and documented at: https://pagure.io/gssproxy

This project is about creating a mechanism by which accepting and initializing a security context can be proxied to a separate trusted system service, without giving the application access to long-term keys. The proxy is implemented at the mechglue layer so that any mechanism can be proxied.


We have identified two main scenarios that benefit from this approach.

One is the ability to lock long-term keys (especially the host/ key when the krb5 mechanism is used) into a trusted service on the system, so that "less trusted" applications do not have direct access to the keys. This is a form of privilege separation that allows better control of the keys and reduces the system's attack surface with respect to gaining access to them.

The other is about trusting data carried inside, for example, krb5 tickets. In systems that use signed authorization data, it is essential that a trusted service directly handle accepting the context, so that an application cannot forge authorization data by using its access to the host keys to sign authorization data blobs. As an example, think of a user accessing an FTP daemon and sending user credentials (MS-PAC/PAD) in the ticket. A trusted service needs to be able to verify that the KDC actually signed this data in order to trust its authenticity and allow the system to create a user from it. One way to fully trust data signed by the long-term key is to deny the FTP service access to that key, so that a subverted service cannot create fake data and sign it to perform privilege-escalation attacks.


Implementing this proxy requires an IPC mechanism to transfer data from the process handling the GSSAPI exchange to the trusted service that holds the keys and negotiates the security context.

This IPC mechanism should use a defined protocol so that independent implementations of the trusted service can be created.

Interface stability, either at the API level (if an endpoint library is created) or at the protocol level, is highly desirable but not essential.

Considerations on the design

There are a few challenges that need to be considered for this project.

Access privileges

Currently, access to credentials is determined based on file permissions: if the user or application has access to the keytab/credentials, it is allowed to use them for any purpose. By adding an IPC mechanism we also need a way to determine what privileges the calling process has.

One way to do this is to replicate the permission model of keytab files as permissions on the directories containing the UNIX sockets: one directory per credential/keytab, with file-system permissions set so that only processes belonging to the owning user can access the IPC pipe. On some systems it is also possible to inspect the credentials of the connecting process via special system calls on the socket; on those systems a single socket can be used, with access control performed in the daemon.
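The second approach above can be sketched as follows. This is a minimal, Linux-specific illustration of inspecting a connecting peer's credentials via the SO_PEERCRED socket option (the real gss-proxy daemon is written in C; the function name `peer_credentials` is ours, not part of any project API):

```python
# Illustrative sketch: read the peer's pid/uid/gid from a connected
# AF_UNIX socket using the Linux-specific SO_PEERCRED option.
import os
import socket
import struct

def peer_credentials(sock):
    """Return (pid, uid, gid) of the process on the other end of a
    connected AF_UNIX socket (Linux struct ucred: three native ints)."""
    fmt = "3i"
    data = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize(fmt))
    return struct.unpack(fmt, data)

if __name__ == "__main__":
    # Both ends of a socketpair belong to this same process, so the
    # reported credentials must match our own pid/uid/gid.
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    assert peer_credentials(a) == (os.getpid(), os.getuid(), os.getgid())
    a.close()
    b.close()
```

A daemon taking this route can accept all clients on one well-known socket and apply per-credential access-control decisions after looking up the caller's uid.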

What to proxy?

There are a few ways the proxy code can be designed.

a) Proxy everything unconditionally.
b) Provide a flag or method call that selects which mechanisms to proxy.
c) Proxy on a per-application basis, driven by per-application configuration.

Each approach may be desirable, but each has drawbacks:

With (a) it may turn out that a mechanism needs access to long-term secrets even after the security context has been established. That could be handled by passing the long-term keys along with the export/import-credentials functions that are needed to transfer session keys, but doing so would render the privilege separation useless. Another option would be to provide extensions so that mechanisms can delegate these operations to the proxy, but this looks tricky.

With (b), mechanisms that cannot easily be proxied could be marked as such and skipped. The issue here is that meta-mechanisms like SPNEGO would have to honor the same flags and change behavior dynamically based on which mechanism is being probed. It is not inconceivable to make meta-mechanisms smart enough to do that, but it is probably too much for a first release.

The problem with (c) is that it may require application changes to add the desired option, and in general a global policy is preferable. However, a simple method to decide whether to proxy could be for the GSSAPI library to check whether direct access to the credentials is available, and skip proxying when it is.
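The heuristic just described can be sketched in a few lines. This is only an illustration of the idea, not code from the project; the function name `should_proxy` and the keytab path are hypothetical:

```python
# Illustrative heuristic for option (c): use the proxy only when the
# calling process cannot read the credential store directly.
import os

def should_proxy(keytab_path):
    """Return True when direct access to the keytab is unavailable,
    i.e. when the call should fall back to the gss-proxy IPC."""
    return not os.access(keytab_path, os.R_OK)
```

With this check, a privileged daemon that can read its own keytab keeps its current fast path, while an unprivileged service transparently falls back to the proxy.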

RPC mechanism

In order to proxy data around, messages need to be marshaled and unmarshaled across the IPC pipe. There are a few options for handling this messaging.

After careful consideration it has been decided to use SUNRPC/XDR as the transport. The proxy server does not register with an RPC endpoint mapper, but otherwise uses the full RPC format on the wire together with XDR encoding.

The protocol has been assigned program number 400112 and is documented in the gss-proxy project.
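To make the encoding concrete, here is a tiny sketch of how XDR lays out variable-length opaque data (such as a context token) on the wire: a 4-byte big-endian length followed by the bytes, zero-padded to a multiple of four. This hand-rolled pair of helpers is purely illustrative; the actual gss-proxy stubs are generated from an XDR description, and the helper names are ours:

```python
# Illustrative XDR encoding of a variable-length opaque field:
# 4-byte big-endian length, then the bytes, padded to 4-byte alignment.
import struct

def xdr_pack_opaque(data):
    """Encode bytes as an XDR variable-length opaque."""
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

def xdr_unpack_opaque(buf, offset=0):
    """Decode an XDR opaque starting at `offset`;
    return (data, offset_of_next_field)."""
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    pad = (4 - length % 4) % 4
    return buf[start:start + length], start + length + pad
```

For example, a 6-byte token occupies 12 bytes on the wire (4 for the length, 6 for the data, 2 of padding), which is what makes the format easy to parse in constrained environments such as a kernel.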

The reasons for choosing RPC/XDR were several:

  • It is a well-known encoding with a reasonable description language and a compiler for RPC stubs.
  • Most krb5 projects already have built-in facilities to handle this protocol.
  • Most kernels also already support SUNRPC/XDR in their NFS code.

The last point is relevant because the gss-proxy concept also fits the kernel case, where supporting GSSAPI authentication for NFS or CIFS servers/clients is valuable but embedding a whole GSS library is excessive. Using the gss-proxy protocol directly makes it possible to use basic GSSAPI services without having to implement a full GSSAPI library in the kernel. A proof-of-concept implementation using part of the protocol in the Linux kernel has been written and posted to LKML.

Additional Considerations

The GSSAPI design is currently completely synchronous. By adding an IPC channel we increase the risk of stalling the application by blocking on socket operations while waiting for the trusted service to be scheduled and perform the needed operations. Although the added latency is not desirable, other parts of GSSAPI can already add latency during the accept/init phases; some mechanisms, for example, perform blocking network operations such as DNS lookups. Considering that the accept/init phases usually represent a very small amount of time compared to the life of a secured connection, and that connection setup already suffers from network latencies, we think this is acceptable at this stage. Applications can mitigate issues with blocking operations by confining GSSAPI-related handling to a separate thread.
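The thread-confinement mitigation above can be sketched as follows. The function `slow_handshake` is a hypothetical stand-in for a blocking accept/init exchange, and `negotiate_async` is our illustrative wrapper, not a project API:

```python
# Sketch of the suggested mitigation: confine a blocking security-context
# negotiation to a worker thread so the main loop is not stalled.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_handshake(token):
    """Placeholder for a blocking accept/init exchange over the proxy IPC."""
    time.sleep(0.01)  # simulated IPC scheduling / network latency
    return b"CTX:" + token

executor = ThreadPoolExecutor(max_workers=1)

def negotiate_async(token):
    """Submit the blocking call to a worker thread; the caller gets a
    Future it can poll or wait on while doing other work."""
    return executor.submit(slow_handshake, token)
```

The application keeps servicing its event loop and collects the established context from the Future once the negotiation completes.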



The two main hooks for this functionality will be implemented in gss_accept_sec_context(), for accepting security contexts initiated by third parties, and gss_init_sec_context(), for initializing a security context on the application's behalf.


  • Introduce new RPC library with stub generator/IDL compiler
  • Define the RPC interfaces to be implemented
  • Introduce a new meta-mechanism that implements the client side of the proxy
  • Create a simple server program for testing purposes


See https://pagure.io/gssproxy


Interposer mechglue plugin mechanism


Have a working prototype for the 1.11 release

Testing Plan

Build client and server code as part of the GSS-Proxy project.