Monday, 3 September 2012

ndg_oauth for PyPI

After a long silence this is a catch-up on things OAuth-related.  Firstly, a follow-up to announce that ndg_oauth has now finally been moved to PyPI for its first 'official' release.  There are separate client and server packages, the latter containing the functionality for both the authorisation and resource servers:

http://pypi.python.org/pypi/ndg-oauth-client/
http://pypi.python.org/pypi/ndg-oauth-server/

Before any more on that, though, I should really say something about the recent news with OAuth 2.0, something that colleagues nudged me about.  It's disappointing, but then maybe the signs were there.  When I first became familiar with the draft for 2.0, it looked like it would be easier to implement and would meet a broader set of use cases.  Two things appealed particularly: the simplification of the message flow and the use of transport-layer instead of message-level security.  Having had experience wrestling with different implementations of WS-Security, and particularly XMLSec digital signature, to get them to interoperate, it was a relief not to have to deal with that in this case.  Also, transport-layer security is established in the Grid security and federated identity worlds and so is a known quantity.

All the same, the reported problems with OAuth 1.0 and signatures seem a little surprising.  Consistent canonicalisation of an HTTP header for OAuth 1.0 seems a much easier prospect than XML signature; it hardly seems such a difficult requirement to implement.  Message-level security also provides some protection for the redirects in use: securing at the transport layer gives me secure channels between the user agent and the authorisation server, and between the user agent and the OAuth client, but the content can still be tampered with in between these interactions, at the user agent itself.

Our use of OAuth has definitely been taking it beyond the simple web-based use cases that I've seen for 1.0.  When I say 'our', I'm thinking of its application for Contrail and some interest shown by others in the academic research community.  In our case, we can exploit version 2.0 to address the multi-hop delegation required to fit together the various actors in Contrail's layered architecture, or it could be applied to manage delegation across multiple organisations in a federated infrastructure like ESGF.  These cases need support for non-browser-based clients and for actions executed asynchronously, removed from direct user interaction.  These are, in effect, enterprise use cases for the academic communities involved.

As I looked over the draft specification with my colleague Richard and he started an implementation, it became clear that it opened up possibilities to support many use cases, but with that came a concern that any implementation would have gaps to fill in and choices to make about how best to proceed.  It was clear that the specification would need careful profiling if it was to be of wider use in our communities, but this is familiar ground in geospatial informatics and, I'm sure, elsewhere too.

A consensus is badly needed, but I think that's practicable and can be reached as a process.  By making an implementation and working with other collaborators, we are taking the first steps towards standardising a version 2.0 profile for use with a Short-Lived Credential Service.  This was the original motivating driver: to provide a means for delegation of user X.509 certificates.  So, returning to the ndg_oauth implementation, it aims to fit this purpose but at the same time provide a baseline of more generic functionality that others can extend and build on.  More to follow! ...


Tuesday, 24 April 2012

OAuth Service Discovery


This picks up from where I left off in an earlier post about OAuth.  To recap, the work fronts a short-lived credential service with an OAuth 2.0 interface.  This is now being integrated into Contrail, as described in this paper just submitted following ISGC.  There are a few more design considerations to think through for its wider deployment, though.  As well as managing delegation between entities in a cloud federation with Contrail, we are applying this to CEDA's applications and web services.  We have a full Python implementation of both the client and server sides of OAuth 2.0, which I hope to push out to PyPI soon, and we're using it to secure our CEDA OGC Web Processing Service and Web Map Service.  Here, though, I want to focus on how we might integrate with the federated identity management system for the Earth System Grid Federation (ESGF), foremost given our involvement with CMIP5.

It's firstly a question of service discovery, something that need not be addressed for Contrail.  The Contrail system consists of a federation layer which abstracts a set of underlying cloud provider interfaces.  Users can manage resources from each of these collectively through the federation's interface.  The access control system manages identity at the federation level: a user authenticates with a single federation IdP.  When it comes to the OAuth delegation interface, there are single OAuth authorisation server and resource server instances associated with that IdP.

Now consider ESGF.  There are many IdPs integrated into the federation, so how does the OAuth infrastructure map to this?  Imagine, for example, a Web Processing Service hosted at CEDA.  A user invokes a process, and that process will access secured data from an OPeNDAP service, so the WPS will need a delegated credential.  When the user requests the process, then, the WPS triggers the OAuth flow, redirecting their browser to an OAuth authorisation server, but how does the WPS know where that service is?  A simple approach would be to configure the WPS with an endpoint in its start-up settings.  This server could be:
  1. a global one for the whole federation – clearly this won't scale given the deployment across many organisations
  2. associated with the given Service Provider – in this case CEDA
  3. associated with the user's IdP
Option 2 could work, but now consider the other steps in the flow: the user is prompted to authenticate with the OAuth server and approve the delegation, i.e. allow the WPS to get a credential on their behalf.  In a federated system the user could be from anywhere, so they will need to sign in with their own IdP.  In this case, then, the user enters their OpenID at the OAuth authorisation server.  Already this is sounding complicated and potentially confusing for the user: they've been redirected away from the WPS to a central OAuth authorisation server at CEDA, and now they will be redirected again to their IdP, probably somewhere else.
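
Whichever option is chosen, it's worth being concrete about where the endpoint actually gets used: to start the flow, the WPS builds an authorisation request and redirects the user's browser to it.  As a minimal sketch only, assuming purely hypothetical endpoint, client ID, callback and scope values (in practice these would come from the WPS's start-up settings or, as discussed below, from a discovery step):

>>> import urllib
>>> # Hypothetical values - none of these names refer to real deployments
>>> authz_endpoint = 'https://oauth.somewhere.ac.uk/authorize'
>>> query = urllib.urlencode({
...     'response_type': 'code',
...     'client_id': 'ceda-wps',
...     'redirect_uri': 'https://wps.ceda.ac.uk/oauth/callback',
...     'scope': 'https://slcs.somewhere.ac.uk/certificate',
... })
>>> redirect_url = authz_endpoint + '?' + query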

Also, consider the approval process.  Rather than having to approve the delegation every single time they invoke the WPS process, the user would like the system to remember their approval decision.  The OAuth authorisation server could record a set of user approvals associated with a profile stored there.  However, if we manage approvals at CEDA, this information will only be scoped within the bounds of CEDA's services.  If the user now goes to use services at another service provider, they will need to record a fresh set of delegation approvals.  Not a smooth user experience.

Turning to option 3, this could scale better: all approval decisions would be centralised with the OAuth authorisation server associated with the user's IdP.  However, there is now a service discovery problem.  Looking at the protocol flow again, the WPS does not know the location of the user's OAuth authorisation server endpoint.  Given ESGF's already established use of OpenID, an obvious solution is to leverage Yadis.  During OpenID sign-in, the Relying Party HTTP GETs the user's OpenID URL and an XRDS document is returned containing the service endpoint for the OpenID Provider.  This has already been extended for ESGF to include the MyProxy and Attribute Service endpoints.  I'm told that there's some debate in the community about the choice of discovery technology, with SWD (Simple Web Discovery) as an alternative, but given ESGF's legacy with XRDS it makes logical sense to add the address of the OAuth authorisation server as an entry in the XRDS document returned:

<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <Type>http://openid.net/signon/1.0</Type>
      <URI>https://openid.provider.somewhere.ac.uk</URI>
      <LocalID>https://somewhere.ac.uk/openid/PJKershaw</LocalID>
    </Service>
    <Service priority="1">
      <Type>urn:esg:security:myproxy-service</Type>
      <URI>socket://myproxy-server.somewhere.ac.uk:7512</URI>
      <LocalID>https://somewhere.ac.uk/openid/PJKershaw</LocalID>
    </Service>
    <Service priority="10">
      <Type>urn:esg:security:oauth-authorisation-server</Type>
      <URI>https://oauth-authorisation-server.somewhere.ac.uk</URI>
      <LocalID>https://somewhere.ac.uk/openid/PJKershaw</LocalID>
    </Service>
    <Service priority="20">
      <Type>urn:esg:security:attribute-service</Type>
      <URI>https://attributeservice.somewhere.ac.uk</URI>
      <LocalID>https://somewhere.ac.uk/openid/PJKershaw</LocalID>
    </Service>
  </XRD>
</xrds:XRDS>
Bringing it together, the delegation flow would start with the Service Provider presenting an interface to enable the user to select their IdP.  This could be by entering a full OpenID URL or, preferably, picking one from a list of trusted IdPs.  The SP could then GET the URL and extract the OAuth authorisation server endpoint from the XRDS document returned.  From there, the standard OAuth flow would proceed as before.
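
As an illustration only (this isn't part of ndg_oauth), the extraction step could be sketched with the standard library's ElementTree, assuming the XRDS document above has already been fetched into the string xrds_doc:

>>> from xml.etree import ElementTree
>>> XRD_NS = '{xri://$xrd*($v*2.0)}'
>>> OAUTH_TYPE = 'urn:esg:security:oauth-authorisation-server'
>>> xrds = ElementTree.fromstring(xrds_doc)
>>> # Pick out the URI of the service entry whose Type matches the OAuth
>>> # authorisation server URN
>>> oauth_uris = [svc.find(XRD_NS + 'URI').text
...               for svc in xrds.findall('.//' + XRD_NS + 'Service')
...               if OAUTH_TYPE in [t.text for t in svc.findall(XRD_NS + 'Type')]]
>>> print oauth_uris[0]
https://oauth-authorisation-server.somewhere.ac.uk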

Thursday, 9 February 2012

New Python HTTPS Client

I've added a new HTTPS client to the ndg_* list of packages in PyPI.  ndg_httpsclient, as it's become, has been on my todo list for a long time: I've wanted to make use of all the features PyOpenSSL offers, with the convenience of wrapping it in the standard httplib and urllib2 interfaces.

Here's a simple example using the urllib2 interface.  First, create an SSL context set up to verify the peer:
 
>>> from OpenSSL import SSL
>>> ctx = SSL.Context(SSL.SSLv3_METHOD)
>>> verify_callback = lambda conn, x509, errnum, errdepth, preverify_ok: preverify_ok 
>>> ctx.set_verify(SSL.VERIFY_PEER, verify_callback)
>>> ctx.load_verify_locations(None, './cacerts')

Create an opener, adding in the context object, and GET the URL.  The custom build_opener adds in a new PyOpenSSL-based HTTPSContextHandler.

>>> from ndg.httpsclient.urllib2_build_opener import build_opener
>>> opener = build_opener(ssl_context=ctx)
>>> res = opener.open('https://localhost/')
>>> print res.read()


The verify callback above is just a placeholder.  For a more comprehensive implementation, ndg_httpsclient includes a callback with support for checking the peer's FQDN against the subjectAltName in the certificate.  If subjectAltName is absent, it falls back to attempting a match against the certificate subject CommonName.
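
For illustration only (this is not the package's implementation, and it leaves out the subjectAltName handling just described), a hostname-checking verify callback is structured along these lines:

>>> def host_check_callback(conn, x509, errnum, errdepth, preverify_ok):
...     # Only examine the end-entity certificate, at depth zero in the chain
...     if errdepth == 0:
...         # Illustrative check against a fixed expected hostname
...         if x509.get_subject().commonName != 'localhost':
...             return False
...     # Otherwise defer to OpenSSL's own verification result
...     return preverify_ok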

The package's callback is implemented as a callable class.  This means that you can instantiate it, configuring the required settings, and then pass the resulting object directly to the context's set_verify:

>>> from ndg.httpsclient.ssl_peer_verification import ServerSSLCertVerification
>>> verify_callback = ServerSSLCertVerification(hostname='localhost')

To get the subjectAltName support I needed pyasn1, with some help from this query, to correctly parse the relevant certificate extension.  So, adding this into the context creation steps above:
 
>>> from OpenSSL import SSL
>>> ctx = SSL.Context(SSL.SSLv3_METHOD)
>>> verify_callback = ServerSSLCertVerification(hostname='localhost')
>>> ctx.set_verify(SSL.VERIFY_PEER, verify_callback)
>>> ctx.load_verify_locations(None, './cacerts')

The package will work without pyasn1, but then you lose the subjectAltName support; warning messages will flag this up.  I can pass this context object to the urllib2-style opener as before, or use the httplib interface:

>>> from ndg.httpsclient.https import HTTPSConnection
>>> conn = HTTPSConnection('localhost', port=4443, ssl_context=ctx)
>>> conn.connect()
>>> conn.request('GET', '/')
>>> resp = conn.getresponse()
>>> resp.read()

A big thank you to Richard for his help getting this package written and ready for use.  Amongst other things, he's added a suite of convenience wrapper functions and a command-line script.