Considering Various Active Directory and Oracle Identity Manager Integration Options

There are a number of different ways of integrating different versions of Microsoft's Active Directory (including ADFS & FIM) with different versions of Oracle's Identity Management suite. Unfortunately for the implementer there is very little published architecture best practice covering identity migration / integration. This is surprising given both vendors' large market share and the number of organisations each year switching products or adding new features using the other vendor's software. As an example, the following migration / integration options are available when moving from AD to Oracle.

  • You can choose to keep the existing AD as the master identity repository and use the Oracle Identity Manager connector between the two products.
    • The connector supports Active Directory and Active Directory Lightweight Directory Services (AD LDS), formerly known as Microsoft Active Directory Application Mode (ADAM), as either a managed target resource or as an authoritative (trusted) source of identity data for Oracle Identity Manager.
    • With this approach, if you wish to synchronise users' passwords from Microsoft Active Directory (AD) to Oracle Identity Manager (OIM), you must install the Microsoft Active Directory Password Synchronization connector.

Continue reading “Considering Various Active Directory and Oracle Identity Manager Integration Options”

Why the Future of Identity is OpenID Connect and not SAML

This blog is part of a series comparing the implementation of identity management patterns in SAML and OpenID Connect:

Future of Identity Federation is OpenID Connect

Identity management is an enabler for networked services, whether web browser, mobile or smart-TV applications or the internet of things. The increase in services will create an increase in passwords unless there is a mechanism for sharing & trusting identities. eGovernment services require a higher level of identity verification than the social authentication capabilities of Twitter & Facebook Connect. The future of eGovernment identity is an interoperable authentication and authorisation capability that can support higher levels of identity verification.

The importance of interoperability amongst identity solutions is that it will enable individuals to choose between and manage multiple different interoperable credentials. Furthermore service providers will be able to accept a variety of credential and identification media types. "Identity Solutions will be Interoperable" is a guiding principle of the US National Strategy for Trusted Identities in Cyberspace (NSTIC), a White House initiative for both public & private sectors to improve the privacy, security, and convenience of online transactions.

SAML is insufficiently interoperable to be the future standard for identity management federation. SAML is limited in its ability to support mobile & smart-TV applications and requires the implementation of a complex Broker Service in order to support multi-service provider & multi-IdP use cases.

OpenID Connect will most likely supersede SAML for all eGovernment externalised identity management. OpenID Connect is a lightweight identity verification protocol built on top of modern web standards (OAuth 2.0, REST and JSON) superseding OpenID 2.0. OpenID Connect allows a service provider (Relying Party) to select between a variety of registered or discovered identity providers. OpenID Connect can satisfy all of the SAML use cases but with a simpler, JSON/REST based protocol.
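The "simpler, JSON/REST based protocol" claim is easy to demonstrate with the ID Token itself, which is a JSON Web Token. The following sketch (all claim names follow the OpenID Connect Core specification, but the values and issuer URL are invented for illustration) shows that an unencrypted token payload is just base64url-encoded JSON:

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Decode one base64url-encoded JWT segment into a dict."""
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# An ID Token is three base64url segments: header.payload.signature.
# Build an illustrative payload to show the claim structure.
claims = {
    "iss": "https://idp.example.gov",   # issuing identity provider
    "sub": "248289761001",              # subject (the end-user)
    "aud": "egov-portal",               # the relying party (client)
    "exp": 1893456000,                  # expiry (epoch seconds)
    "iat": 1893452400,                  # issued-at
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")

decoded = decode_jwt_segment(payload)
```

No XML parser, canonicalisation library or SOAP stack is required to read the token, which is a large part of why OpenID Connect suits mobile and smart-TV clients.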

SAML 2.0 & OpenID Connect comparison

Continue reading “Why the Future of Identity is OpenID Connect and not SAML”

Identity Broker Service in OpenID Connect: Supporting Multiple Identity Providers & Service Providers

This blog is part of a series comparing the implementation of identity management patterns in SAML and OpenID Connect:

Identity Broker Service in OpenID Connect

An earlier blog post (Identity Broker Service in SAML) described how to support connections between multiple service providers and multiple identity providers by building an Identity Broker Service. This service presents the user with a list of identity providers supported by the service provider and then forwards a <saml:AuthnRequest> to the appropriate identity provider. The broker then maintains this connection and returns the <saml:Response> from the identity provider back to the service provider. The service provider accepts the <saml:Response> and trusts the end user. In order to build this model using SAML, the identity broker service requires development and deployment to the internet and the sharing of keys between all service providers and identity providers.

Using OpenID Connect the same function can be built without the need for an intermediary broker service. This is because OpenID Connect is designed with the user being able to select their preferred identity provider. The Identity Provider, also known as the OpenID Provider, renders the authentication challenge and gains user approval before sharing user attributes. OpenID Connect performs authentication to log in the End-User or to determine that the End-User is already logged in. OpenID Connect returns the result of the authentication performed by the Server to the Client in a secure manner so that the Client can rely on it; hence the Client is called the Relying Party (RP).
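The broker-free pattern can be sketched in a few lines: the Relying Party simply constructs an authentication request aimed at whichever provider the end-user picked. This is a minimal sketch assuming a hypothetical registry of provider endpoints (all hostnames and identifiers are invented); the parameter names are the standard OpenID Connect / OAuth 2.0 ones:

```python
from urllib.parse import urlencode

# Hypothetical provider registry: the RP can hold (or dynamically
# discover) the authorization endpoint of each identity provider
# the user may choose between.
PROVIDERS = {
    "gov-idp": "https://idp.example.gov/authorize",
    "bank-idp": "https://idp.examplebank.com/authorize",
}

def build_authentication_request(provider: str, client_id: str,
                                 redirect_uri: str, state: str) -> str:
    """Build the OpenID Connect authentication request URL for the
    provider the end-user selected -- no broker sits in between."""
    params = {
        "response_type": "code",    # authorization code flow
        "scope": "openid profile",  # 'openid' marks this as an OIDC request
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,             # CSRF protection, echoed back by the OP
    }
    return PROVIDERS[provider] + "?" + urlencode(params)

url = build_authentication_request("gov-idp", "egov-portal",
                                   "https://sp.example.gov/cb", "xyz123")
```

The user agent is simply redirected to `url`; the chosen OpenID Provider authenticates the user and redirects back to the RP, so no intermediary ever holds the connection.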

OpenID Connect without Hub

Continue reading “Identity Broker Service in OpenID Connect: Supporting Multiple Identity Providers & Service Providers”

Identity Broker Service in SAML: Supporting Multiple Identity Providers & Service Providers

This blog is part of a series comparing the implementation of identity management patterns in SAML and OpenID Connect:

Identity Broker Service in SAML

A federated organisation may have multiple distinct services (service providers) where each service is protected under a distinct trust domain. The same organisation may wish to trust multiple external & internal identity providers and allow the end user to select their preferred identity provider. Furthermore the same federated organisation may require greater levels of certainty for specific services and may wish to limit the available identity providers for a specific service or enforce step-up authentication on the identity provider. This pattern is useful for governments and enterprises wishing to move away from a Push Model for Enterprise Identity Architecture.

In order to support multiple services, multiple identity providers and possibly multiple rules, an Authentication Broker Service is required. This model is often known as either a Hub Service or Chained Federation. The following sequence diagram explains how the pattern would work using <saml:AuthnRequest> (SAML 2.0) and <saml:Response> between four parties (User Agent, Service Provider, Authentication Broker Service and Identity Provider):

SAML Hub Service

  1. The User Agent accesses a specific Service (there can be many service providers, depending on the organisation)
  2. The Service Provider sends a <saml:AuthnRequest> to the registered Authentication Broker Service (a limitation is that an SP must be mapped to one Broker)
  3. The Authentication Broker Service holds a list of Identity Providers trusted by the Service Provider and returns this list to the User Agent
  4. The User Agent selects their preferred Identity Provider from the list provided by the Broker
  5. The Broker service generates a new <saml:AuthnRequest> which it forwards to the selected Identity Provider
  6. The Identity Provider challenges the user agent
  7. The Identity Provider authenticates the user agent
  8. The IdP returns the <saml:Response> to the Broker for the authenticated principal
  9. The Broker returns the <saml:Response> to the Service Provider (which may choose to match against any mapped identity)
  10. The Service Provider grants access to the User Agent

Note that a slightly different pattern would be to pass a reference to a SAML artefact between the Broker and the SP. This would use the <saml:ArtifactResolve> element in the message passed back from the Identity Provider. This pattern would require a direct service between the SP and the IdP to resolve the attributes in the artefact. This pattern extension is only recommended when the authentication request can be deferred and multiple profile attributes are required from the identity provider.
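Step 5 above — the Broker minting a fresh <saml:AuthnRequest> for the chosen Identity Provider — can be sketched with the standard library's XML support. The element names and namespaces follow the SAML 2.0 protocol schema; the request ID, timestamp and URLs are invented for illustration, and a real broker would also sign the request:

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(request_id: str, broker_url: str,
                        idp_destination: str) -> str:
    """Broker step 5: mint a fresh <samlp:AuthnRequest> addressed to the
    identity provider the user selected from the broker's list."""
    ET.register_namespace("samlp", SAMLP)
    ET.register_namespace("saml", SAML)
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": request_id,
        "Version": "2.0",
        "IssueInstant": "2015-01-01T00:00:00Z",
        "Destination": idp_destination,
        # step 8: the <saml:Response> comes back to the broker, not the SP
        "AssertionConsumerServiceURL": broker_url,
    })
    issuer = ET.SubElement(req, f"{{{SAML}}}Issuer")
    issuer.text = broker_url  # the broker, not the original SP, is the issuer
    return ET.tostring(req, encoding="unicode")

xml_out = build_authn_request("_abc123", "https://broker.example.gov/sso",
                              "https://idp.example.com/sso")
```

The key design point the sketch makes visible is that the broker substitutes itself as both Issuer and AssertionConsumerService, which is why every SP and IdP must share keys and metadata with it.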

Example: UK Government Identity Assurance Hub Service SAML 2.0 implementing the OASIS SAML V2.0 Identity Assurance Profile

Nomenclature: Terminology differences between OpenID Connect & SAML

OpenID Connect Simple Sequence Diagram

In the abstract, the OpenID Connect protocol follows these steps.

  1. The RP (Client) sends a request to the OpenID Provider (OP).
  2. The OP authenticates the End-User and obtains authorization.
  3. The OP responds with an ID Token and usually an Access Token.
  4. The RP can send a request with the Access Token to the UserInfo Endpoint.
  5. The UserInfo Endpoint returns Claims about the End-User.

These steps are illustrated in the following diagram:

OpenID Connect Sequence Diagram
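Steps 3–5 of the sequence above can be sketched as follows. The response shapes follow the OpenID Connect Core and OAuth 2.0 specifications, but the token values, endpoint and claims are illustrative samples rather than real protocol traffic:

```python
import json

# Step 3: the OP's token endpoint response carries both tokens
# (values are illustrative).
token_response = json.loads("""{
  "access_token": "SlAV32hkKG",
  "token_type": "Bearer",
  "id_token": "<signed JWT omitted>",
  "expires_in": 3600
}""")

# Step 4: the RP would call the UserInfo Endpoint with the Access Token:
#   GET /userinfo
#   Authorization: Bearer SlAV32hkKG
auth_header = f"{token_response['token_type']} {token_response['access_token']}"

# Step 5: the UserInfo Endpoint returns Claims about the End-User as
# plain JSON (again, sample values).
userinfo = json.loads(
    '{"sub": "248289761001", "name": "Jane Doe", "email": "jane@example.org"}'
)
```

Note how every exchange is a JSON document over HTTPS; there is no XML signature validation on the RP side for this flow, which is the "API-friendly" property discussed below.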

OpenID Connect & SAML nomenclature

Comparison of OpenID Connect with OAuth2.0 & SAML2.0

The following is a high-level feature comparison between OpenID Connect 1.0, OAuth 2.0 & SAML 2.0:

  • OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
  • OAuth 2.0 focuses on client developer simplicity while providing specific authorisation flows for web applications, desktop applications, mobile phones, and living-room devices.
  • SAML 2.0 provides a standard for exchanging authentication and authorisation data between security domains using an XML-based protocol which uses security tokens containing assertions to pass information about a principal between an identity provider and a service provider.

Usage:

  • OpenID Connect allows a user to authenticate to an on-device App, a service or a site using an identity established with an Identity Provider (IdP).
  • Integration of OAuth 1.0 and OpenID 2.0 required an extension. In OpenID Connect this OAuth 2.0 capability is built into the protocol itself.
  • Mobile apps don't have access to the HTTP POST body, which is required in SAML to post the token back to the Service Provider. As such SAML 2.0 has a native app limitation (although you could use a blended app).

APIs:

  • All three have extensive libraries (OAuth libraries, OpenID Connect libraries, simple SAML PHP library)
  • OpenID Connect is REST based encapsulating JSON Web Tokens while SAML is XML based
  • OpenID Connect performs many of the same tasks as OpenID 2.0, OAuth 2.0 and SAML, but does so in a way that is standardised and API-friendly.
  • OpenID Connect can also be extended to include more robust mechanisms for signing and encryption

Tokens (Signing & Encryption):

  • OpenID Connect is REST based, encapsulating JSON Web Tokens which by default only sign the payload (they do not encrypt it)
  • SAML is XML based and supports both signing & encryption using certificates
  • OAuth 2.0 uses bearer tokens (similar to cookies) which do not require the bearer to prove possession of cryptographic key material (proof-of-possession); this carries risks for the enterprise.
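The first point deserves a demonstration: a JSON Web Token is (by default) signed, not encrypted, so anyone who obtains the token can read its claims even though they cannot tamper with them. The following is a minimal HS256 sketch using only the standard library; the key and claims are illustrative, and a real deployment would use a JOSE library and typically an asymmetric algorithm such as RS256:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWS requires."""
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def sign_hs256(claims: dict, key: bytes) -> str:
    """Produce a JWS-signed JWT (HS256). The signature protects the
    payload against tampering, but the payload stays readable:
    signing is not encryption."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = sign_hs256({"sub": "alice"}, b"shared-secret")

# The middle segment decodes without any key -- the claims are visible
# to anyone holding the token:
middle = token.split(".")[1]
visible = json.loads(base64.urlsafe_b64decode(middle + "=" * (-len(middle) % 4)))
```

This is why the spec also defines JWE for cases where the claims themselves must be confidential in transit or at rest.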

Feature Comparison:

OpenID Connect 1.0:

  • ✔ SP & IdP Initiated Login
  • ✔ High Security Identity tokens (JSON Web Token)
  • ✔ Collects user’s consent before sharing attributes
  • ✔ Token contains user identity information
  • ✔ Distributed & Aggregated Claims
  • ✔ Dynamic Introductions (client discovery & on-boarding)
  • ✔ Session Timeout (future)

OAuth 2.0:

  • ✔ SP & IdP Initiated Login
  • ✘ High Security Identity tokens (uses bearer token which have no proof of possession)
  • ✔ Collects user’s consent
  • ✘ Token contains user identity information
  • ✘ No Distributed & Aggregated Claims
  • ✘ No Dynamic Introductions (client discovery & on-boarding)
  • ✘ No Session Timeout

SAML 2.0:

  • ✔ SP & IdP Initiated Login
  • ✘ Does not support embedded applications
  • ✔ High Security Identity tokens (e.g. X.509)
  • ✘ Not Responsible for collecting user’s consent
  • ✔ Token contains user identity information
  • ✘ No Distributed & Aggregated Claims
  • ✘ No Dynamic Introductions (client discovery & on-boarding)
  • ✘ No Session Timeout

NASCAR problem in authorisation server selection

An aim of OpenID Connect is to solve the problem of death by a thousand passwords by allowing the user to select their identity provider, including ones that the relying party has never heard of, through Dynamic Registration. A problem of allowing the user to select their identity provider is that the authentication challenge page needs to show all the registered identity providers.

Nascar Problem

This means that this page suffers from the NASCAR problem. The NASCAR problem describes the jumble of branding icons on websites, e.g. 3rd party sign-in/login options or sharing buttons. It is so dubbed because these clusters of 3rd party icons/brands on websites resemble the collages of sponsorship decals covering NASCAR racing cars. It's a problem because such clusters of icons/brands cause visual noise and leave people confused, overwhelmed or unlikely to remember individual icons, especially as the clusters seem to grow with the introduction of new 3rd party identity/profile/social sites and services.

When using OpenID Connect, it's likely that the client will have both buttons for popular servers and a text field for user entry of an email address or URL. (OpenID Connect does not directly solve the "NASCAR" problem.)

How therefore to solve the NASCAR problem in OpenID Connect?

The client will need to either limit the number of registered authorisation servers supported by their service or provide a mechanism for selecting from a larger list of identity providers. Furthermore, if the OpenID Connect Dynamic Registration capability is enabled, a form field for the authorisation server's URL needs to be provided as a last option below the existing identity providers.

Limiting the number of supported identity providers may be easier for commercial sites. In a public sector / government scenario there may be legal reasons why the number of authorisation servers cannot be constrained. As a future example it may become necessary for a government service provider to support all retail banks operating in that country acting as authorisation servers (e.g. PAYM in the UK) and all the mobile network operators in that country (e.g. MobileConnect). In this example there may be more than 50 different authorisation providers and the user may have an existing registration with multiple identity providers. It is here that the presentation of identity providers is important and mere alphabetic listing may be insufficient.
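One mechanism that supports the free-text fallback is OpenID Connect Discovery: given a user-typed issuer, the client locates the provider's configuration at a well-known URL under the issuer. A sketch follows; the well-known path is from the Discovery specification, while the featured-provider registry and all hostnames are invented:

```python
from urllib.parse import urlsplit

# Hypothetical curated registry rendered as buttons; anything else
# falls back to a free-text issuer field plus Discovery.
FEATURED = {"Example Gov IdP": "https://idp.example.gov"}

def discovery_url(issuer: str) -> str:
    """Per OpenID Connect Discovery, a provider publishes its
    configuration document at a well-known path under its issuer URL."""
    if not urlsplit(issuer).scheme:
        issuer = "https://" + issuer  # assume HTTPS for a bare hostname
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

url = discovery_url("idp.examplebank.com")
```

The client would fetch this URL to obtain the provider's authorization, token and UserInfo endpoints, so even a provider it has never heard of can be on-boarded dynamically rather than given a permanent button.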

An Identity Management System in TOGAF: How to Fit IdM to ADM?

The TOGAF Architecture Development Method (ADM) is designed to be sufficiently generic to cover all types of IT programmes. This generalism means that the ADM method can support both organisational and governmental identity management projects. This blog post, as part of a series on identity management in TOGAF, shall cover the best fit of the ADM to an IdM project and will try to answer the following questions:

  • What is the best way to realise Identity Management (capability & project) within TOGAF’s ADM?
  • Is TOGAF a suitable Enterprise Architecture model for something as generic and security conscious as identity management?

What is the best way to realise IdM in ADM? The ADM may be generic but it depends upon co-operation with more specific industry verticals (e.g. the TeleManagement Forum (TMF) in telecoms) for a more detailed realisation of a technology architecture. Identity management is equally as inclusive as TOGAF and hence equally applicable across all industry verticals. TOGAF defines four architecture domains (BDAT): Business, Data, Application, and Technology. The TOGAF Architecture Development Method describes a process for "deriving an organisation-specific enterprise architecture that addresses business requirements". As such the ADM requires manipulation to fit your organisation and any specific programmes. In an earlier post in this series (IdM Stakeholders, Views and Concerns) I produced a set of building blocks necessary for producing a successful identity management system. For traceability I have mapped these building blocks to the four BDAT architecture domains and to the various phases in the ADM. As part of this mapping I have described which building blocks of an identity management system need to be delivered early.

BDAT

Building blocks to be delivered earlier than normal within the TOGAF ADM:

  1. Phase B: Business Architecture & Business Architecture Requirements
  2. Phase C: Information Systems Architecture & Application Architecture: Vendor selection & Component list
    • It is crucially important when working with a technology that may have multiple components to be able to explain each component in terms of its business capabilities.
    • Depending on the software vendor the component names and areas of responsibility will change. For example Oracle & ForgeRock’s definition of Access Management differs.
    • The business case should already have been completed at this stage, but it is important to split out responsibility to each component.
    • The test cases can be promoted at this stage so that both component independent and project overall tests are defined early.
  3. Phase D: Technology Architecture & Technology Architecture: Physical arch & deployment model
    • It is critical to promote the necessary environments for an IdM implementation. These are required for testing and patching and for integrating each individual system to which access will be granted.
    • In a federated architecture working with 3rd parties, a set of test & contract agreement environments will be needed for each third party
    • If the architecture is a partial migration then environments may need to be consolidated
    • As the IdM system is a security component it is recommended to introduce good deployment policies at this stage, for example separation of package naming and deployment.
  4. Phase F: Information Systems Architecture & Technology Architecture: Security Policies
    • Security risks must have been identified and the responsibility for the identity management system must be agreed early
    • If the solution is a migration project then the security policy of the legacy implementation must be compared with the organisation's policies and the ambition for secured systems
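The early-delivery mapping above can also be captured as a simple data structure so that traceability checks can be scripted. The phase and building-block names below are taken from the list above; the helper function is a hypothetical illustration, not part of TOGAF:

```python
# ADM phases mapped to the IdM building blocks that must be delivered
# earlier than normal (names follow the list above).
EARLY_BUILDING_BLOCKS = {
    "Phase B": ["Business Architecture Requirements"],
    "Phase C": ["Vendor selection", "Component list"],
    "Phase D": ["Physical architecture", "Deployment model"],
    "Phase F": ["Security Policies"],
}

def phases_for(block: str) -> list:
    """Return the ADM phases in which a building block is delivered early."""
    return [phase for phase, blocks in EARLY_BUILDING_BLOCKS.items()
            if block in blocks]
```

Holding the mapping as data makes it trivial to assert, during architecture compliance reviews, that every promoted building block has an owning phase.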

Conclusion: Is TOGAF a suitable Enterprise Architecture model for something as generic and security conscious as identity management?

At an organisational level an identity management programme is normally initiated to provide a business enablement capability (e.g. SSO or Federation), a legal requirement (e.g. Healthcare) and/or a security capability (e.g. reduced DDoS exposure). At a national / government level identity management projects vary in scope and identity ownership architecture. Austria's Zentrales Melderegister (ZMR) and Denmark's Det Centrale Personregister are examples of identity-owning identity management projects working to enable access to eGovernment services. The UK Cabinet Office's Identity Assurance programme has the same goals but externalises the identity lifecycle to trusted identity providers. Furthermore some programmes such as the UK land registry are not immediately concerned with individual identity but have a strong identity management capability encapsulated within a larger data structure.

For both organisational & governmental IdM deployments the TOGAF ADM model can be applied. The TOGAF ADM model is designed to provide an organisational enterprise architecture capability. However TOGAF ADM can be applied to just IdM, because identity management is a sufficiently complex area and the number of trusted identity providers is increasing. The ADM is suitable for an Identity Management programme, but overall enterprise architecture should be applied to all organisational data & security capabilities.

An Identity Management System in TOGAF: stakeholders, concerns, views & viewpoints

This article describes how I would deliver an identity management architecture and implementation in accordance with the Open Group's TOGAF Architecture Development Method. This post is based on my personal experience as a digital enterprise architect and as a solution architect implementing identity management, master data management and security systems. I intend this to be part of a series on applying TOGAF, using IdM as an example. In this first post I will only describe the functional capability necessary for an IdM system and will focus on the TOGAF definitions for stakeholder, concern, view and viewpoint.

System:

Firstly the Identity Management (IdM) implementation will be referred to as the TOGAF system. This system has stakeholders who have concerns. The stakeholder has a view of the system which is taken from their viewpoint. These definitions allow a business architecture and architecture building blocks to be created for the identity management system and used as part of the TOGAF ADM.

Stakeholders:

The non-exhaustive stakeholder list for an IdM system includes: system owner (e.g. business sponsor), system maintainers (e.g. individuals who manage the solution), user maintainers (e.g. individuals who manage the users & their permissions within the directory system), security governance (e.g. individuals who validate and sign off the architecture and the software according to the organisation's security principles), system developers (e.g. individuals responsible for coding, packaging and deploying the system) and project team members (e.g. individuals responsible for the software delivery and maintenance of the IdM system).

I’ve not described them here but the organisation’s HR function who manage the organisation’s joiners and leavers (also promotions & changes) will be key stakeholders if the IdM system is to be used to manage the employees (potentially both internal and external).

The end-user stakeholder is the most important for an identity management system because their concerns will massively influence the software solution. For example it may not be necessary for an organisation to maintain the customer end-user's credentials, saving the organisation large costs in terms of registration and password reset capabilities. In the case of an eCommerce implementation it may make more sense to trust the OpenID Connect capability of PayPal's Login with PayPal (other authentication providers exist too) rather than enforcing a local registration for each customer. Choosing to trust an external IdP as the identity provider does not remove all of the existing stakeholders, because system ownership, system maintenance and security remain valid concerns. The stakeholder list, depending upon the solution, requires review as part of the ADM process.

Stakeholders can be categorised by type such as corporate, system, end-user and project. I often prefer to avoid type categorisation as the type can quickly become a pseudonym for the stakeholder themselves and as such dilute their importance and the quality of their concern. It is though useful to have a repeatable organisational list of stakeholders as these process are repeatable through multiple organisational software deliveries.

Concerns:

Concerns and requirements are not necessarily synonymous, because concerns are conceptually larger groupings of requirements that encapsulate the key interests of the stakeholder. Crucially, because a concern is a key interest, it determines the acceptance of the system. Acceptance criteria must be made against specific requirements in order to objectify the sign-off process, otherwise the system implementer will be at the mercy of the vagaries of emotional acceptance.

For a successful Identity Management system implementation concerns are crucial but thankfully also fairly obvious. For example the security governance stakeholders (naturally the most risk averse people on earth apart from England cricket captains) would have the following concerns: security testing coverage including penetration testing, cryptography, encryption and digital signing concerns, risk management concerns such as password management and policy, assurance, availability and administration concerns.

It is important to capture concerns jointly with the stakeholder, but to own the process of concern capture if the actual stakeholder or stakeholder function has not previously defined their concerns. It is most likely that the security governance function will have pre-published and regularly maintained documentation covering organisational security policy. I have regularly seen security governance stakeholders try to enforce that internal and external password policies should be the same. For example the security governance function will require a much stronger password management policy than may be necessary or appropriate for a customer facing website. This is a good example of where security concerns and customer concerns will clash. The best approach here for the enterprising architect is to preposition the customer evangelist for this conflict.

Views:

A view is a representation of a system from the perspective of a related set of concerns. A view consists of architectural documentation (e.g. diagrams, requirements coverage, PowerPoint) showing stakeholders not only that their concerns are being met but also how they are being met. Remember that different stakeholders will have different views of the IdM system; for example the security governance team member will have a different view to that of the customer evangelist.

The following are the architectural artefacts that I regularly produce for an identity management system implementation and how I use them to manage concerns (note that these artefacts will be phased and not all are immediately necessary for business stakeholders):

  • Requirements (I will cover these in more detail later in this series but it is sufficient here to say that identity management is a complex area in which many business owners will have little technical expertise. I advise against over-elaboration of technical IdM requirements following a "system shall…" approach; rather focus on the business reasoning for technology and standard selection and also the UX wireframing / mock-ups (depending on the appropriate level of detail) for login & registration processes)
  • Technology standards. Selection and reasoning for standards are pre-solution design prerequisites. What are you protecting and how? Are you SAML, Kerberos, Shibboleth, OAuth, or OpenID Connect? More importantly why and can you explain why?
  • Security policies. Hopefully these will have already been defined in your organisation. If not start immediately.
  • User model diagram for entities & attributes necessary as part of the directory system and identity repository (it is important that the business owner is involved in this stage especially as they often enjoy & appreciate it). This artefact is a key architectural building block for the IdM system and requires regular review if scope changes.
  • User lifecycle processes. This is the cradle to grave information that requires a human architecture to manage and maintain the system. Are you outsourcing or keeping in house? What bits can you split off? How does any of this differ from your existing systems? Have you got business agreement, budget and guarantee that it will be supported?
  • Vendor selection. Do you have a preferred vendor, do you require an RFI / RFP? If yes is your stakeholder list appropriate? Are the captured concerns sufficient to differentiate between vendors.
  • Component list: description of all the various software components involved. This is useful and appreciated especially when certain vendors have balkanised their licensing of IdM components. What & why for each component needs to be succinctly described.
  • Physical architecture & deployment model (preferably in UML) is probably the most important technical architecture artefact in an IdM system deployment. It is critical to know how directory services are being located and exposed to other systems covering authentication and authorisation. If the deployment is a CoTS (most likely as no-one would write this stuff from scratch anymore) product deployment then one of your stakeholders would often be the vendors professional services team and the system implementers concern will be the ratification of this architecture.
  • Sizing & capacity model. These are very difficult to produce as an organisation will ask for as much as possible according to budget. It's obvious to refer to previous models, but as identity management may be a new function or a large master data management driven consolidation, previous models may not apply. I would ask both the vendor and system implementer to help.
  • Test model. Testing for identity management is complex and critical. At an early stage it may be sufficient to define security policies and encryption / signing technologies. It is important to have all the necessary testing and patching environments available ahead of time. A review of testing tools is an early-to-mid phase activity.
  • Patching & support model. This is both a pre & post go-live activity. Pre go-live, a model is required for how patches are downloaded, applied and tested. Normally a parallel environment is required. Licence support guarantees are also necessary from the vendor.
  • Architecture roadmap. More importantly a business roadmap is required because identity management is an area that is often degraded to being just an enabler rather than a clear discrete but involved function of your business. If your organisation is conducting a data cleansing exercise through master data management then how exactly does this fit with the identity management roadmap?
  • System delivery roadmap. This is not the enterprise architect's ownership, but it must encompass the architecture perspective because, for example, if there is a data cleansing / consolidation exercise then the identity management system delivery will often be split between various functions and directory service actions, and the component deliveries will need to be spaced out to allow approval and validation steps.

From all of these artefacts different views can be collated to provide a concern compliance model. This can also be used by the architecture board as part of architecture compliance reviews. Always remember to start objectively if you are producing these artefacts and try to mark your own homework before the architecture governance board marks it for you. Confidence is critical in a security product implementation and it is far more easily lost than won back.

Viewpoints:

A viewpoint defines the perspective from which a view is taken. The metaphor given by the Open Group is that the relationship between viewpoint and view is analogous to that between a template and an instance of the completed template. In my view this does not capture the subjective nature of the stakeholder, whose concerns are more than a checklist template. The template analogy does not convey the importance of good documentation and presentation as part of an enterprise architect's skill set. The viewpoint differs from the concern as it is the stereotype of the stakeholder. The experienced stakeholder will regularly raise concerns that should be raised by other stakeholders. These concerns may need capturing or may have already been captured, but some can be waived if the relevant stakeholder has provided a reason. For this reason the view presented to the stakeholder must be from a viewpoint that mirrors their relevant concerns.

views and viewpoints for an identity management system

Above I provided a set of the architecture building blocks relevant to an IdM system implementation. Here they are mapped to the relevant viewpoint.

Trade-offs to make between concerns:

It is the role of the enterprise architect to balance competing concerns. The most obvious within an identity management implementation is the tension between identity as an enabler and security as a constraint. The identity management project can quickly end up being responsible for other systems’ security. This forces the system into being a blanket security provider and always returns an architecture that is only as strong as its perimeter defence. It is therefore important to gain quick business agreement that identity and security are not the same thing, and that security is a concept for which all organisation stakeholders have to take personal responsibility. This way the identity management system can address the concerns for which it is best suited, such as identity management and access provisioning.

ADM Cycle & ordering of architecture artefacts:

The architecture building blocks described above are developed in TOGAF between ADM Phases A through D. I do not specifically disagree with the TOGAF ordering, but of all the artefacts produced for an IdM implementation, it is the security audit that takes the longest, and the physical and deployment architecture that needs to be brought forward where possible.

Conclusion:

In TOGAF the primary question the EA needs to answer is: does the architecture address all of the concerns of the stakeholders?

Remote Control Soufflés: Challenge of M2M Authentication & Authorisation and Mobile data offloading

Some M2M devices will always connect to the internet using a fixed network connection / Wi-Fi, and others will always connect using a mobile network connection via an eUICC, but some will offer both Wi-Fi and a mobile network. It is these devices that will need to support Wi-Fi offloading where possible, and it is for these devices that providing a standard API gateway and AuthN & AuthZ capability will be most complex.

For example, my oven is always positioned in my kitchen and connects to the Wi-Fi network to let me view inside via a mobile app, so that I don’t have to open the oven door during the fifteen minutes a soufflé takes to rise, which would change the temperature and cause my soufflé to collapse. This way I can inspect and control the temperature remotely. It also means I have an excuse to check my phone during boring dinner parties. Only my app is paired to the oven, so only I am authenticated and authorised to remotely check on my soufflé; there is no risk of a malicious guest accessing my oven app and destroying the soufflé by changing the temperature.

An M2M oven with embedded camera would decrease flops

The majority of my home M2M devices will be static devices (I rarely travel with my oven) and these will in most cases be Wi-Fi enabled. Unfortunately I cannot guarantee Wi-Fi coverage throughout my architect’s ivory tower, so some devices will need to connect over 3G/4G (for example the BBQ in the lower field). The problem for my oven and BBQ manufacturers is that they would need to support both Wi-Fi and the GSMA standard for M2M / smart device SIMs (eUICC). It would then be the responsibility of the M2M device to support Wi-Fi offload where available.

Authorisation may be necessary when the function of the device is shared amongst a group, with one or more people acting as super administrators. If I sell my oven, all of my authentication and authorisation permissions have to be removed from the M2M device, but as I will likely buy a new oven with more soufflé capacity I would like to keep my existing settings. Furthermore, if my soufflé skills improved I might take a job in Paris and would need to re-register my oven’s eUICC or Wi-Fi connection. In this case I would definitely want to keep all of my authorisation permissions, and maybe grant further permissions for all the extra soufflés I’d be baking.

Device resale and device portability are supported by the eUICC specification, as they are necessary for the widespread adoption of M2M devices. What is less well supported is a common standard for AuthN & AuthZ that would allow me to keep my device preferences when I either move with my devices or sell them and replace them with newer ones.

This is where OpenID Connect may be useful, as it adds profile information on top of the authorisation model provided by OAuth 2.0. OpenID Connect 1.0 extends OAuth 2.0 so the client can verify claims about the identity of the end user, get profile information about the end user, and log the user out at the end of the session. OpenID Connect also makes it possible to discover the provider for an end user and to register client applications dynamically. OpenID Connect services are built on OAuth 2.0, JSON Web Token (JWT), WebFinger and Well-Known URIs.
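The identity claims OpenID Connect delivers arrive as a JWT (the ID token), which is just base64url-encoded JSON segments joined by dots. As a minimal sketch, the snippet below builds a hypothetical, unsigned ID token for an oven app and decodes its claims; all names in it (issuer, subject, audience) are invented for illustration, and a real client must of course verify the token’s signature against the provider’s published keys rather than trust a bare decode.

```python
import base64
import json


def decode_jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT.

    Illustration only: no signature verification is performed here.
    Production code must validate the signature before trusting claims.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments use base64url without padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def b64url(obj: dict) -> str:
    """Serialise a dict to an unpadded base64url JSON segment."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()


# Hypothetical ID token: header and payload only, empty signature segment.
header = b64url({"alg": "none", "typ": "JWT"})
payload = b64url({
    "iss": "https://op.example.com",  # the OpenID Provider (invented)
    "sub": "oven-owner-42",           # stable end-user identifier (invented)
    "aud": "oven-app",                # the registered client (invented)
})
token = f"{header}.{payload}."

claims = decode_jwt_claims(token)
print(claims["sub"])  # → oven-owner-42
```

Because the `sub` claim is a stable identifier issued by the provider rather than by any one device, it is exactly the sort of handle that could let authorisation preferences survive a device being resold or re-registered.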

It remains to be seen whether OpenID Connect will be integrated with the standards for eUICC as part of GSMA Mobile Connect. Furthermore, it would need to be supported by the Wi-Fi offloading devices (e.g. my BBQ’s manufacturer) as the standard for all M2M AuthN & AuthZ. It seems likely that at first individual devices, and later home M2M gateways, will implement proprietary technologies and maintain identity in individual walled gardens. My architect’s ivory tower has a few of those too.