A 5G Data Fabric

Most 5G deployments will not be greenfield. But a successful 5G deployment is not limited to simply deploying new radio on existing sites. It requires a new approach to telecom IT that can both simplify the telco’s estate and prepare for the new business opportunities of 5G. A complete data fabric (for 5G or for everything) will support both the business opportunities and the network complexities of 5G.

A data fabric includes all of the necessary data services for operating a mobile network and providing connectivity and ‘beyond connectivity’ services. This means offering many different persistence toolsets to your business logic layer (represented by micro-services running in Docker). Each application can then use the persistence technology most appropriate to its requirements. For example, this could mean exposing an RDBMS for structured data, a graph database for modelling topologies and a document store for persisting YANG documents.
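
As a hedged illustration of what exposing the right persistence to each micro-service might look like, the sketch below wires one Python service to three stores. The driver choices (SQLAlchemy, the neo4j driver, pymongo), connection strings, table and node names are all assumptions for illustration, not part of the fabric definition.

```python
# Illustrative only: one micro-service picking the persistence technology that
# fits each data shape. Drivers and connection strings are assumptions.
from sqlalchemy import create_engine, text      # RDBMS for structured data
from neo4j import GraphDatabase                 # graph for topology models
from pymongo import MongoClient                 # documents for YANG payloads

rdbms = create_engine("postgresql://fabric:secret@rdbms:5432/inventory")
graph = GraphDatabase.driver("bolt://graph:7687", auth=("neo4j", "secret"))
docs = MongoClient("mongodb://docs:27017")["fabric"]["yang_documents"]

def record_cell_site(site_id: str, address: str) -> None:
    """Structured, transactional data belongs in the RDBMS."""
    with rdbms.begin() as conn:
        conn.execute(
            text("INSERT INTO cell_site (id, address) VALUES (:id, :addr)"),
            {"id": site_id, "addr": address},
        )

def link_sites(a: str, b: str) -> None:
    """Topology relationships belong in the graph store."""
    with graph.session() as session:
        session.run(
            "MERGE (x:Site {id:$a}) MERGE (y:Site {id:$b}) "
            "MERGE (x)-[:BACKHAUL]->(y)",
            a=a, b=b,
        )

def store_vnf_descriptor(name: str, yang_payload: dict) -> None:
    """Semi-structured descriptors (e.g. YANG rendered as JSON) fit a document store."""
    docs.insert_one({"name": name, "descriptor": yang_payload})
```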

Data Fabric for a Micro Service Architecture

The business value of the data fabric is that it allows the clever telco to disassociate its software requirements from the data plane, enabling a micro-service architecture that can manage a virtualised network and, on top of that, expose services to its customers.

Key data fabric use cases for 5G include:

  • A network planning architecture for geo-planning cell site deployments, including in-building
  • A network topology architecture that can model a highly complex network and enable Self-Organising Networks
  • A time series streaming architecture that can model events coming off a network and a customer’s deployments and enable effective machine-learning-driven autonomic improvements
  • A network orchestration architecture for a virtualised network (full or partial)
  • A network slice management and guarantee architecture (with support for a blockchain-based service level guarantee)
  • A subscriber data management architecture for unified value-added services and subscriptions

The following is my description of a logical data fabric for a 5G implementation. I am publishing it because it can help network operators to push their software vendors to decouple the software’s logic from its data persistence. Below are all the logical tools needed:

  1. RDBMS for ACID-based transactions, useful for physical inventories, managing subscription updates (less so reads) and all other structured data
  2. Graph database for modelling network topologies, relationships and dependencies. Very useful for machine learning, root cause analysis and spotting previously unknown interconnected loops between items
  3. Wide-column database for dealing with unstructured, extensible datasets covering all the different devices supported on a 5G network. Very useful within IoT and customer network experience.
  4. OLAP NoSQL database for offline analytical processing, including network topology efficiency modelling and network performance analysis as part of an ITIL Problem / Change Management process
  5. Document datastore for managing Infrastructure as Code and Virtual Network Function deployment descriptors in the form of YANG documents. Useful in blockchain contracts and services.
  6. In-memory datastore for fast reads and data caches
  7. Geo-spatial database for modelling RAN deployments and radio propagation. Incredibly important, as RAN efficiencies have a major bottom-line impact. It increasingly needs to support in-building information for small cell deployments and to work together with other radio technologies as well as 5G.
  8. Time series database for performance monitoring, which can be implemented within the customer network experience function of a wide-column database and with the use of Prometheus and Grafana (a minimal sketch follows this list)
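
As flagged against item 8, here is a minimal sketch of the time series monitoring angle: a Python service exposing a network performance counter for Prometheus to scrape, which Grafana can then chart. The metric name, label and scrape port are illustrative assumptions.

```python
# Expose a gauge that Prometheus can scrape; Grafana charts it from Prometheus.
import random
import time

from prometheus_client import Gauge, start_http_server

cell_throughput = Gauge(
    "cell_throughput_mbps",
    "Observed downlink throughput per cell",
    ["cell_id"],
)

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://<host>:9100/metrics
    while True:
        # In a real deployment this would read counters from the RAN/core,
        # not generate random samples.
        cell_throughput.labels(cell_id="cell-001").set(random.uniform(50, 450))
        time.sleep(15)
```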

Some 5G Data Fabric Use Cases:

| Use Case | RDBMS | Graph | Wide Column | OLAP | Document Data Store | In Memory | Geo-spatial |
|---|---|---|---|---|---|---|---|
| Network Plan & Build and Analysis |  | Y |  | Y |  |  | Y |
| Physical Network & Static Inventory | Y |  |  |  |  |  | Y |
| Virtual Network & Dynamic Inventory |  | Y |  |  | Y |  |  |
| Fast Read Inventory |  | Y |  |  |  | Y |  |
| Streaming Fast Analysis |  | Y | Y |  |  |  |  |
| Offline Event Analysis |  | Y |  | Y |  |  |  |
| Subscription Management & Entitlements | Y |  |  |  |  | Y |  |

In conclusion, most telcos have bought siloed commercial off-the-shelf products for individual, specific use cases. This has meant that a telco has often used as little as 40% of the intrinsic value of its commercial software licences. The cost of building 5G will be high, and the greater share of the prize will go to the most agile operators. It is therefore incumbent on mobile operators to drive the greatest efficiencies from their software investments.

5G is a great driver for change. The most effective 5G operators will be those that can get their data architecture right first time. Telecom operators must start moving to a data fabric.

Bringing IT (OSS) all together

I try to fit components together logically so that they can make the most of what the technology offers. I work predominantly in the OSS world on new access technologies like 5G and implementations like the Internet of Things. I want to achieve not just the deployment of these capabilities but also to let them operate seamlessly. The following is my view of the opportunity of closed-loop remediation.

Closed-loop remediation rests on two main tenets: (1) you can stream all network event data into a machine learning engine and apply an algorithm like K-Nearest Neighbours, and (2) you can expose remediation APIs on your programmable network.

All of this requires a lot of technology convergence, but what is actually needed to make everything converge?


Let’s start with streaming. Traditionally we used SNMP for event data, traps and alarms, and when that didn’t work we deployed physical network probes. Now it’s Kafka ‘stream once’ implementations, where streams of logs from virtualised infrastructure and virtualised functions are parsed in a data streaming architecture into different big data persistence stores.
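
A minimal sketch of that streaming step, assuming a Kafka topic of JSON-encoded VNF log events and the kafka-python client; the topic name and message fields are invented for illustration.

```python
# Consume VNF / infrastructure log events from Kafka and route parsed records
# towards whichever big-data store the fabric prescribes.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "vnf-logs",
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Normalise the event before handing it to the chosen persistence tier
    # (wide column, time series, etc.).
    record = {
        "source": event.get("vnf_id"),
        "severity": event.get("severity"),
        "timestamp": event.get("ts"),
    }
    print(record)  # placeholder for a write to the chosen datastore
```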

The Machine Learning engine (I’m keenest on FlinkML at the moment) works on the big data persistence, giving it the largest possible corpus of event data. The ML K-NN can analyse network behaviour and examine patterns that are harder for human operations teams to spot. It can also predict timed usage behaviours and scale the network accordingly.
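
FlinkML itself runs on the JVM; purely to illustrate the K-NN idea compactly, the sketch below uses scikit-learn as a stand-in. The features (CPU load, active sessions, packet-loss ratio) and labels are invented for illustration.

```python
# Classify fresh network events against labelled historical events with k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Historical event vectors pulled from the big-data persistence layer.
events = np.array([
    [0.42, 1200, 0.001],
    [0.45, 1250, 0.002],
    [0.40, 1180, 0.001],
    [0.91, 4100, 0.030],
    [0.88, 3900, 0.025],
])
labels = ["normal", "normal", "normal", "congestion", "congestion"]

scaler = StandardScaler().fit(events)
knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(events), labels)

# A "congestion" verdict on a fresh event could trigger a scale-out action
# through the remediation APIs discussed below.
fresh = scaler.transform([[0.90, 4000, 0.028]])
print(knn.predict(fresh))  # -> ['congestion']
```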

I am increasingly looking at OpenStack and Open Source MANO as an NFVO platform orchestrating the available virtualised network functions. The NFVO can expose a customer-facing service or the underlying resource-facing services (RFSs). But to truly operate, the ML should have access to the RFS layer. This is the hardest part and is dependent upon the underlying design pattern implementation of the Virtual Network Functions. That, though, is a topic for another blog post.
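
To show how the remediation side of the loop might be invoked, here is a hedged sketch of a scale-out call against an NFVO’s north-bound REST API. The endpoint path, payload fields and scaling-group name are assumptions loosely modelled on the OSM NBI and should be checked against the documentation of the deployed release.

```python
# Closed-loop remediation sketch: when the ML verdict indicates congestion,
# ask the NFVO to scale out a network service instance.
import requests

NBI = "https://osm-nbi.example.net"  # hypothetical NBI address

def scale_out(ns_instance_id: str, token: str) -> None:
    # Endpoint and payload are illustrative assumptions, not a verified contract.
    requests.post(
        f"{NBI}/osm/nslcm/v1/ns_instances/{ns_instance_id}/scale",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "scaleType": "SCALE_VNF",
            "scaleVnfData": {
                "scaleVnfType": "SCALE_OUT",
                "scaleByStepData": {"scaling-group-descriptor": "gw-scaling"},
            },
        },
        timeout=30,
    ).raise_for_status()
```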


M2M ARPU Requires A Reference Architecture

The GSM Association (GSMA) put the M2M market size at $1.2tn in revenue with 12 billion connected mobile devices by 2020. These numbers alone are enough to excite the most conservative of operators and wobble the subscriber-centric business models that currently prevail. The existing model adopted by MNOs, whereby the more subscribers an operator has the more successful and profitable it is considered to be, is about to be tested by this massive new market. This is mainly because the Average Revenue Per User (ARPU) in the M2M business is on average below ten cents per device, while on the other hand the connection density can be virtually endless. So success will depend on how dynamically the CSP reacts to provide new and flexible platforms to support the new devices, applications and verticals that M2M will address every day.

Because of the low ARPU and massive market multiplier, many MNOs should be prepared for a shake-up of their OSS, which will have to fulfil and provision in bulk and at low cost.

IPv6 addressing will also make M2M services not just a mobile proposition, but applications that can work seamlessly across both mobile and wired broadband connections. eUICCs and Wi-Fi hand-off will have to be included in the new OSS. Furthermore, Near Field Communication will require its own billing model.

Never before has a reference architecture been so required for M2M.

All of this does not just apply to the MNOs anymore.

BSS for the IoT: You Don’t Have To Be A Mobile Network Operator To Do It

The Internet of Things is not predicated on mobile or fixed-line operators. It is predicated on the value derived from the interplay between different sensors and actuators. In the history of mobile telecommunications it was the mobile network operators who provided a service that brought together radio waves and handset manufacturers. The success of mobile telecommunications has led to a 93.5% global saturation rate (source: Informa), with the conglomerate operators China Mobile, Vodafone, Airtel, Verizon and others being the big winners.


A Scottish Safe Harbour for Identity Management

The Data Protection Directive (officially Directive 95/46/EC) regulates the processing of personal data within the European Union and also provides the criteria for Safe Harbour privacy for companies operating within the European Union. The Safe Harbour regulations forbid the sending of customers’ personal data to countries outside the European Economic Area unless there is a guarantee that it will receive adequate levels of protection. There are no Safe Harbour considerations for EU companies with services deployed to Scotland while Scotland is part of the UK, nor once Scotland has become independent of the UK and joined the EU as an independent country. However, there may be a period of time between Scotland becoming independent and joining the EU (as an independent country) when Safe Harbour requirements really matter. During this period no EU company will have a Safe Harbour agreement with the newly independent Scotland, so any company with Identity Stores (or business systems containing personal data) deployed in Scotland will be in breach of the Data Protection Directive.


Identity Broker Service in SAML: Supporting Multiple Identity Providers & Service Providers

This blog is part of a series comparing the implementation of identity management patterns in SAML and OpenID Connect.

Identity Broker Service in SAML

A federated organisation may have multiple distinct services (service providers), where each service is protected under a distinct trust domain. The same organisation may wish to trust multiple external and internal identity providers and allow the end user to select their preferred identity provider. Furthermore, the same federated organisation may require greater levels of certainty for specific services and may wish to limit the available identity providers for a specific service or enforce step-up authentication on the identity provider. This pattern is useful for governments and enterprises wishing to move away from a Push Model for Enterprise Identity Architecture.

In order to support multiple services, multiple identity providers and possibly multiple rules, an Authentication Broker Service is required. This model is often known as either a Hub Service or Chained Federation. The following sequence diagram explains how the pattern would work using <saml:AuthnRequest> (SAML 2.0) and <saml:Response> between four parties (User Agent, Service Provider, Authentication Broker Service and Identity Provider):

SAML Hub Service


  1. The User Agent accesses a specific Service (there can be any number of service providers, depending on the organisation)
  2. The Service Provider sends a <saml:AuthnRequest> to the registered Authentication Broker Service (a limitation: an SP must be mapped to a single Broker)
  3. The Authentication Broker Service holds a list of Identity Providers trusted by the Service Provider and returns this list to the User Agent
  4. The User Agent selects their preferred Identity Provider from the list provided by the Broker
  5. The Broker service generates a new <saml:AuthnRequest>, which it forwards to the selected Identity Provider (a minimal code sketch of this step follows the list)
  6. The Identity Provider challenges the user agent
  7. The Identity Provider authenticates the user agent
  8. The IdP returns the <saml:Response> to the Broker for the authenticated principal
  9. The Broker returns the <saml:Response> to the Service Provider (which may choose to match against any mapped identity)
  10. The Service Provider grants access to the User Agent
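
Here is the minimal sketch of step 5, where the Broker mints a fresh AuthnRequest towards the selected Identity Provider. The entity IDs and URLs are invented, and a production broker would also sign the request and track RelayState/InResponseTo correlation; this only shows the shape of the message.

```python
# Build a bare SAML 2.0 AuthnRequest, issued by the Broker rather than the SP.
import datetime
import uuid
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(broker_entity_id: str, idp_sso_url: str) -> bytes:
    request = ET.Element(
        f"{{{SAMLP}}}AuthnRequest",
        {
            "ID": f"_{uuid.uuid4()}",
            "Version": "2.0",
            "IssueInstant": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
            "Destination": idp_sso_url,
        },
    )
    issuer = ET.SubElement(request, f"{{{SAML}}}Issuer")
    issuer.text = broker_entity_id  # the Broker, not the original SP, is the issuer
    return ET.tostring(request)

print(build_authn_request(
    "https://broker.example.gov/metadata",   # hypothetical Broker entity ID
    "https://idp.example.org/sso",           # hypothetical IdP SSO endpoint
).decode())
```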

Note that a slightly different pattern would be to pass a reference to a SAML artefact between the Broker and the SP. This would use the <saml:ArtifactResolve> element in the message passed back from the Identity Provider. This pattern would require a direct service between the SP and the IdP to resolve the attributes in the artefact. This pattern extension is only recommended when the authentication request can be deferred, for example when multiple profile attributes are required from the identity provider.

Example: UK Government Identity Assurance Hub Service SAML 2.0 implementing the OASIS SAML V2.0 Identity Assurance Profile

Nomenclature: Terminology differences between OpenID Connect & SAML