A Reference Architecture for Cloud Operational Support Systems

Most telecoms operators have multiple stovepiped networks, each with its own historic OSS. All CSPs want the agility of web-scale firms and view OSS and cloud provision as complementary technologies. The challenge for CSPs is to move from legacy vertical pillars to a horizontal platform model. Trying to achieve this with a simple OSS refresh will be a mere shim. For CSPs to be revolutionary they must consider the viability of a Cloud OSS as a way of externalising the orchestration & management of their network resources.

Currently it’s quite easy to find major components of a SaaS BSS (for example Salesforce). However, it is much harder to find an equivalent within the OSS domain. The primary reason for the lack of SaaS in this domain is the nicheness of OSS (discussed previously here: IoT Don’t Need No OSS). This nicheness is changing as AWS, GCP and Azure now essentially offer an IoT OSS. There’s currently no ONAP SaaS, but I wouldn’t be surprised if ONAP matured into a SaaS offering at some point. The other major area of concern is security, which can be mitigated through policy & control. Lastly there are concerns around the throughput / latency of Resource Performance Management, a specific topic covered later.

There’s also increasing CSP interest in Open Source OSS (OSS2 maybe?) with Open Source Mano, ONAP and the new TM Forum ODA architecture (for which I’m partly responsible). These OSSs provide functions that are componentised in their design.

I’ve personally been looking at putting together a best-of-breed architecture based on OSM, ONAP and some Netflix OSS on a cloud-hosted environment to support multiple operational networks. In doing this work I’m trying to understand the following questions:

  1. What is a suitable logical architecture for a Cloud OSS?
  2. And if it can’t all be externally hosted then what would be a suitable logical hybrid architecture?

To answer these questions, let’s decompose the functions of OSS and compare which parts are most suitable for cloud hosting, breaking them down (using eTOM’s service and resource domains) into nine logical packages for further investigation.

Functions for Cloud OSS

I’ve categorised the packages by Cloud Nativeness (how easy it is to port these functions to the cloud and how many SaaS offerings are available) against Network Interactivity (throughput of data, proximity to element managers). It is fairly self-evident that certain functions are cloud native (service management) whilst others (order to activation) both require close deployment to the network and have specific security constraints.

By grouping the logical functions we end up with three groups: Cloud Native Solutions (those that already run well in the cloud), Not Cloud Native Solutions (those that can’t be externalised to the Cloud) and a middle group of Either / Or that could be either internally managed or externalised.

[Figure: OSSCloudGraph - OSS functional packages plotted by Cloud Nativeness against Network Interactivity]

The Either / Or group is the newest area, covering Machine Learning, Autonomics and MANO for NFV / SDN. These could be either natively deployed (for example a local deployment of FlinkML on top of a performance management solution) or cloud hosted (e.g. Google Cloud Platform’s TensorFlow deployment).
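To make the grouping exercise concrete, here’s a minimal sketch that scores each of the nine packages on the two axes and buckets them into the three groups. The scores and thresholds are my own illustrative assumptions, not a formal taxonomy.

```python
# Rough sketch: bucket the nine OSS packages by Cloud Nativeness vs Network
# Interactivity. Scores (0-10) and thresholds are illustrative assumptions only.

PACKAGES = {
    # name: (cloud_nativeness, network_interactivity)
    "Service & Incident Management": (9, 3),
    "Network & Service Operations": (7, 5),
    "Field Management": (8, 2),
    "Resource Plan & Build": (8, 3),
    "Strategy & Lifecycle": (7, 2),
    "Machine Learning & Autonomics": (6, 6),
    "MANO for NFV / SDN": (5, 7),
    "Order to Activation": (2, 9),
    "Resource Performance Management": (2, 9),
}

def classify(cloud_nativeness: int, network_interactivity: int) -> str:
    """Place a package into one of the three groups discussed above."""
    if cloud_nativeness >= 7 and network_interactivity <= 5:
        return "Cloud Native"
    if cloud_nativeness <= 3 or network_interactivity >= 8:
        return "Not Cloud Native"
    return "Either / Or"

for name, (cn, ni) in PACKAGES.items():
    print(f"{name:33s} -> {classify(cn, ni)}")
```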

Cloud Native Solutions: 

Service and Incident Management systems include the perennial favourites ServiceNow, BMC Remedy & Cherwell. As cloud-hosted solutions these tools require feeds from alarm management systems. As the architecture orients itself towards data streaming and machine learning, the incident management system handles fewer tickets and relies more on auto-remediation. This model necessitates the closed-loop remediation function sitting within the network. I would expect a streaming flow in and out of the network boundary, and this will obviously be the biggest of the pipes (and the most risky). Network & Service Operations provides a specialism for service & incident management and includes resource alarm management; solutions like EMC Smarts & IBM Netcool increasingly offer cloud-based operations consoles for alarm management.
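As a minimal sketch of that feed, here’s what pushing a correlated alarm across the network boundary into a cloud-hosted incident tool might look like. The endpoint URL, token and payload fields are placeholders of my own, not ServiceNow’s, Remedy’s or Cherwell’s actual APIs.

```python
# Minimal sketch: forward a correlated alarm from an on-net alarm manager to a
# cloud-hosted incident management tool. The endpoint and payload fields are
# placeholders, not any vendor's real API.
import json
import urllib.request

INCIDENT_API = "https://example-itsm.invalid/api/incidents"  # hypothetical endpoint
API_TOKEN = "change-me"

def raise_incident(alarm: dict) -> None:
    """Push a correlated alarm over the network boundary as an incident candidate."""
    payload = {
        "short_description": alarm["summary"],
        "severity": alarm["severity"],
        "source": "alarm-manager",
        "resource_id": alarm["resource_id"],
    }
    req = urllib.request.Request(
        INCIDENT_API,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # fails without a real endpoint
        print("incident created:", resp.status)

if __name__ == "__main__":
    raise_incident({"summary": "Loss of signal on port 3",
                    "severity": "major",
                    "resource_id": "olt-017"})
```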

Field Management systems, together with Resource Plan and Build, are easily managed from a public cloud. These systems have limited access to the operational network and normally have to manage internal and 3rd-party resources to complete field operations. Systems like ESRI and Trimble fit in this space. They predominantly need access to resource and service inventories, and to resourcing tools (such as HR systems, maps and skills bases).

Strategy systems are an interesting case: the specialist planning, delivery and product lifecycle tools within eTOM. They cover service development & retirement, capability delivery and strategic planning. These functions are all loosely coupled to the network yet require inventory detail, resource detail and a big data store of network performance. They can be hosted externally and are not mission-critical systems, so for our OSS these should be Cloud Native.

Not Cloud Native Solutions:

Order to Activation covers the activation systems for management of the network, whether subscription-based or resource activation. The distinction here is between the provisioning controller and the intent-based network choreographer (passing intent and policy to the network).
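To illustrate that distinction, here’s a small sketch contrasting an imperative provisioning command with a declarative intent handed to the network choreographer. Both structures are invented for illustration and don’t follow any particular standard’s schema.

```python
# Illustrative only: an imperative provisioning call versus a declarative intent
# passed to an intent-based choreographer. Neither structure follows a standard schema.

# Imperative: the provisioning controller is told exactly what to do, step by step.
provisioning_command = {
    "action": "activate_port",
    "device": "olt-017",
    "port": 3,
    "profile": "100M_residential",
}

# Declarative: the choreographer receives the desired outcome plus policy and
# works out (and keeps re-asserting) the steps itself.
service_intent = {
    "intent": "broadband_access",
    "subscriber": "sub-42",
    "target_bandwidth_mbps": 100,
    "policy": {"latency_class": "best_effort", "resilience": "single_homed"},
}

def describe(obj: dict) -> str:
    return "imperative" if "action" in obj else "declarative"

for obj in (provisioning_command, service_intent):
    print(describe(obj), obj)
```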

Performance Management comprises real-time operational systems, predominantly taking data streams from the network. They require local deployment because network functions need low latency if incidents are to be managed immediately.

The Interesting Either / Or Group:

MANO for NFV / SDN can either be a localised solution or can be cloud hosted in the case of a master orchestrator implementing intent-based models. This model makes sense when the orchestration involves third-party service orchestration, and it is partially covered by the TM Forum ODA. The challenge would be organising the split of VNF Management from NFV Orchestration, and the security controls will need to close off the attack vector to the client VNF Manager running inside a CSP’s network.
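Here’s a minimal sketch of the security posture that split implies: a cloud-hosted NFVO calling the VNF Manager that stays inside the CSP’s network, over mutual TLS so the VNFM only accepts requests from an orchestrator it trusts. The hostname, path, payload and certificate files are all assumptions for illustration.

```python
# Sketch: a cloud-hosted NFVO calling an in-network VNF Manager over mutual TLS,
# so the VNFM endpoint is not an open attack vector. Hostname, path, payload and
# certificate files are assumptions for illustration.
import json
import ssl
import urllib.request

VNFM_URL = "https://vnfm.internal.example:8443/vnf_instances/vnf-123/scale"  # hypothetical

def call_vnfm(payload: dict) -> None:
    ctx = ssl.create_default_context(cafile="csp-internal-ca.pem")  # trust only the CSP CA
    ctx.load_cert_chain("nfvo-client.crt", "nfvo-client.key")       # prove the NFVO's identity
    req = urllib.request.Request(
        VNFM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        print("VNFM accepted request:", resp.status)

if __name__ == "__main__":
    call_vnfm({"type": "SCALE_OUT", "aspect": "worker", "steps": 1})
```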

It is likely that CSPs will investigate this model going forward as they look to benefit from the opportunity of providing Mobile Edge Compute as an integrated PaaS.

Machine Learning & Autonomic Remediation is partially dependent upon the NFVO cloud architecture, as remediation needs services to be exposed in order to act on them. If the NFVO is already cloud hosted then remediation is a natural continuation of its capabilities. The Machine Learning capability is the driver for the remediation engine, constantly looking for situational improvements under specific conditions. Machine Learning can be deployed locally on a CSP’s own infrastructure or use the scaling capabilities of tools like TensorFlow on GCP. The decision CSPs make here will be about scaling the intelligence to produce usable conditions that can be implemented within the remediation engine. A CSP with good skills in this area will have a technology advantage.
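A short sketch of the hand-off this implies: the learning side emits named conditions, and the remediation engine maps them onto the actions it is permitted to take automatically. The condition and action names are invented for illustration.

```python
# Sketch of the hand-off between the learning side and the remediation engine:
# the ML component emits named conditions, the engine maps them onto permitted
# actions. Condition and action names are invented for illustration.
from typing import Callable, Dict

def scale_out_vnf(context: dict) -> str:
    return f"scale out {context['vnf']} by one instance"

def reroute_traffic(context: dict) -> str:
    return f"reroute traffic away from {context['link']}"

# The remediation engine: a policy table of conditions the ML layer is allowed
# to trigger; anything else is not actioned automatically.
REMEDIATION_POLICY: Dict[str, Callable[[dict], str]] = {
    "predicted_capacity_exhaustion": scale_out_vnf,
    "recurring_link_degradation": reroute_traffic,
}

def remediate(condition: str, context: dict) -> str:
    action = REMEDIATION_POLICY.get(condition)
    if action is None:
        return f"condition '{condition}' has no approved remediation; raise a ticket"
    return action(context)

if __name__ == "__main__":
    print(remediate("predicted_capacity_exhaustion", {"vnf": "upf-eu-west-1"}))
    print(remediate("unknown_anomaly", {}))
```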

Next Steps:

I will be updating this stream as I believe there is a genuine future for a Cloud Native OSS. So please keep following this blog and ping me @apicrazy if you’re on the same journey.

12 Reasons Why Cloud OSS hasn’t happened so far

I am regularly asked why there are so few Cloud OSS, or OSS-as-a-Service, options when AWS, GCP and Azure all have IoT plays. I have also wondered why no systems integrator has deployed ONAP on AWS (or elsewhere). The following are the main reasons why I think such an option has not yet become popular with CSPs & vendors.


  1. Network operators are risk averse
    • That’s a very good thing, as CSPs protect your data in flight and at rest, and security is critical for them. However, this does not mean that a Cloud OSS cannot be used, just that the appropriate security measures need to be in place.
  2. Network operators have customers that are even more risk averse
    • That’s a very good thing too, and CSPs have to take account of their customers’ requirements. However, a private cloud or a public cloud can be secured in the same way as a private data centre. The OSS must make sure that it is not persisting customer data or exposing network functions.
  3. Cloud OSS creates another attack vector and, dude, we’ve got enough of those
    • We sure do. But an internally hosted OSS is itself a risk / attack vector. The benefit of Cloud OSS is that it should allow a simplification / reduction of the number of OSS stacks within the CSP.
  4. OSS must be internal because of Data regulation and on-shoring / safe-harbouring of data
    • OSS systems should not be persisting customer data (ever, not even static IP addresses!), so data regulation requirements will only have limited application. OSS data must be secured at rest and in transit, and the low latency requirements of OSS will require hosting nearby.
  5. Few network operators have sufficient levels of virtualised network functions
    • This is changing rapidly, and 5G technologies will be predominantly virtualised.
  6. The cost of the OSS is always a low proportion of the costs of the network
    • This is true, but it does not remove the need to gain greater platform efficiencies.
  7. Moving to the cloud will not wipe away the legacy
    • Of course it won’t, but it will help focus on the future and pass management of VNFs to a single master. PNF management will always be a challenge.
  8. The OPEX model is not always beneficial
    • This is true, but OSS stovepipes are not cheap. Best-of-breed SaaS will help spread the cost and avoid lock-in to a single technology version.
  9. It’s the OSS, those guys don’t move quickly
    • A classic refrain, but not a reason to avoid moving to a Cloud OSS.
  10. The streaming data pipe will be too fat and the latency will be too high to fix items quickly
    • This is a genuine concern and will require a data pipeline architecture with streaming inside the network and OSS components residing outside (see the sketch after this list). Intent-based programming with specific levels of management at the different layers will be key to answering the low latency requirement, especially when control is part of a network slice management function.
  11. The BSS will never be in the Cloud
    • Salesforce, GCP, AWS, Pega, Oracle Cloud, Azure are all changing that model. Especially in the IoT space.
  12. The OSS will never be in the Cloud
    • Watch this space….
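Referring back to item 10, here’s a minimal sketch of that split: latency-sensitive filtering and anonymisation stay inside the network, and only a reduced stream crosses the boundary to the cloud-hosted OSS components. Field names and thresholds are illustrative assumptions.

```python
# Sketch for item 10: keep latency-sensitive filtering inside the network and send
# only a reduced, anonymised stream to the cloud-hosted OSS. Thresholds and field
# names are illustrative assumptions.
from typing import Iterable, Iterator

def in_network_filter(events: Iterable[dict], min_severity: int = 3) -> Iterator[dict]:
    """Runs inside the network boundary: drop noise and strip customer identifiers."""
    for event in events:
        if event["severity"] < min_severity:
            continue  # handled locally or discarded; never leaves the network
        yield {
            "resource_id": event["resource_id"],
            "severity": event["severity"],
            "metric": event["metric"],
            "value": event["value"],
            # deliberately no subscriber IDs and no IP addresses
        }

if __name__ == "__main__":
    raw = [
        {"resource_id": "gnb-01", "severity": 5, "metric": "prb_util", "value": 0.97,
         "subscriber": "sub-42"},
        {"resource_id": "gnb-02", "severity": 1, "metric": "prb_util", "value": 0.30,
         "subscriber": "sub-43"},
    ]
    for reduced in in_network_filter(raw):
        print("send to cloud OSS:", reduced)
```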


Bringing IT (OSS) all together

I try to fit components together logically so that they can make the most of what the technology offers. I work predominantly in the OSS world on new access technologies like 5G and implementations like the Internet of Things. I want to achieve not just the deployment of these capabilities but also to let them operate seamlessly. The following is my view of the opportunity of closed-loop remediation.

For closed-loop remediation there are two main tenets: 1. you can stream all network event data into a machine learning engine and apply an algorithm like K-Nearest Neighbour; 2. you can expose remediation APIs on your programmable network.
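As a minimal sketch of the first tenet, here’s K-Nearest Neighbour classifying incoming event feature vectors against a corpus of labelled past situations. I’ve used scikit-learn for brevity rather than FlinkML, and the features, labels and values are invented.

```python
# Minimal sketch of tenet 1: classify incoming network events against a corpus of
# labelled past situations with K-Nearest Neighbours. scikit-learn is used for
# brevity (the post mentions FlinkML); features and labels are invented.
from sklearn.neighbors import KNeighborsClassifier

# Feature vectors: [cpu_util, packet_loss_pct, latency_ms] from past incidents.
X_train = [
    [0.95, 2.0, 40.0],   # congestion
    [0.90, 1.5, 35.0],   # congestion
    [0.30, 8.0, 120.0],  # faulty link
    [0.25, 7.5, 150.0],  # faulty link
    [0.20, 0.1, 10.0],   # normal
    [0.35, 0.2, 12.0],   # normal
]
y_train = ["congestion", "congestion", "faulty_link", "faulty_link", "normal", "normal"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# Tenet 2 would then map the predicted situation onto an exposed remediation API.
new_event = [[0.92, 1.8, 38.0]]
print("predicted situation:", model.predict(new_event)[0])
```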

All of this requires a lot of technology convergence, but what’s actually needed to make everything converge?

[Figure: ClosedLoop - closed-loop remediation flow from streaming through machine learning to remediation APIs]

Let’s start with streaming. Traditionally we used SNMP for event data, traps & alarms, and when that didn’t work we deployed physical network probes. Now it’s Kafka-style streaming implementations, where streams of logs from virtualised infrastructure and virtualised functions are parsed in a data streaming architecture into different big data persistence stores.
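A minimal sketch of that streaming leg, using the kafka-python client as a stand-in for the full streaming architecture: consume VNF and infrastructure logs from a Kafka topic and route the interesting events towards persistence. The broker address, topic name and the persistence call are my own placeholders.

```python
# Sketch of the streaming leg: consume VNF / infrastructure logs from Kafka and
# route parsed events towards big-data persistence. Uses the kafka-python client;
# broker address, topic name and the persistence call are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vnf-logs",                                      # hypothetical topic
    bootstrap_servers="kafka.internal.example:9092",  # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

def persist(event: dict) -> None:
    """Placeholder for writing into the big data store feeding the ML engine."""
    print("persisting:", event["resource_id"], event.get("log_level"))

for message in consumer:  # runs until interrupted
    event = message.value
    if event.get("log_level") in ("ERROR", "CRITICAL"):
        persist(event)
```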

The Machine Learning engine (I’m keenest on FlinkML at the moment) works on the big data persistence, providing the largest possible corpus of event data. The K-NN can analyse network behaviour and spot patterns that are harder for human operations teams to see. It can also predict timed usage behaviours and scale the network accordingly.

I am increasingly looking at Openstack and Open Source Mano as an NFVO platform orchestrating available virtualised network functions. The NFVO can expose a customer-facing service or the underlying RFSs, but to truly operate, the ML should have access to the RFS layer. This is the hardest part and is dependent upon the underlying design pattern implementation of the Virtual Network Functions. That, though, is a topic for another blog post.
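To show why RFS-level access matters, here’s a tiny sketch of a customer-facing service decomposing into RFSs and VNFs, with the remediation side resolving which RFS to act on when a VNF degrades. The catalogue entries are invented for illustration.

```python
# Sketch: why the ML / remediation layer needs RFS-level access. A customer-facing
# service (CFS) decomposes into resource-facing services (RFS), which map onto
# VNFs; remediation acts at the RFS layer, not the CFS. All names are invented.
from typing import Optional

CATALOGUE = {
    "cfs:enterprise-vpn": {
        "rfs:vpn-edge": ["vnf:vrouter-01", "vnf:vrouter-02"],
        "rfs:vpn-core": ["vnf:vpn-gw-01"],
    }
}

def rfs_for_degraded_vnf(cfs_id: str, vnf_id: str) -> Optional[str]:
    """Find which RFS the remediation engine should act on for a degraded VNF."""
    for rfs_id, vnfs in CATALOGUE[cfs_id].items():
        if vnf_id in vnfs:
            return rfs_id
    return None

print(rfs_for_degraded_vnf("cfs:enterprise-vpn", "vnf:vpn-gw-01"))  # -> rfs:vpn-core
```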


5G, IaaS and Mobile Edge Computing

Mobile Edge Computing (MEC) is a key piece of the 5G architecture (or of 5G-type claims on a 4G RAN). MEC can already make a huge difference to latency and quality when streaming multiple video feeds within a sporting environment; for example, Intel, Nokia and China Mobile streamed video of the Grand Prix at Shanghai International Circuit.

A 5G mobile operator will be introducing virtualised network functions as well as mobile edge computing infrastructure. This creates both opportunities and challenges. The opportunities are the major MEC use cases, including context-aware services, localised content and computation, low latency services, in-building use cases and venue revenue uplift.

The challenges include providing the Mobile Edge Compute Platform in a virtualised 5G world; mobile operators are not normally IaaS / PaaS providers, so this may prove difficult.

The ETSI 2018 group report Deployment of Mobile Edge Computing in an NFV environment describes an architecture based on a virtualised Mobile Edge Platform and a Mobile Edge Platform Manager (MEPM-V). The Mobile Edge Platform runs on NFVI managed by a VIM, and the NFVI in turn hosts the MEC applications.

[Figure: MECETSI - ETSI reference architecture for MEC deployed in an NFV environment]

The ETSI architecture seems perfectly logical and reuses the NFVO and NFVI components familiar from all virtualisation deployments. In this architecture the NFVO and MEPM-V act as what ETSI calls the Mobile Edge Application Orchestrator (MEAO) for managing MEC applications. The MEAO uses the NFVO for resource orchestration and for the element manager orchestration.

The difficulty still lies in implementing the appropriate technologies to suit the MEC use cases. Openstack (or others) may provide the NFVI and Open Source Mano (or others) may provide the NFVO; however, what doesn’t exist is the service exposure, image management and software promotion necessary for a company to on-board MEC applications.
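As a sketch of that gap, here’s the kind of metadata a CSP would need to capture to take a third-party MEC application from image through promotion to an exposed edge service. The descriptor fields and promotion stages are my own assumptions, not an ETSI or ONAP schema.

```python
# Sketch of the onboarding gap: metadata a CSP would need to take a third-party
# MEC application from image to exposed edge service. The descriptor fields and
# pipeline stages are illustrative assumptions, not an ETSI or ONAP schema.
from dataclasses import dataclass
from typing import List

@dataclass
class MecAppOnboarding:
    app_name: str
    image_uri: str                 # image management: where the VM/container image lives
    exposed_apis: List[str]        # service exposure: what the app offers at the edge
    target_edge_sites: List[str]   # where the MEAO should place it
    promotion_stage: str = "dev"   # software promotion: dev -> staging -> production

    def promote(self) -> None:
        order = ["dev", "staging", "production"]
        idx = order.index(self.promotion_stage)
        if idx < len(order) - 1:
            self.promotion_stage = order[idx + 1]

if __name__ == "__main__":
    app = MecAppOnboarding(
        app_name="venue-video-cache",
        image_uri="registry.example/venue-video-cache:1.2",
        exposed_apis=["rni", "location"],
        target_edge_sites=["shanghai-circuit-edge-01"],
    )
    app.promote()
    print(app.app_name, "now at stage:", app.promotion_stage)
```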

If MEC does take off, what is the likelihood that AWS, GCP and Azure will extend their footprint into the telecom operators’ edge?