12 Reasons Why Cloud OSS hasn’t happened so far

I am regularly asked why there are so few Cloud OSS, or OSS-as-a-Service, options when AWS, GCP and Azure all have IoT plays. I have also wondered why no systems integrator has deployed ONAP on AWS (or another public cloud). The following are the main reasons why I think such an option has not yet become popular with CSPs and vendors.

12 Reasons Why Cloud OSS hasn’t happened so far:

  1. Network operators are risk averse
    • That’s a very good thing, as CSPs protect your data in flight and at rest, and security is critical for them. However, this does not mean that a Cloud OSS cannot be used, just that the appropriate security measures need to be in place.
  2. Network operators have customers that are even more risk averse
    • That’s a very good thing too, and CSPs have to take account of their customers’ requirements. However, a private or public cloud can be secured in the same way as a private data centre. The OSS must make sure that it is not persisting customer data or exposing network functions.
  3. Cloud OSS creates another attack vector and, dude, we’ve got enough of those
    • We sure do. But an internally hosted OSS is itself a risk / attack vector. The benefit of Cloud OSS is that it should allow a simplification and reduction of the number of OSS stacks within the CSP.
  4. OSS must be internal because of data regulation and on-shoring / safe-harbouring of data
    • OSS systems should not be persisting customer data (EVER, not even static IP addresses!), so data regulation requirements will only have limited application. OSS data must be secured at rest and in transit, and the low-latency requirements of OSS will require hosting near the network.
  5. Few network operators have sufficient levels of virtualised network functions
    • This is changing rapidly, and 5G network functions will be predominantly virtualised.
  6. The cost of the OSS is always a low proportion of the costs of the network
    • This is true, but it does not remove the need to gain greater platform efficiencies.
  7. Moving to the cloud will not wipe away the legacy
    • Of course it won’t, but it will help focus on the future and pass management of VNFs to a single master. PNF management will always be a challenge.
  8. The OPEX model is not always beneficial
    • This is true, but OSS stovepipes are not cheap. Best-of-breed SaaS will help spread the cost without creating lock-in to a single technology version.
  9. It’s the OSS, those guys don’t move quickly
    • A classic refrain, but not a reason to avoid a Cloud OSS.
  10. The streaming data pipe will be too fat and the latency too high to fix items quickly
    • This is a genuine concern and will require a data pipeline architecture with streaming inside the network and OSS components residing outside. Intent-based programming, with specific levels of management at the different layers, will be key to answering the low-latency requirement, especially when control is part of a network slice management function.
  11. The BSS will never be in the Cloud
    • Salesforce, GCP, AWS, Pega, Oracle Cloud and Azure are all changing that model, especially in the IoT space.
  12. The OSS will never be in the Cloud
    • Watch this space….


Bringing IT (OSS) all together

I try to fit components together logically so that they can make the most of what the technology offers. I work predominantly in the OSS world on new access technologies like 5G and implementations like the Internet of Things. I want to achieve not just the deployment of these capabilities but also to let them operate seamlessly. The following is my view of the opportunity of closed-loop remediation.

For closed-loop remediation there are two main tenets: first, that you can stream all network event data into a machine learning engine and apply an algorithm like K-Nearest Neighbours (K-NN); second, that you can expose remediation APIs on your programmable network.

All of this requires a lot of technology convergence, but what’s actually needed to make everything converge?

[Figure: closed-loop remediation architecture]

Let’s start with streaming. Traditionally we used SNMP for event data, traps and alarms, and when that didn’t work we deployed physical network probes. Now it’s Kafka “stream once” implementations, where streams of logs from virtualised infrastructure and virtualised functions are parsed in a data streaming architecture into different big data persistence stores.
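
As a minimal sketch of that streaming leg, the consumer below reads VNF log events from a Kafka topic and hands them on to persistence. The topic name, broker address and message schema are my own assumptions for illustration, not a reference implementation.

```python
# Minimal sketch: consume VNF log events from Kafka.
# Topic, broker address and event schema are assumptions.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "vnf-events",                                  # assumed topic for VNF/NFVI logs
    bootstrap_servers="kafka.oss.internal:9092",   # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="oss-event-pipeline",
)

for message in consumer:
    event = message.value  # e.g. {"vnf": "vFW-3", "metric": "cpu", "value": 0.97}
    # Hand off to the big data persistence layer (Cassandra, HDFS, etc.) here
    print(event)
```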

The machine learning engine (I’m keenest on FlinkML at the moment) works on the big data persistence store, which provides the largest possible corpus of event data. The K-NN model can analyse network behaviour and spot patterns that are harder for human operations teams to see. It can also predict timed usage behaviours and scale the network accordingly.
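
FlinkML’s KNN is Scala-based; as a stand-in, here is a minimal Python sketch of the same idea using scikit-learn’s KNeighborsClassifier. The feature layout (CPU load, packet loss, session count) and the labels are illustrative assumptions, purely to show classifying a fresh event against a labelled corpus.

```python
# Minimal K-NN sketch with scikit-learn as a stand-in for FlinkML's KNN.
# Feature vectors (CPU load, packet loss, session count) and labels are
# illustrative; in production you would also scale the features first.
from sklearn.neighbors import KNeighborsClassifier

# Labelled corpus of historical event feature vectors
history = [
    [0.30, 0.001, 1200],   # normal
    [0.35, 0.002, 1350],   # normal
    [0.95, 0.090, 400],    # congestion fault
    [0.97, 0.110, 380],    # congestion fault
]
labels = ["normal", "normal", "congestion", "congestion"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(history, labels)

# Classify a freshly streamed event vector
new_event = [[0.93, 0.085, 420]]
print(model.predict(new_event))   # -> ['congestion']
```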

I am increasingly looking at OpenStack and Open Source MANO as an NFVO platform orchestrating the available virtualised network functions. The NFVO can expose a customer-facing service or the underlying resource-facing services (RFSs), but for true closed-loop operation the ML engine should have access to the RFS layer. This is the hardest part and depends upon the underlying design-pattern implementation of the virtual network functions. That, though, is a topic for another blog post.
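
To make the remediation leg concrete, here is a hedged sketch that asks Open Source MANO’s northbound interface to scale out a network service. The host, credentials, NS instance ID and scaling-group name are placeholders, and the NBI paths and payload shape reflect my reading of OSM’s API, so check them against your release.

```python
# Hedged sketch: trigger a scale-out via OSM's northbound interface (NBI).
# Host, credentials, NS instance ID and scaling-group name are placeholders;
# verify endpoint paths and payloads against your OSM release.
import requests

OSM = "https://osm.example.net:9999"

# 1. Authenticate to the NBI to obtain a bearer token
token = requests.post(
    f"{OSM}/osm/admin/v1/tokens",
    json={"username": "admin", "password": "secret", "project_id": "admin"},
    verify=False,
).json()["id"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Request a scale-out of an assumed scaling group on a running NS instance
ns_id = "00000000-0000-0000-0000-000000000000"  # placeholder NS instance ID
payload = {
    "scaleType": "SCALE_VNF",
    "scaleVnfData": {
        "scaleVnfType": "SCALE_OUT",
        "scaleByStepData": {
            "scaling-group-descriptor": "vfw-scaling-group",  # placeholder
            "member-vnf-index": "1",
        },
    },
}
resp = requests.post(f"{OSM}/osm/nslcm/v1/ns_instances/{ns_id}/scale",
                     json=payload, headers=headers, verify=False)
print(resp.status_code)
```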


5G, IaaS and Mobile Edge Computing

Mobile Edge Computing (MEC) is a key piece of the 5G architecture (or of 5G-type claims on a 4G RAN). MEC can already make a huge difference to latency and quality when streaming multiple video feeds within a sporting venue; for example, Intel, Nokia and China Mobile have demonstrated video streaming of the Grand Prix at the Shanghai International Circuit.

A 5G mobile operator will be introducing virtualised network functions as well as mobile edge computing infrastructure. This creates both opportunities and challenges. The opportunities are the major MEC use cases, including context-aware services, localised content and computation, low-latency services, in-building use cases and venue revenue uplift.

The challenges include providing the Mobile Edge Compute Platform in a virtualised 5G world. Mobile operators are not normally IaaS / PaaS providers, so this may prove a challenge.

The ETSI 2018 group report “Deployment of Mobile Edge Computing in an NFV environment” describes an architecture based on a virtualised Mobile Edge Platform and a Mobile Edge Platform Manager (MEPM-V). The Mobile Edge Platform runs on NFVI managed by a VIM, and this infrastructure in turn hosts the MEC applications.

[Figure: ETSI MEC-in-NFV reference architecture]

The ETSI architecture seems perfectly logical and reuses the NFVO and NFVI components familiar from other virtualisations. In this architecture the NFVO and MEPM-V act as what ETSI calls the “Mobile Edge Application Orchestrator” (MEAO) for managing MEC applications. The MEAO uses the NFVO for resource orchestration and for element manager orchestration.

The difficulty still lies in implementing the appropriate technologies to suit the MEC use cases. OpenStack (or others) may provide the NFVI and Open Source MANO (or others) may provide the NFVO; however, what doesn’t yet exist is the service exposure, image management and software promotion necessary for a company to on-board MEC applications.

If MEC does take off, what is the likelihood that AWS, GCP and Azure will extend their footprint into the telecom operator’s edge?


Some Questions After Quad-Play

I work as an architect at a big telco that has recently become a quad-player. Part of my job is to think about what services come next. My personal interest has always been distributed computing, whether networking or large data-sets. Also as part of my job, I attend IT conferences on the internet of distributed devices.

My key questions & my current thoughts are:

  • What will become the distributed identity standard for device authentication?
    • OpenID Connect (OIDC), like SAML, is not itself an AuthN mechanism but extends the OAuth 2.0 model. Its identity attribute API (the UserInfo endpoint) can be used for profile loading to define a user’s identity onto the device. This can be a lightweight equivalent of a SIM profile and can also support the eUICC flows for ownership switch (similar to a Profile Content Update Function).
    • Any AuthN & identity solution must work within the limitations of loading profiles onto small-memory devices and must support an AuthN flow over HTTP.
  • What will be the numbering & addressing standard for massively distributed devices?
    • This is more of an open question relating to the history of the service: eUICC-enabled devices will require an International Mobile Subscriber Identity (IMSI), while LPWA- and WiFi-enabled devices will require a MAC addressing / IPv6 registry with the service provider.
    • The support for these addressing mechanisms and for near field communication devices will have an impact on the network operator’s OSS IT architecture.
    • The GSMA proposal for eUICC uses the START-IMSI required for profile loading, which supports roaming and allows for a profile swap on change of ownership.
    • IPv6 offers a highly scalable address scheme. It provides 2^128 unique addresses, roughly 3.4 × 10^38, or around 6.7 × 10^17 addresses per square millimetre of the Earth’s surface (see the short check after this list). That is quite sufficient to address the needs of any present and future communicating device.
    • 6LoWPAN provides a simple and efficient header-compression mechanism, including address elision, so that constrained devices can carry IPv6 traffic.
  • Will the smart device co-ordination be through an embedded chip-set in the main home internet router?
    • Probably not, but I would have said “probably not” five years ago too, and I have still not seen Zigbee co-ordinators or Thread border routers catch on as stand-alone devices.
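
As promised above, a quick back-of-the-envelope check of the IPv6 address-density figure, assuming an Earth surface area of roughly 5.1 × 10^8 km²:

```python
# Back-of-the-envelope check of the IPv6 address-density claim.
# Assumes Earth's surface area is ~5.1e8 km^2.
total_addresses = 2 ** 128                  # ~3.4e38 unique IPv6 addresses
earth_surface_mm2 = 5.1e8 * 1e6 * 1e6       # km^2 -> m^2 -> mm^2
per_mm2 = total_addresses / earth_surface_mm2
print(f"{total_addresses:.2e} addresses, {per_mm2:.2e} per mm^2 of Earth")
# -> 3.40e+38 addresses, 6.67e+17 per mm^2 of Earth
```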

I’ve not been blogging for a while (too much work is no excuse), but I will be posting more on these topics soon.

A Reference Architecture for the Internet of Things

The Internet of Things requires multiple Reference Architectures which can map capabilities to specific technology domains. This is a challenge because there is no single unifying industry definition for the Internet of Things. For the purpose of this presentation it is assumed that:

  • “Things” have semantic representation in the Internet
  • “Things” can be acted upon in a structured manner (e.g., status, capabilities, location, measurements) or can report in structured data or can communicate directly with other “Things”
  • “Things” may be active (e.g., Zigbee sensor) or passive (e.g. RFID tag)
  • Different “Things” may use multiple protocols to communicate with each other and the internet

M2M Protocols

There are many different usable protocols for communicating with M2M devices in the Internet of Things. Specific protocols are more appropriate for particular devices (e.g. memory & power profiles), and specific protocols are more appropriate for particular communication needs (e.g. a state transfer model versus an event-based model); the sketch below contrasts the two.
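
Here is a minimal sketch of the contrast: a state-transfer read polls a device resource over HTTP/REST, while an event-based publish pushes readings over MQTT as they happen. The gateway URL, broker host, topic and payloads are illustrative assumptions.

```python
# Minimal sketch contrasting the two models; URL, broker, topic and
# payloads are illustrative assumptions.
import requests                      # pip install requests
import paho.mqtt.client as mqtt      # pip install paho-mqtt

# State Transfer Model: poll the device's current state over HTTP/REST
state = requests.get("http://gateway.local/devices/thermostat-1/state").json()
print("current state:", state)

# Event Based Model: the device pushes readings as they happen over MQTT
# (paho-mqtt 1.x style; 2.x requires a CallbackAPIVersion argument)
client = mqtt.Client()
client.connect("broker.local", 1883)
client.publish("home/thermostat-1/temperature", payload="21.5", qos=1)
client.disconnect()
```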

M2M ARPU Requires A Reference Architecture

The GSM Association (GSMA) put the M2M market size at $1.2tn in revenue, with 12 billion connected mobile devices by 2020. These numbers alone are enough to excite the most conservative of operators and wobble the subscriber-centric business models that currently prevail. The existing MNO model, in which the more subscribers an operator has the more successful and profitable it is considered to be, is about to be tested by this massive new market. This is mainly because the Average Revenue Per User (ARPU) in the M2M business is on average below ten cents per device, while on the other hand the connection density can be virtually endless. So success will depend on how dynamically the CSP reacts to provide new and flexible platforms to support the new devices, applications and verticals that M2M will address every day.

Because of the low ARPU and massive market multiplier, many MNOs should be prepared for a shake-up of their OSS, which will have to fulfil and provision in bulk and at low cost.

IPv6 addressing will also make M2M services not just a mobile proposition but applications that can work seamlessly across both mobile and wired broadband connections. eUICCs and WiFi hand-off will have to be included in the new OSS. Furthermore, Near Field Communication will require its own billing model.

Never before has a reference architecture been so required for M2M.

All of this does not just apply to the MNOs anymore.

BSS for the IoT: You Don’t Have To Be A Mobile Network Operator To Do It

The Internet of Things is not predicated on mobile or fixed-line operators. It is predicated on the value derived from the interplay between different sensors and actuators. In the history of mobile telecommunications it was the mobile network operators who provided a service that brought together radio waves and handset manufacturers. The success of mobile telecommunications has led to a 93.5% global saturation rate (source: Informa), with conglomerate operators such as China Mobile, Vodafone, Airtel and Verizon being the big winners.


Why the Future of Identity is OpenID Connect and not SAML

This blog is part of a series comparing the implementation of identity management patterns in SAML and OpenID Connect:

Future of Identity Federation is OpenID Connect

Identity management is an enabler for networked services, whether web browser, mobile or smart-TV applications or the internet of things. The increase in services will create an increase in passwords unless there is a mechanism for sharing and trusting identities. eGovernment services require a higher level of identity verification than the social authentication capabilities of Twitter & Facebook Connect. The future of eGovernment identity is an interoperable authentication and authorisation capability that can support higher levels of identity verification.

The importance of interoperability amongst identity solutions is that it will enable individuals to choose between and manage multiple different interoperable credentials. Furthermore, service providers will be able to accept a variety of credential and identification media types. “Identity Solutions will be Interoperable” is a guiding principle of the US National Strategy for Trusted Identities in Cyberspace (NSTIC), a White House initiative for both the public & private sectors to improve the privacy, security and convenience of online transactions.

SAML is insufficiently interoperable to be the future standard for identity management federation. SAML is limited in its ability to support mobile & smart-TV applications and requires the implementation of a complex broker service in order to support multi-service-provider & multi-IdP use cases.

OpenID Connect will most likely supersede SAML for all eGovernment externalised identity management. OpenID Connect is a lightweight identity verification protocol built on top of modern web standards (OAuth 2.0, REST and JSON), superseding OpenID 2.0. It allows a service provider (Relying Party) to select between a variety of registered or discovered identity providers, and it can satisfy all of the SAML use cases with a simpler, JSON/REST-based protocol.
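
As a minimal sketch of how lightweight the Relying Party side can be, the snippet below resolves an IdP’s standard discovery document, exchanges an authorization code for tokens, and loads the user’s identity attributes from the UserInfo endpoint. The issuer URL, client credentials, redirect URI and code are placeholders; only the /.well-known/openid-configuration path and the parameter names come from the OIDC/OAuth 2.0 specs.

```python
# Minimal OIDC Relying Party sketch: discovery, code-for-token exchange, UserInfo.
# Issuer, client credentials, redirect URI and auth code are all placeholders.
import requests

issuer = "https://idp.example.gov"

# 1. Discover the provider's endpoints (standard OIDC Discovery document)
config = requests.get(f"{issuer}/.well-known/openid-configuration").json()

# 2. Exchange an authorization code (obtained via the browser redirect) for tokens
tokens = requests.post(config["token_endpoint"], data={
    "grant_type": "authorization_code",
    "code": "AUTH_CODE_FROM_REDIRECT",          # placeholder
    "redirect_uri": "https://rp.example.com/callback",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
}).json()

# 3. Call the UserInfo endpoint to load the user's identity attributes
claims = requests.get(
    config["userinfo_endpoint"],
    headers={"Authorization": f"Bearer {tokens['access_token']}"},
).json()
print(claims)   # e.g. {"sub": "...", "name": "...", "email": "..."}
```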

[Table: SAML 2.0 & OpenID Connect comparison]


Zepp Sensor for Golf and Tennis: An Example of a Good App Strategy

People who know me know that I am equally bad at both golf and tennis.

Because I’m keen on gadgets (and excuse all purchases as research into the internet of things), I had to purchase the new Zepp Golf sensor. The golf sensor attaches to the back of my golf glove and tracks my slow slices and off-tempo hooks. I then purchased the tennis adapter, which fits to the bottom of my tennis racket to track my off-tempo serves and my slow sliced backhands.

What I find most interesting is how the sensor can be used for multiple sports. According to Zepp’s own documentation it is simple to swap between sports:

Your sensor will work with all 3 Zepp Sports Apps: Baseball, Tennis, and Golf. Simply download the mobile app of choice and attach the sensor to the appropriate racket, bat, or golf mount. To use the sensor for a different sport, connect your sensor to your mobile device and open the sport app of your choice. The app will ask you if you wish to change the sensor to the new sport mode. Select OK to begin change process

I really like this simplicity. Just download the app for the sport you’re going to play to the device or devices you use. I could imagine other firms over-engineering the mobile applications so that they all linked to a single user account and a single self-service operation would provision each individual sport.

Zepp’s model works well because the user just downloads what they need and can then work on cranking up the power of their forehands. Just wish mine would sometimes go in.