Economic Analysis of mmWave Fixed Wireless Access as an Alternative to FTTx

FWA is not a new idea that arrived with 5G; it has been available to anyone tethering since 3G. FWA is comparable to Fibre-to-the-Home, as both are connectivity solutions for the edge of the network. 5G mmWave (~25 GHz and above) promises an alternative to FTTH, with download speeds of 1 Gbit/s. It is therefore worth understanding the technologies and engineering necessary to make FWA a viable, or better, alternative to fibre.

Verizon has targeted FWA as an alternative to FTTx with its 5G Home service, launched across Houston, Indianapolis, Los Angeles and Sacramento in October 2018. Verizon estimates the 5G mmWave FWA addressable market to include 30 million premises. To be successful, Verizon’s FWA has to be cheaper than the delivery of FTTx and will have to overcome some considerable engineering challenges. These include the roll-out of multiple 5G antennas with small-cell front-haul for extended coverage, the deployment of external-to-home 5G receivers, a distributed core that can host Mobile Service Edge and CDNs close to the 5G cell towers, and a new 3GPP Release 16 core that can support network slicing for the 28 GHz spectrum.

FWA logical architecture

The diagram above shows a logical architecture for a new, 3GPP Release 16 compliant mobile core connected through multiple distributed sites to radio-site gNodeBs delivering the FWA service to the home. A new core is not strictly necessary, as Verizon is already launching using its own channel coding, multiplexing and interleaving technologies, but a new mobile core will be advantageous in guaranteeing the QoS for mmWave FWA slices.

The majority of the cost for FWA is in the delivery of the radio network and mmWave antennas. Higher costs will always be incurred if RAN planning has not been optimised and 5G small-cell in-fill becomes necessary. For this reason mmWave may be better deployed as new sites in a standalone Model 2x configuration. Other costs include upgrading the mobile core, but this cost is shared with other 5G use cases. Spectrum licensing is another important cost: currently mmWave licensed spectrum is relatively available, and hence lower cost, and more extremely high frequency spectrum is being released by national regulators.

To be competitive, FWA must be economically viable against fibre delivered to the home; the comparison includes internet peering and CDNs. In regulated territories like the UK that already have Local Loop Unbundling, a competitor CSP can consume service from the distributed site. This has been part of the US regulatory framework since the US Telecommunications Act of 1996, which requires ILECs to lease local loops to competitors (CLECs). In an all-fibre model the cost of connection is to the premises (FTTP) or home (FTTH). If regulatory dark fibre or open ducts are in place, the competing CSP can consume those services at a regulatory-defined price. In the UK that model is only now being developed after initial regulatory challenges, and in the US the FCC has not extended enforcement of dark fibre offerings since 2014. It is therefore reasonable for a US mobile carrier to consider 28 GHz as a more efficient distribution mechanism than FTTH where no regulated dark fibre or open-duct solutions are available. It is also worth considering that the civils part of the delivery of fibre (the dotted FTTH line in the diagram below) can cost as much as 90% of the total service delivery cost.

Simplified FTTH Architecture

A final comparison between FTTH and FWA:

  • Same Costs: Network spine, backhaul and equivalent equipment are the same for FTTH & FWA
  • Higher FWA Costs: Spectrum licence costs are unique to FWA but, given current spectrum availability, may not be prohibitive; power and cooling costs are higher for FWA; and the maintenance cost of FWA should be higher because of exposed antennae
  • Higher FTTH Costs: The only cost that is higher with FTTH is the civils part of delivery. This cost can be very high because of the complexity of getting wayleaves and permissions and digging up roads.
  • In conclusion, FWA should be a more efficient and cheaper service to deliver, as long as the network planning is accurate and does not necessitate continual modification driven by further cell deployments; a worked cost-per-premises sketch follows this list.
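
As a worked illustration of the cost categories above, the short Python sketch below compares an indicative cost per connected premises for FWA and FTTH. Every figure in it (site capex, premises per site, take-up, CPE and spectrum costs, the ~90% civils share) is a placeholder assumption for the sake of the arithmetic, not an operator number.

```python
# Illustrative cost-per-connected-premises comparison of FWA vs FTTH.
# Every figure below is a placeholder assumption, not real operator data.

def fwa_cost_per_premises(site_capex=100_000, premises_per_site=500, take_up=0.3,
                          cpe_cost=300, spectrum_per_premises=50, opex_premium=100):
    """Radio-site capex is shared across connected premises; CPE, spectrum and the
    power/cooling/antenna-maintenance premium are carried per premises."""
    connected_premises = premises_per_site * take_up
    return site_capex / connected_premises + cpe_cost + spectrum_per_premises + opex_premium

def ftth_cost_per_premises(equipment_and_drop_cost=150, civils_share=0.9):
    """If civils are ~90% of the total delivery cost, the total is the non-civils
    cost divided by the remaining share."""
    return equipment_and_drop_cost / (1.0 - civils_share)

if __name__ == "__main__":
    print(f"FWA  cost per connected premises: ~{fwa_cost_per_premises():,.0f}")
    print(f"FTTH cost per connected premises: ~{ftth_cost_per_premises():,.0f}")
```

The point of the sketch is that FWA’s economics hinge on take-up per radio site, whereas FTTH’s hinge on the civils share.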


Monitoring Micro-Service Applications across Hybrid Clouds using Istio service mesh multi-clusters, Kiali observability, Zipkin tracing, Prometheus events and Grafana visualisations

Most enterprises have complex application deployments across their own internal data centres and commercial clouds. I am using Google Cloud Platform and AWS in this example. Where I work, we traditionally monitored logs and configured alarms for network and infrastructure monitoring. This approach was disjointed and slow to react. The enterprise moved to cloud hosting with elastic scalability a few years ago, which led to multiple stovepipes of monitoring capability and a heavy dependency on VPC interconnects. We wanted to move to a multi-cloud environment whilst maintaining the benefits of a centralised technology operations centre.

We quickly realised that we had specific workloads running in different environments with no common mechanism for monitoring and reporting. This led us to examine open-source monitoring architectures based on Netflix’s Keystone Pipeline. Our requirement was for universal data visualisation and observability of our applications, based on Grafana, Zipkin and Kiali.

Logical architecture and open source technologies

This architecture is based on open-source projects that we can use across GCP, AWS and internally. Everything is predicated on Docker containers and Kubernetes container orchestration. Istio provides the policy and load-balancing functions of a service mesh, and gRPC provides the low-latency integrations between the micro-services. These technologies provide the enablers for the monitoring and visualisation capabilities of Kiali, Zipkin and Grafana.
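
As a small, hedged illustration of how workloads are enrolled into the mesh, the sketch below uses the Kubernetes Python client to label a namespace for automatic Istio sidecar injection; the kubeconfig context and namespace names are hypothetical placeholders for one of our clusters.

```python
# Minimal sketch: enable automatic Istio sidecar injection on a namespace so its
# pods join the service mesh. Context and namespace names are illustrative assumptions.
from kubernetes import client, config

def enable_sidecar_injection(context: str, namespace: str) -> None:
    config.load_kube_config(context=context)      # one kubeconfig context per cluster
    core = client.CoreV1Api()
    body = {"metadata": {"labels": {"istio-injection": "enabled"}}}
    core.patch_namespace(namespace, body)         # Istio's injection webhook watches this label

if __name__ == "__main__":
    enable_sidecar_injection(context="gke-prod", namespace="payments")  # hypothetical names
```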

The following diagram shows the open-source component architecture to support different internal data centres (one for IT running Pivotal and one for mobile network IT running OpenStack), Google App Engine and the AWS Kubernetes service EKS on EC2. The intention of this logical architecture is a single pane of glass for the service management toolset.

Open Source Monitoring Toolset across Hybrid Clouds

Achieving a single pane of glass across multiple clouds requires an aggregation function that can integrate the control planes of multiple Kubernetes container orchestrations. Istio achieves this by supporting multicluster deployments across hybrid clouds, deploying a control plane to each Kubernetes cluster. Kiali can provide service mesh observability of an Istio multi-cluster environment. The Helm variable global.remoteZipkinAddress can be used to connect Zipkin distributed tracing to the Istio cluster.
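
A minimal sketch of how the Zipkin side can be sanity-checked: the snippet below posts a single test span to the collector over Zipkin’s v2 HTTP API, assuming the collector address that global.remoteZipkinAddress points at; the URL and service name are illustrative assumptions.

```python
# Minimal sketch: post one test span to a Zipkin v2 collector to verify that
# distributed tracing is reachable from this cluster. URL and names are assumptions.
import time
import uuid
import requests

ZIPKIN_URL = "http://zipkin.istio-system:9411/api/v2/spans"  # assumed collector address

def post_test_span(service_name: str = "connectivity-check") -> int:
    now_us = int(time.time() * 1_000_000)          # Zipkin expects microseconds
    span = {
        "traceId": uuid.uuid4().hex,               # 32-char hex trace id
        "id": uuid.uuid4().hex[:16],               # 16-char hex span id
        "name": "istio-multicluster-probe",
        "timestamp": now_us,
        "duration": 1000,                          # 1 ms placeholder duration
        "kind": "CLIENT",
        "localEndpoint": {"serviceName": service_name},
        "tags": {"probe": "true"},
    }
    resp = requests.post(ZIPKIN_URL, json=[span], timeout=5)
    return resp.status_code                        # 202 Accepted on success

if __name__ == "__main__":
    print(post_test_span())
```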

Together this enables the Kubernetes control plane in each hybrid cloud environment to be interconnected with the central visualisation environment in the technology operations centre.

The traffic flow through a Kubernetes ingress allows the ELB, using gRPC, to integrate the multiple clusters where the Prometheus collection agents are deployed. These can then be aggregated through the Prometheus server in the logical control plane.
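
Once the per-cluster agents are aggregated behind the central Prometheus server, that server can be queried over the standard Prometheus HTTP API. The sketch below is a minimal example of such a query; the server address and the Istio metric and label names are assumptions about our deployment rather than guaranteed values.

```python
# Minimal sketch: query the aggregating Prometheus server for request rates per
# destination service across the federated clusters. URL and metric names are assumptions.
import requests

PROMETHEUS_URL = "http://prometheus.ops.example.internal:9090"  # assumed central server

def request_rate_by_service(window: str = "5m") -> dict:
    promql = f"sum(rate(istio_requests_total[{window}])) by (destination_service)"
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": promql}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Each result carries a label set and a [timestamp, value] pair.
    return {r["metric"].get("destination_service", "unknown"): float(r["value"][1])
            for r in results}

if __name__ == "__main__":
    for service, rps in request_rate_by_service().items():
        print(f"{service}: {rps:.2f} req/s")
```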

Note that the Helm Tiller deployments to each cluster support the multi-cluster control plane as described here.

Kubernetes and Istio Mixer Control Plane for Multicluster Deployments

Prometheus provides the time series of events for the multiple clusters, which can then be queried by any Grafana server; Grafana treats each storage backend as time-series data (a Data Source). Each Data Source has a specific Query Editor that is customised for the features and capabilities that the particular Data Source exposes. Grafana can also consume Stackdriver, CloudWatch and Ceilometer for OpenStack.
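
Registering the central Prometheus as a Grafana Data Source can itself be automated over Grafana’s HTTP API. The following is a minimal sketch, with the Grafana URL, API token and data source details as placeholder assumptions.

```python
# Minimal sketch: register the aggregating Prometheus server as a Grafana Data Source
# via Grafana's HTTP API. URLs and the API token are placeholder assumptions.
import requests

GRAFANA_URL = "http://grafana.ops.example.internal:3000"   # assumed Grafana server
API_TOKEN = "REPLACE_WITH_GRAFANA_API_TOKEN"               # placeholder token

def add_prometheus_datasource(name: str, prometheus_url: str) -> dict:
    payload = {
        "name": name,
        "type": "prometheus",
        "url": prometheus_url,
        "access": "proxy",       # Grafana's backend proxies the queries
        "isDefault": True,
    }
    resp = requests.post(f"{GRAFANA_URL}/api/datasources",
                         json=payload,
                         headers={"Authorization": f"Bearer {API_TOKEN}"},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(add_prometheus_datasource("multicluster-prometheus",
                                    "http://prometheus.ops.example.internal:9090"))
```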

In conclusion:

  • Istio, Helm & Tiller can manage a multi-cluster hybrid cloud deployment
  • Moving to a hybrid cloud requires visualisation of complex integrations, which is where Istio and Kiali service mesh observability are strong
  • Hybrid cloud monitoring can be achieved by deploying agents, including Prometheus collection agents, to the individual clusters, connected to a central Prometheus server which is in turn rendered by a Grafana server
  • Zipkin provides distributed tracing and integrates with the Istio-managed cluster

One point not described is the requirement for a technical inventory that describes the individual micro-services and the toolsets that can be deployed to each container, but I’ll save that for another blog.

Finally, there are technology alternatives to Kiali, Zipkin, Grafana and Prometheus, such as Logstash and the ELK stack, Fluentd, and commercial solutions like Datadog.

5G and TM Forum Digital Transformation Middle East

I’m talking at the TM Forum Middle East Digital Transformation event https://dtme.tmforum.org/speakers/charles-gibbons/ on 5G. It’s great to be invited to share my knowledge of 5G architecture and delivery. I will be covering the roll-out of 5G service in the UK, focusing specifically on how knowledge sharing is critical for successful implementations of 5G.

EE is launching 5G in the UK in 2019 across 16 cities: https://newsroom.ee.co.uk/ee-announces-5g-launch-locations-for-2019/

EE coverage design


My focus is on 5G monetisation, the business value, and the need for Open APIs for an ecosystem architecture. Telcos do not have an automatic right to provide IoT services over 5G. It is important that all CSPs support open APIs for their 5G services, including TM Forum Open APIs, GSMA OpenAPIs, ETSI Mobile Edge Compute APIs, NIST and other more commercial offerings.


BT & EE’s First To 5G Trial in Canary Wharf

BT has started its first live UK trial of 5G-based technology at Montgomery Square in Canary Wharf. This is a high-capacity zone test, as Montgomery Square includes a London Underground entrance and high-rise offices. The footfall is in excess of 150k people per day.

High capacity zone testing is a critical part of EE’s 5G launch program, with the first phase of its 5G roll-out targeting “hotspots” across the UK – the places that have the greatest number of people using the most mobile data.

The test hardware and spectrum are much closer to the final commercial deployments that will begin in 2019. Key to the test is a successful FCAPS deployment for live monitoring and reporting on the site and its associated backhaul. BT & EE handle 15 million network reporting events a day as part of their streaming architecture.

Enterprise Architect’s Guide to Cloud Licencing Models


Moving to cloud licensing models, including SaaS, does not make licensing any easier, and with the possible proliferation of services it can become difficult for the Enterprise to govern. As with any type of licence agreement, the Enterprise must know the agreement it has signed, the implications of the licensing model and the interaction with other 3rd-party contracts. Monitoring of service and usage is paramount; the monitoring must relate back to the agreement and be within the control of the Enterprise. Every element of your organisation’s software licensing must be managed under an onsite software agreement, but it must also include agreements for software potentially being used externally.

Enterprise Architecture must understand the types of licensing models in the Cloud and how they affect the Enterprise and its customers. The following blog describes my experiences with cloud licences and the different models:

  • IT Cost as a Percentage of Revenue: Optimal Spend
    • Many Enterprises use IT Cost as a Percentage of Revenue to understand the OPEX costs of their IT against corporate revenue. This model works for larger enterprises with stable revenues.
    • For a start-up, cloud services can be used immediately and the model can scale according to demand. The challenge is that it is difficult to scale down utilisation if revenue decreases, and therefore IT Cost as a Percentage of Revenue can peak.
    • Even within start-ups the Enterprise Architect must be aware of the ability to divest as well as invest in new technologies.
  • Hosted vs. On Premises: Software Asset Management
    • One of the biggest advantages of moving to Cloud or SaaS based applications is the reduced hardware infrastructure and personnel costs required to run business applications. An externally hosted infrastructure or more pertinently a hybrid model requires the inventorying of hardware, applications and licences.
    • New Software License Optimization tools are required that allow organisations to accurately inventory virtualised cloud environments.
    • In a hosted model the software and infrastructure licence costs are bundled. Normally the costs are competitive but in certain scenarios such as storage it is possible to find a better deal through internal hosting. The Enterprise Architect must logically decompose the physical architecture to understand the optimal cloud deployment model and to consider as part of the Enterprise’s cloud architecture.
  • Subscription vs. Perpetual: Licence model cadence
    • The perpetual licensing model is well understood; the Enterprise has formal RFPs and set renewal dates for perpetual licences. The cadence with a Cloud model is faster: subscriptions renew monthly, and the Enterprise needs to ensure it is not over-spending, or heading towards over-spend, in any monthly period (a simple break-even sketch follows this list).
    • The Enterprise Architect must manage the IT estate of Cloud services closely because the barrier to entry to the Cloud is much lower than with perpetual licences. Without formal RFPs, the Enterprise can easily enter into multiple subscriptions for the same services or licence services that are underused.
    • The role of the Enterprise Architect for cloud governance is critical; without strong governance the precedent of point cloud solutions can spread across the Enterprise.
  • Usage-based Software License Models: Pay for what you eat but you’ve got to rent the plate
    • Cloud has made usage-based pricing more popular; models that seem simple at inception become increasingly complex as your Enterprise’s requirements develop.
    • Usage-based pricing models are complex because the cost to serve does not always align with the cost to use, and determining the value of the service can then become very difficult.
    • The Enterprise Architect provides benefit by understanding the value of the software licence models. The EA needs to be familiar with the different types of software licensing models and their pitfalls, including both the licensing models themselves and the possible legal and regulatory issues.
  • Accurate Forecasting of Costs: Roadmap use
    • On-premises perpetual licences provide predictable pricing and no surprises. The accurate forecasting of future spend in the Cloud is a challenge as the pricing models can change, usage changes, and there are not as many controls over growth or capacity demands. Enterprises need to be much more diligent about making sure their licensing costs are optimised, transparent and predictable.
    • The Enterprise Architect has the foresight on the system roadmap and must understand the Cloud usage model. Here the EA must work closely with the finance team to predict the expected growth in the licencing model and to have a strategic roadmap for key scenarios.
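
As flagged in the subscription vs. perpetual item above, the sketch below works through a simple break-even calculation between a perpetual licence (up-front fee plus annual maintenance) and a monthly subscription. All of the monetary figures are illustrative assumptions, not vendor pricing.

```python
# Illustrative break-even between a perpetual licence and a subscription.
# All monetary figures are placeholder assumptions, not vendor pricing.

def cumulative_perpetual(months, upfront=120_000, annual_maintenance=24_000):
    # Up-front licence fee plus maintenance accruing monthly.
    return upfront + annual_maintenance * (months / 12)

def cumulative_subscription(months, per_month=6_000):
    return per_month * months

def break_even_month(horizon_months=60):
    # First month at which the subscription has cost more in total.
    for month in range(1, horizon_months + 1):
        if cumulative_subscription(month) > cumulative_perpetual(month):
            return month
    return None

if __name__ == "__main__":
    month = break_even_month()
    print(f"Subscription overtakes perpetual in month {month}" if month
          else "Subscription stays cheaper over the horizon")
```

With these assumptions the subscription overtakes the perpetual cost at month 31; the real value of the exercise is forcing the renewal cadence into the forecast.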


Edge SDN as a Service

Not all micro-services can be stateless lambda functions. Some services must maintain state. A good example is the management of autonomous vehicle platooning functions across multiple radio network sites.

A challenge for this distributed statefulness is that, if the stateful micro-services are running in a specific container, how does the SDN controller manage networking to that specific container? This requires attaching the SDN networking at the container rather than the host level, something that is possible with Amazon EC2 Container Service (ECS), as sketched below.
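
As a concrete, hedged illustration of container-level networking, the boto3 sketch below registers an ECS task definition in the awsvpc network mode, where each task receives its own elastic network interface that network policy can be attached to. The service name, image, region and sizes are hypothetical placeholders rather than a production configuration.

```python
# Minimal sketch: register an ECS task definition using the 'awsvpc' network mode,
# so each task (container group) receives its own ENI and IP address that network
# policy can target. Names, image, region and sizes are illustrative assumptions.
import boto3

ecs = boto3.client("ecs", region_name="eu-west-2")  # assumed region

def register_stateful_task():
    return ecs.register_task_definition(
        family="platooning-state-service",        # hypothetical service name
        networkMode="awsvpc",                     # per-task ENI: container-level networking
        requiresCompatibilities=["FARGATE"],
        cpu="512",
        memory="1024",
        containerDefinitions=[{
            "name": "platooning-state",
            "image": "example.registry/platooning-state:latest",  # placeholder image
            "essential": True,
            "portMappings": [{"containerPort": 8443, "protocol": "tcp"}],
        }],
    )

if __name__ == "__main__":
    resp = register_stateful_task()
    print(resp["taskDefinition"]["taskDefinitionArn"])
```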

If Tier-1 telcos are serious about providing Network as a Service or Edge Compute as a Service then they must provide the join between data centre and network operator. To do this they can either be the edge landlord to Amazon, Google and Facebook, or, if they are truly ambitious, they can provide an SDN Edge themselves.

Charles Gibbons is talking about the Future of NFV/SDN at Digital Transformation World this week in Nice: