OSS Class of Service

Jun 07, 2013

Today’s topic is “OSS Class of Service”, a term that has probably not been introduced before.

After the data boom that we saw in fixed and mobile networks, many new concepts popped up: cloud computing, big data, VoIP and M2M are some of these. With the introduction of OTT operators into the market, the services being delivered became much more complex. Yesterday’s dumb-pipe services are not so dumb anymore.

During this shift, the quality of the service being provided became the major focus of operators. Customers demand continuous, consistent quality throughout their experience. We now have complex OSS tools such as SQM and SLA management systems to track the overall quality of the service.

But there is a major issue that rises from the very bottom level: the device level. All OSS rely on the device and/or EMS systems as their main data source. The device sends SNMP traps or exposes statistics through files or SNMP MIBs. On a typical day, the statistics and traps flow smoothly to the OSS platforms and the necessary actions can be taken.
But what about a situation where the utilization of the device itself goes beyond expected limits? What if the CPU utilization reaches 99%? In theory, everything will work fine and, if configured, a high-utilization threshold alarm will be sent to the OSS. But from field experience, I know that it does not work like this.
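As a rough illustration of that in-theory behavior, here is a minimal sketch of the northbound threshold check the OSS expects the device to perform; the polling and alarm functions are stand-ins I invented for illustration, not a real device or EMS API.

```python
# Sketch: the northbound threshold check the OSS expects from the device.
# poll_cpu_utilization() and send_threshold_alarm() are invented stand-ins
# for an SNMP GET and an SNMP trap; the threshold value is an example.
import random

CPU_ALARM_THRESHOLD = 90  # percent

def poll_cpu_utilization() -> int:
    """Stand-in for reading the device's CPU utilization from its MIB."""
    return random.randint(50, 100)  # simulated value

def send_threshold_alarm(value: int) -> None:
    """Stand-in for forwarding a high-utilization trap to the fault management OSS."""
    print(f"ALARM: CPU utilization {value}% exceeds threshold {CPU_ALARM_THRESHOLD}%")

utilization = poll_cpu_utilization()
if utilization >= CPU_ALARM_THRESHOLD:
    send_threshold_alarm(utilization)
# The catch described above: at 99% CPU the device may never get to run this
# northbound logic at all, so the alarm the OSS relies on simply never arrives.
```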

The device or EMS stops NBI processing to favor customer data traffic. It may not send traps; it may not process performance metrics. But this information is critical to the OSS systems. The OSS systems are not just monitoring tools: they act upon quality violations, so they need to be up to date at all times.

Device vendors should introduce dedicated processing and memory space that cannot be interrupted by other processes. When the main CPU load goes high, the OSS plane should continue working smoothly. And the other way around: when a configuration management platform requests a full inventory dump, the system should not crash.

Dedicated processing power and memory space parameters can easily be introduced into the device/EMS configuration. Today’s vendors invest heavily in virtualization technologies, so a virtual NBI manager could also be created on the devices.

This dedication can even be applied at the device/interface level. So Interface A, which carries traffic belonging to a VIP customer, can be set to (for example) a Gold OSS profile while the others are set to Best Effort OSS profiles.
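To make the idea concrete, here is a small sketch of what such a per-interface OSS Class of Service table could look like; the interface names, profile parameters and resource shares are entirely hypothetical, since no vendor exposes exactly these knobs today.

```python
# Sketch: hypothetical per-interface "OSS Class of Service" assignment.
# The profile parameters (reserved NBI CPU share, trap queue priority) are
# invented for illustration only.

OSS_PROFILES = {
    "Gold":        {"nbi_cpu_share_pct": 10, "trap_priority": "high"},
    "Best Effort": {"nbi_cpu_share_pct": 2,  "trap_priority": "low"},
}

INTERFACE_PROFILE = {
    "GigabitEthernet0/1": "Gold",         # carries the VIP customer's traffic
    "GigabitEthernet0/2": "Best Effort",
    "GigabitEthernet0/3": "Best Effort",
}

for interface, profile_name in INTERFACE_PROFILE.items():
    profile = OSS_PROFILES[profile_name]
    print(f"{interface}: {profile_name} "
          f"(reserved NBI CPU {profile['nbi_cpu_share_pct']}%, "
          f"trap priority {profile['trap_priority']})")
```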

A fault-proof OSS is possible, starting from the device level. And fault-proof OSSs will let service providers deliver what they promised.

OSS and Security

May 06, 2013

Today, I’d like to talk a little bit about security and its implications for our OSS systems. As OSS are mostly seen as “internal” to the organization, an OSS system is usually not security hardened before going into production. We open up the SNMP ports 161 and 162 in our firewalls (if any) from the devices to the management systems, and we open up HTTP 80 or HTTPS 443 between our OSS systems for different kinds of API access.

In the OSS/J article, I mentioned the different kinds of information that can be fetched from an OSS/J-enabled platform. Think of an inventory system, for example. If I have the correct credentials, I can pull the whole network inventory from this system, including the customer information. Or I can trigger a service activation within the service activation platform without notifying the order management platform, bypassing the billing system entirely.

As you can see, access to OSS platforms and their APIs carries the risk of exposing your intellectual assets to the outside world, and it also allows internal fraud to occur.

If malware running on an admin PC has some kind of access to these sensitive APIs, it can easily transfer this information to the Internet (to its botmaster) to be used in further attacks or information trading.

The communication between the device and the EMS is also important. Most of the time, this communication happens within the management network, which is also reused by administrator PCs for telnet/SSH access to the devices. The only protection on the end-device side is password protection. Passwords should be complex enough to contain letters, numbers and special characters. Password selection is usually based on best practices on the telnet/SSH side; however, this is not the case for SNMP. Most of the time we will face easy-to-guess SNMP community strings that can be cracked in a brute-force attack. We also find SNMP SET enabled on devices where there is no reason for it, creating another serious vulnerability.
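A quick way to check for this in your own network is to probe devices with a short list of notoriously common community strings. Below is a minimal sketch using the net-snmp snmpget command-line tool; the host address and the candidate strings are examples, and it should of course only be run against devices you are authorized to audit.

```python
# Sketch: probe a device for weak SNMP community strings using the
# net-snmp "snmpget" CLI. Host and candidate list are illustrative.
import subprocess

WEAK_COMMUNITIES = ["public", "private", "admin", "cisco", "default"]
SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"  # sysDescr.0, readable on most devices

def community_accepted(host: str, community: str, timeout: int = 2) -> bool:
    """Return True if the device answers an SNMP GET with this community string."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-t", str(timeout), "-r", "0",
         host, SYS_DESCR_OID],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    device = "192.0.2.10"  # example management-network address
    hits = [c for c in WEAK_COMMUNITIES if community_accepted(device, c)]
    if hits:
        print(f"WARNING: {device} accepts weak community string(s): {hits}")
    else:
        print(f"{device}: none of the common community strings were accepted")
```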

Another thing to consider is the management networks. Especially in big organizations, management activities are outsourced to different 3PP entities. The admin PCs in these entities VPN into the management network to set up and manage end devices. Since these PCs are not subject to the company’s end-user security policy, they can be a backdoor for bots or hackers into internal information.

We should always keep in mind the security implications of changes in our OSS infrastructure. We should apply security policies for accessing these systems and run scheduled security scans for new or possible vulnerabilities. We should protect our management networks, especially when they are shared by multiple companies. The management networks should be logically segmented by the Service Activation/Resource Provisioning tools. Access logs should always be collected and correlated in a central location and reviewed by security personnel.

OSS security is becoming more important as we utilize more “open” interfaces for management and reporting. Since they are “open”, everybody, including the hackers, will know how to reach the information. As long as we apply the necessary security controls, we can continue enjoying the interoperability and flexibility delivered by standard OSS interfaces.

Strategies for the OSS/BSS Integrator

Mar 14, 2013

Integration plays a very important role in the success of an OSS/BSS project. Lots of parties (software houses, NEPs, integrators) offer integration services, but few of them are able to deliver within scope, on time and within budget. Knowing the importance of integration in an OSS/BSS project, customers have become more selective.

In this post, I will mention some steps that need to be considered by the integrator who aims to sell services to telcos. OSS/BSS (especially OSS) services are hard to sell, as they require know-how, a footprint in the target customer and enough budget to cope with long sales cycles.

Here are my proposed 10 steps to success:

Step 1: Offer multiple solution offerings.

Pre-pack your solution offerings based on:

  • Best of breed products
  • Implementation experience
  • Local existence
  • Best practices expertise (TMForum, ITIL, TOGAF)

This will improve your flexibility.

Step 2: Add consultancy to your integration project

You cannot sell OSS/BSS consultancy alone! Add consultancy to several phases of the integration project delivery cycle to feed your consultancy practice. Good consultants will also improve customer trust.

Step 3: Run POCs

Run POCs and finance them from your presales budget. You may limit the scope to decrease costs; however, the customer should never be involved in these calculations. POCs are an effective strategy to:

  • develop long-term relationships
  • sneak inside the account
  • get rid of the RFP
  • prove fit for purpose (technical, cultural, financial)

Step 4: Train your customer

OSS/BSS is so complex that it cannot easily be understood by senior executives. Talk in their language!
OSS/BSS should be positioned strategically. Train the customer and create awareness at different levels:

  • Strategic/Revenue focused (time and dollar)
  • Technology focused (tools and processes)

Define your metrics!

Step 5: Invest in Knowledge Management

The goal is to reduce the engagement cycle by reuse.

Create an RFP knowledge base that should include:

  • Solution description
  • Pricing!!!
  • Lessons learned

Cross-business knowledge sharing should be improved. Business-critical data and know-how that reside in other lines of business should be available for reuse. Share your data!

Step 6: Score your Vendors

Maintain Vendor Scoring based on:

  • Financial figure
  • Way of working (flexibility)
  • Local existence (country list)
  • References
  • Promising Strategy

Step 7: Score your Customers

Maintain Customer Scoring based on:

  • Should we invest in them?
  • Are they interested?
  • Short term, long term strategies.
  • Previous interactions.
  • Way of working

Step 8: Score your Product Catalog

Maintain Product Scoring based on:

  • Price
  • Integration costs
  • Vendor’s score
  • Feature set (by domains)
  • Customizability
  • Typical implementation period
  • TCO

Product Scoring should be done both for 3PP and company-owned products.
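One simple way to turn criteria like these into a comparable number is a weighted score. Here is a sketch of that idea; the weights, the subset of criteria and the sample figures are purely illustrative.

```python
# Sketch: weighted product scoring across a few of the criteria listed above.
# Weights and the normalized scores (0-10 per criterion) are illustrative.

WEIGHTS = {"price": 0.25, "integration_costs": 0.20, "vendor_score": 0.15,
           "feature_set": 0.25, "tco": 0.15}

products = {
    "Product A (3PP)":      {"price": 6, "integration_costs": 4, "vendor_score": 8,
                             "feature_set": 9, "tco": 5},
    "Product B (in-house)": {"price": 8, "integration_costs": 9, "vendor_score": 7,
                             "feature_set": 6, "tco": 7},
}

for name, scores in products.items():
    total = sum(WEIGHTS[criterion] * value for criterion, value in scores.items())
    print(f"{name}: {total:.2f} / 10")
```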

Step 9: Follow New Trends

  • Expand to IT! (e.g. datacenter management)
  • Expand to CEM!
  • Watch the consolidations (fixed-mobile, network-IT)

Step 10: Marketing

Your role as an integrator should be marketed. For this purpose, you can:

  • Run a separate web site just focusing on integration.
  • Issue TM Forum business cases.
  • Hire globally known consultants.
  • Create micro-blogs.
  • Create public communities.

Are you ready for SLA Management?

Feb 28, 2013

This should be one of the top questions for mature operators that provide corporate services to their customers. Customers are demanding more quality and less downtime, while network capacity is under pressure from smartphone and Facebook traffic, M2M transactions, and so on. In such circumstances it is hard to commit to a bit rate or a downtime percentage. Of course, you can say that you will have 10 days of downtime maximum, but nobody will buy it. Or, the other way around, if we commit that we won’t have more than 1 minute of downtime in a month, most probably we will fail and fall into a penalty situation whose cost will not be covered by the extra gains from our next-generation SLA offering.
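To see why such commitments are risky, it helps to translate an availability percentage into allowed downtime per month. A quick sketch; the availability figures are illustrative and a 30-day month is assumed.

```python
# Convert an availability commitment into allowed downtime per month.
# Illustrative figures only; a 30-day month is assumed.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

for availability in (99.0, 99.9, 99.99):
    allowed_downtime = MINUTES_PER_MONTH * (1 - availability / 100)
    print(f"{availability}% availability -> {allowed_downtime:.1f} min of downtime/month")

# 99.0%  -> 432.0 min (~7.2 hours)
# 99.9%  ->  43.2 min
# 99.99% ->   4.3 min  (a "1 minute per month" commitment is stricter than 99.99%)
```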

So, if we want to provide SLA management, we need to measure first. After that, we can predict. What we need to measure is most probably in our hands already: we have plenty of KPIs in our PM platform, lots of resource and service impact alarms in our FM, and nicely enriched tickets in the TT platforms.

After defining the KQI set that will be the basis for the given SLAs, we need to identify the KPIs that will feed those KQIs. Before moving to the nature of the KPI data itself, we should measure the health of our OSS data flows. That is to say, if I collect KPI data from the data source every 5 minutes, how many times a day do I encounter “no data in the data source” or “TCP connect failure” type scenarios? These kinds of questions will reveal my OSS performance. OSS performance is very important. (That’s why some SQM systems monitor the OSS and its collection processes.) If we run short on performance in the collection stage, we should fix that first before moving on to the KPI values themselves.
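Below is a minimal sketch of that health check, computing a collection success rate from a log of poll attempts; the record format, the status values and the 99% target are my own assumptions.

```python
# Sketch: measure OSS collection health from a day's log of poll attempts.
# With a 5-minute interval there should be 288 polls per day; the records
# and status values below are invented examples.

poll_log = [
    ("00:00", "ok"),
    ("00:05", "no_data_in_datasource"),
    ("00:10", "ok"),
    ("00:15", "tcp_connect_failure"),
    ("00:20", "ok"),
    # ... one record per 5-minute poll, 288 per day
]

total = len(poll_log)
failed = sum(1 for _, status in poll_log if status != "ok")
success_rate = 100.0 * (total - failed) / total

print(f"{failed} failed polls out of {total} ({success_rate:.1f}% collection success)")
if success_rate < 99.0:  # example target, not from the post
    print("Collection health is below target; fix the OSS data flow before trusting the KPIs.")
```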

If the collection is running fine, then we need to start baselining the KPI data. Baselining is provided by most PM tools in the form of off-the-shelf reports. We can look at these to see how well we can expect to perform throughout a reporting period. This manual process is effective, but an automatic one would be better. We can push the PM vendors to provide a mechanism to export that prediction data somewhere, so that we can use it as a KPI to compare against the thresholds we are currently delivering. If the system finds that we are approaching the red zone, it can open a change request or trouble ticket to trigger a capacity upgrade for the resources involved in the SLA delivery chain.
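As a rough sketch of what that automated comparison could look like (the KPI, the sample values and the mean-plus-two-standard-deviations red zone are assumptions of mine, not something a specific PM tool exports):

```python
# Sketch: baseline a KPI from history and flag values approaching the red zone.
# The two-standard-deviation rule and the sample data are illustrative only.
from statistics import mean, stdev

history = [61.0, 63.5, 59.8, 64.2, 62.1, 60.7, 65.0, 63.3]  # e.g. link utilization %
baseline = mean(history)
tolerance = 2 * stdev(history)   # "normal" band around the baseline
red_zone = baseline + tolerance  # above this we consider the SLA at risk

current = 68.4
if current > red_zone:
    # In a real flow this would raise a change request / trouble ticket
    # to start the capacity upgrade process.
    print(f"KPI {current:.1f} exceeds red zone {red_zone:.1f}: trigger capacity upgrade")
else:
    print(f"KPI {current:.1f} is within the baseline band (red zone starts at {red_zone:.1f})")
```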

This is the technical side of the story. There’s another side, which seems to me a little bit harder to deal with: the business.
You need to train the business people on the concepts: what is an SLA, a KPI, a KQI, baselining, a product offering, a CFS, an RFS … They probably won’t cooperate much in your SLA initiative if they do not understand any of it. They should be able to calculate the risks, convert the risks into dollar amounts, and create an SLA offering that takes these amounts into account. SLA offerings should also cover the internal SOC organization’s support costs. The sales people should never have the option to play with the parameters in the offerings; these should be fixed against the baselines and issued as SLA templates.

The analysis work also involves third parties. If we rely on any third parties along our service delivery path, we should baseline their performance as well. We should sign the necessary underpinning contracts with them, taking our SLA objectives into account.

SLA management should be a bottom-up process. Top-down approaches are too risky, and most decision makers will not approve your project. A well-planned SLA management process can bring additional revenue as well as a huge competitive advantage.

Common Data Model: Is it worth implementing?

Oct 30, 2012

As the name implies, a common data model, or CIM, is a business (and technical) model of the entities that are used in a telecommunications operator’s activities. As businesses and operational functions become more complex, the integrations between the tools that handle them also become more complex. Telecommunications, probably the most complex business of all, has been suffering from this since its beginnings.

In telco, there are hundreds of entities that can be involved in a business process, and most of the time these entities are managed by different OSS or BSS components. For the process to flow, two or more entities must be shared as information carriers.

The multi-vendor structure of the telco sector introduces problems here. OSS/BSS vendors use entities that are conceptually the same but technically different. A customer entity has a customerName attribute in one tool, while the same attribute is called “cName” in another. Following the same example, the first tool holds a customer type attribute as an Integer, while the other holds it as a String. Obviously, there have to be conversions between these two systems.
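A minimal sketch of what such a conversion looks like in practice, built around the customerName/cName example above; the type codes and any field names beyond those two are hypothetical placeholders.

```python
# Sketch: adapter that maps a customer record between two vendor models.
# Only customerName/cName and the Integer/String customer type come from the
# example above; everything else is a hypothetical placeholder.

# Hypothetical mapping of tool A's integer customer types to tool B's strings.
CUSTOMER_TYPE_CODES = {1: "RESIDENTIAL", 2: "BUSINESS", 3: "WHOLESALE"}

def tool_a_to_tool_b(record_a: dict) -> dict:
    """Convert a customer entity from tool A's schema to tool B's schema."""
    return {
        "cName": record_a["customerName"],                       # attribute rename
        "cType": CUSTOMER_TYPE_CODES[record_a["customerType"]],  # Integer -> String
    }

record_a = {"customerName": "Acme Telecom Ltd", "customerType": 2}
print(tool_a_to_tool_b(record_a))
# {'cName': 'Acme Telecom Ltd', 'cType': 'BUSINESS'}
```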

To put the understanding of these entities on common ground, common data model efforts have been around since the beginning of the sector. The aim is to simplify inter-tool integrations. The most recent concrete effort at hand is the TMForum Shared Information/Data Model (SID).

A common understanding requires abstraction, and abstraction means losing details. We can see this loss of detail at the attribute level in the SID model, for example. SID tries to avoid common attribute names; rather, it tries to spread them across atomic entities. This approach makes the model involve lots of new entities.

TMForum says SID is an information model, not a data model, so it should rather be customized to align with the organization. This alignment is done via class extensions or attribute additions (which different vendors may decide to implement differently).

When I am designing (and developing) a brand-new OSS tool, I can certainly use this common model. If the tool I will communicate with in the business process uses the same entities (not customized ones), the two can communicate (almost) seamlessly. But is this the case? Look at the off-the-shelf tools around the telco business: none of them implements a common information model. Some vendors offering multiple tools in the OSS domain do implement their own proprietary models to improve integration between their components, but these are proprietary.

Look at the big OSS vendors. Most of them are not interested in this common world. They don’t natively support OSS/J adapters, for example; they rely on small partners to provide and sell the adapters separately.

A common information model would dramatically reduce system integration costs, which is not welcomed by these vendors. And this CIM thing would kill arguments like “Buy the TT tool from us as well, not from another vendor, because our TT has seamless integration with our PM tool, which we cannot guarantee with the other tool.”

In today’s environment, if I wrote that CIM-based brand-new tool, I would still need to write an adapter to do the necessary conversions and mappings on the “legacy” tool’s side. (The conversions and mappings can be implemented on the bus if I use an ESB.) Even in SID-to-SID communications, if one side has done some “custom” things, such as adding new classes, the other side has to be aligned to that change at the adapter or bus level.

In any case, you will write adapters, you will do mappings, you will add validation rules. There will always be some integration effort, and if you are not living in the ideal “all SID” world, the additional effort of aligning to a CIM (SID, etc.) will multiply the costs and risks.