Network Operators and Open Source Software

(How) do network operators use/contribute to/$upport open source software? The final report on the pilot project and survey is available in whole or in part: https://possie.techark.org/all-the-details-network-operators-oss-study/ .

OSS and You: love it? Hate it? Never gave it serious thought?

(This was previously posted on https://possie.techark.org)

As software becomes increasingly important in the world of network operations, Open Source Software is also gaining more attention. But, network engineers are typically focused on, well, networks, not software. While network operators may want OSS tools, they don’t always get involved in their development.

As I outlined in my lightning talk at NANOG in June (video, slides), that seems unfortunate: it would be good to ensure that network operators are engaged, so that the tools they get are the ones they need. So, I’ve put together a survey to try to better understand the “friction points”: what works, and what doesn’t, for network operators in the realm of OSS.

I’d love to gather your input in this survey! Follow the link below and make your survey contribution. I’ll be summarizing the input and sharing the results, so that we can all get a better perspective on what’s driving network operators with respect to OSS.

Operators and Open Source: tell us!

Food for thought: Networks and (open) software

Multivendor networking seems like sound business planning. It also seems like software is an inevitable component of that, even as network engineers have traditionally not been software-keen.

Ignoring that reality may cost more than just the networking hardware, as the industry’s growth is in the software-savvy companies.

From https://www.lightreading.com/nfv/nfv-strategies/the-eye-watering-cost-of-multivendor-networks/d/d-id/754469 :

Nearly everyone insists that culture, and not technology, is the big problem. Telco employees are not used to handling software and even less familiar with the working practices of a typical software firm. Their technical staff think Python is a non-venomous snake and still use acronyms that became unfashionable at the same time as permed hair. Their commercial models are misaligned. Their sales and marketing departments understand “aaS” as something you sit on.

Changing all this could prove extremely costly. Whether operators try to retrain existing members of staff or introduce new talent into the workforce, the process could also take years. And time is certainly not on their side. A tsunami of data traffic on telecom networks has not brought a surge in revenues with it. The Internet companies riding that wave are a growing threat. “There are some very software-centric companies out there and if we want to be competitive we need to come up with things much faster than in the past,” said Deutsche Telekom’s Seiser. If they cannot, telcos may pay a much heavier price than the cost of any transformation.

Good Reads: The Mythical Economic Model of Open Source

This article has a thoughtful perspective on the question of business models and Open Source Software. But, I found it interesting in that it also explores the tension between the roles of “Developer” and “User”. No business model works if it isn’t about delivering value to Users or Customers; and yet the author argues that open source is written by developers for developers.

Key takeaway — get involved?

“The Mythical Economic Model of Open Source”, by James Bottomley: https://opensource.org/node/1009

Every network is a snowflake

The question often arises: why is it so hard to get Internet-wide critical technology innovations into the real world? The mind quickly leaps to lay blame: perhaps the proposed solutions are somehow defective. If, after successive iterations of redesign discussions, better approaches don’t surface, there is a collective resigned sigh that the real problem must be a lack of a “killer app” to drive uptake.

However, what this glosses over is the fact that every network is a snowflake. No two are the same. While strong solutions and compelling applications may provide motivation for deployment of needed technologies, there is always a certain amount of necessary custom work to be done in order to adopt a new approach within a given network. That may be hard, in terms of engineering, and it most certainly will require business motivation in commercial networks. So, critical technology innovations tend to get deployed patchily, and over time, across different networks in the Internet.

Snowflakes are individual, complex, and beautiful

Before we conclude that we should seek conformity in networking (melt that snowflake!), let’s review some of the ways in which the Internet’s support for diversity and individuality has been a strength.

Internetworking brought us a global network because the focus was on specifying how networks could connect to each other, allowing data to travel between them without making undue suppositions about what individual networks did with data inside their own borders. I.e., existing networks could be connected together.

It has also allowed networks to be developed to support a wide range of purposes, with widely divergent structures to support them. Networks are tools, and network elements are components that can be fit together and managed to support whatever activity the network’s operator desires.

A network built for an enterprise has to support the enterprise’s business activity (e.g., individuals communicating, accessing particular services), and the enterprise can elect to support or prevent network uses depending on whether they are appropriate for its business. While an ISP may seem to have the same purpose (allowing individuals to communicate and access various services), the relationship to those users is completely different, as is the physical layout of the network. An ISP can’t tell users which services they aren’t allowed to use during business hours (as long as there are some tendrils of network neutrality in effect), doesn’t have an a priori expectation of what individuals will want to do, and its network has to span many neighbourhoods, with access from each address within them, in order to be viable. That’s very different from an enterprise network, which might be expected to support a handful of buildings on a few campuses. Those are just two general types of networks; there are many more: networks that connect other networks, networks within homes, mobile data networks, networks in data centres, and academic networks, to name a few.

When it comes time to do an Internet-wide upgrade, such as deploying IPv6, network differences play an important role in determining an operator’s perspective on the importance of deployment, and the level of effort required. A business with an enterprise network is in a position to know what equipment on its network is or is not IPv6 capable, and to map out an upgrade plan; but the enterprise network is fairly static in size (i.e., not requiring large swaths of new addresses on a regular basis), and the need for IPv6 support may not be apparent from the perspective of managing business expenses. An ISP may have control over the connecting box in their customers’ premises, or they may not (depending on the network’s business choices, and whether customers expect to be able to buy their own devices). The ISP operator likely has little awareness of the capabilities of the customer equipment attached directly to their network, and exactly no detail on the capabilities of the customers’ in-home devices. Will it work for the customer? To deploy IPv6, an ISP has to support it internally in its core, and at all the network nodes spread throughout all the neighbourhoods it serves. However, assuming business is good, it does have to be able to provide usable Internet addresses to a growing set of customers, and in today’s reality, that means there is a business driver for supporting IPv6.

While that gives a glimpse into the reasons why there rarely is a uniform path to deploying new technologies across the diverse networks that make up the Internet, it should also provide a reminder of why supporting diversity is valuable. Apart from differences of purpose, networks have “grown up” differently because of local factors across the world (geography, resource allocation policies) or history (e.g., a transformed telco monopoly, the expansion of a multinational, or a network stitched together from the acquisition of one or more networks of similar or smaller size). These evolutions would be a lot harder, if not impossible, if we didn’t have an Inter-network.

So, while we may be frustrated at the seeming lack of progress on getting important technologies deployed, we shouldn’t start by assuming the technologies themselves are somehow wrong-minded. We live in a complex (networking) world, and the complexity does us more favours than not. If we didn’t need and want what this diversity gives us, the Internet never would have trampled over such uniform experiences as Minitel and AOL.


NOMA: Static snapshot and next steps

A while back, I wrote a guest blog for APNIC on the topic of the first results from the NOMA pilot work with RIPE NCC’s Atlas framework.

I concluded: “Nevertheless, while these results are pretty preliminary, they do highlight the value of the in-network perspective on IPv4 and IPv6 performance, and motivate further study. For instance, it seems the performance of IPv6 is better when measuring to a “near” target. One hypothesis is that performance improvement is because transit networks are not as friendly to IPv6 traffic as access networks. Other hypotheses are also possible, and only testing will tell.”

In a world where IPv6 and IPv4 network connectivity and routing are very clearly not symmetric, that testing could be really valuable, if for no other reason than to show progress with IPv6 deployment, salute the networks that have made that progress, and help identify the issues remaining.
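
To give a concrete flavour of that testing, here is a minimal sketch (in Python, not the NOMA tooling itself) of how paired IPv4 and IPv6 ping measurements from RIPE Atlas could be compared probe by probe. The measurement IDs are placeholders, and the field names used (“prb_id”, “result”, “rtt”) follow the public Atlas v2 ping-results format as I understand it.

```python
# Sketch only: compare per-probe median RTTs from two RIPE Atlas ping
# measurements (one IPv4, one IPv6) toward the same target.
import json
import statistics
import urllib.request

RESULTS_URL = "https://atlas.ripe.net/api/v2/measurements/{msm_id}/results/?format=json"


def fetch_results(msm_id):
    """Download the raw JSON result list for one measurement."""
    with urllib.request.urlopen(RESULTS_URL.format(msm_id=msm_id)) as resp:
        return json.load(resp)


def median_rtt_per_probe(results):
    """Map probe ID -> median RTT (ms), counting only successful replies."""
    samples = {}
    for entry in results:
        rtts = [r["rtt"] for r in entry.get("result", []) if "rtt" in r]
        if rtts:
            samples.setdefault(entry["prb_id"], []).extend(rtts)
    return {prb: statistics.median(vals) for prb, vals in samples.items()}


def compare(v4_msm_id, v6_msm_id):
    v4 = median_rtt_per_probe(fetch_results(v4_msm_id))
    v6 = median_rtt_per_probe(fetch_results(v6_msm_id))
    for prb in sorted(set(v4) & set(v6)):  # probes that measured both families
        delta = v6[prb] - v4[prb]          # positive => IPv6 slower than IPv4
        print(f"probe {prb}: v4={v4[prb]:.1f} ms  v6={v6[prb]:.1f} ms  delta={delta:+.1f} ms")


if __name__ == "__main__":
    # Placeholder measurement IDs -- substitute a real IPv4/IPv6 pair
    # targeting the same host from the same probe set.
    compare(v4_msm_id=12345678, v6_msm_id=12345679)
```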

A network operator perspective on IPv6 performance | APNIC Blog

Guest Post: The Network Operator Measurement Activity (NOMA) platform explores the possibility of making use of data within constituent networks for Internet health metrics measurements.

Source: blog.apnic.net/2017/09/29/network-operator-perspective-ipv6-performance/


Mind Your MANRS!

The Internet Society has been working on Mutually Agreed Norms for Routing Security (MANRS) for a few years, and they recently funded some industry research to gain insights into network operators’ and enterprises’ requirements and plans around routing security.


The report itself is definitely worth a read (see references below). Particular results that I think are of interest for both MANRS and URSA are:

  • that enterprises are also concerned about address spoofing and route hijacking; and
  • the apparent disconnect between operators’ expectations of customers’ routing security interests and enterprises’ expressed willingness to prefer network services that provide better security.

The first should be a really important driver for getting operators to step up and implement the best practices that are at the heart of MANRS. It should also help focus attention on, and interest in, URSA’s efforts to get agreement on rational next steps in selecting and deploying routing security technologies.

The second is a bit of a puzzle, but perhaps best interpreted as an opportunity for operators to understand that customers are interested in, and willing to pay for, having the right thing done.

The Internet Society’s overview of the report is here: https://www.routingmanifesto.org/resources/research/

The full report itself is available here: https://www.routingmanifesto.org/wp-content/uploads/sites/14/2017/09/451_Advisory_BW_MANRS_InternetSociety_10375.pdf


Data! NOMA gets air (time) in Budapest

NOMA at RIPE 74

Today I had the opportunity to talk to the RIPE meeting crowd about my use of the RIPE NCC Atlas measurement infrastructure to simulate the NOMA v6 health metric measurement. NOMA is based on operators instrumenting their networks. The RIPE Atlas infrastructure, with its probes distributed throughout a variety of networks, is a good platform for illustrating what could be done, with live (if somewhat limited) data.
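
For illustration only, here is a rough sketch of how one might request a small one-off IPv6 ping measurement from Atlas probes hosted inside a single network (selected by ASN), which is roughly how Atlas can stand in for NOMA’s in-network vantage points. The API key, target, and ASN are placeholders, and the request shape follows my reading of the public Atlas v2 measurement-creation API; it is not the actual NOMA setup.

```python
# Illustrative sketch: schedule a small one-off IPv6 ping from RIPE Atlas
# probes inside a single network (chosen by ASN). Placeholders throughout.
import json
import urllib.request

API_URL = "https://atlas.ripe.net/api/v2/measurements/"
API_KEY = "YOUR-ATLAS-API-KEY"  # placeholder; a real key is required

spec = {
    "definitions": [{
        "type": "ping",
        "af": 6,                      # IPv6
        "target": "www.example.net",  # placeholder target
        "description": "NOMA-style v6 health probe (sketch)",
    }],
    "probes": [{
        "type": "asn",                # select probes hosted in one network
        "value": 64496,               # placeholder (documentation ASN)
        "requested": 10,
    }],
    "is_oneoff": True,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(spec).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Key {API_KEY}",
    },
    method="POST",
)
with urllib.request.urlopen(request) as resp:
    print(json.load(resp))  # on success, Atlas returns the new measurement ID(s)
```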


Routing security: work with what you’ve got!

It seemed like there would be little appetite for discussing next steps in routing infrastructure authentication and verification after the DDoS attack on Dyn (October 2016), when it became clear that large-scale attacks are feasible without spoofing IP addresses, hijacking prefixes, or otherwise falsifying Internet infrastructure numbers and routing. It was already a tough sell to get operators to consider incremental (let alone architectural) updates to do origin authentication and some manner of routing announcement verification; the Dyn attack presented a clear and present danger that would not be addressed by such updates, so why bother with them?
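
For readers less familiar with what “origin authentication” would actually check, here is a minimal sketch of RFC 6811-style route origin validation: given a set of ROAs (prefix, maximum prefix length, authorized origin AS), an announcement is classified as valid, invalid, or not found. The ROAs and announcements below are toy examples using documentation prefixes and ASNs; a real validator works from the RPKI repositories, not a hard-coded list.

```python
# Minimal sketch of RFC 6811-style route origin validation against a toy ROA set.
from ipaddress import ip_network

# Hypothetical ROAs: (prefix, max_length, authorized origin ASN)
ROAS = [
    ("192.0.2.0/24", 24, 64496),
    ("2001:db8::/32", 48, 64496),
]


def validate(prefix: str, origin_asn: int) -> str:
    """Classify an announcement as 'valid', 'invalid', or 'not found'."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in ROAS:
        roa_net = ip_network(roa_prefix)
        if announced.version != roa_net.version or not announced.subnet_of(roa_net):
            continue
        covered = True  # at least one ROA covers this prefix
        if announced.prefixlen <= max_len and origin_asn == roa_asn:
            return "valid"
    return "invalid" if covered else "not found"


if __name__ == "__main__":
    print(validate("192.0.2.0/24", 64496))     # valid
    print(validate("192.0.2.0/25", 64511))     # invalid: wrong origin, too specific
    print(validate("198.51.100.0/24", 64496))  # not found: no covering ROA
```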


NOMA Measurements Template (Media)

This is the persistent reference page for the NOMA Measurements Template document. Please use this page’s URL to refer to the document: http://www.techark.org/noma-measurements-template/

Current version of the document: http://www.techark.org/wp-content/uploads/2016/12/20161208-NOMA-Measurements-Template.pdf