Software-driven network

This note is the first in a short series of articles about SDN-related trends that are already affecting networking or will affect it soon.
Many network engineers still think that practical SDN is as far from real networking as IS-IS is from deserving the name "simple". In reality, SDN is almost knocking on the door, and there seems to be no way to evade its tentacles.

Everything written below is just my own vision, and I could easily be wrong. Feel free to comment and share your own view.


Shall we start by describing what Software Defined Networking actually is?

SDN is all about centralized orchestration: conquer and command, or do not conquer, but command anyway.
An essential part of SDN is a dedicated brain: a single point, presented as a console or GUI, that defines rules locally and delivers them to its subordinates.

We are witnessing the second SDN wave. The first wave was about virtualization as a culture and mainly touched servers.
You might ask where the brain was in the first wave. The answer is VMware vCenter/vCloud/Orchestrator, for example. To a networking person, the most interesting part here is the vDS (Virtual Distributed Switch): from a single point an administrator can configure NICs, switching points of presence, VLANs, and even more, such as QoS, if the vDS is replaced by a Cisco Nexus 1000V. From a single point: almost the whole bundle of options that matter to end hosts.
Within the second wave, vendors are trying to apply this extremely successful virtualization experience to everything else.
Naturally, customers encourage these aspirations, because everyone in the scheme benefits from a wider portfolio of approaches.


The most noteworthy SDN-related things in networking are the following:

  • Overlay networks, a.k.a. app tunneling – like VMware NSX;
  • [SD-WAN] Software-Defined Wide Area Network – like Juniper Contrail or Alcatel-Lucent Nuage;
  • [NFV] Network Function Virtualization – like Juniper vMX or Cisco CSR 1000v.

These tendencies may seem easy to avoid unless you work at a medium-sized service provider or are in charge of a big data center, but even then they will influence adjacent areas.
Long story short, let's take a look at NFV. Put simply (or, to be honest, in any possible way), it is just a VM performing some networking function (L2, L3, or higher) that can be treated like any other VM – for example, installed on an x86 server.

– Is it SDN?
– Not yet.

Another reason why it is called "software-defined" (or, more rarely, "software-driven") is that vendor-specific chips, a.k.a. ASICs, simply do not exist on the x86 architecture. So the variety of technical features, and the performance, depend entirely on the effectiveness of the software.

Since networking is no longer enslaved by vendor silicon, and any thirst for a network device can be quenched by an OVA template installed on a powerful server with proper network cards, a customer is free to use any combination of VMs to achieve the desired result. For example: Cumulus Linux for DC "switches", Cisco ASAv as DC firewalls, Juniper vSRX on the public edge, and Juniper vMX as a border router. Why not? They are just VMs, right?

The main problem is management.

Hold your breath. We are getting closer to the subject of this article: the essential parts of management.

As mentioned above, the only way this whole bunch of networking software can be called SDN is to add a dedicated management point.
Don't be confused: the network devices and that point do not share a control plane; all of them are functionally independent, but all of them are managed as a single service.

Let me rephrase. Imagine you have about 50 devices overall – firewalls, routers, switches – and all of them exist to serve a single purpose: to forward packets between points A and Z, with some restrictions, of course.
Can we consider the whole described infrastructure a hive? A change made on one side of the net usually shows up somehow on the others – add a new VLAN, and there will be more configuration needed elsewhere to make it work. After all, if that network is fully converged and its levels depend on each other, then the whole infrastructure must be managed as one organism!

The punchline: should an infrastructure with a huge number of network operating systems, each functioning in a completely different manner, be managed as a single entity? Are you insane?

Brave future

Tomorrow I would probably agree about the insanity, but today the future seems quite positive.

In reality, what options do we have for managing a multi-vendor or multi-OS infrastructure?

  • CLI – hand mode is not an option anymore:
    • if one part of the organism changes, the others should change automatically;
    • it is vendor-specific;
    • there is no consistency feedback;
    • commands are inserted manually, one by one.
  • SNMP – well, SNMP has already failed as a management option. We can gather some telemetry via strictly defined OIDs and receive some messages via SNMP traps, but that is all the protocol can offer us.

The future does not look so positive anymore – unless the industry has something to fill this lack of instruments.


And the names of our saviors are in the section's name.

YAML (wiki: eng|ru) is a human-readable data-serialization language. It has been chosen as a uniform format to represent an array of variables in a compact way. For network engineers, it is a smooth way to get into automation without being required to dive deep into programming. Examine the examples on the Wikipedia pages – it looks much easier than a Python script, doesn't it?
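As an illustration, a minimal YAML document describing a hostname-change task might look like this (a sketch: every key and value here is invented for the example, not taken from any particular tool):

```yaml
# Hypothetical task description; keys are illustrative, not a real schema
task: change-hostname
device:
  host: 192.0.2.1        # management address of the router
  port: 830              # default NETCONF-over-SSH port
  username: admin
action:
  set-hostname: core-rtr-01
```

Even someone who has never automated anything can read the intent of this file at a glance, which is exactly the point.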

NETCONF (wiki: eng|ru) is the Network Configuration Protocol, defined in RFC 6241. In a nutshell, this protocol allows you to install, manipulate, and delete the configuration of network devices.
Furthermore – and this is a very sweet moment – NETCONF uses SSH as a transport. Secure, reliable, TCP.
OK, check how to enable NETCONF on pretty much any Juniper device:
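On Junos, the standard way is a single configuration statement (NETCONF over SSH then listens on its default port, 830):

```
set system services netconf ssh
commit
```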

That’s it.

The synergy of YAML and NETCONF is the following:

  • the template of actions is described in YAML – for example, a system hostname change: host, port, authentication data, and the action itself;
  • NETCONF performs the <edit-config> operation and pushes the configuration as a YANG-modeled data structure (an XML tree) via RPC.
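To make the second step concrete, here is a sketch of what such an &lt;edit-config&gt; RPC body looks like, built with nothing but the Python standard library. The outer elements follow RFC 6241; the inner `configuration/system/host-name` tree is a Junos-style illustration, and in practice a NETCONF client library (e.g. ncclient) would frame and send this over SSH for you:

```python
# Sketch: constructing a NETCONF <edit-config> payload as an XML tree.
# Outer elements follow RFC 6241; the inner config subtree is illustrative.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(hostname: str) -> str:
    """Return an <edit-config> RPC body that sets the system hostname."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")  # edit the candidate datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    system = ET.SubElement(ET.SubElement(config, "configuration"), "system")
    ET.SubElement(system, "host-name").text = hostname
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("core-rtr-01")
print(payload)
```

The point is not this particular snippet but the shape: the change is a structured tree the device can validate as a whole, not a list of CLI strings.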

It's time to come back to our organism: a configuration change (not every one of them, of course) at one point should cause changes at others. In the traditional way, the set of configuration changes needed to implement one feature is presented as subsets of commands for several devices – which means every single command in a list of tens of commands is completely independent of the others. That is why a rollback plan accompanies every implementation plan. In the SDN era, that set of configuration changes should be presented as a transaction. All or nothing: configure all the devices at once, or roll back and show me why it cannot be done. NETCONF has this ability by default – if at any moment an operation encounters an error, the RPC session is torn down and all changes are cancelled.
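The "all or nothing" idea can be sketched in a few lines of Python. `Device` here is an invented stand-in for a real NETCONF session; the only point is the control flow – any failure undoes every change already made:

```python
# Sketch of transactional configuration: apply a change to every device,
# and if any of them fails, roll back the ones already changed.
# Device is a toy stand-in for a real NETCONF session; names are invented.

class Device:
    def __init__(self, name: str, fail: bool = False):
        self.name = name
        self.fail = fail          # simulate an edit-config error
        self.configured = False

    def edit_config(self) -> None:
        if self.fail:
            raise RuntimeError(f"{self.name}: edit-config failed")
        self.configured = True

    def rollback(self) -> None:
        self.configured = False

def apply_transaction(devices) -> bool:
    """Configure all devices, or none of them."""
    done = []
    try:
        for dev in devices:
            dev.edit_config()
            done.append(dev)
    except RuntimeError:
        for dev in reversed(done):   # undo already-applied changes
            dev.rollback()
        return False
    return True

fleet = [Device("fw1"), Device("rtr1", fail=True), Device("sw1")]
print(apply_transaction(fleet))                    # the whole change is refused
print(any(d.configured for d in fleet))           # and nothing stays half-applied
```

A real NETCONF candidate-datastore commit gives you this behavior on a single device for free; the sketch shows why the same semantics are wanted across the whole fleet.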

Additional information

I really hope this article clarifies the position of networking in the software-driven world for you.
The text has been written in a free manner to draw you onto a path of exploration; please accept my apologies if you would have preferred strict technical language.

My friend @hellt_ru, the author of the NOSHUT blog, recommended some very useful videos by Tail-f.

Channel link – here.

NETCONF and YANG Tutorial part 1a: NETCONF and YANG Overview
NETCONF and YANG Tutorial Part 1b: Relation to SDN?
NETCONF and YANG Tutorial part2 : NETCONF
NETCONF YANG Tutorial: part3, YANG



OpenConfig | Post at J-Net


Special thanks to my colleague, the holder of a Huawei-oriented blog, for pointing out some mistakes in English.
“dis this” is a short notation of the “display this” command, which shows the configuration from the current point of the CLI – an analog of the “show” command in Junos.