I interviewed Dan Grosz, who discussed The Provision of Next Generation Network Services.







It’s good to speak with you today, Dan, and I’m looking forward to hearing your views on the topic of the provision of next-generation network services. Before we start, can you provide a brief background of yourself?


Sure. Thank you, Dustin, it’s a pleasure to be with you again. I have over 25 years of corporate IT and management-consulting experience. Currently, I run a private hedge fund that focuses on technology companies, and from time to time, I also take on special corporate IT projects.


What is the next generation network?


The next generation network is a rapidly emerging network architecture. Its origins go back over 20 years, to the development of the Internet. Essentially, this new architecture allows the decoupling of physical infrastructure from network services and applications. From a supply chain perspective, the provisioning of network, bandwidth, and services, which used to be a very time-consuming and difficult process, switches from being reactive, with very long lead times, to being proactive and dynamically configurable. With this new-generation architecture, you can deploy services via software rather than having to rely on hardware. This provisioning and network architecture is completely IT-based, and it utilizes things like the SIP (Session Initiation Protocol) standard, devices such as session border controllers, and combines those with rules engines that manage signaling between endpoint devices.
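The rules-engine idea Dan mentions, software deciding how signaling between endpoints is handled, can be sketched in a few lines. This is a hypothetical illustration only, not a real SIP stack or any vendor's session border controller API; all names and methods are made up for the example.

```python
# Hypothetical sketch: a tiny rules engine deciding how to handle
# SIP-style signaling requests in software, in the spirit of a session
# border controller applying policy. Not a real SIP implementation.

RULES = [
    # (predicate on the signaling request, action to take) - first match wins
    (lambda req: req["method"] == "INVITE" and req["media"] == "video",
     "route-via-transcoder"),
    (lambda req: req["method"] == "INVITE",
     "route-direct"),
    (lambda req: True, "reject"),  # catch-all: anything unrecognized is rejected
]

def handle_signal(request):
    """Return the first matching action for a signaling request."""
    for predicate, action in RULES:
        if predicate(request):
            return action
    return "reject"

print(handle_signal({"method": "INVITE", "media": "audio"}))  # route-direct
print(handle_signal({"method": "INVITE", "media": "video"}))  # route-via-transcoder
```

The point of the sketch is that the routing behavior lives in a data structure (`RULES`) that can be changed at runtime, rather than in fixed hardware.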


What are some key drivers for it?


The basic driver is the exponential growth in demand for data, all kinds of data: video streaming, big data, * (2:14—unclear) services. That makes a traditional network architecture very, very unwieldy and difficult to work with. As companies need additional bandwidth, having to manually configure that, putting in the all the hardware is very time-consuming, expensive, and complex. The other driver is the need to fuse disparate data and the provision of network devices, such as the Internet of Things. It includes things like the unification of voice, video, Internet messaging, and business applications into rich, multimedia services. The network has to be able to do all these things which it previously did not have to deal with. Finally, security, the proliferation of security threats, which is a big driver for this new network architecture, which is a lot more secure.


Can you talk about the characteristics of the next-generation network?


Here are some of the characteristics. It’s virtual, which means that it’s not hardware-limited; you can provision network devices virtually via software rather than having to unbox, configure, and plug in physical devices. By being virtual, the network is very scalable. Another key characteristic is that it’s dynamic and agile; you can change the behavior and configuration of the network on the fly, and that’s based on rules-based policy enforcement and intelligence. Intelligence is built into the network, whereas, previously, you had to configure things manually, and the network remained very static and dumb, in a sense.
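The "dynamic and agile" characteristic, reconfiguring the network on the fly through rules-based policy rather than manual hardware changes, might be sketched like this. The function, thresholds, and numbers are all illustrative assumptions, not any real controller's interface.

```python
# Illustrative sketch of rules-based policy enforcement: the network's
# configuration is just data, adjusted on the fly by policy rather than
# by unboxing and cabling hardware. All names and thresholds are made up.

def enforce_policy(config, load_pct):
    """Return a new configuration adjusted to the observed load."""
    new = dict(config)
    if load_pct > 80:
        # surge: scale capacity up virtually
        new["bandwidth_mbps"] = config["bandwidth_mbps"] * 2
    elif load_pct < 20:
        # quiet period: release capacity, but keep a floor of 100 Mbps
        new["bandwidth_mbps"] = max(100, config["bandwidth_mbps"] // 2)
    return new

config = {"bandwidth_mbps": 500}
config = enforce_policy(config, load_pct=90)
print(config["bandwidth_mbps"])  # 1000
```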


Another key characteristic is that the new next-generation network is protocol independent. It’s standards-based and vendor-neutral. Finally, the most interesting aspect is that the new network is contextually aware, which means that it kind of knows what’s going on and is able to reconfigure itself based on what’s happening within the network.


What are the differences from a traditional network?


One of the biggest differences is that the bandwidth of the network can be modulated up or down without having to configure hardware, and this is done via network virtualization. You can apply bandwidth to where it's needed, which essentially eliminates the need to overprovision and have a lot of network capacity sitting idle; capacity is applied only when it's needed, during surges in data use. Applications will be able to dynamically reconfigure network resources on demand. The application itself will be able to signal to the network what it needs, and then the network will be able to provide that application the services required.
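The idea of an application signaling its needs and the network allocating from a shared pool on demand, instead of static overprovisioning, can be sketched as follows. The class, method names, and capacity figures are hypothetical stand-ins, not a real orchestration API.

```python
# Hypothetical sketch: applications request bandwidth from a shared
# virtual pool on demand and release it when done. Illustrative only.

class VirtualNetwork:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.allocated = {}  # app name -> Mbps currently granted

    def request(self, app, mbps):
        """Grant as much of the request as remaining capacity allows."""
        free = self.capacity - sum(self.allocated.values())
        granted = min(mbps, free)
        self.allocated[app] = self.allocated.get(app, 0) + granted
        return granted

    def release(self, app):
        """Return an application's bandwidth to the shared pool."""
        return self.allocated.pop(app, 0)

net = VirtualNetwork(capacity_mbps=1000)
print(net.request("video-training", 600))  # 600: fully granted
print(net.request("voip", 600))            # 400: only what is left
net.release("video-training")
print(net.request("voip", 600))            # 600: capacity reclaimed on the fly
```

Contrast this with a traditional network, where the equivalent of `request` is a hardware change with a long lead time.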


Finally, intelligent networks will be able to independently sort out and manage complex scenarios, which is completely impossible to do with traditional networks. When you have a lot of things going on at the same time over a given network, the rules engines and the built-in intelligence and self-awareness of this next-generation network will be able to understand the priorities and the intent of the network designers and handle how these things are managed.


Can you provide three use-case examples?


1. Sure. I thought one interesting use case would be a health care provider that needs to provide continuous staff training. These days, with things like Ebola and other threats, you need to provide a lot of training for your health professionals, and a lot of that is video-based; it's also interactive and typically sourced from a third-party vendor. All of a sudden, your network has to accommodate this tremendous video bandwidth, which traditional networks are not designed to handle. On a traditional network, that typically creates a crisis. When somebody tries to load one of these video-training sessions, the network all of a sudden bogs down and slows; it interrupts, let's say, voice or data services for other applications, and it becomes a big mess.


To sort that out, you would need to provide a lot more bandwidth, put in a lot more hardware, call the service provider, and get more network. You might not need all of that all the time, so you end up over-provisioning for one particular situation. With the next-generation network, based on the rules that you put into the network, the system will know that you're running a training session and that the training session is important; therefore, it might degrade or limit some other bandwidth temporarily to allow the training to proceed smoothly and then, when it's over, restore that bandwidth, switching the allocation of resources on the fly.


This would be very helpful for that company because you could have training and other services running at the same time. You can align the rules that you put in to coincide with training schedules, but the system could also be smart enough to know that if the CEO needs to have an important conference call, then it might degrade the training session so that the CEO has priority over the training.
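The priority logic in this use case, where a higher-priority flow like the CEO's call temporarily degrades a lower-priority one like the training video, can be sketched as a simple priority-ordered allocator. The traffic classes, priority values, and bandwidth numbers are invented for illustration.

```python
# A minimal sketch of the priority rules described above: traffic
# classes are ranked, and higher-priority flows are satisfied first,
# temporarily degrading lower-priority ones. Numbers are made up.

PRIORITY = {"ceo-conference": 0, "training-video": 1, "bulk-data": 2}  # lower = more important

def allocate(flows, capacity_mbps):
    """Allocate capacity in priority order; lower classes get the remainder."""
    result = {}
    for name, demand in sorted(flows.items(), key=lambda kv: PRIORITY[kv[0]]):
        granted = min(demand, capacity_mbps)
        result[name] = granted
        capacity_mbps -= granted
    return result

# Training alone gets its full demand; when the CEO call starts,
# the training session is degraded to make room.
print(allocate({"training-video": 800}, 1000))
print(allocate({"training-video": 800, "ceo-conference": 400}, 1000))
```

When the conference call ends and is dropped from `flows`, re-running `allocate` restores the training session to full bandwidth, which is the on-the-fly reinstatement Dan describes.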


2. Another use case might be a call center that needs to fuse and orchestrate both internal and external data to provide best-in-class customer service. It might need to get into a legacy ERP system to get supply chain information; then it might need to go outside of that legacy system and onto a social network to get information about the customer. Finally, it might need to go to some cloud service to get external data, fuse all those things, and provide them on a real-time basis to the customer service rep so they can deliver best-in-class customer service.


Traditionally, doing that with a legacy network could cause a lot of bottlenecks. The legacy ERP system could bog down, and the network might not have enough bandwidth to reach outside and get cloud services, so it could become a very painful experience to be on the phone with a customer waiting for all this data to be orchestrated. The next-generation network will be able to do that within predetermined key performance indicators so that the customer sees the information provided seamlessly.
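The orchestration step in this call-center scenario, fetching from several sources in parallel and fusing the results into one view, might look roughly like this. The three lookup functions are stand-ins for a real ERP query, social network API, and cloud service, which would each be network calls in practice.

```python
# Hedged sketch of data fusion for the customer-service use case:
# query multiple sources concurrently and merge them into one record.
# The lookup functions are illustrative stand-ins, not real APIs.

from concurrent.futures import ThreadPoolExecutor

def erp_lookup(customer_id):
    return {"open_orders": 2}        # stand-in for a legacy ERP query

def social_lookup(customer_id):
    return {"recent_mentions": 1}    # stand-in for a social network API

def cloud_lookup(customer_id):
    return {"credit_score": "A"}     # stand-in for an external cloud service

def fuse_customer_view(customer_id):
    """Run all lookups concurrently and merge them into one record."""
    sources = [erp_lookup, social_lookup, cloud_lookup]
    view = {"customer_id": customer_id}
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        for result in pool.map(lambda fn: fn(customer_id), sources):
            view.update(result)
    return view

print(fuse_customer_view("C-42"))
```

Running the lookups concurrently, rather than one after another, is what keeps the fused view inside a real-time service-level target even when one source is slow.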


3. Finally, another interesting and very important use case is that military and government emergency services oftentimes need to rapidly deploy forces to address crisis situations. This could be marshaling forces in some foreign country, or it could be responding in some remote area of the country that has a forest fire or an earthquake. All of a sudden, you need all of these units to come together and communicate, and they may not have worked together in that particular context. By having a configurable network that you can put up on the fly, one that is able to work across various protocols and codecs, which a hardwired, physical network would find very difficult to do, you now have the capability, using this next-generation system, to provide network services when they're needed in a crisis.


In all three of these scenarios, a virtual, dynamically configurable, smart network is the enabler.


Thanks, Dan, for sharing today on the topic of the provision of next-generation network services.


My pleasure, Dustin. Thank you for the opportunity.







About Dan Grosz




Dan Grosz


IT Executive and Information Architect.


LinkedIn Profile