Online Proceedings

Thursday, May 30, 2019

iPOP Plenary
Thursday, May 30, 2019, 10:00-12:20
Presider: Hiroaki Harai, NICT, Japan
Opening Address
Naoaki Yamanaka, General Co-Chair, Keio University, Japan
Bijan Jabbari, General Co-Chair, ISOCORE, USA
Keynote
K-1 "Evolution of technologies, applications, and eco-systems of coherent optics"
Masahito Tomizawa, NTT, Japan

Masahito Tomizawa

This keynote presents recent trends in optical communication systems, focusing on the evolution of "coherent" technologies. The contents include enabling technologies spanning ultra-long-haul 100 Gbit/s to short-haul 600 Gbit/s per wavelength, and rapidly growing application fields, such as hyper-scale data-center interconnection, that require coherent technologies. The presentation also covers the establishment of new eco-systems, in which different business models have to be developed.


Biography:

Masahito Tomizawa received M.S. and Dr. Eng. degrees from Waseda University in 1992 and 2000, respectively. He joined NTT in 1992, and since then he has been engaged in R&D, deployment, and international standardization of high-speed optical transmission systems. He is a Fellow of the Institute of Electronics, Information and Communication Engineers (IEICE) and a Fellow of the Optical Society of America (OSA).

K-2 "Social Value Creation Accelerated by AI and ICT Technologies"
Yuichi Nakamura, NEC, Japan

Yuichi Nakamura

Because of increasing urban populations and other global changes, various social problems, such as crime, lack of resources, and dilapidated infrastructure, are expected to arise in many areas. The integration of ICT (Information and Communication Technology) and AI (Artificial Intelligence) technologies is one of the most promising ways to solve such social problems. For example, NEC has achieved the No. 1 position in three biometrics applications (iris, face, and fingerprint recognition) by using signal processing, circuit design, parallel computing, neural networks, and other techniques. In this talk, the latest use cases for solving various social problems are introduced. In addition, the requirements that real social problems impose on AI and ICT technologies are presented.


Biography:

Yuichi Nakamura received his B.E. degree in information engineering and M.E. degree in electrical engineering from the Tokyo Institute of Technology in 1986 and 1988, respectively. He received his Ph.D. from the Graduate School of Information, Production and Systems, Waseda University, in 2007. He joined NEC Corp. in 1988 and is currently a vice president of R&D at NEC Corp. He is also a guest professor at the National Institute of Informatics and at Waseda University. He has more than 25 years of professional experience in digital and analog circuit design, electronic design automation, and signal processing.

iPOP Exhibition introduction
- iPOP Exhibition Co-Chair
Local Arrangement
- iPOP Local Arrangement Co-Chair
Technical Session
Tech. Session (1): AI-assisted/Automated Operation and Control
Thursday, May 30, 2019, 13:30-15:10
Chair: Takehiro Sato, Kyoto University, Japan
T1-1 "Machine Learning-assisted Network Analysis Framework for anomaly detection and RCA toward 5G"
Genichi Mori, Junichi Kawasaki, and Masanori Miyazawa, KDDI Research, Japan

Genichi Mori

1. Introduction
It is widely acknowledged that 5G network infrastructure will be based on network function virtualization (NFV) and software-defined networking (SDN), from the IP/optical transport network to the mobile network. Under 5G, achieving automated closed-loop operation has become a top priority for maintaining stable networks; in reality, however, this is difficult due to the increase in the number of unexpected failures and in the amount of managed data (e.g., alarms, performance data, and network topology) in NFV and SDN environments. In particular, software-based network functions such as those in NFV can behave unexpectedly even when each component works properly, since the number of software bugs in the code itself also increases, making the detection of failures with unexpected behavior even more difficult. Operators therefore want to comprehend such failures promptly, before they escalate into critical failures. In addition, the root cause analysis process is complex and takes considerable time, since an NFV environment consists of a huge number of hardware and software components.
To address this issue, we propose and develop a network analysis framework with machine learning (ML) technology. The framework has a scalable monitoring function proposed in our previous work [1], an automated fault generation function that intentionally generates failures to obtain the training dataset required for the trained models, and anomaly detection and root cause analysis functions that use ML to promptly analyse abnormal behavior and identify root causes.
In this presentation, we introduce our proposed framework and demonstrate it on our testbed, assuming a transport network.

2. Overview of Network Analysis Framework with ML Techniques
The overall architecture of the proposed framework is shown in Fig. 1. The framework has two phases: a learning phase and an operation phase.
In practice, it takes a long time to gather a training dataset from a commercial network. To address this delay, in the learning phase the framework intentionally generates a failure in a test network that has the same structure as the commercial network, so as to gather the training dataset promptly (step 1). It then obtains abnormal behavior data from network elements through telemetry (step 2) and finally stores the dataset, adding label information on the fault point and cause (step 3). To support several types of failure, this three-step procedure is automatically repeated until all possible failures have been emulated. Then, the trained models for anomaly detection and root cause analysis are created using the dataset and general learning models (e.g., Isolation Forest and Auto-Encoder for anomaly detection, Random Forest and Deep Neural Network for root cause analysis).
In the operation phase, abnormal behavior occurring in the commercial network and the root cause of the failure are analysed by the trained models using telemetry data from the commercial network. Once an abnormality occurs, the framework can detect it and notify the operator with an alert indicating the prospective root cause.
We developed the framework using open source software and demonstrated it on our testbed. The demonstration results show that the trained models were successfully created from the training dataset gathered from the test network, and that the models detected the failure and identified its root cause correctly.
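As a rough illustration of the two-phase scheme, the model families named above (Isolation Forest for anomaly detection, Random Forest for root cause analysis) can be combined as in the sketch below. The telemetry features, fault labels, and numbers are invented for the example and are not the authors' data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
# synthetic telemetry: columns stand in for CPU load, packet loss, latency
normal = rng.normal(loc=[50, 0.1, 5], scale=[5, 0.05, 1], size=(500, 3))
faulty = rng.normal(loc=[95, 5.0, 50], scale=[3, 1.0, 10], size=(50, 3))

# learning phase: anomaly detector fitted on healthy telemetry only
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# RCA classifier fitted on labelled data gathered via fault injection
X = np.vstack([normal, faulty])
y = np.array([0] * len(normal) + [1] * len(faulty))  # 0 = healthy, 1 = link fault
rca = RandomForestClassifier(random_state=0).fit(X, y)

# operation phase: score fresh telemetry, then ask the RCA model for a cause
sample = np.array([[96.0, 4.8, 55.0]])
print(detector.predict(sample))  # -1 means anomalous
print(rca.predict(sample))       # predicted root-cause label
```

The key point mirrored here is that the RCA model needs labelled failures, which is exactly what the automated fault generation function supplies.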
T1-1_Fig1

Fig.1 Proposed Framework


Acknowledgement: This work was conducted as part of the project entitled "Research and development for innovative AI network integrated infrastructure technologies" supported by the Ministry of Internal Affairs and Communications, Japan.


Reference:

  1. Genichi Mori, Junichi Kawasaki and Masanori Miyazawa, "Hierarchical Model-Driven Telemetry Analysis Framework based on YANG for Optical Network," iPOP2018, May 2018.



Biography:

Genichi Mori received his B.S. in electronic engineering from Seikei University in 2007 and M.E. in information and communication engineering from the University of Electro-Communications in 2009. He joined KDDI Corporation in 2009 and has been engaged in the operation and development of IP core network systems and L3 VPN systems. Since 2017, he has been working at KDDI Research, Inc., engaged in network operation automation.

T1-2 "Autonomic Resource Management in Service Function Chaining Platform"
Takahiro Hirayama, Ved P. Kafle, NICT, Japan

Ved P. Kafle

1. Introduction
Service function chaining (SFC) is a framework for the placement of the virtual network functions (VNFs) required for processing the traffic of application services provided over network function virtualization infrastructure (NFVI) [1,2]. A service function (SF) chain contains a series of VNFs such as a load balancer, firewall, intrusion detection system (IDS), and contents server. Currently, an SFC construction procedure takes much time from receiving a construction request to starting to provide the service to customers. Moreover, to deal with time-varying network environments and diverse quality-of-service (QoS) requirements in future networks, automating not only SFC construction but also adjustment is essential to shorten the network configuration/reconfiguration time. Therefore, in this paper, we introduce our research activities on the automation of the deployment and periodic adjustment of the computational resources assigned to each VNF in an SFC platform.

2. Procedures
For the on-demand allocation and dynamic adjustment of computational resources, our prior work [3] proposed an internetwork scaling framework, which consists of the following three steps, as shown in Fig. 1: A) resource arbitration among the various functions deployed in the same server node, B) VNF migration from one server node to another while keeping the communication path unchanged, and C) SFC reconfiguration by migrating functions from one server node to another while changing the communication path. We proposed a resource adjustment method that includes processes for autonomic resource arbitration among services and VNF migration along the same SF chain, in order to keep the operation of SF chains stable [3]. It reduces the number of CPU-saturation events due to VNF overloading. Additionally, we have proposed a machine learning (ML)-based autonomic service function migration method [4] that determines when, and which, VNF needs to be migrated to mitigate server overload.
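Step A can be pictured with a minimal sketch. This is not the authors' algorithm: the proportional-share policy, the VNF names, and the capacity figures are all illustrative assumptions:

```python
def arbitrate(demands, capacity):
    """Share a server node's CPU capacity among co-located VNFs.

    If the total demand fits, every VNF gets what it asks for; otherwise
    the capacity is split in proportion to demand (step A, arbitration).
    demands: {vnf_name: requested CPU units}; capacity: available units.
    """
    total = sum(demands.values())
    if total <= capacity:
        return dict(demands)  # no contention, grant everything
    return {vnf: capacity * d / total for vnf, d in demands.items()}

# overloaded node: 8 units demanded, only 4 available
print(arbitrate({"firewall": 2.0, "ids": 6.0}, capacity=4.0))
# {'firewall': 1.0, 'ids': 3.0}
```

When arbitration alone cannot relieve the node, steps B and C (migration and reconfiguration) move a VNF elsewhere, as described above.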

3. Experiment System
We are developing an SFC platform as a proof of concept (PoC) on the basis of the standards specified in IETF RFCs 7665 and 8300. In this platform, as shown in Fig. 2, packets sent from end hosts are encapsulated with a service function header (SFH) that contains the SFC identifier. By using the SFH, packets are steered to their SF chains by the service classifier (SC). Since the control plane operation for autonomic management has not been standardized yet, we have designed an autonomic resource management system ourselves. As a first step, we have installed part of the autonomic resource arbitration mechanism, which allocates an appropriate amount of CPU resources to VNFs as demand increases. By using Cefore [5], we confirmed that our SFC platform is suitable for introducing new types of networking services, such as CCN-based video streaming services. In the future, we plan to implement the other components of the autonomic resource management mechanism to establish autonomous management of computing resources.
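The service function header corresponds to the Network Service Header of RFC 8300 [2]. As a sketch (assuming MD Type 2 with no metadata; this is the standard field layout, not the project's code), its first eight bytes, the base header plus the service path header carrying the chain identifier, can be packed like this:

```python
import struct

def nsh_header(spi: int, si: int, next_proto: int = 0x01, ttl: int = 63) -> bytes:
    """Pack the NSH base + service path headers (RFC 8300, MD Type 2).

    spi: 24-bit Service Path Identifier (selects the SF chain)
    si:  8-bit Service Index (position in the chain, decremented per SF)
    """
    ver, o_bit, length, md_type = 0, 0, 2, 0x2   # length in 4-byte words
    word1 = (ver << 30) | (o_bit << 29) | ((ttl & 0x3F) << 22) \
            | ((length & 0x3F) << 16) | ((md_type & 0xF) << 8) | (next_proto & 0xFF)
    word2 = ((spi & 0xFFFFFF) << 8) | (si & 0xFF)
    return struct.pack("!II", word1, word2)

# chain 42, first hop: the SC would prepend this before forwarding
hdr = nsh_header(spi=42, si=255)
print(hdr.hex())
```

Each SF along the chain decrements the service index, which is how the platform tracks a packet's position in the chain.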

T1-2_Fig1

Fig.1 Internetwork scaling (Arbitration, Migration, and Reconfiguration).



T1-2_Fig2

Fig.2 PoC implementation of the SFC platform.


References:

  1. IETF RFC 7665, "Service Function Chaining (SFC) Architecture," Oct. 2015.
  2. IETF RFC 8300, "Network Service Header (NSH)," Jan. 2018.
  3. T. Miyazawa, M. Jibiki, V. P. Kafle, and H. Harai, “Autonomic Resource Arbitration and Service-Continuable Network Function Migration along Service Function Chains,” IEEE/IFIP NOMS, Apr. 2018.
  4. T. Hirayama, T. Miyazawa, M. Jibiki, and V. P. Kafle, “Service Function Migration Scheduling based on Encoder-Decoder Recurrent Neural Network,” IEEE NetSoft, June 2019 (to appear).
  5. Cefore. https://cefore.net/



Biography:

VED P. KAFLE received a B.E. in Electronics and Electrical Communications from Punjab Engineering College (now PEC University of Technology), India, an M.S. in Computer Science and Engineering from Seoul National University, South Korea, and a Ph.D. in Informatics from the Graduate University for Advanced Studies, Japan. He is currently a research manager at National Institute of Information and Communications Technology (NICT), Tokyo, and concurrently holding a visiting associate professor’s position at the University of Electro-Communications, Tokyo. He has been serving as a Co-rapporteur of ITU-T Study Group 13 since 2014. He is an ITU-T Study Group 13 Fellow. His research interests include network architectures, 5G networks, Internet of things (IoT), directory service, smart cities, network supported automated driving infrastructure, network security, network service automation by AI and machine learning, network function virtualization (NFV), software defined networking (SDN), and resource management. He received the ITU Association of Japan’s Encouragement Award and Accomplishment Award in 2009 and 2017, respectively. He received three Best Paper Awards at the ITU Kaleidoscope Academic Conferences in 2009, 2014 and 2018. He is a senior member of IEEE and a member of IEICE.

T1-3 "Enhancement of search-based automatic service configuration designing with tacit heuristics"
Takashi Maruyama, Takayuki Kuroda, Takuya Kuwahara, Yoichi Sato, Hideyuki Shimonishi, and Kozo Satoda, NEC, Japan

Takashi Maruyama

In this talk, we present an application of reinforcement learning to our prior work, "Weaver: Search-based service configuration designing with fine grained architecture refinement rules". The prior work formulates the process of designing service configurations as a tree-search problem by exploiting fine-grained models of system components, so that deployable service configurations are found as solutions of the search. However, due to the fineness of the models, the search space turns out to be O(N^N) in the worst case. Hence, we exploit an evaluation function as a heuristic to improve search performance. When designing evaluation functions, one characteristic of the search tree to consider is that deployable service configurations are located in deep layers of the tree, far from the root, which suggests that evaluation functions should be designed to predict goal-oriented value.
Motivated by this problem, we propose an application of reinforcement learning to the search scheme of our prior work. We first define the tacit heuristic, which can be seen as an alternative to the criteria that human experts form from their wealth of experience and intuition throughout the process of designing systems. We then introduce reinforcement learning as a promising way to obtain a tacit heuristic as a goal-oriented heuristic. Finally, we show evaluation results suggesting that the resulting tacit heuristic is sufficiently well trained to find deployable service configurations.
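The role of a goal-oriented evaluation function in the search can be sketched with a generic best-first search. The toy states, the refinement step, and the hand-written value function below merely stand in for Weaver's component models and the trained tacit heuristic:

```python
import heapq

def best_first_search(start, expand, is_goal, value):
    """Best-first tree search guided by an evaluation function.

    expand(state) -> successor states (refinement rule applications)
    value(state)  -> higher means closer to a deployable configuration
    """
    frontier = [(-value(start), 0, start)]
    seen, tie = set(), 1
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        if state in seen:
            continue
        seen.add(state)
        for nxt in expand(state):
            heapq.heappush(frontier, (-value(nxt), tie, nxt))
            tie += 1
    return None

# toy example: states are tuples grown one element at a time toward a target
target = (1, 1, 1)
result = best_first_search(
    start=(),
    expand=lambda s: [s + (0,), s + (1,)] if len(s) < 3 else [],
    is_goal=lambda s: s == target,
    value=lambda s: sum(1 for a, b in zip(s, target) if a == b),  # stand-in for a trained value
)
print(result)  # (1, 1, 1)
```

With a well-trained, goal-oriented value function the search descends almost directly to a deep goal state instead of exploring the exponential frontier.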
T1-3_Fig1

Fig.1





Biography:

■Education and professional experience
・2014 MSc. (Mathematics), Graduate School of Mathematics, Nagoya University, Aichi, Japan
・2017 Ph.D. (Mathematics), Graduate School of Mathematics, Nagoya University, Aichi, Japan
■Employment
・2014, Internship student, NEC Corporation, Japan
・2017-pres, Researcher, System Platform Research Laboratories, NEC Corporation, Japan
■Research Interest
・Application of machine learning techniques to the design of ICT systems, especially systems with heterogeneous infrastructure
・Directed attributed graph theory, machine learning
■Bibliography
・Takayuki Kuroda, Takuya Kuwahara, Takashi Maruyama and Yoichi Sato, “Search-based network design generation scheme for closed-loop automation of network operations”, IEICE Technical Report, vol. 118, no. 118, ICM2018-11, pp. 1-6, July 2018.
・Takashi Maruyama, Takuya Kuwahara, Yutaka Yakuwa, Takayuki Kuroda and Yoichi Sato, “Accelerated Search for Search-Based Network Design Generation Scheme with Reinforcement Learning”, IEICE Technical Report, vol. 118, no. 483, ICM2018-71, pp. 123-128, March 2019.

T1-4 "Proposal of a Function-oriented Network Model for Automatic Control of IoT Network Orchestration"
Takamichi Nishijima, Hiroshi Tomonaga, Jo Sugino, Yu Minakuchi, and Hideyuki Matsuda, Fujitsu, Japan

Takamichi Nishijima

To achieve digital services, networks must be modified rapidly and dynamically according to changes in the services. To accomplish this, it is necessary to grasp what a user or network operator wants to do (hereinafter called "intent") and to automatically build and configure networks that satisfy that intent. In addition, in order to modify networks suitably, it is important to understand why the current network configurations and parameters are defined as they are.
In order to clearly understand how the current network configurations and parameters were derived, we propose a function-oriented network model, represented by a consistent graph that contains intent, required network functions, network devices and configuration parameters as nodes, and their relationships as edges (Fig. 1). This model can be built in the following four steps.

  1. Interpret intent as network requirements (hereinafter called “interpreted intent”).
  2. Derive the network functions required to satisfy the interpreted intent.
  3. Identify network devices that can activate the network functions in consideration of their capability.
  4. Determine the configuration parameters of the devices to satisfy the interpreted intent.

The point here is that the relationships between the input and output of each step are recorded as edges on the graph.
This model is useful in that it makes it easy to identify which devices and parameters should be modified when part of the intent changes, by tracing the relationships in the graph. Moreover, when the same network function is derived from different interpreted intents, the model can also be used to share an existing device and configuration without activating another device.
We applied the proposed model to a prototype of an IoT network orchestrator [1]. The orchestrator automatically builds and configures networks between IoT devices and IoT applications/services (Fig. 2). It activates data consolidators, data distributors, VPN functions and static routing functions to satisfy intents such as IoT application demands and network policies. In an evaluation of the prototype, we confirmed that it can automatically generate the proposed model according to intent. Moreover, we also confirmed that, when IoT application demands change, it can adjust suitable parameters, such as the sending frequency of the data consolidator and the destinations of the data distributor, by utilizing the relationships in the graph.
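The four-step model reduces to a directed graph whose edge layers correspond to steps 1-4, and "which parameters does this intent change affect?" becomes reachability. A toy rendering (all node names are invented for illustration):

```python
# one edge layer per step: intent -> requirement -> function -> device -> parameter
edges = {
    "intent:low-latency-telemetry": ["req:max-delay-10ms"],   # step 1
    "req:max-delay-10ms": ["func:data-consolidator"],         # step 2
    "func:data-consolidator": ["dev:gw-01"],                  # step 3
    "dev:gw-01": ["param:send-interval=100ms"],               # step 4
}

def affected(node, graph):
    """Trace every downstream node reachable from a changed node."""
    out, stack = [], [node]
    while stack:
        n = stack.pop()
        for child in graph.get(n, []):
            out.append(child)
            stack.append(child)
    return out

print(affected("intent:low-latency-telemetry", edges))
```

Sharing a function across intents corresponds to two requirement nodes pointing at the same function node, which is exactly the reuse case described above.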


T1-4_Fig1

Fig.1 Function-oriented network model



T1-4_Fig2

Fig.2 IoT network orchestrator with function-oriented network model



Acknowledgment:
This work is partially supported by the R&D contract “Wired-and-Wireless Converged Radio Access Network for Massive IoT Traffic for radio resource enhancement” with the Ministry of Internal Affairs and Communications, Japan.


Reference:

  1. H. Tomonaga, M. Kuwahara, H. Nakazato, and A. Nakao, “IoT-centric Network Orchestration Technology on Automatic Control Architecture for Wired / Wireless Network Virtualization,” in Proceedings of the 2019 IEICE General Conference, March 2019 (in Japanese).



Biography:

Takamichi Nishijima received his B.E., M.E. and Ph.D. degrees from Osaka University, Japan, in 2009, 2011 and 2014, respectively. He joined Fujitsu laboratories in 2014. Since then, he has been involved in research and development activities in the SDN, NFV and automatic network control field. He is a member of IEEE and IEICE.


Tech. Session (2): Advanced Network Design
Thursday, May 30, 2019, 15:20-17:00
Chair: Takeshi Kawasaki, NTT, Japan
T2-1 "Experimental Evaluation of Power Reduction Effect in Energy Efficient Data Center Network HOLST Using Operated Data Center Traffic Data"
Masaki Murakami, Masahiro Matsuno, Satoru Okamoto, and Naoaki Yamanaka, Keio University, Japan

Masaki Murakami

With the expansion of data center (DC) use, the power consumption of data center networks (DCNs) is rapidly increasing. In the conventional leaf-spine DCN architecture, 85% of the power consumption of DC equipment is for servers and disks and the rest is for the network [1]. If the power consumption of the leaf layer and the spine layer are almost equal, spine switches (Spine SWs) account for 8% of the total and leaf switches (Leaf SWs) for 7%. A DCN named HOLST (High-speed Optical Layer 1 switch system for timeslot switching based optical data center networks), which reduces the switching power consumption of Spine SWs by introducing optical slot switching (OSS) and optical circuit switching (OCS) into the data center, has been proposed [2]. HOLST traffic is categorized into small-capacity flows named Mice Flows (MFs) accommodated in EPS, medium-capacity flows named Doggy Flows (DFs) accommodated in OSS, and large-capacity flows named Elephant Flows (EFs). As a flow classification method for HOLST, classification using hierarchical least recently used (LRU) queues [3] has been proposed. Flow control in HOLST works as follows. The network that accommodates a flow arriving at the Top of Rack switch (ToR SW) from a server is determined by the flow classification function implemented in the ToR SW hardware. The classification result is sent to the Software Defined Networking (SDN) controller, which changes the configuration of the ToR SW and Spine SW accordingly. The flow is then accommodated in the network corresponding to the classification result and reaches the destination ToR SW.
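The LRU idea behind the classifier can be illustrated loosely: flows that keep hitting a small LRU cache accumulate enough hits to be promoted out of the "mice" class. This single-queue toy is only a stand-in for the hierarchical scheme of [3]; the cache size, threshold, and flow IDs are made up:

```python
from collections import OrderedDict

def classify_flows(packets, lru_size=4, promote_threshold=3):
    """Toy LRU-based flow classifier.

    Flows that survive in the LRU cache long enough to reach
    `promote_threshold` hits are promoted (mice -> larger-flow candidates);
    cold flows are evicted before they accumulate hits.
    """
    lru = OrderedDict()   # flow_id -> hit count, oldest first
    promoted = set()
    for fid in packets:
        if fid in lru:
            lru[fid] += 1
            lru.move_to_end(fid)            # refresh recency
            if lru[fid] >= promote_threshold:
                promoted.add(fid)
        else:
            if len(lru) >= lru_size:
                lru.popitem(last=False)     # evict least recently used
            lru[fid] = 1
    return promoted

pkts = ["a", "b", "a", "c", "a", "d", "e", "b", "f"]
print(classify_flows(pkts))  # {'a'}
```

In HOLST the promotion decision is what triggers the SDN controller to move a flow from EPS to the optical networks.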
We have experimentally introduced the HOLST prototype system, which performs flow classification using the hierarchical LRU queue, in an actual data center. In the prototype, the flow classification machine classifies flows obtained by port mirroring and sends the classification results to the Ryu SDN controller, which configures the VLAN settings of the ToR SW. Each flow reaches the destination ToR SW by VLAN forwarding. Based on [4], the estimated switching power consumption per bit is 0.365 nJ/bit for optical switches and 6.88 nJ/bit for electrical switches, and we confirmed a 49% reduction in the switching power consumption of Spine SWs by accommodating 5 of 20 flow IDs, determined based on the destination IP address, in the optical network. This contributes to a 4% reduction in overall DCN power consumption.
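The reported savings follow from the per-bit energies quoted above. A back-of-the-envelope check (the offloaded traffic fraction is our assumption, chosen so as to reproduce the 49% figure; it is consistent with 5 of 20 flow IDs carrying roughly half the traffic volume):

```python
E_ELEC = 6.88e-9  # J/bit, electrical switching (per [4])
E_OPT = 0.365e-9  # J/bit, optical switching (per [4])

def spine_power_saving(offload_fraction: float) -> float:
    """Fraction of spine switching power saved when `offload_fraction`
    of the traffic (by volume) moves from electrical to optical switching."""
    return offload_fraction * (1 - E_OPT / E_ELEC)

# offloading about half the traffic reproduces the reported 49% figure;
# with the spine layer at 8% of DC power, that is about a 4% overall cut
saving = spine_power_saving(0.52)
print(round(saving, 2), round(saving * 0.08, 3))  # 0.49 0.039
```

The near-19x gap between the two per-bit energies is why even a partial offload to the optical layer yields a large spine-layer saving.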

T2-1_fig1

Fig.1


References:

  1. T. Hartno, "Green Data Center," https://www.cisco.com/c/dam/global/en_id/training-events/cnsf2008/files/Cisco_Green_Data_Centre.pdf
  2. M. Hirono, T. Sato, J. Matsumoto, S. Okamoto and N. Yamanaka, "HOLST: Architecture design of energy-efficient data center network based on ultra High-speed Optical Switch," 2017 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), Osaka, 2017, pp. 1-6.
  3. Yukihiro Imakiire, Masayuki Hirono, Masaki Murakami, Satoru Okamoto, and Naoaki Yamanaka, “Flow/Application Triggered SDN control in Hybrid Data-center Network "HOLST",” 41st The Optical Fiber Communication Conference and Exhibition (OFC2018), Tu3D.6, March 2018.
  4. S. J. B. Yoo, “Energy efficiency in the future internet: The role of optical packet switching and optical-label switching,” IEEE Journal of Selected Topics in Quantum Electronics, vol. 17, no. 2, pp. 406–418, March 2011.



Biography:

Masaki Murakami received his B.E. degree from Keio University in 2018. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.

T2-2 "Carrier's expectation for disaggregation network and approach to controller development"
Aki Fukuda, Masatoshi Saito, Yoshinori Koike, and Hirotaka Yoshioka, NTT, Japan

Aki Fukuda

The current carrier network is operated with dedicated equipment manufactured by a vendor for each provided service, but this approach has issues:

  • It is difficult for us to make partial function changes since the implementation is a black box.
  • There is a risk that local failures will affect a wide area due to the large number of parts that make up the equipment.

To overcome these problems, the study and implementation of network disaggregation (the separation of hardware and software), which has so far advanced mainly in data center networks, has in recent years also been promoted in carrier networks. With the realization of a disaggregated network, carriers have the following expectations, in addition to overcoming the issues described above (Fig. 1). We also think that carriers need to develop their own control functions (i.e., controllers) in order to secure carrier requirements and to realize quick function changes. If we develop the controller ourselves, the following avenues can be considered.

  1. We build all of controller function by ourselves.
  2. We build most of the controller functions by ourselves, and use conventional SDN technologies such as OSS and various standards as parts.
  3. We build a controller based on a conventional SDN framework, and implement the necessary functions by ourselves.

Considering the ease and rapidity of development, it is advantageous to use a development framework. However, it is then difficult to ensure reliability and maintainability; for example, we need to perform quality assurance covering the framework itself. Therefore, since 2014 we have been developing a controller by method (2), with the concept of simultaneously securing ease and rapidity of development together with carrier-grade reliability and maintainability.
On the other hand, as a new use case of carrier SDN, expectations for IP/optical control cooperation, aimed at medium/short-distance communication such as DCI (Data Center Interconnection), have been increasing greatly in recent years. In particular, the movement toward equipment commoditization including the optical layer, led by open communities (e.g., TIP, ONF), and the study of common interfaces and APIs are accelerating, and standards are being settled. Implementations of conventional controller frameworks are also progressing in accordance with this movement. Given these trends, and the increasing diversity and immediacy of service provision expected in the future, carriers should reconsider how to utilize conventional controller frameworks.
Therefore, we reviewed the requirements on a conventional controller framework for developing a controller for the disaggregated carrier network, and then surveyed and analyzed how the major frameworks respond to those requirements. Based on the results, we propose a carrier-oriented controller that presumes the use of such a framework.

T2-2_Fig1

Fig.1 Carrier Expectations for Disaggregation network




Biography:

Aki Fukuda received the B.E. and M.E. degrees from Akita University in 2007 and 2009, respectively. She joined Nippon Telegraph and Telephone Corporation (NTT) in 2009. Since then, she has carried out research on the control and operation of next-generation IP and optical transport networks. She is currently researching SDN technology for wide-area IP and optical transport networks. She is a member of IEICE.

T3-1 "Probabilistic Protection Model for Virtual Networks against Multiple Facility Node Failures"
Fujun He, Takehiro Sato, and Eiji Oki, Kyoto University, Japan

Fujun He

Network virtualization technology enables multiple tenants to share the same physical infrastructure of a cloud provider [1]. Services from a cloud provider can be represented as some virtual networks (VNs) embedded in the same substrate network (SN). Each VN is composed of a set of virtual nodes and a set of virtual links. A virtual node is embedded in a substrate facility node and utilizes a certain computing capacity. A virtual link requires a certain bandwidth capacity and is mapped through a substrate path in the SN.
Potentially suffering from multiple simultaneous facility node failures is a critical issue for cloud providers in practical applications [2]. Several services become unavailable and significant revenue is lost if a facility node fails and cannot be recovered promptly. To survive any facility node failure, as shown in Fig. 1, a cloud provider is always equipped with some dedicated backup facility nodes, which are prepared to host the workloads of failed primary facility nodes. The backup facility nodes can be assumed not to fail. Simply mirroring the primary resources in backup facility nodes protects against every random failure scenario, which means 100% protection is provided. However, this straightforward approach is not cost effective, since it requires double the amount of computing capacity in the cloud provider.
This presentation proposes a probabilistic protection model for VNs against multiple facility node failures that jointly considers backup computing and bandwidth resource allocation. Providing probabilistic protection can reduce the total cost of protection by allowing the virtual nodes to share the backup computing capacity. A probabilistic protection guarantee is introduced to assure the recovery of primary facility nodes from any random failure with no less than a certain probability. We consider that each primary facility node fails independently with probability p. Figure 2 shows the optimal backup computing resource allocation for the VNs in Fig. 1 when p = 10^-2 and 99.99% protection is guaranteed, where the bandwidth capacity and flow constraints for backup bandwidth resource allocation of virtual links are considered. Figure 3 shows the probability of each failure case and the corresponding failed computing capacity in primary facility nodes n2^P and n5^P, which indicates that reserving five units of backup computing capacity in backup facility node n1^B guarantees a probability of successful recovery of no less than 99.99%. Finally, compared to providing 100% protection, in total about 30% of the backup computing capacity is saved by providing 99.99% protection.
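The guarantee can be checked by brute force for a small instance. The per-node demands below are illustrative assumptions, not values from the paper; p = 10^-2 and the 99.99% guarantee follow the text:

```python
from itertools import product

def min_backup_capacity(demands, p, guarantee, eps=1e-12):
    """Smallest backup capacity C such that the total demand of failed
    primary nodes exceeds C with probability at most 1 - guarantee,
    each node failing independently with probability p."""
    scenarios = []
    for fails in product([0, 1], repeat=len(demands)):  # all failure patterns
        prob, lost = 1.0, 0
        for f, d in zip(fails, demands):
            prob *= p if f else 1 - p
            lost += d if f else 0
        scenarios.append((lost, prob))
    for cap in range(sum(demands) + 1):
        if sum(pr for lost, pr in scenarios if lost > cap) <= (1 - guarantee) + eps:
            return cap
    return sum(demands)

# two primary nodes demanding 4 and 5 units, p = 1e-2: covering the worst
# single failure (5 units) already meets the 99.99% guarantee, since a
# double failure occurs only with probability 1e-4
print(min_backup_capacity([4, 5], p=1e-2, guarantee=0.9999))  # 5
```

Raising the guarantee to 100% forces full mirroring (9 units here), which illustrates the saving that probabilistic protection buys.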


T3-1_Fig1

Fig.1 Primary and backup facility nodes in SN.


T3-1_Fig2

Fig.2 Backup computing resource allocation of proposed probabilistic protection model.


T3-1_Fig3

Fig.3 Probability and failed computing capacity for each failure case in primary facility nodes n2^P and n5^P.

References:

  1. M. Chowdhury et al., "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862-876, 2010.
  2. J. Tsidulko, "The 10 biggest cloud outages of 2018 (so far)," 2018.




Biography:

Fujun He received the B.E. and M.E. degrees from University of Electronic Science and Technology of China, Chengdu, China, in 2014 and 2017, respectively. He is currently pursuing the Ph.D. degree at Kyoto University, Kyoto, Japan. He was an exchange student in The University of Electro-Communications, Tokyo, Japan, from 2015 to 2016. His research interests include modeling, algorithm, optimization, resource allocation, survivability, and optical networks.

T2-4 "Analysis of network architecture for machine-to-machine service network platform"
Takehiro Sato, Eiji Oki, Kyoto University, Japan

Takehiro Sato

The Machine-to-Machine (M2M) service network platform [1] has been presented as a network architecture that provides computation resources for Internet-of-Things (IoT) devices to execute their tasks. Figure 1 shows an overview of the M2M service network platform. This platform has a tree-structured topology that aggregates sensor data and task queries from devices connected to edge nodes toward a center cloud. Computation resources are deployed on each network node so that the amount of computation resources increases from downstream (i.e., edge-side) nodes to upstream (i.e., cloud-side) nodes. Program files required for executing the task of each device are placed on the network nodes so that the requirements of each task, such as link bandwidth and latency, are satisfied. Each device can use only the computation resources of network nodes located on the path between the center cloud and the edge node that hosts the device.
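The path constraint is easy to make concrete for the perfect binary tree case (the (stage, index) node labelling is ours, for illustration):

```python
def usable_nodes(leaf: int, m: int):
    """Nodes whose computation resources a device on edge node `leaf`
    may use, in a perfect binary tree with m node stages (stage 1 =
    edge nodes, stage m = center cloud). Nodes are (stage, index) pairs;
    each parent index is the child index halved."""
    path, idx = [], leaf
    for stage in range(1, m + 1):
        path.append((stage, idx))
        idx //= 2
    return path

# M = 4, eight edge nodes: a device on edge node 5 can use exactly
# the four nodes on its path to the center cloud
print(usable_nodes(5, 4))  # [(1, 5), (2, 2), (3, 1), (4, 0)]
```

A program file is usable by a device only if it sits on one of these path nodes, which is what couples the placement problem to the tree architecture studied here.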
In [1], program file placement methods for the M2M service network platform were studied. The work in [1] mainly focused on the optimization problem and heuristic algorithms for determining the program file placement when the architecture of the platform is given. The number of program files placed on the platform to execute all tasks of connected devices was evaluated. However, only a perfect binary tree was considered as the platform architecture in [1]. To develop a design policy for a platform architecture that utilizes computation resources effectively, we should investigate how the characteristics of the architecture, such as the number of node stages and the number of nodes per stage, affect the performance of program file placement.
In this presentation, we analyze the impact of the network architecture of the M2M service network platform on the performance of program file placement. We examine several types of architecture, and evaluate the number of required program files when the task requirements of devices change.
Figure 2 shows examples of examined architectures, each of which accommodates eight devices. M represents the number of node stages from edge nodes to the center cloud. Figure 3 shows the number of placed program files when the architectures in Fig. 2 are used. The average over 100 trials is shown in Fig. 3. In each trial, the number of program files that each device requests is selected randomly from a uniform distribution on the integers 1, 2, …, 8. The type of each program file is selected randomly from among eight types. The placement of program files is calculated by solving an optimization problem that minimizes the number of placed program files. The latency requirement of each device is set so that request blocking does not occur. As shown in Fig. 3, the perfect binary tree topology (M = 4) reduces the required number of placed program files compared to the other two topologies, at the cost of securing more places to deploy computation resources.
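As an illustration, the per-trial request generation described above can be sketched in a few lines of Python (a minimal sketch under our own naming; the optimization solver that computes the actual placement is out of scope here):

```python
import random

NUM_DEVICES = 8      # devices accommodated by each architecture (from the text)
NUM_FILE_TYPES = 8   # distinct program file types (from the text)

def generate_requests(seed=None):
    """Generate one trial's task requests: each device asks for a
    uniformly random number of program files (1..8), each of a
    uniformly random type (out of 8 types)."""
    rng = random.Random(seed)
    requests = []
    for _ in range(NUM_DEVICES):
        n_files = rng.randint(1, NUM_FILE_TYPES)          # uniform on 1..8
        types = [rng.randrange(NUM_FILE_TYPES) for _ in range(n_files)]
        requests.append(types)
    return requests

# One trial of requests; a fixed seed makes the trial reproducible
trial = generate_requests(seed=0)
```

Averaging the solver's objective over 100 such trials yields the curves reported in Fig. 3.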

T2-4_Fig1

Fig.1 Overview of M2M service network platform [1].

T2-4_Fig2

Fig.2 Examined architectures.

T2-4_Fig3

Fig.3 Number of placed program files.

Acknowledgment:
This work was supported in part by JSPS KAKENHI, Japan, under Grant Numbers 15K00116, 18H03230, and 19K14980.


Reference:

  1. T. Sato and E. Oki, "Program file placement problem for machine-to-machine service network platform," IEICE Transactions on Communications, vol. E102-B, no. 3, March 2019.



Biography:

Takehiro Sato received the B.E., M.E. and Ph.D. degrees in engineering from Keio University, Japan, in 2010, 2011 and 2016, respectively. He is currently an assistant professor at the Graduate School of Informatics, Kyoto University, Japan. From 2011 to 2012, he was a research assistant in the Keio University Global COE Program, "High-level Global Cooperation for Leading-edge Platform on Access Spaces," funded by the Ministry of Education, Culture, Sports, Science and Technology, Japan. From 2012 to 2015, he was a research fellow of the Japan Society for the Promotion of Science. From 2016 to 2017, he was a research associate at the Graduate School of Science and Technology, Keio University, Japan. He is a member of IEEE and IEICE.

Poster Session / Exhibition
Thursday 30, May 2019, 17:20-18:20
P-1 "Reduction of Request Blocking in Elastic Optical Network with Spectrum Slicing"
Ryota Matsuura, Nattapong Kitsuwan, University of Electro-Communications, Japan

Ryota Matsuura

This paper presents a scheme to reduce the request blocking probability in an elastic optical network (EON) in which a requested spectrum band is allowed to be sliced. In a conventional EON, spectrum allocation must follow two constraints due to the limitations of optical technology. First, the spectrum slots of the requested band must be the same on every link from the source to the destination. Second, the spectrum slots used for the requested band must be consecutive. A fragmentation problem occurs in an EON when a new lightpath is set up or an existing lightpath is torn down. This problem leaves the available slots isolated from each other, and a new request is rejected if the available space is insufficient. There are two main approaches to the fragmentation problem. First, a defragmentation approach reallocates the existing spectrum slots to offer enough space for the request; transmission is interrupted during the defragmentation process, a backup path may be temporarily needed, and the computation is complicated since several paths are affected. Second, an allocation algorithm approach tries to leave as much available space as possible for future requests. We take the second approach in this paper because it requires no reallocation. Recently, a new technology, called slice and stitching, has been invented to split a spectrum band into several optical components. Slicing the spectrum band is done as follows: first, the original band is copied to another optical frequency using coherent optical frequency combs and nonlinear wave mixing; second, partial spectra of both the original and the copy are sliced into two smaller channels by optical filters. Both sliced components, which are allocated to two portions of consecutive spectrum slots, are transmitted to the destination.
At the destination, the original band is recovered from both components by phase-preserving wavelength conversion, called a stitching process. Note that the requested spectrum band can be sliced into more than two components by repeating the slicing process on each component. This technology removes the second constraint of the conventional EON. A question remains of how to allocate the sliced optical components so as to reduce request blocking. We introduce an algorithm for the presented scheme. The algorithm is divided into two phases, as shown in Fig. 1. Phase I logically assigns the sliced optical components to available slots. Phase II determines the required number of slicers for the optical components assigned in Phase I. Computer simulation results show that the presented scheme reduces the request blocking probability by 82% and 13% at low and high traffic loads, respectively.
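The Phase I assignment can be sketched as follows (an illustrative largest-run-first sketch, not the authors' exact algorithm; the slot bitmap encoding and function names are ours):

```python
def free_runs(slots):
    """Return (start, length) of each maximal run of free slots
    (True = free) in a link's slot bitmap."""
    runs, i = [], 0
    while i < len(slots):
        if slots[i]:
            j = i
            while j < len(slots) and slots[j]:
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            i += 1
    return runs

def assign_with_slicing(slots, demand):
    """Phase-I-style sketch: fill the largest free runs first; if no
    single run fits, the band is sliced across several runs (each
    extra piece needs slicers, which Phase II then counts)."""
    runs = sorted(free_runs(slots), key=lambda r: -r[1])
    pieces, remaining = [], demand
    for start, length in runs:
        if remaining == 0:
            break
        use = min(length, remaining)
        pieces.append((start, use))
        remaining -= use
    return pieces if remaining == 0 else None  # None = request blocked

# 10-slot link: slots 0-1 and 4-6 are free; a 5-slot demand must be sliced
bitmap = [True, True, False, False, True, True, True, False, False, False]
alloc = assign_with_slicing(bitmap, 5)
```

In a conventional EON the 5-slot demand above would be blocked (no 5 consecutive free slots); with slicing it is served as two components.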


P-1_Fig1

Fig.1 Example of slot allocation in presented scheme.




Biography:

Ryota Matsuura received the B.E. degree in Information Science and Engineering from the University of Electro-Communications, Tokyo, Japan, in 2018. He is currently pursuing the master's degree at the Department of Computer and Network Engineering, the University of Electro-Communications, Tokyo, Japan. His research interests include elastic optical networks.

P-2 "Flexible Scheduling Approach for Network Services in Virtual Networks"
Yuncan Zhang, Fujun He, Takehiro Sato, and Eiji Oki, Kyoto University, Japan

Yuncan Zhang

In the past, network functions were implemented on specific hardware devices. Network Function Virtualization (NFV) enables them to be implemented and managed via software, as virtual network functions (VNFs). Network operators can select a set of VNF instances to deploy on network nodes and steer network traffic through the VNF instances in the required order to provide network services (NSes). Given a network deployed with VNF instances, the scheduling problem of NSes arises when the traffic of several NSes needs to be processed by the same VNF instance on the same node. Proper and efficient NS scheduling is required to improve the throughput and revenue of the network.
In a conventional scheduling approach [1], the computational resources occupied by each VNF instance are fixed at runtime, which means that the processing rate of each VNF instance is constant. In addition, the processing of an NS cannot be interrupted on any mapped VNF instance. According to [2], the software that provides a VNF is structured into software components called VNF components (VNFCs), which are able to scale out/in. Moreover, recent advances in container technology make auto-scaling VNF instances at runtime simple, thanks to the lightweight resource usage of containers [3].
Inspired by these characteristics, we propose a flexible scheduling approach for NSes in virtual networks, which allows interrupting the processing of an NS and scaling VNF instances out/in to change their processing rate within the available node resources. In this way, when a delay-sensitive NS with a deadline arrives at the network, the processing of other NSes can be interrupted when needed so that the deadline is not violated. The proposed approach can adjust the resource occupancy of each VNF instance to change its processing rate, which makes resource usage more efficient. Figure 1 shows a scheduling example using the conventional and proposed approaches. Figure 2 shows the average acceptance ratio of arriving NSes for the conventional and proposed approaches when the traffic size and the number of network functions of each arriving NS are distributed over [5, 10] Mbits and [1, 5], respectively.
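The benefit of letting a VNF instance's processing rate scale with its allocated resources can be illustrated with a small calculation (the rate and resource-unit values are hypothetical, chosen only to show the effect):

```python
def remaining_time(remaining_bits, rate_per_unit, units):
    """Time to finish processing an NS's traffic when the VNF
    instance's rate is proportional to its allocated resource units."""
    return remaining_bits / (rate_per_unit * units)

# An NS with 8 Mbit of traffic left and 1 Mbit/s per resource unit:
t_fixed  = remaining_time(8e6, 1e6, 1)  # conventional: rate fixed at one unit
t_scaled = remaining_time(8e6, 1e6, 4)  # proposed: scaled out to four units
```

Scaling out from one to four units cuts the remaining processing time from 8 s to 2 s, which is what lets the proposed approach squeeze a delay-sensitive NS in before its deadline.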

P-2_Fig1

Fig.1 Scheduling example


P-2_Fig2

Fig.2 Average acceptance ratio when the traffic size and the number of network functions of each arriving NS are distributed over [5, 10] Mbits and [1, 5], respectively.


References:

  1. H. A. Alameddine, L. Qu, and C. Assi, "Scheduling service function chains for ultra-low latency network services," in Proc. 13th International Conference on Network and Service Management (CNSM), pp. 1-9, IEEE, 2017.
  2. ETSI GS NFV-IFA 011 v3.1.1. https://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/011/03.01.01_60/gs_NFV-IFA011v030101p.pdf/.
  3. S. Natarajan, A. Ghanwani, D. Krishnaswamy, R. Krishnan, P. Willis, and A. Chaudhary, "An analysis of container-based platforms for NFV," IETF draft, Apr. 2016.



Biography:

Yuncan Zhang received the B.E. degree from Dalian University of Technology, Dalian, China, and the M.E. degree from University of Science and Technology of China, Hefei, China, in 2013 and 2016, respectively. She is currently pursuing the Ph.D. degree at Kyoto University, Kyoto, Japan. Her current research interests include modeling, algorithm, and virtual network optimization.

P-3 "Proposal of the DDoS detection method using Reconfigurable Communication Processors"
Naoto Sumita, Chiaki Hara, Satoru Okamoto, and Naoaki Yamanaka, Keio University, Japan

Naoto Sumita

In this presentation, we provide a novel network-based DDoS detection method using recently developed Reconfigurable Communication Processors (RCPs).
To cope with traffic growth and service diversification, the photonic network processor (PNP) has been proposed [1]. PNPs can realize the transmission network service infrastructure [2]. As a first step toward realizing the PNP concept, an RCP concept has been proposed that combines various processing functions based on LSIs, FPGAs, NPs, and CPUs with switching devices [3]. An RCP will support 400 Gbps-class interface cards and can serve both as a kind of IP router node and as an edge node of an optical core network. An RCP can reconfigure the functions provided by its modules: a Reconfigurable Processing Module (RPM) based on LSIs, FPGAs, and NPs, and a Reconfigurable Service Module (RSM) based on NPs and CPUs. A Tbps-class switching module interconnects the RPMs and RSMs. Multiple RCPs are interconnected by optical networks and can provide a Virtual Reconfigurable Communication Processor (VRCP). A VRCP provides a computing resource pool of RSMs and a packet processing resource pool of RPMs.
To avoid the IDS/IPS bottleneck of DDoS attack detection systems, we propose a DDoS attack detection method using a VRCP. We apply a distributed random forest algorithm that runs concurrently on many RSMs, while RPMs distribute the incoming traffic to the RSMs. This is possible because the random forest output is determined by a majority vote of the decision trees, and each decision tree can be processed independently. As shown in the left side of Fig. 1, when we decide whether a flow is a DDoS attack using a total of five decision trees and only one RSM is available, all five decision trees are processed by the single RSM in Chiba. If multiple RSMs are used, the five decision trees can be processed by RSMs across the VRCP, as shown in the right side of Fig. 1. Therefore, we can increase the throughput of DDoS attack detection and reduce the processing load by distributing the decision trees over the resource pool. Detailed performance evaluation results from computer simulations will be provided in the presentation.
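The majority-vote structure that makes the distribution possible can be sketched as follows (a toy model: the thresholds, feature names, and the local stand-in for an RSM are our illustrations, not part of the proposal):

```python
from collections import Counter

def classify_distributed(flow_features, trees, rsm_pool):
    """Evaluate each decision tree on a (possibly different) RSM and
    take the majority vote, mirroring how the trees are spread over
    the VRCP resource pool.  Here an "RSM" is just a callable that
    runs one tree; in the real system it is a remote module."""
    votes = [rsm(tree, flow_features) for rsm, tree in zip(rsm_pool, trees)]
    return Counter(votes).most_common(1)[0][0]

# Toy forest: each tree is a threshold test on packets per second
trees = [lambda f, th=th: f["pps"] > th for th in (100, 200, 300, 400, 500)]

def run_locally(tree, features):
    """Stand-in for dispatching one tree to one RSM."""
    return tree(features)

rsms = [run_locally] * len(trees)
verdict = classify_distributed({"pps": 350}, trees, rsms)
```

Because each tree's vote is independent, the five calls in the list comprehension could run on five different RSMs with no coordination beyond the final vote count.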


P-3_Fig1

Fig.1 Process with one RSM and process with multiple RSMs


References:

  1. K. Kitayama, et al., “Photonic Network Vision 2020 - Toward Smart Photonic Cloud,” IEEE JLT, Vol. 32, No. 16, pp. 2760-2770, Aug. 2014.
  2. S. Okamoto, "Smart Photonic Networking – Toward Application Driven Networking –," in Proc. IEICE Society Conference, BI-5-6, Sep. 2015 (written in Japanese).
  3. S. Okamoto, et al., "Proposal of the Photonic Programmable Node Architecture using Virtual Reconfigurable Communication Processors," IEICE Technical Report, Vol. 116, No. 205, PN2016-24, pp. 59-64, Sep. 2016 (written in Japanese).



Biography:

Naoto Sumita received his B.E. degree from Keio University in 2019. He is currently a master course student in Graduate School of Science and Technology, Keio University.

P-4 "Weaver: Search-based service configuration designing with fine grained architecture refinement rules"
Takayuki Kuroda, Takashi Maruyama, Takuya Kuwahara, Yoichi Sato, Hideyuki Shimonishi, and Kozo Satoda, NEC, Japan

Takayuki Kuroda

In this paper we present Weaver, an automated IT service configuration designer that can generate a concrete service configuration from an abstract user requirement. We assume that a user of an enterprise IT service does not care about the details of the service configuration but has concrete demands for some parts, for example using particular existing networks or servers. Weaver accepts such a requirement, i.e., a partially detailed intent, and concretizes the abstract parts to suit the detailed parts. This function enables users to enjoy flexible and agile service provisioning. However, it makes providers responsible for a wide variety of requirements and imposes a heavy burden of maintaining many service configuration patterns. Our challenge is to solve this problem by utilizing fine-grained architecture refinement rules. In Weaver, both the requirement and the service configuration are modeled as graphs consisting of components and their relationships. Weaver then refines the topology of the requirement graph according to the rules in a step-by-step manner. The same part of the graph can be refined in different ways; therefore, the whole process is conducted as a search. Since the refinement rules are composable and reusable, a high variety of service configuration patterns can be generated from combinations of a small number of rules. In this paper, we present the mechanism of Weaver and demonstrate its effectiveness through experiments with case studies on enterprise IoT services.
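The search-based refinement process can be sketched in miniature (a toy breadth-first search with made-up rules; Weaver's actual graph model and rule language are much richer than the label sets used here):

```python
from collections import deque

def refine(requirement, rules, is_concrete):
    """Breadth-first search over configurations: repeatedly apply
    fine-grained refinement rules to abstract parts of the graph
    until a fully concrete configuration is found."""
    queue = deque([requirement])
    while queue:
        graph = queue.popleft()
        if is_concrete(graph):
            return graph
        for rule in rules:
            queue.extend(rule(graph))  # each rule yields refined variants
    return None  # requirement cannot be concretized with these rules

# Toy model: a "graph" is a set of component labels; abstract parts
# are UPPERCASE, concrete parts are lowercase.
rules = [
    # An abstract SERVER can be refined into a VM or a container
    lambda g: [g - {"SERVER"} | {"vm"}, g - {"SERVER"} | {"container"}]
              if "SERVER" in g else [],
    # An abstract NET can be refined into a VLAN
    lambda g: [g - {"NET"} | {"vlan"}] if "NET" in g else [],
]
concrete = refine({"SERVER", "NET", "db"}, rules,
                  lambda g: not any(c.isupper() for c in g))
```

Because the same abstract part ("SERVER") can be refined two ways, the search explores alternative configurations; the user-fixed concrete part ("db") is simply carried through unchanged, matching the partially-detailed-intent idea.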


P-4_Fig1

Fig.1



Biography:

Takayuki Kuroda received the M.E. and Ph.D. degrees from the Graduate School of Information Science, Tohoku University, Sendai, Japan, in 2006 and 2009. He joined NEC Corporation in 2009 and has been engaged in research on model-based system management for cloud applications and software-defined networks. As a visiting scholar in the Electrical Engineering and Computer Science department at Vanderbilt University in Nashville, he studied declarative workflow generation for ICT system updates. He is now working on research into automation technologies for system design, optimization, and operation.

P-5 "A Study on Wavelength of Cyclic Performance Fluctuation of TCP BBR"
Kouto Miyazawa, Saneyasu Yamaguchi, Aki Kobayashi, Kogakuin University, Japan

Kouto Miyazawa

I. INTRODUCTION
When TCP BBR [2] and CUBIC TCP [1] communicate concurrently, their performances cyclically fluctuate [3]. In this paper, we discuss the effect of an internal constant on the cycle of performance fluctuation and the mechanism of the cyclic performance fluctuation.

II. RELATED WORK
A. Cyclic performance fluctuation
Figure 1 depicts the cyclic performance fluctuation when 10 TCP BBR connections and 10 CUBIC TCP connections communicate concurrently [4].

III. LENGTH OF PERFORMANCE CYCLE
In this section, we investigate the effect of the kernel parameter bbr_min_rtt_win_sec on the performance of TCP BBR and CUBIC TCP. Figure 2 depicts the average throughput of 10 TCP BBR connections with bbr_min_rtt_win_sec set to 1, 2, and 4. Figure 3 shows those with 8, 16, and 32. We can see that the performances switch periodically, and each cycle length is about twice the value of bbr_min_rtt_win_sec.
This parameter determines the cycle of mode switching in the implementation of TCP BBR. From these results, we conclude that the performance switching in the cyclic performance fluctuation is caused by the mode change of TCP BBR.

IV. CONCLUSION
In this paper, we investigated the relationship between the cycle length of the cyclic performance fluctuation and the parameter bbr_min_rtt_win_sec. In addition, we presented a clue to the mechanism of the cyclic performance fluctuation.


P-5_Fig1

Fig.1 Cyclic performance fluctuation


P-5_Fig2

Fig.2 Average of 10 BBR (value: 1, 2, and 4)


P-5_Fig3

Fig.3 Average of 10 BBR (value: 8, 16 and 32)


Acknowledgment:
This work was supported by JSPS KAKENHI Grant Numbers 15H02696, 17K00109, 18K11277.
This work was supported by JST CREST Grant Number JPMJCR1503, Japan.


References:

  1. Injong Rhee and Lisong Xu, "CUBIC: A New TCP-Friendly High-Speed TCP Variant," Proc. Workshop on Protocols for Fast Long Distance Networks, 2005.
  2. Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson, "BBR: Congestion-Based Congestion Control," ACM Queue, vol. 14, no. 5, October 2016. DOI: https://doi.org/10.1145/3012426.3022184
  3. Kouto Miyazawa, Kanon Sasaki, Naoki Oda and Saneyasu Yamaguchi, "Cyclic Performance Fluctuation of TCP BBR," 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 2018, pp. 811-812. doi: 10.1109/COMPSAC.2018.00132
  4. Kouto Miyazawa, Kanon Sasaki, Naoki Oda and Saneyasu Yamaguchi, "Cycle and Divergence of Performance on TCP BBR," IEEE International Conference on Cloud Networking, Oct. 2018.



Biography:

Kouto Miyazawa received his B.E. degree from Kogakuin University in 2019. He is currently a master course student in Electrical Engineering and Electronics, Kogakuin University Graduate School.

P-6 "Improving TCP Fairness in TCP BBR and CUBIC TCP"
Kanon Sasaki, Saneyasu Yamaguchi, Kogakuin University, Japan

Kanon Sasaki

I. INTRODUCTION
Proposals of new TCP congestion control algorithms have raised an issue of throughput fairness among TCP algorithms, which is called TCP fairness. It has been recognized that applying CoDel improves TCP fairness.
In this paper, we focus on the TCP fairness between CUBIC TCP [1] and TCP BBR [2]. We compare the fairness with and without packet dropping by CoDel, and then show that applying it improves the fairness only in limited situations.

II. TCP BBR
TCP BBR is a TCP algorithm newly proposed in 2016. Unlike popular TCP congestion control algorithms such as TCP Reno and CUBIC TCP, TCP BBR is not a loss-based algorithm. It estimates the bandwidth and delay and sets its congestion window size to the bandwidth-delay product (BDP).

III. CODEL (CONTROLLING QUEUE DELAY)
Nichols et al. proposed a new packet queue scheduling algorithm called CoDel. It drops packets when the queueing delay exceeds a threshold time called the target; the default target is 5 ms. It has been demonstrated that applying CoDel improves TCP fairness [3].
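On Linux, CoDel and its target are exposed through the tc utility; for example (the interface name and the larger target value are illustrative, not the exact experimental settings):

```shell
# Attach CoDel to an interface with the default 5 ms target
tc qdisc add dev eth0 root codel target 5ms

# Switch to a larger target, as in the experiments with bigger target times
tc qdisc change dev eth0 root codel target 20ms
```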

IV. TCP FAIRNESS EVALUATION
In this section, we compare the throughputs of CUBIC TCP and TCP BBR. Each TCP algorithm established 10 connections, for 20 connections in total. These connections share the bottleneck link. The queue was managed with TailDrop or CoDel in our experiment. Fig. 1 shows the throughputs of the TCPs. The horizontal axis is the target time of CoDel.
Focusing on the throughputs with a 4 ms delay time, we can see that applying CoDel improves the fairness with a large target time. On the contrary, applying CoDel with a small target time does not improve the TCP fairness. Focusing on the results with a 16 ms delay time, we can see that applying CoDel does not have a positive impact on performance fairness.
From these results, we conclude that applying CoDel with a large enough target time improves fairness, but only in limited situations wherein the physical RTT is not very large.

V. CONCLUSION
In this paper, we evaluated the improvement of TCP fairness between CUBIC TCP and TCP BBR by applying CoDel. We then demonstrated that CoDel with large target time could improve the fairness in limited situations wherein the physical RTT is not large.
For future work, we plan to discuss a method for improving the fairness on a network with large RTT.


P-6_Fig1

Fig.1 BBR and CUBIC Throughput


Acknowledgment:
This work was supported by JST CREST Grant Number JPMJCR1503, Japan. This work was supported by JSPS KAKENHI Grant Numbers 26730040, 15H02696, 17K00109.


References:

  1. Sangtae Ha, Injong Rhee, and Lisong Xu, "CUBIC: a new TCP-friendly high-speed TCP variant," SIGOPS Oper. Syst. Rev. 42, 5, 64-74, July 2008. DOI=http://dx.doi.org/10.1145/1400097.1400105
  2. Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson, “BBR: Congestion-Based Congestion Control,” Queue 14, 5, pages 50 (October 2016), 34 pages, 2016. DOI: https://doi.org/10.1145/3012426.3022184
  3. M. Hanai, S. Yamaguchi and A. Kobayashi, "Modified Controlling Queue Delay for TCP fairness improvement," 2016 18th Asia-Pacific Network Operations and Management Symposium (APNOMS), Kanazawa, 2016, pp. 1-6. doi: 10.1109/APNOMS.2016.7737224



Biography:

Kanon Sasaki received his B.E. degree from Kogakuin University in 2019.
He is currently a master course student in Electrical Engineering and Electronics, Kogakuin University Graduate School.

P-7 "Application switch for KVS performance improvement"
Tomoaki Kanaya, Hiroaki Yamauchi, Saneyasu Yamaguchi, Kogakuin University, Japan, Akihiro Nakao, Shu Yamamoto, The University of Tokyo, Japan, and Masato Oguchi, Ochanomizu University, Japan

Tomoaki Kanaya

I. INTRODUCTION
Recent programmable switches allow developers to profoundly optimize network elements. In previous work [1], we introduced the concept of an application switch that supports a network application based on programmable switches. We then proposed a method to apply an application switch to a key-value store (KVS), a type of database management system (DBMS) that uses TCP. The method migrates a TCP connection from a server computer to the application switch when the switch replies to a query. In this paper, we evaluate the performance of the proposed method.

II. APPLICATION SWITCH
Figure 1 illustrates the behavior of a network application with a usual switch and with an application switch. In the case of a usual switch, a request from the client is transmitted to the server via the switch and processed by the server. In the case of an application switch that supports some functions of the application, a request can be processed by the application switch itself. For example, an application switch that supports a caching function processes a request for getting data, as shown by "Application Switch" in Fig. 1, if the request hits the cache. Otherwise, the switch forwards the request to the server and the server processes the request, as shown by "Usual Switch" in Fig. 1.
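The hit/forward behavior can be sketched as follows (function and variable names are ours; the real switch operates on packets and migrated TCP connections, not Python dictionaries):

```python
def handle_request(key, cache, forward_to_server):
    """Application-switch behavior sketch: answer a GET-style KVS
    query from the switch's cache on a hit; otherwise fall back to
    the conventional path through the server."""
    if key in cache:
        return cache[key], "switch"      # served directly at the switch
    value = forward_to_server(key)       # usual-switch path via the server
    cache[key] = value                   # populate the cache for next time
    return value, "server"

cache = {"user:1": "alice"}              # data already cached at the switch
server_lookup = {"user:2": "bob"}.get    # stand-in for the real KVS server

v1, where1 = handle_request("user:1", cache, server_lookup)  # cache hit
v2, where2 = handle_request("user:2", cache, server_lookup)  # miss, forwarded
```

The hit ratio in the evaluation below corresponds to the fraction of requests that take the first branch.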

III. EVALUATION
In this section, we evaluate the proposed method. We constructed a KVS system using Cassandra and the proposed application switch. The application switch supports a caching function for KVS read queries: the switch inspects packets and replies to a read query when the requested data are stored in the cache.
We measured the time to complete 100 SELECT queries with and without the proposed method. The KVS database includes 1000 key-value pairs. Each pair has one value, and the size of each value is 100 bytes. The cache hit ratio can be controlled by the choice of target data for each read query.
Fig. 2 depicts the average turnaround times of the proposed method with various cache hit ratios and of the normal method. The results indicate that the proposed method decreased the turnaround time whenever the hit ratio was not 0%. The turnaround time increased only when the ratio was 0%; however, the increase was small and remarkably less than the decreases obtained with cache hits. When the ratio was 100%, the turnaround time decreased by 79%.

IV. CONCLUSION
In this paper, we introduced and evaluated an application switch. For future work, we plan to support other queries, such as write queries.


P-7_Fig1

Fig.1 The overview of optimization of Application Switch


P-7_Fig2

Fig.2 Experimental results with the normal method and proposed method of each cache hit rate


Acknowledgment:
This work was supported by JST CREST Grant Number JPMJCR1503, Japan. This work was supported by JSPS KAKENHI Grant Numbers 26730040, 15H02696, 17K00109.


Reference:

  1. T. Kanaya, H. Yamauchi, S. Nirasawa, A. Nakao, M. Oguchi, S. Yamamoto, and S. Yamaguchi, "Intelligent Application Switch Supporting TCP," IEEE Int. Conf. Cloud Netw., Tokyo, Japan, 2018



Biography:

Tomoaki Kanaya received his B.E. degree from Kogakuin University in 2019.
He is currently a master course student in Electrical Engineering and Electronics, Kogakuin University Graduate School.

P-8 "A TCP Method Based on Predicting Queue Empty at LTE Base Stations"
Tansheng Li, Takahiro Nobukiyo, and Takeo Onishi, NEC, Japan

Tansheng Li

Carrier aggregation (CA) is spreading widely in current LTE networks. However, the Transmission Control Protocol (TCP) cannot reach optimal throughput over LTE, because TCP cannot observe LTE status such as bandwidth fluctuations or retransmissions and therefore controls its transmission rate independently. Once a control mismatch between TCP and LTE occurs, TCP performance is considerably degraded.
For example, a phenomenon called the TCP delay spike is well discussed. A delay spike is a phenomenon in which the TCP round trip time (RTT) suddenly increases and then drops [1]; it is caused by retransmissions at the radio link control (RLC) layer of LTE and degrades TCP throughput.
We analyzed the mechanism by which delay spikes affect TCP throughput. As shown in Fig. 1, when an RLC Protocol Data Unit (PDU) is being retransmitted (the shadowed block in Fig. 1), data after that PDU is still transmitted from the sending queue in the base station (BS) buffer to the User Equipment (UE). However, the UE's Packet Data Convergence Protocol (PDCP) layer delivers only in-sequence, completely received data to the UE's upper layers (TCP/IP) until the retransmitted PDU (the shadowed block in Fig. 1) is successfully received [2]. Then, TCP at the receiver cannot send new ACKs to the sender either, making the sender stop sending more data. At the same time, the MAC/PHY layer at the BS keeps sending data to the UE, draining the sending queue in the BS. Once the queue becomes empty, the wireless transmission completely stops, and thus the TCP transmission stops as well.
Based on the observation above, we developed a TCP control method to prevent the BS queue from becoming empty. The main idea is that when we detect RLC retransmissions, we use the TCP throughput and the data in flight to predict the time at which the queue will empty, and then send additional data to the BS before it empties.
Our method is composed of the following steps. First, if ACK packets do not arrive for a particular time, the method estimates that an RLC PDU retransmission has occurred. Second, we estimate the time at which the BS buffer will deplete, using the TCP throughput to approximate the bandwidth between the BS and the UE, and the TCP bytes in flight as the amount of data in the BS queue; a timer is then started to determine whether additional data should be sent. Third, once the timer exceeds the threshold at which the BS queue would deplete, the sender sends additional TCP data to the receiver.
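The depletion-time estimate in the second step amounts to a simple calculation (a sketch with our own names; the margin parameter is our illustration of firing slightly before the predicted depletion, not a value from the paper):

```python
def time_until_queue_empty(bytes_in_flight, throughput_bps):
    """Estimate when the base-station queue will drain: the measured
    TCP throughput approximates the BS->UE bandwidth, and the TCP
    bytes in flight approximate the BS queue occupancy."""
    return bytes_in_flight * 8 / throughput_bps   # seconds

def should_send_extra(elapsed_since_last_ack, bytes_in_flight,
                      throughput_bps, margin=0.9):
    """Decide whether to push additional data, firing slightly
    before the predicted depletion time."""
    deadline = time_until_queue_empty(bytes_in_flight, throughput_bps)
    return elapsed_since_last_ack >= margin * deadline

# 125 kB in flight at 10 Mbit/s drains in about 0.1 s
t = time_until_queue_empty(125_000, 10_000_000)
```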
We evaluated our method on a commercial LTE network. The result is shown in Fig. 2. With our proposed TCP control method, average throughput improved by 5% compared with Linux's default TCP control method.


P-8_Fig1

Fig.1 A typical LTE transmission procedure, showing a retransmission in RLC layer.


P-8_Fig2

Fig.2 Download TCP throughput


References:

  1. S. Fu and W. Ivancic, "Effect of delay spike on SCTP, TCP Reno, and Eifel in a Wireless Mobile Environment," in Proc. 11th International Conference on Computer Communications and Networks (ICCCN), pp. 575–578, Miami, Florida, USA, October 2002.
  2. 3GPP TS 36.322, "Evolved Universal Terrestrial Radio Access (E-UTRA); Radio Link Control (RLC) protocol specification (Release 12)," 2015.



Biography:

Tansheng Li received his B.E. degree from Zhejiang University in 2006 and his master's degree from Waseda University in 2008.
He joined NEC Corporation as a researcher in 2013. His research themes include predicting wireless attenuation and cross-layer design between the Internet and mobile networks. Currently he is working on industrial IoT.

P-9 "A Study on Clustering Sessions of TLS based on Upload Message"
Hiroaki Yamauchi, Saneyasu Yamaguchi, Kogakuin University, Japan, Masato Oguchi, Ochanomizu University, Japan, Akihiro Nakao, and Shu Yamamoto, The University of Tokyo, Japan

Hiroaki Yamauchi

I. Introduction
Providing higher priority to packets for important services, such as rescue operations, is essential in situations where networks are heavily congested during a severe disaster. Service identification in a network element is required to achieve this. In this paper, we focus on service identification based on packet analysis with DPI (Deep Packet Inspection) and DPN (Deeply Programmable Network) [1].
Many current communications are encrypted with TLS. Some methods have been proposed for identifying the services of flows by analyzing the non-encrypted parts of TLS sessions [2]. However, the clustering ability of each field has not been discussed enough. In this paper, we analyze the fields of the TLS Handshake and reveal their clustering ability.

II. Investigation
In this section, we investigate the clustering ability of each field in ClientHello. Sessions that have the same value in a field are clustered into one group by that field. A field has clustering ability if the number of groups is greater than one and less than the number of sessions. In our experiments, the flows of 15 Google services (Account, Calendar, Document, Drive, Gmail, Map, News, Photo, Play, Plus, Scholar, Sheets, Translate, Web Search, and YouTube) are analyzed. The client web browser was Mozilla Firefox 52.2. Ten accesses were executed for every service. The total number of TLS sessions was 3797.
We clustered the sessions by the value of each field, and Fig. 1 depicts the numbers of clusters. Version, Cipher Suite Length, Cipher Suite, Compression Method Length, and Compression Method do not have clustering ability because their numbers of clusters are one. Session ID Length has the ability because its number of clusters is two. The number of clusters by Session ID was 16; if Session ID Length is 0, the session does not have a Session ID, and we counted the sessions without a Session ID as one cluster. Similarly, Handshake Length, Extensions Length, and Extensions have clustering ability because their numbers of clusters are greater than one and less than the number of sessions.
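The clustering criterion used above is easy to state in code (a sketch with toy sessions; the dictionary representation and field names are our illustration, not the exact capture format):

```python
from collections import defaultdict

def clusters_by_field(sessions, field):
    """Group TLS sessions by the value of one ClientHello field and
    return the number of distinct clusters."""
    groups = defaultdict(list)
    for s in sessions:
        groups[s.get(field)].append(s)
    return len(groups)

def has_clustering_ability(sessions, field):
    """A field can cluster if it yields more than one group but
    fewer groups than sessions (the criterion used in this paper)."""
    n = clusters_by_field(sessions, field)
    return 1 < n < len(sessions)

# Toy sessions: Version is identical everywhere, Session ID varies
sessions = [
    {"version": 0x0303, "session_id": "a"},
    {"version": 0x0303, "session_id": "b"},
    {"version": 0x0303, "session_id": "a"},
]
```

With these toy sessions, Version yields a single group (no clustering ability), while Session ID yields two groups out of three sessions (clustering ability), mirroring the measured results.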

III. Discussion
The existing identification method [2] checks all the fields in the non-encrypted parts of TLS session establishment. However, the results in the previous section demonstrate that some fields have no clustering ability, and naturally, these fields should not be checked. We think that the identification time can be reduced by checking only Session ID, Handshake Length, Extensions Length, and Extensions.

IV. Conclusion
In this paper, we investigated the clustering ability of each field in TLS session establishment. We then showed that checking only a limited set of fields can still cluster sessions. For future work, we plan to implement a method for identifying the service from IP flows.


P-9_Fig1

Fig.1 Number of clusters by each field


Acknowledgment:
This work was supported by JSPS KAKENHI Grant Numbers 15H02696, 17K00109, and 18K11277, and by JST CREST Grant Number JPMJCR1503, Japan.


References:

  1. A. Nakao, "FLARE: Open Deeply Programmable Node Architecture," Stanford Univ. Networking Seminar, Oct. 2012.
  2. M. Hara, S. Nirasawa, A. Nakao, M. Oguchi, S. Yamamoto, and S. Yamaguchi, "Service Identification by Packet Inspection based on N-grams in Multiple Connections," 7th International Workshop on Advances in Networking and Computing, 2016.
  3. H. Yamauchi, A. Nakao, M. Oguchi, S. Yamamoto, and S. Yamaguchi, "Clustering TLS Sessions Based on Protocol Fields Analysis," COMPSAC 2018: The 42nd IEEE Computer Society Signature Conference on Computers, Software & Applications, Fast Abstracts, 2018.



Biography:

Hiroaki Yamauchi received his B.E. degree from Kogakuin University in 2019.
He is currently a master's course student in Electrical Engineering and Electronics, Kogakuin University Graduate School.



Friday 31, May 2019

Technical Session
Tech. Session (3): Network Architecture, Resiliency and Diagnosis
Friday 31, May 2019, 9:30-11:10
Chair: Eiji Oki, Kyoto University, Japan
T2-3 "Survey on topology design and traffic matrix characteristics research in data center networks"
Kohei Shiomoto, Tokyo City University, Japan

Kohei Shiomoto

Big online internet service companies such as Google, Amazon, Facebook, Apple, and Microsoft have been continuously building so-called hyper-scale data centers (DCs) across the globe to provide their online services to tens of millions of users. The pace of growth of intra-DC traffic surpasses that of Internet traffic. In the early days of data center network (DCN) design, topologies were regular, such as the Fat-Tree. A top-of-rack (TOR) switch, which accommodates dozens (typically 20-40) of servers in the same rack with dozens of 1 Gb/s downlink ports, is connected through several 10 Gb/s uplinks to larger switches in the higher tier, which are recursively connected through higher-rate uplinks to still larger switches in the next tier. That is, switching nodes with limited degrees are recursively interconnected to build a large-scale switching network.
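As a concrete reference point, the classic k-ary fat-tree built from identical k-port switches has a well-known sizing formula; the short sketch below works out the numbers (a standard textbook construction, not specific to any DCN discussed in this talk):

```python
def fat_tree_size(k):
    """Sizing of a classic k-ary fat-tree (k even): every switch has k ports."""
    assert k % 2 == 0
    edge = agg = k * k // 2        # k pods, each with k/2 edge + k/2 aggregation switches
    core = (k // 2) ** 2           # core layer interconnecting the pods
    hosts = k ** 3 // 4            # each edge switch serves k/2 hosts
    return {"hosts": hosts, "switches": edge + agg + core}

print(fat_tree_size(48))  # 48-port switches: 27648 hosts, 2880 switches
```

With commodity 48-port switches the design already reaches tens of thousands of hosts, which is why it dominated early DCN designs.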
As measurement studies on the traffic matrix characteristics of DCNs have been conducted, it has been revealed that the Fat-Tree topology may not necessarily be the best choice for many DCNs. Traffic matrix characteristics depend highly on what kinds of applications are running in the data center. Some DCNs exhibit a so-called "scatter-gather" pattern as a result of Map-Reduce jobs, while others exhibit a "work-seeks-bandwidth" pattern where most of the traffic remains in the same rack. Most TOR switches exchange little traffic with other TOR switches, and very few TOR switches exchange a fair amount of traffic with each other.
To address this mismatch between topology design and traffic matrix characteristics in DCNs, two approaches have been proposed so far: dynamic topology and static topology. In the dynamic topology camp, optical circuit switching is introduced to dynamically change the topology according to traffic demand, much as virtual network topology reconfiguration was introduced in IP-over-optical networks. In the static topology camp, random graph topology designs that accommodate a wide set of variable traffic matrices under the constraint of switching nodes with limited node degree have been studied.
In this talk, we will review research efforts on topology design and traffic matrix characteristics of data center networks and discuss future research directions in topology design and traffic management in data center networks.



Biography:

Kohei Shiomoto is a Professor at Tokyo City University, Tokyo, Japan. He has been engaged in R&D in the data communications industry for over 25 years. Since joining Tokyo City University in 2017, he has been active in the areas of network virtualization, data mining for network management, and traffic & QoE management. He has published 70+ journal papers and 130+ reviewed international conference papers, as well as 6 RFCs in the IETF. He served as Guest Co-Editor for a series of special issues in IEEE TNSM on Management of Softwarized Networks. He has served in various roles in organizing IEEE ComSoc conferences such as IEEE NOMS, IEEE IM, and IEEE NetSoft, presented keynote speeches, and taken part in distinguished expert panels.
From 1989 to 2017, he developed technologies for the Internet, mobile, and cloud at NTT Laboratories, where he was engaged in research and development of high-speed computer networks including ATM networks, IP/MPLS networks, GMPLS networks, network virtualization, traffic management, and network analytics. From 1996 to 1997 he was a visiting scholar researching high-speed networking at Washington University in St. Louis, MO, USA. He received his B.E., M.E., and Ph.D. degrees in information and computer sciences from Osaka University, Osaka, in 1987, 1989, and 1998, respectively. He is a Fellow of IEICE, a Senior Member of IEEE, and a member of ACM.

T3-2 "Failure localization in optical transport networks"
Takashi Kubo, Hiroshi Yamamoto, Hiroki Kawahara, Takeshi Seki, Toshiyuki Oka, and Hideki Maeda, NTT, Japan

Takashi Kubo

Failure localization is essential in high-capacity core networks, as they support many kinds of services. However, some types of failure are difficult to localize by monitoring alarms and PM (Performance Monitor) information, and resolving them consumes too much manpower and time. We propose a failure localization method that deals with such failures automatically. The method detects status changes of the physical layer in the transmission equipment (STEP1) and localizes the failure in the network control server (STEP2).

Figure 1 shows a simplified diagram of our failure localization method.
A Parameter Collection and calculation function Unit (PCU) is installed in every piece of transmission equipment, and a failure localization function unit is installed outside the transmission equipment, in the network control server.
STEP1 consists of the following actions by a PCU.
(1-1) The PCU continuously collects parameters indicating the status of the physical layer from the transmission equipment and performs correlation analysis among the acquired parameters.
(1-2) The PCU extracts temporal fluctuation patterns (periodic, instantaneous, continuous, no change, and so on) from the results of (1-1).
(1-3) Given the fluctuation patterns and correlation analysis results, the PCU judges the presence or absence of an in-network abnormality and extracts estimated causes.
(1-4) The PCU notifies the network control server of the estimated causes and the suspected optical path.
STEP2 consists of the following actions by the network control server.
(2-1) Given the results of STEP1, the network topology information, and the route information of optical paths, the network control server localizes the failure coverage area per optical path/section unit. For instance, in Figure 1, when some issue appears on a specific λ, it can be judged that the failure lies in the red-framed portion, and when the issue affects all λs passing through a section, the failure lies in the blue-framed portion.
(2-2) Given the determined fault coverage area, the network control server makes a judgment using information on the construction of the transmission equipment and the estimated causes, and identifies the suspected transmission equipment.
(2-3) The network control server acquires the PM information from the suspected transmission equipment and judges which PKG (package) needs replacement.

In summary, the proposed failure localization method automatically deals with failures that are difficult to localize by monitoring alarms and PM information.
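The core of STEP2 (2-1) can be illustrated with a toy sketch: intersecting the routes of the optical paths reported as suspected narrows the failure to their common section. The data model below (paths as lists of section names) is hypothetical, chosen only to show the set-intersection idea.

```python
def localize(suspected_paths, path_routes):
    """STEP2 (2-1) sketch: the failure must lie on a section shared by
    every optical path reported as suspected (hypothetical data model)."""
    sections = [set(path_routes[p]) for p in suspected_paths]
    return set.intersection(*sections)

routes = {
    "path1": ["A-B", "B-C", "C-D"],   # a specific λ
    "path2": ["E-B", "B-C", "C-F"],   # another λ through the same section
}
# one λ affected -> suspect its whole route; both affected -> shared section
print(localize(["path1", "path2"], routes))  # {'B-C'}
```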

T3-2_Fig1

Fig.1 Proposed failure localization method.



Biography:

Takashi Kubo is a staff member of the Transport Network Innovation Project, NTT Network Service Systems Laboratories.
He received the B.S. and M.S. degrees in engineering from The University of Electro-Communications, Tokyo, Japan, in 2016 and 2018, respectively. In 2018, he joined NTT Network Service Systems Laboratories, where he has engaged in research on failure localization methods.

T3-3 "Study on multi-layer configuration management technology using traffic information"
Mizuto Nakamura, Naoyuki Tanji, Atsushi Takada, Toshihiko Seki, and Kyoko Yamagoe, NTT, Japan

Mizuto Nakamura

Introduction
In carrier network (NW) operation, configuration information of the entire multi-layer NW, spanning the optical transport network and the IP network, is required to identify the cause of a service failure and to quickly grasp its influence.
The configuration information refers to the accommodation information of each NW device and the connection relationship information between NW devices; all NW operation is executed based on the configuration information. Management of the latest configuration information is called configuration management. In this paper, we propose a configuration management method for multi-layer NWs and report the results of evaluating its effectiveness.

Problems of configuration management
The carrier provides services by combining various devices and software. Therefore, multiple services and NWs are superimposed on the optical transport NW (Fig.1-A). To operate all of these NWs properly, it is necessary to grasp information such as the addition or removal of NW devices and to reflect it in the configuration information. For example, when a package (PKG) of an NW device fails, it is handled by automatic control by a controller, such as path switching or a reset, based on the configuration information. If the configuration information is incorrect, a different PKG may be reset. As a result, not only may the service fail to be restored, but services that had no problem may also be affected.

Proposed method
A number of methods have been proposed to grasp the connection relationship information between NW devices. However, carrier networks may not be able to use these conventional methods because they do not support specific vendors' devices or specific layers' devices. Therefore, we propose a configuration management method that uses the traffic volume information of NW device interfaces (IFs), which can be collected from most network devices. In the proposed method, the input and output traffic volume information of each IF of the NW devices are compared. The connection relationship information can be grasped by extracting the combinations of IFs whose input and output traffic volumes match most closely (Fig1-B).
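The matching idea can be sketched as follows, assuming per-interval byte counters keyed by (device, interface). This data model and the absolute-difference score are illustrative assumptions; the actual method's matching criterion may differ.

```python
def infer_links(out_counters, in_counters):
    """Match each IF's output traffic series to the IF whose input series
    agrees best; counters map (device, if) -> list of bytes per interval."""
    links = {}
    for src, tx in out_counters.items():
        best, best_err = None, float("inf")
        for dst, rx in in_counters.items():
            if dst[0] == src[0]:
                continue  # an IF cannot be linked to its own device
            err = sum(abs(a - b) for a, b in zip(tx, rx))
            if err < best_err:
                best, best_err = dst, err
        links[src] = best
    return links

tx = {("R1", "if0"): [100, 240, 90], ("R2", "if3"): [10, 20, 30]}
rx = {("SW1", "if5"): [100, 240, 90], ("SW2", "if1"): [11, 19, 30]}
print(infer_links(tx, rx))
# {('R1', 'if0'): ('SW1', 'if5'), ('R2', 'if3'): ('SW2', 'if1')}
```

Note that the small residual error on ("R2", "if3") stands in for measurement noise such as counter-polling skew, which a real deployment must tolerate.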

Future tasks
In this paper, we proposed a configuration management method for multi-layer NWs using traffic volume information. In the future, we plan to introduce the proposed method into the operation of the carrier network.

T3-3_Fig1

Fig.1-A Configuration of service NW in carrier

Fig.1-B Proposed method



Biography:

Mizuto Nakamura received the B.E. and M.E. degrees from Tokyo University of Science, Tokyo, Japan, in 2014 and 2016, respectively.
In 2016, he joined NTT (Nippon Telegraph and Telephone Corporation) Network Service Systems Laboratories.
He is currently researching engineering techniques for operation support systems.
He is a member of IEICE.

T3-4 "Architecture of Dynamic MAC using Bit-by-bit Mapping on Massively Parallel Optical Channel"
Kyosuke Sugiura, Masaki Murakami, Satoru Okamoto and Naoaki Yamanaka, Keio University, Japan

Kyosuke Sugiura

It is expected that the required transmission capacity for optical fiber will hit 1 Pb/s around 2030, whereas the physical limit of single-mode optical fiber (SMF) is 100 Tb/s. Therefore, it will be essential to handle spatially parallel optical channels over multiple multi-core/multi-mode fibers [1]. As a mapping scheme between MAC (Media Access Control) client signals and massively parallel optical channels, we propose a Dynamic MAC technology. The goal of the Dynamic MAC in this presentation is to achieve an optical signal parallelism of 80,000 and a bandwidth stretching degree of 400 for each MAC client, determined by the estimated traffic volume around 2030.
Currently, 100 Gb/s Ethernet (100GE) is widely used as a protocol for transmitting MAC client signals. 100GE consists of multiple sub-layers. The byte stream of MAC client signals is divided into 64-bit blocks by the RS (Reconciliation Sublayer) and passed to the PCS (Physical Coding Sublayer) through the MII (Media Independent Interface). The 64-bit blocks are then distributed to 20 PCS lanes in a round-robin fashion. Finally, they are converted into four physical lanes by the PMA (Physical Medium Attachment) and PMD (Physical Medium Dependent) in the case of 100GBASE-LR4 and mapped into four optical channels. As conventional techniques dealing with multiple lanes, 100GE-ALR-CLT [2] and link aggregation have been proposed. 100GE-ALR-CLT dynamically changes the number of active lanes according to the traffic, adaptively changing the number of aggregated channels; this method achieves a bandwidth stretching degree of 4. Link aggregation creates one logical link from multiple slower physical links. Its bandwidth stretching degree equals the number of aggregated links, but it offers no stretching degree per flow because signals are distributed on a MAC-frame basis.
To obtain a bandwidth stretching degree per flow, it is imperative to distribute MAC client signals to multiple lanes on a block basis. When distributing MAC client signals to up to 400 lanes, it is critical to speed up the round-robin mapper that performs the distribution. Therefore, we propose a Dynamic MAC architecture using a hierarchical round-robin mapper based on 100GBASE-LR4. The design of the Dynamic MAC is illustrated in Fig 1. Like 100GBASE-LR4, the Dynamic MAC has a round-robin mapper between the MII and PCS, which distributes the blocks to 20 PCS lanes in a round-robin fashion. In addition, the proposed scheme introduces another round-robin mapper between the RS and MII, which distributes the blocks generated by the RS to 13 MIIs in sequence. This method achieves a bandwidth stretching degree of 416 while keeping the required physical-layer clock speed to 8 times that of 100GBASE-LR4. Computer simulation shows that this method can be applied at 100 Gb/s per channel, four times faster than 100GBASE-LR4, which adopts 25 Gb/s per channel.
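The two-stage mapping can be sketched structurally as follows. This is a toy model of block distribution only: the 13-MII and 20-PCS-lane counts are taken from the abstract, while timing, coding, and lane markers are omitted.

```python
def hierarchical_round_robin(blocks, n_mii=13, n_pcs=20):
    """Two-stage round-robin: 64-bit RS blocks -> 13 MIIs, then each
    MII's blocks -> 20 PCS lanes (lane counts from the abstract)."""
    miis = [[] for _ in range(n_mii)]
    for i, b in enumerate(blocks):            # first stage: RS -> MII
        miis[i % n_mii].append(b)
    lanes = [[[] for _ in range(n_pcs)] for _ in range(n_mii)]
    for m, mii_blocks in enumerate(miis):     # second stage: MII -> PCS
        for j, b in enumerate(mii_blocks):
            lanes[m][j % n_pcs].append(b)
    return lanes

lanes = hierarchical_round_robin(list(range(520)))
# 13 x 20 = 260 logical lanes; every block lands on exactly one lane
print(sum(len(lane) for mii in lanes for lane in mii))  # 520
```

Because each stage is a plain modulo counter, the two mappers can run in parallel at modest clock rates, which is the point of hierarchizing the distribution.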


T3-4_Fig1

Fig.1 Architecture of Dynamic MAC with hierarchical round-robin mapper


References:

  1. K. Igarashi, D. Soma, Y. Wakayama, K. Takeshima, Y. Kawaguchi, N. Yoshikane, T. Tsuritani, I. Morita, and M. Suzuki, “Ultra-dense spatial-division-multiplexed optical fiber transmission over 6-mode 19-core fibers,” Optics Express, vol.24, no.10, pp. 10213-10231, 2016.
  2. T. Miyazaki, I. Popescuy, M. Chino, X. Wang, K. Ashizawa, S. Okamoto, M. Veeraraghavan, and N. Yamanaka, "High speed 100GE adaptive link rate switching for energy consumption reduction," 2015 International Conf. on Optical Network Design and Modeling (ONDM), pp. 227-232, May 2015.



Biography:

Kyosuke Sugiura received his B.E. degree from Keio University in 2019. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.

Business Session
Friday 31, May 2019, 11:20-11:50
Chair: Tomohiro Otani, KDDI Research, Japan
B-1 "Enhanced Server infra Testing for WHAT?"
Akihiro Nakamura, Spirent Communications, Japan
Akihiro Nakamura



Biography:

Graduated from Chiba University in 2001 and started 1st career at Toyo Corp.
Engaged in Optical communication business sales & business development as a sales engineer.
In 2005, started IP performance test tool sales and worked for major service provider sales.
In 2009, became a product specialist and managed Spirent business development.
In 2017, moved to Spirent Communications Japan as a country manager and manage all Japan sales and partners.

Technical Session
Technical Session (4): SDN, Network Slicing and Disaggregation
Friday 31, May 2019, 13:30-15:10
Chair: Hideyuki Shimonishi, NEC, Japan
T4-1 "Result of Autonomous Driving Vehicle Control Using US-Japan Reconfigurable Resource Pool Networking Experiment"
Goki Yamamoto, Yoshiki Aoki, Kodai Yarita, Satoru Okamoto and Naoaki Yamanaka, Keio University, Japan

Goki Yamamoto

The development of new transportation systems has been promoted; the autonomous driving vehicle (ADV) is one of them. An ADV has a variety of sensors such as millimeter-wave radars, LIDAR (Laser Imaging Detection and Ranging), and cameras. However, using only these sensors, an ADV can sense only its own environment and must make decisions based on limited information. To deal with this problem, a method of control in cyberspace, with information shared among surrounding ADVs, is examined. Agents in cyberspace control vehicles while gathering information via networks. To utilize cyberspace, a network that can satisfy high QoS requirements is necessary. A network-assisted ADV platform that handles various QoS requirements has been proposed [1, 2]. The platform consists of clouds and edges because mobile edge computing (MEC) provides low latency and local decisions.
In this presentation, a mobile edge computing (MEC) environment for the ADV platform over a network connecting 5 sites (Keio Univ. (JPN), NICT Koganei (JPN), the SC18 Dallas NICT site, the SC18 UTD site, and the SC18 SCinet NOC) will be demonstrated. We accomplished migration of agents, implemented as virtual machines (VMs), between several edges. Reconfigurable Communication Processors (RCPs) are used for this MEC environment. RCPs, built from many kinds of hardware (LSI, FPGA, NPU, CPU, etc.), provide a reconfigurable resource pool such as 100 Gbps routing/switching systems and computing resources. Cloud servers in Japan and edge servers including RCPs in the USA are connected by JGN and other NRENs. Ethernet-over-WDM SDN orchestration of VM live migration is applied to keep the RTT between a vehicle and its agent program in a VM below 10 ms.
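The 10 ms RTT constraint can be illustrated with a toy decision rule for placing the agent VM. This sketch is purely hypothetical; the experiment's actual orchestration logic is not described at this level of detail.

```python
def pick_edge(rtts_ms, limit_ms=10.0):
    """Return the edge site with the lowest measured vehicle-agent RTT,
    or None if even the best edge violates the 10 ms target
    (in which case no placement satisfies the QoS requirement)."""
    best = min(rtts_ms, key=rtts_ms.get)
    return best if rtts_ms[best] <= limit_ms else None

# hypothetical measurements over a topology like the 5-site experiment
rtts = {"keio": 4.2, "nict-koganei": 6.8, "sc18-dallas": 120.5}
print(pick_edge(rtts))  # keio
```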

T4-1_Fig1

Fig.1 Implementation and experiment configuration in SC18


References:

  1. N. Yamanaka, et al., “Edge/Cloud co-operative Autonomous Driving Vehicular (ADV) control technologies,'' 21st Annual Conference Net-Centric, No. Mon2_3, October 2018.
  2. N. Yamanaka, et al., “Application-Triggered Automatic Distributed Cloud/Network Resource Coordination by Optically Networked Inter/Intra Data-center,'' IEEE/OSA Journal of Optical Communications and Networking, Vol. 10, No. 7, pp. B15-B24, July 2018.



Biography:

Goki Yamamoto received his B.E. degree from Keio University in 2019. He is currently a master's course student in the Graduate School of Science and Technology, Keio University.

T4-2 "OpenShift/Kubernetes Native Infrastructure for Telco edge cloud and network slicing"
Hidetsugu Sugiyama, Red Hat K.K., Japan

Hidetsugu Sugiyama

The Akraino Edge Stack project was established in 2018, and several blueprint projects are in progress, such as the Kubernetes Native Infrastructure for Edge project for 5G mobile and fixed edge computing. Communication service providers have started to explore container network functions, such as the 5G control plane, Cloud RAN, and edge computing, in Kubernetes, the de facto container environment in open-source communities. Rather than legacy edge computing such as CDN on a MEC host node, many industries need an intelligent-edge platform that supports AI and data-lake storage services in a scalable manner, in addition to legacy edge computing. The challenge of edge computing or edge cloud in a Kubernetes environment at the telco edge is management scalability: orchestrating more than 1,000 edge computing and network nodes in a micro hybrid cloud environment in a flexible way.

This session introduces the following OpenShift/Kubernetes solutions and discusses them further.

  • Lifecycle management of stateful containerized network applications and cluster management by the CoreOS Operator Framework
  • On-Demand Network slicing
  • Container Native Storage & Data management service across many clouds



Biography:

Hidetsugu Sugiyama is Chief Architect at Red Hat, focusing on the service provider sector in Japan.
Hidetsugu has been with Red Hat for six years, working on SDN/NFV/Edge Computing solutions development and joint GTM with NFV/SDN partners. He has 30+ years experience in the Information and Communications Technology industry.
Prior to Red Hat, he worked at Juniper Networks for 10 years as a Director of R&D Support, driving JUNOS SDK software development ecosystems and IP optical collaboration development in Japan and APAC. He also worked at service providers including Sprint and UUNET, in both team leadership and individual contributor roles.

T4-3 "5G Orchestration Revealed"
Hervé Guesdon, UBiqube, Ireland

Hervé Guesdon

5G is a paradigm change that will lead to a distributed network and cloud architecture combining legacy infrastructure with a whole new generation of platforms and systems. This network revolution promises 10-100x speed, 1000x more devices, and <1 ms latency for end customers. It will pave the way for the integration of networking, computing, and storage resources into one programmable and unified infrastructure continuum. This change leads to the following orchestration challenges.

Distributed Compute Orchestration:
The next generation of applications and consumption models developed around the 5G transition calls for greater agility in the provisioning and management of all distributed resources (including networking).

Multitenant (Network Slices) Orchestration:
5G will also lead to a dramatic increase in multi-tenant models, as a broader ecosystem of users will be empowered to consume the infrastructure continuum as they see fit. The network slicing mechanism to be implemented at the 5G edge is the perfect illustration of the potential tenants' inflation ahead of us. This architectural transformation should lead to greater capacity-usage, and a more dynamic resource allocation per tenant and per application.

Security Orchestration:
Such benefits will come with their own share of challenges to overcome; not least of all is the greater exposure to security risks that a distributed, hyper-connected and vendor fragmented architecture leads to.
Ensuring proper integration of it all, a clear automation strategy to avoid an exponential OpEx curve, and enough agility to adapt to never-ending changes is no simple endeavor.
The orchestration layer is the technical enabler via which the smooth transition happens (or doesn’t). Ensuring proper orchestration of a multi-vendor, multi-domain and hybrid environment (physical and virtual technologies) across the entire continuum requires a very special software DNA which has been lacking in the vendor-centric world of networking and security vendors.
Considering the infinite combination of vendors and solutions that will emerge at the edge with 5G, and the associated service and usage churn, it is now paramount to disintermediate the orchestration and automation of the services and processes from the configuration of the devices and systems involved, regardless of each vendor’s specific roadmap (or agenda).

T4-3_Fig1

Fig.1



Biography:

I have 20 years of experience in the IP Service Provider Industry.
I started my career at France Telecom R&D labs where I covered IP routing and high speed core networks projects (owning 2 patents). I later focused on the engineering of France Telecom core IP/MPLS backbone and on large scale VPN services deployment and associated management suites (OSS).
My expertise spans IP Networking and Security, wireline and wireless, physical and virtual and associated service management tools (orchestration, assurance, security reporting, performance management, analytics (Big Data), OSS, etc..).
As an early founding member I have been Leading UBIqube’s innovation. One particular area of focus is the next generation management software architecture needed for a smooth migration from legacy networking technologies to SDN, NFV, 5G and beyond. I am an active contributor in several industry groups and forums on the matter.
Albeit often hopping from plane to plane, it is in Grenoble, France, that I live and find the inspiration to challenge some of our industry's established ideas on how 'things should be done'. I must admit the local gastronomy helps me in this daunting endeavour.

T4-4 "Expectations and challenges for disaggregated packet optical converged networks"
Minoru Yamaguchi, Masatoshi Saito, Aki Fukuda, Yoshinori Koike and Hirotaka Yoshioka, NTT, Japan

Minoru Yamaguchi

SDN (software-defined networking) and disaggregated networks continue to spread in data center networks. Those technologies are also expected to be beneficial for carriers' transport networks. In this presentation, we will show our expectations and challenges for commercial deployment of SDN and disaggregation technologies, focusing particularly on promising IP/packet and optical converged transport networks.
IP networks typically consist of three components: service, control, and transport functions. Operators can flexibly select the necessary functions by disaggregating each component as well as separating the three main components. The disaggregation of IP/packet networks has been technically achieved with white-box switches, network operating systems (NOSs), controllers, and orchestrators. Optical transport networks, on the other hand, encompass different aspects, particularly in the transport plane. Owing to the technical challenges, there are only a few possible disaggregation models and interfaces: (1) partial disaggregation and (2) full disaggregation are the typical models. The differences between the two models have to be identified, and carriers need to consider these features for deployment. From that perspective, disaggregation technologies in optical networks still have room for improvement, specifically in horizontal as well as vertical disaggregation.
Nevertheless, the disaggregation models themselves are promising. This presentation shows a few use-cases and clarifies the benefits of each. Partial replacements and partial migrations (Fig.1) are typical scenarios enabled by disaggregation. Moreover, multi-layer use-cases such as multi-interface services are implemented by leveraging IP/packet optical converged networks. Considering the use-cases above, some challenges have to be addressed. One of the main issues is the controller architecture in a multi-layer environment (Fig.2). In conjunction with this, open interfaces such as OpenConfig, Transport-API, and OpenROADM are being developed as standards in optical networks. Carriers are therefore required to identify the different characteristics and design the network architecture based on comprehensive network services and operations. Furthermore, they need to create a roadmap for introducing controllers into commercial networks where element management systems (EMSs) and network management systems (NMSs) are applied for control and management of the networks. This presentation exemplifies some of our challenges regarding these issues.
Finally, we introduce our activity in the Telecom Infra Project (TIP). The Open Optical & Packet Transport group is a project group within TIP that works on the definition of open technologies, architectures, and interfaces in optical and IP networking. Converged Architectures for Network Disaggregation & Integration (CANDI) was established in October 2018, and NTT is leading this sub-group to accelerate related technical developments and help operators in real-world scenarios achieve open optical packet transport networks.


T4-4_Fig1

Fig.1 Use-case of partial migration


T4-4_Fig2

Fig.2 Possible controller architectures




Biography:

Minoru Yamaguchi received the B.S. and M.S. degrees from Osaka Prefecture University, Japan, in 2014 and 2016, respectively. In April 2016, he joined Nippon Telegraph and Telephone Corporation (NTT) as a researcher. His research interests include packet and optical disaggregated networks. He is a member of IEICE.

Special Panel Session
Friday 31, May 2019, 15:20-17:20
Special Panel Session

Theme: Is AI-Empowered Networking the Next Goldmine?
Panelists:
-Kenjiro Cho, IIJ, Japan, "KISS or AI: The Innovator's Dilemma"
-Cengiz Alaettinoglu, Ciena Blue Planet, USA, "Role of Machine-Learning in Intent-Based Network Automation"
-Tatsushi Miyamoto, KDDI, Japan, "What is the sustainable data model for NW operation to adopt?"
-Ved P. Kafle, NICT, Japan, "AI for agile network control and management"
-Takahito Tanimura, Fujitsu, Japan, "Sensory system for optical network: Factorising optical field after fiber transmission"
Organizer/Moderator: Akihiro Nakao, The University of Tokyo, Japan

Biography:
Kenjiro Cho

Kenjiro Cho is Research Director at Internet Initiative Japan, Inc.
He is also a board member of the WIDE project.
His current research interests include Internet data analysis, networking support in operating systems, and cloud networking.






Cengiz Alaettinoglu

Cengiz Alaettinoglu is the Chief Technology Officer of Blue Planet, a division of Ciena. In this role, he leads Blue Planet’s multi-layer network automation efforts and is responsible for the technical direction and architecture of the Blue Planet portfolio.
Before joining Ciena through the Packet Design acquisition, Cengiz focused on real-time SDN analytics and orchestration applications that intelligently satisfied path and bandwidth demands and adapted paths to changing network conditions using real-time analytics. His early experimental work, correlating network performance issues to routing protocol incidents, pioneered the creation of route analytics technology. Prior to Packet Design, Cengiz worked at USC's Information Sciences Institute on the Routing Arbiter project. He was co-chair of the IETF's Routing Policy System Working Group, has been published widely, and is a popular lecturer at industry events worldwide.
Cengiz holds a BS in Computer Engineering from Middle East Technical University, Ankara, and a MS and PhD in Computer Science from the University of Maryland.


Ved P. Kafle

Ved P. Kafle received a B.E. in Electronics and Electrical Communications from Punjab Engineering College (now PEC University of Technology), India, an M.S. in Computer Science and Engineering from Seoul National University, South Korea, and a Ph.D. in Informatics from the Graduate University for Advanced Studies, Japan. He is currently a research manager at the National Institute of Information and Communications Technology (NICT), Tokyo, and concurrently holds a visiting associate professor's position at the University of Electro-Communications, Tokyo. He has been serving as a Co-rapporteur of ITU-T Study Group 13 since 2014. He is an ITU-T Study Group 13 Fellow. His research interests include network architectures, 5G networks, Internet of things (IoT), directory service, smart cities, network-supported automated driving infrastructure, network security, network service automation by AI and machine learning, network function virtualization (NFV), software defined networking (SDN), and resource management. He received the ITU Association of Japan's Encouragement Award and Accomplishment Award in 2009 and 2017, respectively. He received three Best Paper Awards at the ITU Kaleidoscope Academic Conferences in 2009, 2014 and 2018. He is a senior member of IEEE and a member of IEICE.


Akihiro Nakao

Akihiro Nakao received a B.S. (1991) in Physics and an M.E. (1994) in Information Engineering from The University of Tokyo. He was at IBM Yamato Laboratory, Tokyo Research Laboratory, and IBM Texas Austin from 1994 to 2005.
He received an M.S. (2001) and a Ph.D. (2005) in Computer Science from Princeton University. He has been teaching as an associate professor (2005-2014) and as a professor (2014-present) in Applied Computer Science at the Interfaculty Initiative in Information Studies, Graduate School of Interdisciplinary Information Studies, The University of Tokyo.



Closing Session
Friday 31, May 2019, 17:20-17:30
Closing by iPOP Organization Committee Co-Chair
Satoru Okamoto, Keio University, Japan
