
Distributed Services
Dynamic network resources allocation for high performance transfers using distributed services

Scientific advisor: Prof. Dr. Ing. Nicolae Ţăpuş
Author: Ing. Ramiro Voicu
- 2012 -
Outline

- Current challenges in data-intensive applications
- Thesis objectives
- Fundamental aspects of distributed systems
- Distributed services for dynamic light-path provisioning
- MonALISA framework
- FDT: Fast Data Transfer
- Experimental results
- Conclusions & Future Work

Jan 2012
Ramiro Voicu
Data-intensive applications: current challenges and possible solutions

- Large amounts of data (on the order of tens of petabytes) driven by R&E communities: bioinformatics, astronomy and astrophysics, High Energy Physics (HEP)
- Both the data and the users are quite often geographically distributed
- What is needed:
  - Powerful storage facilities
  - High-speed hybrid networks (100G around the corner), both packet based and circuit switched:
    - OTN paths, λ, OXC (Layer 1)
    - EoS (VCG/VCAT) + LCAS (Layer 2)
    - MPLS (Layer 2.5), GMPLS (?)
  - Proficient data movement services with intelligent scheduling of storage, networks and data transfer applications
Challenges in data-intensive applications

CERN storage manager CASTOR (Dec 2011): 60+ PB of data in ~350M files
Source: CASTOR statistics, CERN IT department, December 2011
DataGrid basic services

A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, S. Tuecke, "The Data Grid: Towards an Architecture for the Distributed Management and Analysis of Large Scientific Datasets"

- Resource reservation and co-allocation mechanisms for both storage systems and other resources such as networks, to support the end-to-end performance guarantees required for predictable transfers
- Performance measurement and estimation techniques for key resources involved in data grid operation, including storage systems, networks, and computers
- Instrumentation services that enable the end-to-end instrumentation of storage transfers and other operations
Thesis objectives

This thesis studies and addresses key aspects of the problem of high-performance data transfers:

- A proficient provisioning system for network resources at Layer 1 (light-paths), able to reroute the traffic in case of problems
- An extensible monitoring infrastructure capable of providing full end-to-end performance data; the framework must be able to accommodate monitoring data from the whole stack: applications and operating systems, network resources, storage systems
- A data transfer tool with dynamic bandwidth adjustment capabilities, which may be used by higher-level data transfer services whenever network scheduling is not possible
Fundamental aspects of distributed systems

- Heterogeneity: an undeniable characteristic (LAN/WAN, IP, 32/64-bit, Java, .NET, Web Services)
- Openness: resource sharing through open interfaces (WSDL, IDL)
- Transparency: an unabridged view presented to the user
- Concurrency: synchronization on shared resources
- Scalability: accommodate an increase in request load without major performance penalty
- Security: firewalls, ACLs, crypto cards, SSL/X.509, dynamic code loading
- Fault tolerance: deal with partial failures without significant performance penalty; redundancy and replication; availability and reliability

The entire work presented here is based on these aspects!
Provisioning System

- A proficient provisioning system for network resources at Layer 1 (light-paths), able to reroute the traffic in case of problems
- A data transfer tool with dynamic bandwidth adjustment capabilities, which may be used by higher-level data transfer services whenever network scheduling is not possible
- An extensible monitoring infrastructure capable of providing full end-to-end performance data; the framework must be able to accommodate monitoring data from the whole stack: applications and operating systems, network resources, storage systems
Simplified view of an optical network topology

- The edges are pure optical links; they may as well cross other network devices
- Both simplex (e.g. video) and duplex devices are connected

(Figure: optical topology connecting H.323 video endpoints and Mass Storage Systems at Site A and Site B)
Cross-connect inside an optical switch

An optical switch is able to perform the "cross-connect" function:

χ : F_IN × F_OUT ⟶ ℤ2, where ℤ2 = {0, 1}

χ(f_i^IN, f_j^OUT) = 1 if f_i^IN is cross-connected to f_j^OUT, and 0 otherwise,
where f_i^IN ∈ F_IN and f_j^OUT ∈ F_OUT

(Figure: an FXC with input fibers f_1^IN … f_n^IN cross-connected to output fibers f_1^OUT … f_n^OUT)
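The cross-connect function χ above can be sketched in code. This is an illustrative model, not code from the thesis: an FXC keeps a partial bijection between input and output ports, and χ is its indicator function.

```python
# Sketch (not from the thesis): modeling an FXC cross-connect state.
# A fiber cross-connect maps each input port to at most one output
# port (and vice versa), so chi is the indicator of a partial bijection.

class FXC:
    def __init__(self, n_ports):
        self.n = n_ports
        self.out_of = {}   # input port -> output port
        self.in_of = {}    # output port -> input port

    def connect(self, f_in, f_out):
        """Create a cross-connect; each port may appear in one connection."""
        if f_in in self.out_of or f_out in self.in_of:
            raise ValueError("port already cross-connected")
        self.out_of[f_in] = f_out
        self.in_of[f_out] = f_in

    def disconnect(self, f_in):
        f_out = self.out_of.pop(f_in)
        del self.in_of[f_out]

    def chi(self, f_in, f_out):
        """chi: F_IN x F_OUT -> {0, 1} from the definition above."""
        return 1 if self.out_of.get(f_in) == f_out else 0

fxc = FXC(4)
fxc.connect(1, 3)
assert fxc.chi(1, 3) == 1
assert fxc.chi(1, 2) == 0
```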
Formal model for the network topology

Definition 7: An FXC topology is a labeled multigraph defined as:

M_F = (O_F, E, l)

where O_F is the set of vertices, F_IN and F_OUT are the sets of input and output ports, E is the set of edges and l is the labeling function for the edges:

l : E ⟶ O_F × F_OUT × O_F × F_IN
l(e_ij(uv)) = <u, f_iu^OUT, v, f_jv^IN>, where
u, v ∈ O_F are the source and destination of the edge,
f_iu^OUT ∈ F_u^OUT is the source port in u and
f_jv^IN ∈ F_v^IN is the destination port in v
Optical light path inside the topology

Definition 10: A path in the multigraph M_F is a non-empty multigraph of the form:

P = (O_P, E_P, l_P), where O_P ⊆ O_F, E_P ⊆ E
O_P = {o_0, o_1, …, o_k}, with all o_i distinct
E_P = {e_0, e_1, …, e_(k-1)}
l_P : E_P ⟶ O_P × F_OUT × O_P × F_IN, with F_P^OUT ⊆ F_OUT, F_P^IN ⊆ F_IN
l_P(e_i) = <o_i, f_(o_i)^OUT, o_(i+1), f_(o_(i+1))^IN>, where each port appears in at most one edge label

(Figure: a light path between the Mass Storage Systems at Site A and Site B)
Important aspects of light paths in the multigraph

Lemma: Let ℙ = {P_1, P_2, …, P_p} be the set of all paths in the multigraph M_F, p being the number of paths, and let E_i be the set of edges for P_i; then:

⋂ (i = 1 … p) E_i = ∅, for p ≥ 2, where p = |ℙ|

All optical paths in the FXC multigraph are edge-disjoint.
Single source shortest path problem

- Similar approach to the link-state routing protocols (IS-IS, OSPF)
- Dijkstra's algorithm combined with the lemma's results
  - Edges involved in a light path are marked as unavailable for path computation

(Figure: weighted optical topology between Site A and Site B used for the shortest-path computation)
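The combination above can be sketched as follows. This is an assumed implementation, not the thesis code: standard Dijkstra over the FXC graph, with edges already held by allocated light paths filtered out, per the edge-disjointness lemma.

```python
# Sketch (assumed implementation): Dijkstra's algorithm where edges
# already used by allocated light paths are marked unavailable.
import heapq

def shortest_path(graph, src, dst, used_edges=frozenset()):
    """graph: {node: [(neighbor, weight, edge_id), ...]}"""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w, eid in graph.get(u, []):
            if eid in used_edges:      # edge held by an existing light path
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = (u, eid)
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")
    # Reconstruct the path as a list of edge ids
    path, node = [], dst
    while node != src:
        node, eid = prev[node]
        path.append(eid)
    return list(reversed(path)), dist[dst]

g = {"A": [("B", 1, "e1"), ("C", 5, "e2")],
     "B": [("C", 1, "e3")],
     "C": []}
path, cost = shortest_path(g, "A", "C")
assert path == ["e1", "e3"] and cost == 2
# After e3 is allocated to a light path, rerouting uses the direct edge
path2, cost2 = shortest_path(g, "A", "C", used_edges={"e3"})
assert path2 == ["e2"] and cost2 == 5
```

Marking used edges as unavailable is exactly what makes rerouting after a fiber cut a re-run of the same computation on the reduced edge set.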
Simplified architecture of a distributed end-to-end optical path provisioning system

- Monitoring, controlling and communication platform based on MonALISA
- OSA (Optical Switch Agent): runs inside the MonALISA service
- OSD (Optical Switch Daemon): runs on the end-host
A more detailed diagram
http://monalisa.caltech.edu/monalisa__Service_Applications__Optical_Control_Planes.htm
OSA: Optical Switch Agent components

- Message-based approach built on the MonALISA infrastructure
- NE Control: TL1 cross-connects
- Topology Manager: local view of the topology; listens for remote topology changes and propagates local changes
- Optical Path Computation: algorithm implementation
OSA: Optical Switch Agent components (2)

- Distributed Transaction Manager
  - Distributed 2PC for path allocation
  - All interactions are governed by a timeout mechanism
  - Coordinator: the OSA which received the request
- Distributed Lease Manager
  - Once the path is allocated, each resource gets a lease; heartbeat approach
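The transaction manager's flow can be sketched as a two-phase commit. This is a hypothetical, much-simplified illustration (the `FakeOSA`, `prepare`/`commit`/`rollback` names are invented for the sketch): the coordinator asks every OSA on the path to reserve its cross-connect, then commits everywhere, and any refusal or timeout rolls everything back.

```python
# Sketch (hypothetical, simplified): two-phase commit for light-path
# allocation. Every remote interaction is bounded by a timeout; any
# timeout or veto aborts the allocation.

def allocate_path(participants, timeout=5.0):
    """Phase 1: ask every OSA on the path to reserve its cross-connect.
    Phase 2: commit everywhere, or roll back on any failure/timeout."""
    prepared = []
    try:
        for osa in participants:
            # prepare() would be a remote, timeout-bounded call in practice
            if not osa.prepare(timeout=timeout):
                raise RuntimeError(f"{osa.name} refused to prepare")
            prepared.append(osa)
        for osa in participants:
            osa.commit(timeout=timeout)
        return True
    except Exception:
        for osa in prepared:
            osa.rollback(timeout=timeout)
        return False

class FakeOSA:
    def __init__(self, name, ok=True):
        self.name, self.ok, self.state = name, ok, "idle"
    def prepare(self, timeout):
        self.state = "prepared" if self.ok else "idle"
        return self.ok
    def commit(self, timeout):
        self.state = "committed"
    def rollback(self, timeout):
        self.state = "rolled_back"

good = [FakeOSA("gva"), FakeOSA("ams")]
assert allocate_path(good) is True
assert all(o.state == "committed" for o in good)

bad = [FakeOSA("gva"), FakeOSA("chi", ok=False)]
assert allocate_path(bad) is False
assert bad[0].state == "rolled_back"
```

The lease manager then keeps the committed reservation alive only as long as heartbeats arrive, so a crashed client cannot hold a light path forever.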
MonALISA: Monitoring Agents using a Large Integrated Services Architecture

- A proficient provisioning system for network resources at Layer 1 (light-paths), able to reroute the traffic in case of problems
- An extensible monitoring infrastructure capable of providing full end-to-end performance data; the framework must be able to accommodate monitoring data from the whole stack: applications and operating systems, network resources, storage systems
- A data transfer tool with dynamic bandwidth adjustment capabilities, which may be used by higher-level data transfer services whenever network scheduling is not possible
MonALISA architecture

- Higher-level services & clients: regional or global high-level services, repositories & clients
- Proxy services
- MonALISA services (agents)
- JINI lookup services (public, secure)
- Secure and reliable communication; dynamic load balancing; scalability & replication; AAA for clients
- Agents lookup & discovery; information gathering; customized aggregation and filters
- Discovery and registration based on a lease mechanism

Fully distributed system with NO single point of failure
MonALISA implementation challenges

- Major challenges towards a stable and reliable platform were I/O related (disk and network)
- Network perspective: "The Eight Fallacies of Distributed Computing" (Peter Deutsch, James Gosling)
  1. The network is reliable
  2. Latency is zero
  3. Bandwidth is infinite
  4. The network is secure
  5. Topology doesn't change
  6. There is one administrator
  7. Transport cost is zero
  8. The network is homogeneous
- Disk I/O: distributed network file systems, silent errors, responsiveness
Addressing challenges

- All remote calls are asynchronous and have an associated timeout
- All interaction between components is intermediated by queues served by one or more thread pools
- I/O MAY fail; the most challenging failures are silent ones; use watchdogs for blocking I/O
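The three rules above can be sketched together. This is an assumed pattern, not MonALISA source: components exchange work through a bounded queue served by a thread pool, and every blocking call is guarded by a timeout that acts as a watchdog against silent hangs.

```python
# Sketch (assumed pattern): components hand work to each other through
# bounded queues served by a thread pool, and every blocking call is
# guarded by a timeout-based watchdog.
import queue
from concurrent.futures import ThreadPoolExecutor, TimeoutError

tasks = queue.Queue(maxsize=100)      # bounded: back-pressure, not OOM
pool = ThreadPoolExecutor(max_workers=4)

def slow_io(x):
    return x * 2                      # stands in for a blocking I/O call

def submit(x):
    tasks.put(x)                      # producer side: just enqueue

def drain(call_timeout=2.0):
    """Consumer side: each dequeued task becomes an async call with a
    timeout; a hung I/O call cannot block the rest of the system."""
    results = []
    while not tasks.empty():
        x = tasks.get()
        future = pool.submit(slow_io, x)
        try:
            results.append(future.result(timeout=call_timeout))
        except TimeoutError:
            future.cancel()           # watchdog: give up on silent hangs
            results.append(None)
    return results

for i in range(3):
    submit(i)
assert drain() == [0, 2, 4]
```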
ApMon: Application Monitoring

- Lightweight library for application instrumentation to publish data into MonALISA
- UDP based, XDR encoded
- Simple API provided for: Java, C/C++, Perl, Python
- Easily evolving
- Initial goal: job instrumentation in CMS (CERN experiment) to detect memory leaks
- Also provides full host monitoring in a separate thread (if enabled)
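To make "UDP based, XDR encoded" concrete, here is an illustrative sketch, not the real ApMon wire format (which adds headers and versioning): XDR encodes values big-endian on 4-byte boundaries, and the datagram is fire-and-forget.

```python
# Sketch (illustrative, not the real ApMon wire format): publishing a
# monitoring parameter as an XDR-style UDP datagram.
import socket
import struct

def xdr_string(s):
    """XDR string: 4-byte length, bytes, zero-padded to a multiple of 4."""
    data = s.encode()
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

def xdr_double(x):
    return struct.pack(">d", x)       # 8-byte IEEE 754, big-endian

def send_param(cluster, node, name, value, host="127.0.0.1", port=8884):
    payload = (xdr_string(cluster) + xdr_string(node) +
               xdr_string(name) + xdr_double(value))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))  # fire-and-forget, like ApMon
    sock.close()
    return payload

pkt = send_param("CMS_jobs", "node01", "mem_used_mb", 512.0)
# "CMS_jobs" is 8 bytes: 4-byte length prefix, no padding needed
assert pkt[:4] == struct.pack(">I", 8)
assert len(pkt) % 4 == 0
```

UDP here is a deliberate design choice: a monitored job must never block or fail because the monitoring service is slow or down.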
MonALISA – short summary of features

The MonALISA package includes:
- Local host monitoring (CPU, memory, network traffic, disk I/O, processes and sockets in each state, LM sensors), log file tailing
- SNMP generic & specific modules
- Condor, PBS, LSF and SGE (accounting & host monitoring), Ganglia
- Ping, tracepath, traceroute, pathload and other network-related measurements
- TL1, network devices, Ciena, optical switches
- XDR-formatted UDP messages (ApMon)
- New modules can be easily added by implementing a simple Java interface, or by calling an external script
- Agents and filters can be used to correlate, collaborate and generate new aggregate data
MonALISA Today

- Running 24 x 7 at ~360 sites
- Collecting ~3 million "persistent" parameters in real time; 80 million "volatile" parameters per day
- Update rate of ~35,000 parameter updates/sec
- Monitoring:
  - 40,000 computers
  - > 100 WAN links
  - > 8,000 complete end-to-end network path measurements
  - tens of thousands of Grid jobs running concurrently
- Controls job summation, different central services for the Grid, EVO topology, FDT …
- The MonALISA repository system serves ~8 million user requests per year
- 10 years since the project started (Nov 2011)
FDT: Fast Data Transfer

- A proficient provisioning system for network resources at Layer 1 (light-paths), able to reroute the traffic in case of problems
- An extensible monitoring infrastructure capable of providing full end-to-end performance data; the framework must be able to accommodate monitoring data from the whole stack: applications and operating systems, network resources, storage systems
- A data transfer tool with dynamic bandwidth adjustment capabilities, which may be used by higher-level data transfer services whenever network scheduling is not possible
FDT client/server interaction

- Control connection / authorization
- Data channels / sockets
- NIO direct buffers and native OS operations on both ends
- The receiver restores the files from buffers
- Independent threads per device
FDT features

- Out-of-the-box high performance using standard TCP over multiple streams/sockets
- Written in Java; runs on all major platforms
- Single jar file (~800 KB)
- No extra requirements other than Java 6
- Flexible security
  - IP filter & SSH built-in
  - Globus-GSI, GSI-SSH: external libraries needed in the CLASSPATH; support is built-in
- Pluggable file system "providers" (e.g. non-POSIX FS)
- Dynamic bandwidth capping (can be controlled by LISA and MonALISA)
FDT features (2)

- Different transport strategies:
  - blocking (1 thread per channel)
  - non-blocking (selector + pool of threads)
- On-the-fly MD5 checksum on the reader side
  - On the writer side it MUST be done after data is flushed to the storage (no need for BTRFS and ZFS?)
- Configurable number of streams and threads per physical device (useful for distributed FS)
- Automatic updates
- User-defined loadable modules for pre- and post-processing, to provide support for dedicated Mass Storage systems, compression, dynamic circuit setup, …
- Can be used as a network testing tool (/dev/zero → /dev/null memory transfers, or the -nettest flag)
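The "on-the-fly" checksum idea can be sketched as follows. This is an illustration, not FDT source: the digest is updated as each block leaves the disk, so no second pass over the file is needed on the reader side.

```python
# Sketch (illustrative, not FDT source): an on-the-fly MD5 on the
# reader side, folding each block into the digest as it is read.
import hashlib
import io

class ChecksummingReader:
    """Wraps a stream; folds every block read into a running MD5."""
    def __init__(self, stream, block_size=4 * 1024 * 1024):
        self.stream = stream
        self.block_size = block_size
        self.md5 = hashlib.md5()

    def blocks(self):
        while True:
            block = self.stream.read(self.block_size)
            if not block:
                return
            self.md5.update(block)    # checksum piggybacks on the read
            yield block

    def hexdigest(self):
        return self.md5.hexdigest()

data = b"x" * (10 * 1024)
reader = ChecksummingReader(io.BytesIO(data), block_size=4096)
received = b"".join(reader.blocks())
assert received == data
assert reader.hexdigest() == hashlib.md5(data).hexdigest()
```

On the writer side the same trick would checksum only what reached the buffers, which is why the slide insists the writer's digest be computed after the data is flushed to storage.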
Major FDT components

- Session
  - Security
  - External control
- Disk I/O
- FileBlock queue
- Network I/O
Session Manager

- Session bootstrap
- CLI parsing
- Initiates the control channel
- Associates a UUID with the session & files
- Security & access
  - IP filter
  - SSH
  - Globus-GSI
  - GSI-SSH
- Control interface
  - Higher-level services
  - MonA(LISA)
Disk I/O

- FS provider
  - POSIX (embedded)
  - Hadoop (external)
- Physical partition identification
- Each partition gets a pool of threads
  - one thread for normal devices
  - multiple threads for distributed network FS
- Builds the FileBlock (session UUID, file UUID, offset, data length)
- Mon interface: ratio % = disk time / time waiting in the network queue
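The FileBlock unit and the per-partition pools can be sketched as below. The names are hypothetical, chosen to mirror the slide, not FDT's actual classes: a FileBlock carries enough metadata for the writer to place the payload without any ordering guarantee, and a distributed FS partition gets several reader threads while a normal device gets one.

```python
# Sketch (hypothetical names): the FileBlock unit that disk-reader
# threads enqueue for the network writers, and a per-partition pool.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class FileBlock:
    session_uuid: str
    file_uuid: str
    offset: int
    payload: bytes     # 'data length' is implied by len(payload)

def pool_for_partition(is_distributed_fs, workers=4):
    """One thread per normal device (seeks are expensive); several for
    a distributed FS, where concurrent reads hit different servers."""
    return ThreadPoolExecutor(max_workers=workers if is_distributed_fs else 1)

local = pool_for_partition(False)
hadoop = pool_for_partition(True)

blk = FileBlock("sess-1", "file-9", offset=4096, payload=b"\x00" * 1024)
assert blk.offset == 4096 and len(blk.payload) == 1024
```

Because every block names its session, file and offset, blocks from many files and threads can share one queue and still be written back in any order.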
Network I/O

- Shared queue with Disk I/O
- Mon interface
  - per-channel throughput
  - ratio % = net time / time waiting in the disk queue
- BW manager
  - token-based approach on the writer side: tokens accumulate as rateLimit * (currentTime - lastExecution)
- I/O strategies
  - BIO: 1 thread per data stream
  - NBIO: event-based pool of threads (scalable, but issues on older Linux kernels …)
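The bandwidth manager's refill formula above is the core of a token bucket, sketched here. This is an assumed implementation derived from that formula, not FDT code; the one-second burst cap is an added assumption for the sketch.

```python
# Sketch (assumed from the formula above): a token-based bandwidth cap.
# Each write spends tokens; the bucket refills by
# rate_limit * (current_time - last_execution) bytes.
import time

class TokenBucket:
    def __init__(self, rate_limit_bps, clock=time.monotonic):
        self.rate = rate_limit_bps      # bytes per second
        self.tokens = 0.0
        self.clock = clock
        self.last = clock()

    def _refill(self):
        now = self.clock()
        self.tokens = min(self.rate,    # assumption: cap burst at ~1 s
                          self.tokens + self.rate * (now - self.last))
        self.last = now

    def try_send(self, nbytes):
        """Spend tokens for nbytes; False means the writer must wait."""
        self._refill()
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# A fake clock makes the refill arithmetic deterministic
t = [0.0]
bucket = TokenBucket(rate_limit_bps=1000, clock=lambda: t[0])
assert not bucket.try_send(500)   # no time elapsed, no tokens yet
t[0] = 1.0                        # 1 s later: 1000 tokens available
assert bucket.try_send(500)
assert bucket.try_send(500)
assert not bucket.try_send(1)     # bucket drained
```

Because the cap is just a number read before each refill, an external controller such as LISA or MonALISA can adjust `rate` at runtime, which is exactly the dynamic capping the deck describes.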
Experimental results
USLHCNet: High-speed trans-Atlantic network

- CERN to US
  - FNAL
  - BNL
- 6 x 10G links
- 4 PoPs
  - Geneva
  - Amsterdam
  - Chicago
  - New York
- The core is based on Ciena CD/CI (Layer 1.5) virtual circuits
USLHCNet distributed monitoring architecture

Each circuit is monitored at both ends by at least two MonALISA services (at AMS, GVA, NYC and CHI); the monitored data is aggregated by global filters in the repository.
High availability for link status data

The second link from the top, AMS-GVA 2 (SURFnet), was commissioned in Dec 2010.
FDT Throughput tests – 1 Stream
FDT: Local Area Network Memory-to-Memory performance tests

Most recent tests from SuperComputing 2011
Same performance as IPERF
FDT: Local Area Network Memory-to-Memory performance tests

Same CPU usage
WAN test over an OTU-4 (100 Gbps) link @ SC11
Active End to End Available Bandwidth between all
the ALICE grid sites
ALICE : Global Views, Status & Jobs
Active End to End Available Bandwidth between all
the ALICE grid sites with FDT
Controlling Optical Planes: Automatic Path Recovery

- 200+ MBytes/sec from a 1U node
- FDT transfer from CERN (Geneva) to CALTECH (Pasadena) over USLHCNet, Internet2, StarLight and MAN LAN
- "Fiber cut" emulations (4 in total): the traffic moves from one transatlantic line to the other
- The FDT transfer (CERN - CALTECH) continues uninterrupted; TCP fully recovers in ~20 s
Real-time monitoring and controlling in the MonALISA GUI Client

- Controlling
- Port power monitoring

Glimmerglass switch example
Future work

- For the network provisioning system: possibility to integrate OpenFlow-enabled devices
- FDT: new features from the Java 7 platform, like asynchronous I/O and the new file system provider
- MonALISA: routing algorithm for optimal paths within the proxy layer
Conclusions

- The challenge of data-intensive applications must be addressed from an end-to-end perspective, which includes: end-host/storage systems, networks, and data transfer and management tools
- A key aspect is proficient monitoring, which must provide the necessary feedback to higher-level services
- The data services should augment current network capabilities for proficient data movement
- Data transfer tools should provide dynamic bandwidth adjustment capabilities whenever networks cannot provide this feature
Contributions

- Design and implementation of a new distributed provisioning system
  - Parallel provisioning
  - No central entity
  - Distributed transaction and lease manager
  - Automatic path rerouting in case of LOF (Loss of Light)
- Overall design and system architecture of the MonALISA system
  - Addressed concurrency, scalability and reliability
  - Monitoring modules for full host monitoring (CPU, disk, network, memory, processes)
  - Monitoring modules for telecom devices (TL1): optical switches (Glimmerglass & Calient), Ciena Core Director
  - Design of ApMon and initial receiver module implementation
  - Design and implementation of a generic update mechanism (multi-thread, multi-stream, crypto hashes)
Contributions (2)

- Design and main development of FDT, a high-performance data transfer tool with dynamic bandwidth capping capabilities
  - Successfully used during several rounds of SC
  - Fully integrated with the provisioning system
  - Integrated with higher-level services like LISA and MonALISA
- Results published in articles at international conferences
- Member of the team that won the Innovation Award from CENIC in 2006 and 2008, and the SuperComputing Bandwidth Challenge in 2009
Thank you!
http://cern.ch/ramiro/thesis
