Data Center Ethernet (DCE)
Data center requirements are increasingly driven by changes in data traffic, which is dominated by server farms, server-to-server traffic, and newer IT architectures such as grid computing and cloud computing. Traffic is further increased by regular backups, in which data must be transferred to storage devices without loss, and by daily serverless backups performed directly between storage units.
The increase in data traffic and the changing traffic patterns impose specific transmission quality requirements: lossless data transmission, for which classic Ethernet is not exactly predestined, low latency, high bandwidth, and scalability. For these data center applications, Fibre Channel over Ethernet (FCoE), which provides reliable data transport without frame loss, and Priority-based Flow Control (PFC), which makes Ethernet lossless by pausing individual traffic priorities and thus avoids retransmissions even under network congestion, were developed.
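On the wire, PFC is an extension of the classic Ethernet PAUSE mechanism: a MAC control frame (EtherType 0x8808) with opcode 0x0101 carries a priority-enable vector and eight per-priority pause timers, measured in quanta of 512 bit times. The following sketch builds such a frame; the function name and the example source MAC are illustrative, but the field layout follows IEEE 802.1Qbb:

```python
import struct

# Reserved multicast address used for MAC control (PAUSE/PFC) frames.
PFC_DEST_MAC = bytes.fromhex("0180c2000001")
ETHERTYPE_MAC_CONTROL = 0x8808
OPCODE_PFC = 0x0101  # 802.1Qbb priority-based flow control

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build an unpadded PFC MAC control frame.

    pause_quanta maps a priority (0-7) to a pause time in quanta of
    512 bit times; priorities not listed remain disabled.
    """
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_quanta.items():
        if not 0 <= prio <= 7:
            raise ValueError("priority must be 0-7")
        enable_vector |= 1 << prio
        timers[prio] = quanta
    return (
        PFC_DEST_MAC
        + src_mac
        + struct.pack("!HHH", ETHERTYPE_MAC_CONTROL, OPCODE_PFC, enable_vector)
        + struct.pack("!8H", *timers)  # eight per-priority pause timers
    )

# Example (hypothetical source MAC): pause priority 3, often used for the
# lossless FCoE class, for the maximum of 0xFFFF quanta.
frame = build_pfc_frame(bytes.fromhex("020000000001"), {3: 0xFFFF})
```

Because the pause applies only to the flagged priorities, other traffic classes on the same link keep flowing, which is the key difference from the all-or-nothing 802.3x PAUSE.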
Data Center Ethernet (DCE) is a concept for the efficient networking of data centers, mainframes, servers, storage systems and peripherals. This powerful concept is comparable to Converged Enhanced Ethernet (CEE) and is based on Ethernet standards, extended to meet key requirements of modern data centers in terms of scalability, convergence and flexibility. Its FCoE component was standardized in 2008 by the ANSI T11 committee. Data Center Ethernet offers a high data rate of 10 Gbps and a low latency of less than 1 µs; it is deterministic and supports service classes. Because of its low latency it is also referred to as Low Latency Ethernet (LLE).
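FCoE itself is a thin encapsulation: a complete Fibre Channel frame is carried inside an Ethernet frame with EtherType 0x8906, preceded by a 14-byte FCoE header that ends in the start-of-frame (SOF) code and followed by an end-of-frame (EOF) trailer. A minimal sketch of this layout, assuming the FC-BB-5 framing; the SOF/EOF code points are passed in as parameters rather than hard-coded, since their values are defined in the standard:

```python
import struct

ETHERTYPE_FCOE = 0x8906

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes,
                     fc_frame: bytes, sof: int, eof: int) -> bytes:
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame.

    Layout: Ethernet header, 14-byte FCoE header (version + reserved
    bits, ending in the SOF code), the FC frame, then the EOF byte
    plus reserved padding.
    """
    fcoe_header = bytes(13) + bytes([sof])   # version 0, reserved, SOF
    trailer = bytes([eof]) + bytes(3)        # EOF, reserved
    return (dst_mac + src_mac
            + struct.pack("!H", ETHERTYPE_FCOE)
            + fcoe_header + fc_frame + trailer)

# Illustrative call with placeholder MACs and a dummy 28-byte FC frame;
# real SOF/EOF values would come from the FC-BB-5 code-point tables.
fcoe = encapsulate_fcoe(bytes(6), bytes(6), bytes(28), 0x2E, 0x41)
```

Because the FC frame travels unchanged, lossless delivery must come from the underlying Ethernet, which is exactly what PFC provides.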
DCE concepts require flexible algorithms for scheduling bandwidth between lossy and lossless traffic, such as Enhanced Transmission Selection (ETS, IEEE 802.1Qaz). DCE must also support secure end-to-end traffic as well as link-level traffic and priority control.