The network’s role in improving application security, reliability and efficiency

David Klebanov | March 21, 2011
Access to data center resources needs to be fast, secure and reliable, which is a significant challenge for the data center network infrastructure tasked with adhering to the following principles:

Default gateway functionality is maintained on aggregation/distribution layer devices, which are reachable through the bridge mode service appliances.


Bridge mode appliances are almost always Layer 2 adjacent to the servers behind them, which means there are no routed hops in between. Such MAC-based forwarding simplifies achieving the traffic-flow symmetry needed for stateful service appliances, although distributed data centers and cloud deployments still require attention, especially when first-hop routing protocol localization methods are used.

Direct communication between the servers, which could bypass firewall or IPS inspection, is easily prevented by placing those servers in different VLANs, so that traffic headed toward the default gateway for inter-VLAN routing is inspected as it passes through the service appliances. Private VLANs or port ACLs can also be employed to isolate servers from each other and create Layer 2 security zoning.
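As a rough sketch of that zoning logic (the server names and VLAN assignments below are hypothetical, and this is a conceptual model rather than any vendor's configuration), traffic between servers in the same VLAN stays at Layer 2, while inter-VLAN traffic must cross the default gateway and is therefore seen by the inspection appliance:

# Conceptual model of Layer 2 security zoning: servers in different VLANs
# can only reach each other through the default gateway, so that traffic
# passes the firewall/IPS sitting in the routed path.
SERVER_VLANS = {      # assumed example placement
    "web-01": 10,
    "web-02": 10,
    "db-01": 20,
}

def path_for_flow(src: str, dst: str) -> str:
    """Describe how a server-to-server flow is forwarded in this model."""
    if SERVER_VLANS[src] == SERVER_VLANS[dst]:
        # Same VLAN: direct Layer 2 forwarding, no inspection point in the path
        # (unless private VLANs or port ACLs block the traffic outright).
        return "direct Layer 2, uninspected"
    # Different VLANs: traffic goes to the default gateway for inter-VLAN
    # routing and is inspected by the service appliance along the way.
    return "via default gateway, inspected by firewall/IPS"

print(path_for_flow("web-01", "web-02"))  # direct Layer 2, uninspected
print(path_for_flow("web-01", "db-01"))   # via default gateway, inspected by firewall/IPS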

To summarize, there is no golden rule as to which of the two modes is better; it really comes down to individual use cases or, at times, the personal preferences of the network designers.

Application resilience

Application resilience is achieved by bundling servers (physical or virtual) into server farm constructs reachable through a virtual IP address owned by the load-balancer service appliance. The use of server farms means that the failure of an individual server within the farm goes unnoticed by clients, as the load-balancer simply stops forwarding traffic to the failed server. This behavior guarantees application availability as long as there are "live" servers in the farm, although performance may suffer because fewer servers are processing client requests.
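A minimal sketch of that behavior (server names and the health-probe mechanism are assumptions for illustration) shows the farm continuing to answer on the virtual IP while failed members are simply dropped from the rotation:

# Sketch of server-farm behavior behind a virtual IP (VIP); illustrative only.
class ServerFarm:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)   # the load-balancer's view from health probes

    def mark_failed(self, server):
        # Health probe failed: stop forwarding new traffic to this server.
        self.healthy.discard(server)

    def available(self):
        # Servers still eligible to receive client requests.
        return [s for s in self.servers if s in self.healthy]

farm = ServerFarm(["srv-a", "srv-b", "srv-c"])
farm.mark_failed("srv-b")
# Clients still reach the VIP; only the farm's capacity is reduced.
print(farm.available())   # ['srv-a', 'srv-c']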

Load-balancers implement a variety of algorithms for distributing traffic across the servers in the farm, ranging from simplistic round robin to sophisticated Application Layer intelligence. Encrypted traffic, as is the case with SSL, poses a problem when Application Layer load-balancing is desired, simply because the application payload is encrypted and unavailable for inspection. To address this, encrypted traffic must be decrypted either by the load-balancer itself or by a front-end SSL termination point (aka a reverse proxy), so that the clear-text payload can be inspected for the application layer load-balancing decision.
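For illustration only, the following sketch contrasts a round-robin policy with a simple Layer 7 rule that keys on the decrypted HTTP request path; the server names and the URL-based rule are assumptions, not taken from a particular product:

# Two illustrative distribution policies: round robin versus a Layer 7 rule
# that inspects the request path (only possible on decrypted, clear-text traffic).
import itertools

app_servers = ["app-1", "app-2"]
image_servers = ["img-1", "img-2"]
_rotation = itertools.cycle(app_servers)

def round_robin(_request_path: str) -> str:
    # Ignores the request content entirely; just rotates through the farm.
    return next(_rotation)

def layer7_select(request_path: str) -> str:
    # Requires clear-text payload, i.e. after SSL termination/decryption.
    if request_path.startswith("/images/"):
        return image_servers[hash(request_path) % len(image_servers)]
    return round_robin(request_path)

print(round_robin("/index.html"))       # app-1
print(layer7_select("/images/a.png"))   # one of the image servers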

In data center environments, SSL termination functionality is usually collapsed into the load-balancers themselves, which can operate in either routed or bridged mode, as discussed. In routed mode, once the load-balancing decision is made, traffic is forwarded to the servers by rewriting the destination IP address of the load-balanced packets to match the server chosen by the distribution algorithm. With a bridged mode load-balancer, the same process occurs for the destination MAC address.
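The rewrite step can be sketched as follows (the addresses are invented for illustration): in routed mode the destination IP is swapped from the virtual IP to the selected server, while in bridged mode the destination MAC is rewritten instead:

# Sketch of the forwarding-time rewrite; the addresses below are made up.
def forward(packet: dict, real_ip: str, real_mac: str, mode: str) -> dict:
    pkt = dict(packet)
    if mode == "routed":
        pkt["dst_ip"] = real_ip      # VIP -> selected server's IP address
    elif mode == "bridged":
        pkt["dst_mac"] = real_mac    # rewrite destination MAC, IP header untouched
    return pkt

client_packet = {"dst_ip": "198.51.100.10", "dst_mac": "00:00:5e:00:01:01"}
print(forward(client_packet, "10.0.0.21", "aa:bb:cc:dd:ee:21", "routed"))
print(forward(client_packet, "10.0.0.21", "aa:bb:cc:dd:ee:21", "bridged"))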

Application delivery optimization
