F5 Error – bcm56xxd

Oct 20 10:08:16 local/lb01 err bcm56xxd[3506]: 012c0011:3: I2C SFP operation failed: dev:0xee at :228, errmsg(Operation timed out)

This specific error is benign. bcm56xxd has a thread that checks every 5 seconds whether any units have been plugged into or unplugged from the SFP and XFP ports by accessing the GPIO pins attached to these ports via the I2C bus.

The plug check does not disrupt the traffic flow on an existing port.

If the system is busy, some I2C operations may time out as the I2C bus becomes overloaded.

bcm56xxd – The switch daemon configures and controls the Broadcom 56xx switch chips.

Reference:

SOL13444

Questions to consider – Buying an ADC

These are some of the questions you would want to analyze and answer before buying an ADC:

What is the load requirement?

This is defined in terms of the maximum number of concurrent connections/requests that an ADC can handle.

Another factor to consider is the rate of connections/requests. If your application generates short bursts of traffic, the ADC should be able to handle them.

What kind of protocol do you intend to load balance?

Most customers load balance HTTP traffic. If you intend to load balance specific applications like Citrix/Xen-related apps, it may be better to buy a Citrix ADC like NetScaler.

Does your application require “persistence”, and if so, what kind of “persistence” do you require?

Persistence is the ability of the load balancer to send a client connection request to the same server that handled the previous request, based on information presented by the client connection. This information can be the source IP address, a cookie, or any information available in the incoming packet, such as a JSESSIONID.

Persistence is generally required for applications like a checkout cart. For example, after a client adds merchandise to the checkout cart, subsequent HTTP requests/TCP connections should be sent to the server that holds the cart information in order to complete the transaction. If a connection/request is sent to a different server, that server's checkout cart may not have the relevant information.
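To make the idea concrete, here is a minimal sketch of source-IP persistence layered on top of round-robin load balancing. The class and method names (`PersistenceTable`, `pick_server`) are invented for illustration and are not an F5 API.

```python
import itertools

class PersistenceTable:
    """Illustrative source-IP persistence on top of round-robin."""

    def __init__(self, servers):
        self._rr = itertools.cycle(servers)   # round-robin over the pool
        self._table = {}                      # source IP -> chosen server

    def pick_server(self, source_ip):
        # First request from this client: pick the next server round-robin
        # and remember the choice; later requests reuse the same server.
        if source_ip not in self._table:
            self._table[source_ip] = next(self._rr)
        return self._table[source_ip]

pool = PersistenceTable(["web1", "web2", "web3"])
first = pool.pick_server("203.0.113.7")
# Every later request from 203.0.113.7 lands on the same server.
assert pool.pick_server("203.0.113.7") == first
```

A real ADC ages these entries out with a persistence timeout; this sketch keeps them forever for simplicity.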

Do you require Layer 7 load balancing, like redirects or load balancing based on HTTP header/content?

One of the differentiating factors between the newer-generation ADC and the older Load Balancer is their ability to handle L7 functions. Load Balancers don’t provide as much L7 functionality as ADCs.

Certain functions like redirects can be implemented on the ADC instead of the servers. This will reduce the round trip time and the latency involved in serving the application, and at the same time make administration easier, as the redirect configuration is implemented at one point instead of on multiple servers.

Are you planning to terminate the SSL certificate/key on the ADC and send unencrypted traffic to the servers?

Implementing SSL termination on the load balancer will reduce the load on the servers, as SSL processing can be resource intensive. From a management perspective, it is easier to replace the certificate/key on a single device (the ADC) than on a multitude of servers. With the flood of SSL vulnerabilities, any changes required to the SSL ciphers or versions can be made in one location.

If load balancing requires L7 functionality, the SSL cert/key has to be terminated on the ADC, as the encrypted traffic has to be decrypted before any L7 functionality can be applied.

Does your business require specific SSL ciphers/versions for regulatory or security reasons?

SSL processing is done in hardware or software. For any ADC, some ciphers are handled at the hardware level and some at the software level. Hardware SSL processing generally tends to be more efficient than software SSL processing.

SSL Keys – Newer ADCs are optimized for 2K (2048-bit) keys, while older ones can only handle 1K (1024-bit) keys efficiently. Newer F5 platforms are better optimized for 2K SSL keys than the older F5 LTM 1600 & 3600.

Do you require High Availability?

An ADC can be a single point of failure. Using ADCs in a high-availability setup provides redundancy.

Do you require any specific performance features?

Performance features include caching, compression, and support for newer protocols like SPDY.

Do you require any other functionality?

Functions like application acceleration, application firewall, and IPv6 gateway can be implemented on the ADC in addition to normal load balancing.

Major ADC Vendors:

Top 3 based on Gartner 2013:

BigIP F5

Citrix NetScaler

Radware

Others:

Riverbed, A10, Brocade, Barracuda etc.

ADC Functions:

Scalability:

This provides the ability to add/remove servers with minimal disruption to ongoing traffic processing.

High Availability:

Do you require 2 (or more) Load Balancers set up such that a “standby” load balancer takes over the active load balancer’s function if it fails?

Performance:

This is not just about values like connections/s or throughput. You would also have to consider the feature set available to maximize application delivery, such as caching, compression, and newer protocol support like SPDY.

Security:

The newer load balancers provide greater protection against certain Denial of Service (DoS) attacks and offer security features like a single authentication portal and a web application firewall (WAF).

RackConnect Versions

RackConnect:

RackConnect is a Rackspace product that provides connectivity between Dedicated, Single Tenant Environment and an Openstack based Public Cloud Environment.

[Image: RackConnect connectivity diagram, taken from the RackConnect page.]

RackConnect Versions:

RackConnect v1.0 is the first version of RackConnect, and it provides simple connectivity between the Dedicated, Single Tenant Environment and the Cloud Servers via ServiceNet.

After a cloud server is spun up, manual changes have to be made to the cloud server’s route table, interface, and software firewall/iptables in order to force the traffic to flow to dedicated servers.

RackConnect v2.0 is the next iteration of the RC product series. The major advantage of RC v2 compared to RC v1 is automation: changes to the route table, cloud server interface, software firewall/iptables, and IP assignment are handled by the automation. RC v2 was also essential for utilizing the next-generation “performance cloud servers” offered by Rackspace.

RackConnect v2.1 is similar to RC v2 but consists of RC v1 configurations that were migrated to RC v2.

RackConnect v3.0 is the latest in the RackConnect series. RC v3 utilizes SDN & Cloud Networks while the previous versions utilized the ServiceNet network offered for access to the different “Services” provided within the Datacenter.

Comparison of RC v3 & RC v2 is seen here.

As of Q4 2014, new customers will be provided with RC v3 features. There are quite a few customers on RC v2 and RC v2.1. RC v1 is almost non-existent.

Reference:

RackConnect

BigIP F5 – Poodle Vulnerability

Poodle was initially targeted against SSLv3. Quite a few websites fixed this issue at the server and client side by disabling SSLv3. There is a variation of Poodle for TLS with the following CVE ID: CVE-2014-8730. For a brief description of the issue: Poodle on TLS

This is known to affect load balancers like the F5. F5 recommends a code upgrade. As of now (Dec 09, 2014), the recommendation is to upgrade to at least 10.2.4 with Hotfix 10 if you are running a 10.x code version, or to one of the 11.x code versions that is not vulnerable, per F5 documentation: SOL15882

F5 has stated that the code upgrade is the best possible option available. 

BigIP F5:

If you are running F5 LTM on an 11.x code version, using 11.5.1 with HF6 or above is recommended.

If you are running F5 LTM on a 10.x code version and want to stay on 10.x, you would have to perform the following tasks in order to get an “A-” rating in the Qualys SSL test:

  • Upgrade code to 10.2.4 HF10
  • After the code upgrade to 10.2.4 HF10, complete the following steps in order to remove RC4.
(tmos.ltm)# create profile client-ssl PARENT-SSL-SECURE ciphers 'HIGH:MEDIUM:!SSLv3:!RC4'

(tmos.ltm)# modify profile client-ssl CUSTOM-CLIENT-SSL defaults-from PARENT-SSL-SECURE

The Qualys rating is “B” after the code upgrade alone, and “A-” after the code upgrade plus removing RC4.

After making the code upgrade & removing RC4 cipher, it is recommended that you test your site for any vulnerabilities at the Qualys Site.

The above commands create a “Parent” client SSL profile – “PARENT-SSL-SECURE” – that disables SSLv3 and RC4 and orders the ciphers from high to medium strength. Please note that any client initiating a connection on SSLv3 will be dropped. Usually, clients running Windows XP with IE6 will initiate SSLv3 connections.

In order to know the total SSLv3 connections, you can run this command:

(tmos.ltm)# show profile client-ssl CUSTOM-CLIENT-SSL | grep Protocol
 Protocol
 SSL Protocol Version 2                       0
 SSL Protocol Version 3                      156
 TLS Protocol Version 1.0                   16.1K
 TLS Protocol Version 1.2                   11.6K
 DTLS Protocol Version 1                      0

Based on the above output, the client SSL profile configured on the Virtual Server handling SSL traffic has received 156 SSLv3 connection requests.
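If you want to track these counters programmatically, the output above is easy to parse. The sketch below is illustrative; in particular, the handling of the "K" suffix (K = thousands, so 16.1K ≈ 16100) is an assumption about how tmsh abbreviates large counters.

```python
def parse_protocol_stats(output):
    """Parse tmsh 'show profile client-ssl ... | grep Protocol' style output
    into {protocol name: connection count}."""
    counts = {}
    for line in output.splitlines():
        parts = line.rsplit(None, 1)
        # Skip header lines like " Protocol" that have no numeric column.
        if len(parts) != 2 or not parts[1][0].isdigit():
            continue
        name, value = parts
        if value.endswith("K"):
            # Assumed tmsh abbreviation: 16.1K -> 16100
            counts[name.strip()] = round(float(value[:-1]) * 1000)
        else:
            counts[name.strip()] = int(value)
    return counts

sample = """ Protocol
 SSL Protocol Version 2                       0
 SSL Protocol Version 3                      156
 TLS Protocol Version 1.0                   16.1K
 TLS Protocol Version 1.2                   11.6K
 DTLS Protocol Version 1                      0"""

stats = parse_protocol_stats(sample)
assert stats["SSL Protocol Version 3"] == 156
```

A cron job running this against the tmsh output would let you graph the decline of SSLv3 clients before pulling the plug.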

The stats can be cleared using this command:

(tmos.ltm)# reset-stats profile client-ssl

(tmos.ltm)# show profile client-ssl CUSTOM-CLIENT-SSL | grep Protocol
 Protocol
 SSL Protocol Version 2                       0
 SSL Protocol Version 3                       0
 TLS Protocol Version 1.0                     0
 TLS Protocol Version 1.2                   11.6K
 DTLS Protocol Version 1                      0

Note: The “reset-stats” command will NOT clear the stats for TLS1.2 in the 10.2.3 code version. F5 is aware of this bug. I was able to clear the stats on the 10.2.4 code version. I am not sure if this bug exists for code versions below 10.2.3.

The cipher suites in use after the change can be identified by running the following command in bash:

[root@lbal1:Active] config # tmm --clientciphers 'HIGH:MEDIUM:!SSLv3:!SSLv2:!RC4:!COMPAT:!EXP'

[Image: output of tmm --clientciphers showing the resulting cipher suites.]

NOTE:

After making the above changes, some of the older browsers may not be able to connect to your website on HTTPS as the older browsers don’t support TLS1.2. For a list of browsers and supported protocols, see here.

Having dealt with SSL/TLS vulnerabilities, TLS1.2 with ephemeral Diffie-Hellman appears to be the most secure current option. Anything before TLS1.1 is insecure and should be avoided if possible. The main problem is that quite a few client browsers are simply incompatible with TLS1.2, and this has to be factored in before making any changes.

GHOST Vulnerability:

This is a recent vulnerability – CVE-2015-0235.

According to SOL16057, if you want to stay with 10.x code version on F5 LTM, it is better to use 10.2.4 + HF11. For 11.x code version, use at least 11.5.1 + HF8.

Update:

For a great Qualys grade see the following link.

!SSLv2:!EXPORT:!DHE+AES-GCM:!DHE+AES:!DHE+3DES:ECDHE+AES-GCM:ECDHE+AES:RSA+AES-GCM:RSA+AES:ECDHE+3DES:RSA+3DES:-MD5:-SSLv3:-RC4:@STRENGTH 
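If you want to preview which suites an OpenSSL-style cipher string expands to before pushing it to a device, Python's `ssl` module can help. Note the string below is a simplified stand-in, not the F5 string above: F5's tmm cipher syntax is close to, but not identical to, OpenSSL's, and some keywords in the F5 string are not accepted by modern OpenSSL builds.

```python
import ssl

# Simplified OpenSSL-style cipher string for illustration only.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE+AES:RSA+AES:!RC4:!MD5:@STRENGTH")

# get_ciphers() shows the expanded, ordered suite list the string selects.
names = [c["name"] for c in ctx.get_ciphers()]
assert names and all("RC4" not in n and "MD5" not in n for n in names)
```

Running this locally gives a quick sanity check that a proposed string really excludes RC4/MD5 before you test the live site against Qualys.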

Load Balancers & ADC

Application Delivery Controller (ADC) is the newer, fancier, and more relevant term for Load Balancers. Although some dismiss ADC as just a “marketing term”, ADCs certainly provide enhanced functionality compared to the Load Balancers of the previous generation.

An ADC is placed in the Data Center, close to the servers, and provides a single point of access for the clients that request information.

[Image: load balancer traffic flow diagram]

Load Balancers:

  1. Layer 4 devices.
  2. Distribute load across multiple servers. Load in the context of L4 devices refers to TCP connections (socket – IP + port combination).
  3. Normally, a full TCP stack faces the client, while the server-facing stack handles only up to L4.
  4. Load Balancers don’t offer high-performance functionality like caching, compression, or newer protocols like SPDY.
  5. Additional features like WAF and DoS prevention are not usually available.

Example: Cisco CSS
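The L4 distribution in point 2 can be sketched in a few lines: each new connection is assigned to the next backend (IP + port) in turn, without ever looking at the payload. The backend addresses are invented for illustration.

```python
import itertools

# Illustrative L4 pool: each new TCP connection is handed to the next
# backend socket (IP + port combination) in turn; no packet content is read.
backends = [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.3", 80)]
next_backend = itertools.cycle(backends).__next__

# Six incoming connections spread evenly: each backend gets two.
assigned = [next_backend() for _ in range(6)]
assert assigned.count(("10.0.0.1", 80)) == 2
```

Contrast this with the ADC list below, where the decision can also depend on L7 content such as HTTP headers.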

Application Delivery Controllers:

  1. Layer 7 devices.
  2. Distribute load across multiple servers. Load in the context of L7 devices refers to TCP connections and L7 content like HTTP content.
  3. They tend to have 2 full TCP stacks: one facing the client and another facing the servers. This makes them a full proxy and enables them to balance application content, not just L4 connections.
  4. Most ADC devices provide high-performance functionality like caching, compression, and newer protocols like SPDY.
  5. Additional features like WAF and DoS prevention are usually available.

Example: F5 LTM, Citrix NetScaler

Reference:

Load Balancing 101 – The Evolution to Application Delivery Controllers

What is an Application Delivery Controller?

iRule – To iRule or Not to

The TCL-based iRule is a force-multiplier when it comes to Application Delivery Networking on F5 devices. However, it is essential to understand how an iRule interacts with the normal application.

As an F5 iRule administrator, it is essential to understand who owns the responsibility of maintaining the code. Does this fall within the realm of Application Delivery Engineers or DevOps? This depends on the organization and its roles, and frankly, there is no clear-cut answer to this question.

Having said that, I believe iRule can be used effectively to perform the following functions based on the information available within the header or content of the packet, without affecting DevOps:

  • Load Balance to Pools of Servers
  • Persistence actions
  • Serving Sorry Page or Sorry URL

Load balancing & persistence actions do not alter the content of the packet, and hence they can be achieved without disrupting the developers. These two functions do not affect the application code; the header or content of the packet is merely inspected to pick the right server or pool of servers.
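A sketch of the "inspect the header, pick a pool, don't touch the payload" idea, in Python rather than TCL. The pool names and routing rules are invented for illustration; this is the shape of the decision an iRule makes, not an iRule itself.

```python
def select_pool(headers, default="pool_web"):
    """Pick a server pool from the HTTP Host header without modifying
    the request in any way (hypothetical pool names)."""
    host = headers.get("Host", "").lower()
    if host.startswith("api."):
        return "pool_api"
    if host.startswith("static."):
        return "pool_static"
    return default

assert select_pool({"Host": "api.example.com"}) == "pool_api"
assert select_pool({"Host": "www.example.com"}) == "pool_web"
```

Because the request passes through unchanged, the Dev team never has to know this routing layer exists.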

  • Redirects to specific URL
  • Changing contents of header or packet

For redirects and for altering the contents of a header or packet, it is essential that the Dev team is aware of the traffic flow. This prevents conflicting functionality that may disrupt normal application delivery.

HTTP Keepalive and Pipelining

HTTP 1.0:

With the old HTTP 1.0 version, there has to be a unique TCP connection for each HTTP Request/Response pair. This is an inefficient way to transfer content between the client and the server, as resources are wasted setting up and tearing down TCP connections at both the client and the server.

 

[Image: network_1 – HTTP 1.0, one TCP connection per request/response pair]

HTTP 1.1 Keepalive:

HTTP Keepalive is also known as Persistent Connections. By default, HTTP 1.1 requests are expected to be keepalive. This basically means multiple HTTP Requests can be sent within a single TCP connection between the client and the server. However, each HTTP Request must be followed by an HTTP Response before subsequent HTTP Requests are sent within the TCP connection.

[Image: network_2 – HTTP 1.1 Keepalive]

HTTP 1.1 Pipelining:

HTTP Pipelining is similar to HTTP Keepalive and is supported in HTTP 1.1 and future versions.

There can be multiple HTTP Requests within the same TCP connection between the client and the server. This is similar to the Keepalive functionality.

The main difference between Keepalive and Pipelining is that in Keepalive, each HTTP Request must be followed by an HTTP Response before the next Request is sent. In Pipelining, multiple HTTP Requests can be sent without waiting for an HTTP Response.

Although Pipelining seems like an efficient function for HTTP, it is not widely implemented on the internet, and hence it can break functionality like F5's OneConnect. However, it is widely believed that with the new HTTP 2.0 implementation, a pipelining-like capability (multiplexing) will finally be used in the “real world”.

[Image: network_3 – HTTP 1.1 Pipelining]

An Example:

Imagine a scenario involving a client and a server. The client needs to send 3 HTTP Requests and the server must respond with 3 HTTP Responses in order to exchange the relevant content.

For HTTP 1.0, this will involve setting up one TCP connection for each HTTP Request-Response pair. So we require 3 TCP connections, with one HTTP Request-Response pair per TCP connection.

For HTTP 1.1, this will require 1 TCP connection for the 3 HTTP Request-Response pairs, resulting in a savings of 2 TCP connections. With HTTP 1.1 Keepalive, each HTTP Request must be followed by an HTTP Response. With HTTP 1.1 Pipelining, multiple HTTP Requests can be sent without waiting for the corresponding HTTP Responses from the server.
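The keepalive side of this example is easy to reproduce locally with Python's standard library: a tiny HTTP/1.1 server, and a client that sends 3 requests over a single TCP connection.

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # enables persistent connections

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection = one TCP connection; 3 request/response pairs ride on it.
conn = HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                          # drain the body before reusing the socket
    statuses.append(resp.status)
conn.close()
server.shutdown()
assert statuses == [200, 200, 200]
```

Note this demonstrates keepalive, not pipelining: each request still waits for its response before the next is sent, which is exactly the constraint pipelining relaxes.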

Advantages of HTTP 1.1:

  • Reduces the total number of TCP Connections.
  • Reduces latency, as there is no need to set up/tear down multiple TCP connections.
  • Lowers CPU & memory utilization, as fewer connections are set up and torn down.

When you enter a hostname/domain in the browser, the browser opens more than one TCP connection. There isn’t a well-defined limit on the number of TCP connections that can be opened. This link provides a rough estimate of the number of TCP connections that each browser opens: TCP Connections on Browsers

HTTP 1.0 RFC1945

HTTP 1.1 RFC2616

HTTP 2.0 Draft

BigIP F5 Rookie

Welcome to the world of Application Delivery Networking and F5 Platform.

If you are new to BigIP F5 or Application Delivery Network and looking to see where to start, I would recommend the following:

F5 University – This provides Flash-based introductions to some of the F5 modules, like LTM.

DevCentral Community – F5’s DevCentral Community is one of its biggest assets. The sheer volume of information available and the number of active members willing to answer your questions will help you in overcoming any obstacle that you may face when deploying F5.

You can also download and run your own F5 virtual instance here.