iRule – To iRule or Not to iRule

A TCL-based iRule is a force multiplier for Application Delivery Networking on F5 devices. However, it is essential to understand how an iRule interacts with the normal application.

As an F5 iRule administrator, it is essential to understand who is responsible for maintaining the code. Does this fall within the realm of Application Delivery Engineers or DevOps? That depends on the organization and how roles are defined, and frankly, there is no clear-cut answer to this question.

Having said that, I believe iRules can be used effectively to perform the following functions based on information available within the header or content of the packet, without affecting DevOps:

  • Load Balance to Pools of Servers
  • Persistence actions
  • Serving Sorry Page or Sorry URL

Load balancing and persistence actions do not alter the content of the packet, so they can be implemented without disrupting the developers. These two functions do not touch the application code; the header or content of the packet is simply inspected to pick the right server or pool of servers, as in the sketch below.
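
As a simple illustration, here is a minimal sketch (the host name, pool names and sorry URL are hypothetical placeholders) that load balances to a pool based on the Host header and serves a sorry URL when the chosen pool has no available members:

when HTTP_REQUEST {
  # Pick a pool based on the Host header (placeholder names)
  if { [string tolower [HTTP::host]] equals "api.domain.com" } {
    set TARGET_POOL POOL_API
  } else {
    set TARGET_POOL POOL_WEB-SERVER
  }
  # Serve a sorry URL if the chosen pool has no available members
  if { [active_members $TARGET_POOL] < 1 } {
    HTTP::redirect "http://www.domain.com/sorry.html"
  } else {
    pool $TARGET_POOL
  }
}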

  • Redirects to specific URL
  • Changing contents of header or packet

For redirects and for altering the contents of the header or packet, it is essential that the Dev team is aware of the traffic flow. This will prevent any conflicting functionality that may disrupt normal application delivery.

BigIP F5 Rookie

Welcome to the world of Application Delivery Networking and F5 Platform.

If you are new to BigIP F5 or Application Delivery Network and looking to see where to start, I would recommend the following:

F5 University – This provides a Flash-based introduction to some of the F5 modules, like LTM.

DevCentral Community – F5’s DevCentral Community is one of its biggest assets. The sheer volume of information available and the number of active members willing to answer your questions will help you in overcoming any obstacle that you may face when deploying F5.

You can also download and run your own F5 virtual instance here.

F5 OneConnect

F5 has a trademark feature called OneConnect that leverages HTTP 1.1 Keepalive. In this article, I will try to explain the functionality of OneConnect, the underlying technology, and its usage requirements.

There are 2 main reasons that I have utilized the OneConnect profile:

  1. Efficient reuse of server side connections (F5-Servers)
  2. L7 Content Switching, i.e., when the F5 is configured to make load balancing or persistence decisions based on information available at L5 to L7.

Efficient TCP Connection Reuse:

Every time a new connection is set up on the server, the server has to allocate resources.

What Resources:

Resources such as state/buffer memory at the kernel level and per-thread memory, since each connection consumes web server threads. There is also an impact on the server's CPU, as the server needs to keep track of the web threads and connections.

The consumption of resources outlined earlier won’t affect a small to medium scale load-balanced web-server infrastructure. However, as the traffic flow increases, the servers can take quite a big hit. At this point, the option would be to add more servers, which increases your CapEx and, of course, your OpEx for administrative reasons.

OneConnect, if used properly, provides a method to efficiently utilize the existing infrastructure without the requirement of adding more physical servers. This is done by leveraging existing server side connections using HTTP 1.1 Keepalive.

Let’s say client IP 1.1.1.1 initiates a connection to the VIP 50.50.50.50, which then gets load balanced to the server 10.10.10.10. Within the TCP connection, the client will send multiple HTTP Requests to obtain the right content from the server (HTTP 1.1 Keepalive). After the transaction has been completed, the client closes the client side connection (Client – F5). However, the F5 retains the server side connection (F5 – Server). If a new client (1.1.1.2) initiates a connection within a certain timeout interval, the F5 will re-use the server side connection that was retained for the 1.1.1.1 connection. As you can see, the server side connection that was created when 1.1.1.1 made the initial request was reused when the new client 1.1.1.2 made its request.

In this simple example, 2 client side connections were served with only 1 server side connection. Assuming you can achieve the same ratio at scale, 100K client side connections require only 50K server side connections. As the number of client side connections increases, the OneConnect profile delivers better results.

One thing to note is that, from the server’s perspective, HTTP Requests initiated by 1.1.1.2 still appear to arrive over the connection initiated by 1.1.1.1, i.e., the client IP seen at the server no longer provides the true client IP. In order to overcome this, “X-Forwarded-For” header insertion has to be utilized to present the true client IP to the server. It is essential that the server logs and the application look for the client IP in the “X-Forwarded-For” header and not in the Client IP field.
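
For completeness, here is a minimal sketch of such an insertion via an iRule (the same result can usually be achieved by enabling X-Forwarded-For in the HTTP profile):

when HTTP_REQUEST {
  # Insert the real client IP so server logs can recover it despite connection reuse
  HTTP::header insert X-Forwarded-For [IP::client_addr]
}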

How does the F5 know which server-side connection to reuse:

In order to understand the connection-reuse algorithm, it is essential to understand the OneConnect Profile Settings.

OneConnect Profile Settings:

  • Source Mask – Network mask that is applied to the incoming client IP in order to identify the server side connection that can be reused. The effect of Source Mask in OneConnect is well explained here.
  • Maximum Size – Max number of server-side connections that is retained in the connection pool for reuse
  • Maximum Age – Max age up to which a server side connection is retained in the connection pool for reuse
  • Maximum Reuse – Max number of HTTP Requests that can be sent over a server side TCP connection
  • Idle Timeout Override – Max time up to which an idle server side connection is kept open for reuse.

A server side connection is retained in the “connection pool” for reuse as long as it satisfies the Max Age, Reuse & Idle Timeout conditions. The size of this “connection pool” is defined within Max Size.

Based on experience, I would recommend not altering any of the default settings of the OneConnect profile other than the Source Mask.

L7 Content Switching:

By default, F5 performs load balancing only once for each TCP connection. This is performed when the very first HTTP Request within a TCP Connection is received by the F5 LTM. Subsequent HTTP Requests in the same TCP connection will be sent to the same server that handled the 1st HTTP Request.

Consider a scenario in which there are multiple clients originating connections from behind a proxy that multiplexes the individual client TCP connections into a single TCP connection and sends requests to the F5’s VIP over this single TCP connection. In this case, after the load balancing decision is made for the very first HTTP Request from the 1st client, subsequent HTTP Requests from other clients will be sent to the same server, regardless of the L7 information. If you utilize HTTP header information for load balancing or persistence, this will lead to undesirable behavior.

An example:

Let’s say that there are 5 clients initiating 5 TCP connections and these get multiplexed into 1 TCP connection between the proxy and the F5.

Let’s say that the load balancing/persistence decision is made based on cookie persistence (L7).

When the 1st client sends its HTTP Request, it will be load balanced to a specific pool and a cookie will be inserted in the HTTP Response.

If there is a subsequent HTTP Request from any of the other 4 clients over the same TCP connection between the proxy and the F5, it will be sent to the same server that handled the 1st HTTP Request, even if the cookie information provided by the subsequent HTTP Requests is different – let’s say the cookie was set manually and points to a different server.

This happens because, by default, the F5 stops parsing the HTTP Requests in a TCP connection after the load balancing decision has been made for the very 1st HTTP Request. When we enable OneConnect, we are telling the F5 to keep evaluating every HTTP Request instead of stopping after the 1st one, as in the sketch below.
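
As an illustration, the following sketch switches pools per HTTP Request based on the URI; for this to behave as intended behind a multiplexing proxy, a OneConnect profile must be applied to the virtual server (the pool names are placeholders):

when HTTP_REQUEST {
  # With OneConnect enabled, this decision is re-evaluated for every
  # HTTP Request, not just the first one in the TCP connection
  if { [HTTP::uri] starts_with "/images/" } {
    pool POOL_IMAGES
  } else {
    pool POOL_WEB-SERVER
  }
}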

This article explains the default behavior of the F5 device and how it affects L7 persistence like Cookie & Universal (UIE).

This article explains the reason for using OneConnect when load has to be balanced to multiple pools based on L7 information.

In short, whenever you require load balancing or persistence based on L7 information, utilize OneConnect.

A good understanding of OneConnect requires a good grasp of HTTP 1.1 Keepalive, Pipelining, OneConnect Profile Settings & the default F5 load balancing behavior.

I will try to provide a graphical explanation of OneConnect in the coming days.

More information about OneConnect is provided here.

BigIP F5 Terminology

BigIP F5 configuration utilizes the following terminology:

Virtual Server (VS) consists of the following attributes:

  • Virtual IP (VIP)
  • Profiles
  • Persistence Method
  • Pool

Profiles, like the client side TCP profile, server side TCP profile, HTTP profile etc., define ways to handle the traffic.

Persistence methods like Source IP, Cookie etc. provide a way to make sure that incoming connections are sent to the same server that handled the initial request. For example, with Source IP based persistence, when a connection comes in and there is a persistence table entry for that client IP on the F5 (within a specific timeout threshold), the connection will be sent to the same server that handled the initial request.
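
As an aside, the same source address persistence can also be applied from an iRule rather than a persistence profile; a minimal sketch, assuming a 30-minute timeout:

when HTTP_REQUEST {
  # Persist on the client source address for 1800 seconds (30 minutes)
  persist source_addr 255.255.255.255 1800
}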

A pool consists of the following:

  • Pool Members
  • Load Balancing Method
  • Monitor

A Pool consists of multiple pool members. A pool member is an “IP address + Port”. A node is an IP address.

Load balancing methods help the F5 balance traffic across the pool members in the pool.

A monitor is utilized to check the health of a pool member. Simple health checks can be an ICMP ping or a TCP socket check on specific ports. A good practice is to make sure that the timeout period is [(3 * Frequency) + 1]. So, if the frequency of monitor checks is 5 seconds, the timeout would be 16 seconds, which means 3 consecutive failures will result in the pool member being marked down.

BigIP F5 Platform

BigIP F5 is the hardware platform that can be configured to provide the functionality of multiple modules like LTM, GTM, APM, ASM, AFM, AAM, PEM etc.

LTM – Local Traffic Manager

GTM – Global Traffic Manager

APM – Access Policy Manager

ASM – Application Security Manager

AFM – Advanced Firewall Manager

AAM – Application Acceleration Manager

PEM – Policy Enforcement Manager

Each hardware platform unit can only handle a certain number of modules at any one time. This number varies based on the platform. F5 has multiple platforms, like the 1600, 3600 etc., based on hardware capability. This is a link for the different F5 Hardware Platform Specifications.

Most commonly used modules are LTM & GTM. LTM serves as a Load Balancer or Application Delivery Controller within a single geographic location. GTM is used to load balance DNS requests across multiple geographic regions. ASM provides WAF (Web Application Firewall) functionality. APM provides access restrictions, policies and VPN access. AAM provides optimized application delivery utilizing various functions like compression, caching and the next generation HTTP protocols.

F5 Certification Program

I am an F5 Certified Product Consultant & Systems Engineer for LTM (F5-PCL, F5-SEL) and have about 5 years’ experience supporting their products and slightly more experience working with other networking vendors. I was quite intrigued by this certification program, as it is one of the first exclusive certification programs for Application Delivery Networking and was intended to be set up similar to Cisco’s multi-level certification track culminating in the CCIE, which is a lab-based test.

Sometime in the summer of 2012, I took my first test (Beta Test) in the newly created F5 Certification Program – 101 Application Delivery Fundamentals. As of now, I have successfully completed 101, 201, 301 & 302 exams. 303 & 304 are still remaining and I understand that the 401 exam will be released for beta testing by Q4 2014 or Q1 2015.

Exams currently available (Q3 2014):

Certifications:

  • 101 & 201 – BigIP Certified Administrator
  • 301a & 301b – F5 Certified Technology Specialist – LTM
  • 302 – F5 Certified Technology Specialist – GTM
  • 303 – F5 Certified Technology Specialist – ASM
  • 304 – F5 Certified Technology Specialist – APM

As you can see, passing the 101 exam doesn’t provide you with any certification. However, it is a prerequisite in order to take higher level exams.


Cost:

The beta tests cost $95 and the production, non-beta tests cost $135 (prices in US dollars). Of course, now that the production tests are available, beta tests are no longer offered for the 101, 201, 301a, 301b, 302, 303 & 304 exams.

Duration:

Beta tests had a duration of 2 hours. Production tests have a duration of 90 minutes.

Preparation:

Without hands-on experience working on F5 products, it is going to be quite tough to pass these exams. The questions are closely tied to the blueprint for each exam. You can get the blueprint from the F5 exams page.

Some of the early test takers and F5 engineers have put together a study guide in order to help test takers. You can find the study guide after you register for the exam at the F5 Credential Management System (F5 CMS).

This is the new F5 Certified Candidate Portal.

Before you register, I highly recommend that you check out the F5 Certification Page.

Other resources include F5 University & DevCentral.

Exam Retake Policy:

  • 1st Failure:  15 days wait
  • 2nd Failure: 30 days wait
  • 3rd Failure: 45 days wait + complete retake allowance form
  • 4th Failure: Wait 2 years

F5 has a policy whereby, upon failing an exam for the third time, you may request a review of your past exam performance before taking the exam a fourth time. This review will provide a list of objectives on which you scored less than 50% in at least two of the three attempts. To request this review, simply email F5certification@f5.com a day or two after your third failure.

Public Registry:

This link provides access to information on individuals who are F5 certified in different regions. As of Q3 2014, there are about 30 individuals who have achieved all of the 3xx level certifications.

Personal Opinion:

I had taken the F5-PCL & F5-SEL tests (the older F5 certifications) and I must say that they were a bit too easy, with excessive emphasis on certain configuration elements.

The newly created tests are quite challenging & credible in my opinion for the following reasons:

The tests are conducted by Pearson Vue in test centers with more security (photo, palm scan), which means there is less chance of tests being leaked.

The stricter retake policy would prevent people from taking the exam multiple times just to memorize and leak questions.

F5 seems to have a better tracking system to prevent multiple attempts from individuals.

The tests are tougher in general compared to the previous F5 exams and are set up in such a way that you need at least 1-2 years of real world experience before you can ace them. In other words, bookish knowledge alone won’t help you. In my opinion, they are still easier to pass than a Cisco certification. I think this will get more complex and challenging as we gain more certified individuals.

Ken Salchow, the F5 Program Manager for this certification, has created a LinkedIn community for F5 Certified Professionals and engages with techies with F5 expertise to answer their questions and address their concerns related to the new certification program. The LinkedIn community is a great resource for any questions related to the certification program.

iRULE – True-Client-IP Block

Sometimes a customer may serve content from a Virtual Server via Akamai and may want to block a specific client IP that is presented in the “True-Client-IP” header inserted by Akamai. The following iRule provides the required functionality:

when HTTP_REQUEST {
  set HOST [string tolower [HTTP::host]]
  if { ([HTTP::header exists "True-Client-IP"]) and ([HTTP::header "True-Client-IP"] != "") } {
    set True_Client_IP [HTTP::header "True-Client-IP"]
  } else {
    set True_Client_IP 0.0.0.0
  }
  if { [class match $True_Client_IP equals CLASS_BLOCK_IP] } {
    discard
  }
}

Discard silently drops the connection. Reject can be utilized instead of Discard; Reject sends a TCP reset to the client. Instead of Reject or Discard, we can also serve a sorry page or a redirect, if required, as sketched below.
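
For example, a minimal variant of the iRule above that resets blocked clients (or, if preferred, redirects them) – the data group name and sorry URL are the same kind of placeholders as before:

when HTTP_REQUEST {
  if { [HTTP::header exists "True-Client-IP"] } {
    set True_Client_IP [HTTP::header "True-Client-IP"]
  } else {
    set True_Client_IP 0.0.0.0
  }
  if { [class match $True_Client_IP equals CLASS_BLOCK_IP] } {
    # Send a TCP reset instead of silently dropping the connection
    reject
    # Or serve a redirect instead:
    # HTTP::redirect "http://www.domain.com/sorry.html"
  }
}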

iRule – Altering Header Information

This iRULE example will alter the incoming URI before passing the request to the servers:

when HTTP_REQUEST {
  switch -glob [HTTP::uri] {
    /old_URI/* {
      HTTP::uri /new_URI[HTTP::uri]
    }
  }
}

In this case, for any incoming request whose URI starts with “/old_URI/” (http://domain.com/old_URI/), the URI is prefixed with “/new_URI”, so “/old_URI/” becomes “/new_URI/old_URI/” before being passed to the servers (http://domain.com/new_URI/old_URI/).

The various interpretations within the switch statement:

/old_URI/      – URI equals /old_URI/
/old_URI/*    – URI starts with /old_URI/
*/old_URI/* – URI contains /old_URI/

Instead of the “switch” statement, we can also use an “if-statement” like this:

when HTTP_REQUEST {
  if { [HTTP::uri] starts_with "/old_URI/" } {
    HTTP::uri /new_URI[HTTP::uri]
  }
}

A slightly more complex version of the URI alteration is provided here:

when HTTP_REQUEST {
  # Note: "contains" is case-sensitive, so match against the raw (non-lowercased) URI
  if { [HTTP::uri] contains "/NEW_Session_ID=" } {
    HTTP::uri [string map {/NEW_Session_ID= /OLD_Session_ID=} [HTTP::uri]]
    pool POOL-WEB-Server
  }
}
when HTTP_RESPONSE {
  if { [HTTP::header values Location] contains "/OLD_Session_ID=" } {
    HTTP::header replace Location [string map {/OLD_Session_ID= /NEW_Session_ID=} [HTTP::header value Location]]
  }
}

For any incoming HTTP Request, “/NEW_Session_ID=” within the URI is replaced with “/OLD_Session_ID=” and passed to the servers in the pool “POOL-WEB-Server”.

For any HTTP Response from the server to the client that contains the HTTP Header Location, “/OLD_Session_ID=” is replaced with “/NEW_Session_ID=”

This can be used to “mask” the URI or any other header information between the client and the server.

The following iRule will remove “/m/” in the incoming URI and send a redirect:


when HTTP_REQUEST {
  set URI [string tolower [HTTP::uri]]
  if { $URI starts_with "/m/" } {
    set NEW_URI [string map {"/m/" "/"} [HTTP::uri]]
    HTTP::respond 301 Location "http://www.domain.com$NEW_URI"
  }
}

TEST:

$ curl -I http://10.10.10.10/m/OLD_URI
HTTP/1.0 301 Moved Permanently
Location: http://www.domain.com/OLD_URI
Server: BigIP
Connection: Keep-Alive
Content-Length: 0

The “/m/” in the URI is replaced with “/” as seen in the “Location” header.

This has been tested in a production environment on the 10.x code of F5 LTM.

iRULE – non-English Characters

The web browser will URL-encode URIs that contain special characters.

For example, http://www.domain.com/été is encoded as follows: http://www.domain.com/%C3%A9t%C3%A9

when HTTP_REQUEST {
  set ENCODED_URI [b64encode [HTTP::uri]]
  switch [HTTP::host] {
    "domain.com" {
      if { ($ENCODED_URI eq "LyVDMyVBOXQlQzMlQTk=") or ($ENCODED_URI eq "L2ZyLyVDMyVBOXQlQzMlQTk=") } {
        pool POOL_Web-Servers
      }
    }
  }
}

“/été” URL encodes to “/%C3%A9t%C3%A9” which base64 encodes to “LyVDMyVBOXQlQzMlQTk=”

“/fr/été” URL encodes to “/fr/%C3%A9t%C3%A9” which base64 encodes to “L2ZyLyVDMyVBOXQlQzMlQTk=”

An Intro to iRULE

This post will provide basic information related to iRules. The intention of writing this post is to give someone new to iRules a basic introduction and cover some of the often used functionality. This isn’t an in-depth coverage of iRules.

What is an iRULE:

TCL based scripting that is utilized by the F5 Application Delivery modules to manipulate traffic.

Structure of an iRULE:

when <EVENT> {
<PERFORM ACTION>
}

Commonly used iRULE Events:

  • HTTP_REQUEST
  • HTTP_RESPONSE
  • CLIENT_ACCEPTED

Having worked with iRules for almost 5 years, the above 3 events are what I utilize on a daily basis. Almost 9 out of 10 iRules that I have written cover the above events. For an iRule rookie, I would recommend understanding the above 3 events.

Within the structure of the iRule, <PERFORM ACTION> provides the ACTION to be performed if certain CONDITIONS are matched. For example:

if { [HTTP::host] equals "domain.com" } {
  pool POOL_WEB-SERVER
}

Even if you don’t understand scripting, the above should make it quite clear that the traffic is sent to the pool POOL_WEB-SERVER if the incoming host header equals “domain.com” 🙂

Common ACTIONS that I have utilized:

  • Load Balancing based on incoming header values
  • Redirection based on incoming header values
  • Persistence based on incoming header values

As seen above, the vast majority of ACTIONS involve one of these three operations based on incoming header values; a combined sketch follows this paragraph. We can also perform more complicated actions based on the content of the incoming packet. However, in my opinion, it is better to avoid performing such actions on the load balancer. As the code gets complex, there is a serious question of accountability and ownership – who owns the code (Dev guys, Network guys ?!?) – and that requires a separate post just to hash out the realms of control.
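
For illustration, here is a minimal sketch combining all three common actions; the host names, pool names and persistence header are hypothetical placeholders:

when HTTP_REQUEST {
  # Redirect one host, load balance another, and persist on a custom header
  if { [HTTP::host] equals "old.domain.com" } {
    HTTP::redirect "http://www.domain.com[HTTP::uri]"
  } elseif { [HTTP::host] equals "api.domain.com" } {
    pool POOL_API
    # Universal persistence keyed on a custom header, 1-hour timeout
    if { [HTTP::header exists "X-User-ID"] } {
      persist uie [HTTP::header "X-User-ID"] 3600
    }
  } else {
    pool POOL_WEB-SERVER
  }
}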

Points to Remember:

  • iRULE is TCL based scripting utilized to manipulate traffic on an F5 device
  • iRules, like TCL, are EVENT based
  • An iRule’s structure consists of an EVENT, CONDITIONAL statements & an ACTION to perform