Workspace Streaming

Configuring Load Balancing Best Practices  

May 05, 2011 11:12 AM

This article describes how to configure a load-balanced Symantec Workspace Streaming (SWS) environment.

The environment consists of the following components: a load balancer, client network, server network, database server, domain controller, DHCP server, and SWS servers and clients. The following subsections contain configuration instructions for some of these components. The environment details in this article are based on an F5 BIG-IP load balancer running version 10.2.0; adjust some of the settings as needed for your environment.

For instructions on how to configure a basic load balancing scheme for Streaming Servers, read the Configuring load balancing section in chapter 7 of the Symantec Workspace Streaming Administration Guide at the following location: http://www.symantec.com/docs/DOC2498

Load Balancer

The load balancer referenced in this article is an F5 BIG-IP Local Traffic Manager VE (Virtual Edition). The network settings should already be configured for management, the client-side private network, and the server-side private network.

Components Configured in this section

F5's BIG-IP Local Traffic Manager VE (Virtual Edition)
Version: F5 BIG-IP 10.2.0 Build 1707.0 Final

Components that should already be configured

NIC1: Management console. 10.127.x.x
NIC2: Server side private network. 192.168.1.x
NIC3: Client side private network. 192.168.2.x

Monitors

The portal and streaming monitors need to be added to each pool you create.

Portal Monitor settings

Name: PortalMonitor
Partition: Common
Type: HTTP
Interval: 5 seconds
Up Interval: Disabled
Time Until Up: 0 seconds
Timeout: 5 seconds
Manual Resume: No
Send string: GET /statusCheck\r\n
Receive string: ServerStatus=0
Receive Disable String: <blank>
User Name: <blank>
Password: <blank>
Reverse: No
Transparent: No
Alias Address: * All Addresses
Alias Service Port: * All Ports

Streaming Monitor settings

Name: StreamingMonitor
Partition: Common
Type: HTTP
Interval: 5 seconds
Up Interval: Disabled
Time Until Up: 0 seconds
Timeout: 5 seconds
Manual Resume: No
Send string: GET /AppStreamN.check\r\n
Receive string: ServerStatus=0
Receive Disable String: <blank>
User Name: <blank>
Password: <blank>
Reverse: No
Transparent: No
Alias Address: * All Addresses
Alias Service Port: * All Ports

NOTE: When you create a new TCP, HTTP, or HTTPS monitor in LTM version 10.2.0, you must include \r\n at the end of a non-empty Send String. For example, use 'GET /\r\n' instead of 'GET /'. If you do not include \r\n at the end of the Send String, the TCP, HTTP, or HTTPS monitor fails.

For more information, read the following article: https://support.f5.com/kb/en-us/products/big-ip_ltm/releasenotes/product/relnote_10_2_0_ltm.html
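
If you prefer the command line to the GUI, the monitors can also be created with tmsh. The following is a minimal sketch only, assuming the tmsh shell that ships with LTM 10.x; verify the property names (send, recv, interval, timeout, time-until-up) and the \r\n quoting against your version before relying on it.

create ltm monitor http PortalMonitor send "GET /statusCheck\r\n" recv "ServerStatus=0" interval 5 timeout 5 time-until-up 0
create ltm monitor http StreamingMonitor send "GET /AppStreamN.check\r\n" recv "ServerStatus=0" interval 5 timeout 5 time-until-up 0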

Nodes

All nodes should use the following configuration:

Health Monitors: Node Default
Ratio: 1
Connection Limit: 0

Nodes to configure

Address Name
192.168.1.11 SWSFE11
192.168.1.12 SWSFE12
192.168.1.13 SWSFE13
192.168.1.14 SWSFE14
192.168.1.20 SWSBE
192.168.1.21 SWSFE21
192.168.1.22 SWSFE22

Pools

All pools should have the following properties:

Partition: Common
Health Monitors: PortalMonitor, StreamingMonitor
Availability Requirement: All
Allow SNAT: Yes
Allow NAT: Yes
Action On Service Down: None
Slow Ramp Time: 10 seconds
IP ToS to Client: Pass Through
IP ToS to Server: Pass Through
Link QoS to Client: Pass Through
Link QoS to Server: Pass Through
Reselect Tries: 0

Set up the following pools with the appropriate members:

Pool            Members
BackupFEPool    192.168.1.21 (SWSFE21), 192.168.1.22 (SWSFE22)
PrimaryFEPool   192.168.1.11 (SWSFE11), 192.168.1.12 (SWSFE12), 192.168.1.13 (SWSFE13), 192.168.1.14 (SWSFE14)
SWSConsole      192.168.1.20 (SWSBE)
SWSFE11         192.168.1.11 (SWSFE11)
SWSFE12         192.168.1.12 (SWSFE12)
SWSFE13         192.168.1.13 (SWSFE13)
SWSFE14         192.168.1.14 (SWSFE14)
SWSFE21         192.168.1.21 (SWSFE21)
SWSFE22         192.168.1.22 (SWSFE22)
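
As a rough tmsh equivalent for the two multi-member pools, the sketch below attaches both monitors with an "and" rule (Availability Requirement: All). Treat it as an assumption-heavy sketch: the member port :80 is inferred from the external port used later in this article, creating pool members also creates the matching nodes automatically, and the exact monitor-rule and member syntax should be checked against your LTM version.

create ltm pool PrimaryFEPool monitor PortalMonitor and StreamingMonitor members add { 192.168.1.11:80 192.168.1.12:80 192.168.1.13:80 192.168.1.14:80 }
create ltm pool BackupFEPool monitor PortalMonitor and StreamingMonitor members add { 192.168.1.21:80 192.168.1.22:80 }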

iRules

SWSMain

This rule provides persistence on the main site based on the AppStreamKey HTTP header. If the key does not match a known server, the rule directs the user to the main FE pool. The default pool on the main VIP is also set to the main FE pool, so requests without a key go to the main FE pool.

when HTTP_REQUEST {
   # Route the request to the FE whose token matches the AppStreamKey header
   if { [HTTP::header exists "AppStreamKey"] } {
      set log_prefix "[IP::client_addr]:[TCP::client_port]"
      log local0. "$log_prefix: AppStreamKey: [HTTP::header "AppStreamKey"]"
      set key [HTTP::header "AppStreamKey"]
      if { $key equals "11" } {
         pool SWSFE11
      } elseif { $key equals "12" } {
         pool SWSFE12
      } elseif { $key equals "13" } {
         pool SWSFE13
      } elseif { $key equals "14" } {
         pool SWSFE14
      } else {
         pool PrimaryFEPool
      }
   }
}
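
The if/elseif chain above can also be written more compactly with Tcl's switch command. This is only an optional sketch of the same logic, not a change in behavior; the SWSBackup rule below could be collapsed the same way.

when HTTP_REQUEST {
   if { [HTTP::header exists "AppStreamKey"] } {
      # Map the AppStreamKey token directly to the matching FE pool
      switch -- [HTTP::header "AppStreamKey"] {
         11 { pool SWSFE11 }
         12 { pool SWSFE12 }
         13 { pool SWSFE13 }
         14 { pool SWSFE14 }
         default { pool PrimaryFEPool }
      }
   }
}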

SWSBackup

This rule provides persistence on the backup site based on the AppStreamKey HTTP header. If the key does not match a known server, the rule directs the user to the backup FE pool. The default pool on the backup VIP is also set to the backup FE pool, so requests without a key go to the backup FE pool.

when HTTP_REQUEST {
   # Route the request to the backup FE whose token matches the AppStreamKey header
   set log_prefix "[IP::client_addr]:[TCP::client_port]"
   log local0. "$log_prefix: AppStreamKey: [HTTP::header "AppStreamKey"]"
   if { [HTTP::header exists "AppStreamKey"] } {
      set key [HTTP::header "AppStreamKey"]
      if { $key equals "21" } {
         pool SWSFE21
      } elseif { $key equals "22" } {
         pool SWSFE22
      } else {
         pool BackupFEPool
      }
   }
}

SWSJSESSIONID

This rule provides persistence for the portal page. The same rule can be used for the main and backup VIPs. The timeout is 20 minutes, which is the default timeout for the portal page. An iRule is used for portal persistence because other methods, such as source_addr and cookie persistence profiles, did not work in testing. This rule was adapted from an F5 community (DevCentral) page.

when HTTP_REQUEST {
 # Log details for the request
   set log_prefix "[IP::client_addr]:[TCP::client_port]"
   #log local0. "$log_prefix: Request to [HTTP::uri] with cookie: [HTTP::cookie value JSESSIONID]"
   # Check if there is a JSESSIONID cookie
   if { [HTTP::cookie "JSESSIONID"] ne "" } {
      # Persist off of the cookie value with a timeout of 20 minutes (1200 seconds)
      persist uie [string tolower [HTTP::cookie "JSESSIONID"]] 1200
      # Log that we're using the cookie value for persistence and the persistence key if it exists.
      log local0. "$log_prefix: Used persistence record from cookie. Existing key? [persist lookup uie [string tolower [HTTP::cookie "JSESSIONID"]]]"
   } else {
      # Parse the jsessionid from the path. The jsessionid, when included in the URI, is in the path, 
      # not the query string: /path/to/file.ext;jsessionid=1234?param=value
      set jsess [findstr [string tolower [HTTP::path]] "jsessionid=" 11]
      # Use the jsessionid from the path for persisting with a timeout of 20 minutes (1200 seconds)
      if { $jsess != "" } {
         persist uie $jsess 1200
         # Log that we're using the path jsessionid for persistence and the persistence key if it exists.
         log local0. "$log_prefix: Used persistence record from path: [persist lookup uie $jsess]"
      }
   }
}
when HTTP_RESPONSE {
   # Check if there is a jsessionid cookie in the response
   if {[HTTP::cookie "JSESSIONID"] ne ""} {
      # Persist off of the cookie value with a timeout of 20 minutes (1200 seconds)
      persist add uie [string tolower [HTTP::cookie "JSESSIONID"]] 1200
      log local0. "$log_prefix: Added persistence record from cookie: [persist lookup uie [string tolower [HTTP::cookie "JSESSIONID"]]]"
   }
}

SWSMain2Backup

This rule directs the user to the backup FE pool if all nodes from the main pool are down. This allows users to access the backup site while still using the main site's VIP address.

when HTTP_REQUEST {
   if { [active_members PrimaryFEPool] eq 0 } {
      log local0. "Main site down. Using backup site."
      pool BackupFEPool
   }
}

Virtual Servers

SWSMain

Use this VIP when going to the portal to stream applications.

Properties
Name: SWSMain
Partition: Common
Destination Type: Host
Destination Address: 192.168.2.3
Service Port: 80 HTTP
State: Enabled
Type: Standard
Protocol: TCP
Protocol Profile (Client): tcp
Protocol Profile (Server): (Use Client Profile)
OneConnectProfile: None
NTLM Conn Pool: None
HTTP Profile: http
FTP Profile: None
Stream Profile: None
XML Profile: None
SSL Profile (Client): None
SSL Profile (Server): None
Authentication Profiles Enabled: <blank>
RTSP Profile: None
Diameter Profile: None
SIP Profile: None
Statistics Profile: None
VLAN and Tunnel Traffic: All VLANs and Tunnels
SNAT Pool: None
Rate Class: None
Traffic Class Enabled: <blank>
Connection Limit: 0
Address Translation: Enabled (checked)
Port Translation: Enabled (checked)
Source Port: Preserve
Clone Pool (Client): None
Clone Pool (Server): None
Last Hop Pool: None
Resources
Default Pool: PrimaryFEPool
Default Persistence Profile: None
Fallback Persistence Profile: None
iRules: SWSMain2Backup, SWSMain, SWSJSESSIONID
HTTP Class Profiles: None
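
For reference, a one-line tmsh sketch of this virtual server might look like the following. This is an untested assumption (property names and profile syntax should be confirmed for your LTM version); the GUI settings listed above remain the authoritative configuration.

create ltm virtual SWSMain destination 192.168.2.3:80 ip-protocol tcp profiles add { tcp http } pool PrimaryFEPool rules { SWSMain2Backup SWSMain SWSJSESSIONID }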

SWSBackup

The backup VIP address is specified as the backup server in the SWS admin console. It is not used to directly access the backup server portal.

Properties
Name: SWSBackup
Partition: Common
Destination Type: Host
Destination Address: 192.168.2.4
Service Port: 80 HTTP
State: Enabled
Type: Standard
Protocol: TCP
Protocol Profile (Client): tcp
Protocol Profile (Server): (Use Client Profile)
OneConnectProfile: None
NTLM Conn Pool: None
HTTP Profile: http
FTP Profile: None
Stream Profile: None
XML Profile: None
SSL Profile (Client): None
SSL Profile (Server): None
Authentication Profiles Enabled: <blank>
RTSP Profile: None
Diameter Profile: None
SIP Profile: None
Statistics Profile: None
VLAN and Tunnel Traffic: All VLANs and Tunnels
SNAT Pool: None
Rate Class: None
Traffic Class Enabled: <blank>
Connection Limit: 0
Address Translation: Enabled (checked)
Port Translation: Enabled (checked)
Source Port: Preserve
Clone Pool (Client): None
Clone Pool (Server): None
Last Hop Pool: None
Resources
Default Pool: BackupFEPool
Default Persistence Profile: None
Fallback Persistence Profile: None
iRules: SWSBackup, SWSJSESSIONID
HTTP Class Profiles: None

SWSConsole

This VIP is optional. Configure it if you want to be able to access the BE admin console from the client network.

Properties
Name: SWSConsole
Partition: Common
Destination Type: Host
Destination Address: 192.168.2.3
Service Port: 9842 Other
State: Enabled
Type: Standard
Protocol: TCP
Protocol Profile (Client): tcp
Protocol Profile (Server): (Use Client Profile)
OneConnectProfile: None
NTLM Conn Pool: None
HTTP Profile: None
FTP Profile: None
Stream Profile: None
XML Profile: None
SSL Profile (Client): None
SSL Profile (Server): None
Authentication Profiles Enabled: <blank>
RTSP Profile: None
Diameter Profile: None
SIP Profile: None
Statistics Profile: None
VLAN and Tunnel Traffic: All VLANs and Tunnels
SNAT Pool: None
Rate Class: None
Traffic Class Enabled: <blank>
Connection Limit: 0
Address Translation: Enabled (checked)
Port Translation: Enabled (checked)
Source Port: Preserve
Clone Pool (Client): None
Clone Pool (Server): None
Last Hop Pool: None
Resources
Default Pool: SWSConsole
Default Persistence Profile: None
Fallback Persistence Profile: None
iRules: None
HTTP Class Profiles: None

Database Servers

Any SWS-supported database can be used. Use a dedicated machine for the database; the SWS back end install connects to this external database. Enterprise customers will have the back end and database servers on different machines. The DB machine should be on the server-side network.

Typical database choices for testing: SQL 2005 SP3 or Oracle 11g R2

All database servers need the following modifications:

  • Add the following registry keys (TcpTimedWaitDelay 0x3c = 60 seconds; MaxUserPort 0x2710 = 10000). Reboot for the changes to take effect.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    "TcpTimedWaitDelay"=dword:0000003c
    "MaxUserPort"=dword:00002710
  • Join the DB machine to the domain.

SWS Servers

This environment requires 1 VM for the back end, 4 VMs for the main front ends, and 2 VMs for the backup front ends. All servers should be Windows 2008 or Windows 2003. Perform a custom install on all servers and use host names when configuring the component install settings.

Server Components to Install
Back end (SWSBE): Streaming Console, Streamlet Engine, Data Access Server, Streamlet Control Module
Front ends (SWSFE11, SWSFE12, etc.): Streaming Server, Data Access Server, Streaming Portal

All SWS servers need the following modifications:

  • Add the following registry keys. Reboot for the changes to take effect.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    "TcpTimedWaitDelay"=dword:0000003c
    "MaxUserPort"=dword:00002710
  • Join each server to the test domain.

It is helpful to keep console windows open for all VMs, along with the load balancer's admin web page. This lets you quickly disable processes and check node status.

To determine which FEs the clients are using, go to the Components Details page in the SWS admin console and look at the number of running sessions.
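
If you also want to see FE selection from the load balancer side, a small logging iRule can be attached to the virtual servers during testing. This is an optional sketch; it writes one log line per load-balancing decision, so remove it when testing is complete.

when LB_SELECTED {
   # Log which pool member (FE) was chosen for this connection
   log local0. "[IP::client_addr]:[TCP::client_port] selected FE [LB::server addr]:[LB::server port]"
}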

Back end Configuration

  • Database: SQL 2005 SP2 or Oracle 11g R2 depending on the testing.
  • LDAP datasource, auto authentication
  • If you are testing the SWS 6.1 SP5 July 2010 release or earlier, you need the following change to be able to enter the VIP address for the backup server in the External Address tab of the default server group, instead of using a drop-down list (which only allows you to select an FE). This is fixed and no longer necessary starting with the SWS 6.1 SP5 August release or later.
  • Get the AWECSTabLoadBalancing.jsp file from the SEV file share at /Fileshares/Load_Balancing_Testing_Files and replace the one on the console at "\Workspace Streaming\Server\console\webapps\console\pages\console". Back up the old file first, just in case. Then go to "\Workspace Streaming\Server\common\apache-tomcat\work\SWSConsoleService\localhost\_\org\apache\jsp\pages\console" and delete the corresponding .class file.

Back end Server (SWSBE)

  1. Create a Backup Server Group.
  2. Add the FEs to the appropriate server group.
  3. Add the DAs of each FE.
  4. Modify each server group to have the external address that corresponds to the load balancer virtual server address.
  5. The default server group should also have the external address of the backup server group (SWSBackup VIP) as the backup server.
  6. Use port 80 as the external port.
  7. Configure auto authentication for each launch server.

Primary Site Front End Servers (SWSFE11, SWSFE12, SWSFE13, SWSFE14)

  • Tokens should be unique for each FE. The tokens for each server listed above are 11, 12, 13, and 14, respectively.

Backup Site Front End Servers (SWSFE21, SWSFE22)

  • Tokens should be unique for each FE. For these instructions the tokens for each server listed above are 21 and 22, respectively.

Front end Configuration

  1. Modify the da.conf file to match the one from the BE for the database being used. Do not copy and paste the entire file; doing so results in the console reporting duplicate GUIDs when components are added. Copy only the section for the particular database and comment out the Postgres section.
  2. Modify the DA order list on each FE so that it points to itself first. This is found in "\Workspace Streaming\Server\launchserv\conf\launchserv.properties" and "\Workspace Streaming\Server\server\bin\AppstreamServerCfg.txt".
  3. Copy the appropriate database jar files to the folder "\Workspace Streaming\Server\agent\lib" on each front end that runs a DA.

SWS Clients

VMs for clients: Win7 x64, WinXP x86, Win2k3 x86, Win2k8 x64

When installing the client from the portal, the ActiveX installer and the SWS installer may be slow to load or install. This happens if the client machines do not have Internet access to validate the digital signature of the installer; the install may take several minutes to complete. One way to speed up the install is to add a second NIC to the client VMs so they can reach the Internet.

XPF packages require client version 6.1 SP4 (6.2.x) or higher.

For information on how load balancing impacts the SWS architecture, read the Symantec Connect article: Symantec Workspace Streaming Load Balancer Impact to Architecture
