Configuring Indexing and Search Service for High Availability

This information describes how to plan and deploy a highly available Indexing and Search Service system.

The following functionality is available starting in Indexing and Search Service 1 Update 2.

ISS High Availability Overview

About ISS High Availability

Starting with version 1 Update 2, ISS can make its search component highly available through the Cluster Search Service and a highly available NFS, on which you locate the ISS indexes. When ISS search is unavailable from an ISS web node, clients' search requests are redirected to another ISS web node, which accesses the HA NFS and locates the appropriate index. Thus, the ISS search component can fail without an effective loss of overall search functionality. Additionally, by using hardware load balancers in front of the ISS web nodes, you split the network load across these ISS front ends, increasing their availability to respond to client requests.

The Cluster Search Service is able to:

  • Perform searches on known indexing repositories
  • Manage account state information (removes dependency on topic AccountState.hostname being responsive on indexing host)
  • Manage user directory lookup information (removes dependency on queue Index.hostname being responsive on indexing host)

Availability is often shown as the percentage of time that the system is up, or available, by using a system of "nines." For example:

2 x 9's = 99% uptime = 87.6 hours downtime/year
2.5 x 9's = 99.5% uptime = 43.8 hours downtime/year
3 x 9's = 99.9% uptime = 8.76 hours downtime/year
and so on

ISS enables you to deploy a highly available indexing and search system with a goal of at least 99.9 percent uptime.

Note
Starting with Indexing and Search Service 1 Update 4, you can configure Indexing and Search Service for high availability such that the NFS tier, on which the indexes are stored, is not exposed through your firewall. In ISS terms, this means using the clusterv2 installation type when configuring high availability. For more information, see Configuring Indexing and Search Service for clusterv2.

Benefits and Limitations of ISS HA

The following summarizes the ISS HA benefits and limitations.

Benefits:

  • Your organization can depend on the ISS software to perform its searches with little or no downtime.
  • Reduced single points of failure.
  • Ability to easily add more web and indexing hosts as your deployment grows.

Limitations:

  • Does not provide protection from hardware or network failures.
  • The performance of the NFS host (which acts as the backup for indexes so that searches can continue when indexing hosts are unavailable) can affect search performance.
  • The longer indexing hosts are unavailable, the more out-of-sync the indexes become with the accounts.

ISS High Availability Model

What this is:

  • The ISS High Availability Model is based on making the ISS search services themselves redundant.

What this is not:

  • Not a hot-spare model where one node fails over to a standby node.
  • Not a data replication model that enables any node to fail without loss of data.

ISS HA Architecture

The following Communications Suite components make up an ISS architecture:

  • Messaging Server: Message store, which contains the users' email and attachment data.
  • ISS: Indexing service, which runs the bootstrapping operation and processes JMQ events from the message store.
  • ISS: Search service, which runs the search services and GlassFish Server (web container) to process client search requests. The web container acts as the front end, or access layer, for search clients.

The ISS architecture also depends on a Directory Server for LDAP storage and access. The Directory Server contains user information for all message stores.

The following figure shows the ISS logical architecture in non-HA mode.

ISS Logical Architecture
This figure shows the high-level architecture of the ISS product.

To make the ISS deployment highly available, the following setup is used:

  • GlassFish Server: Use hardware load balancers to split the network load across the GlassFish Server front ends.
  • Search service: Use the Cluster Search Service, which can search and maintain account state information for different index repositories.
  • Indexes: Place indexes on an HA NFS that is shared with the Cluster Search Service in case an indexing host is unavailable.

The following figure shows the ISS HA architecture.

ISS HA Architecture
This figure shows the high-level HA architecture of the ISS product.

Migrating From a non-HA ISS Deployment to an HA ISS Deployment

Migrating a standard ISS node to an HA indexing node requires the following operations:

  1. Stop all indexing services.
  2. Remove the ISS WAR files from GlassFish Server.
  3. Stop GlassFish Server.
    It is no longer needed on this host for ISS operation.
  4. Update the following parameters in the iss-dir/etc/jiss.conf file:
    iss.cluster.enabled = true
    iss.cluster.type = index
    
  5. Copy the iss.store.dir and iss.attach.dir directories to the NFS share.
  6. Mount the NFS share so that the directories specified in the iss-dir/etc/jiss.conf file (iss.store.dir and iss.attach.dir) are available.
  7. Start the indexing services.
  8. Generate a cluster configuration for this indexing node.
  9. Copy the cluster file to the web node and enable it by using the iss-dir/bin/csearchmgr.sh command.
    A consolidated sketch of these steps follows this list.
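
The following is a minimal shell sketch of this migration, assuming an index host named index-host1, an NFS server named nfs.example.com, and a web host named web-host1 (all illustrative); the commands for stopping and starting the indexing services and the location of configure_etc.pl depend on your installation:

  # Steps 5-6: copy the index and attachment directories to the NFS share,
  # then mount the share over the locations configured in jiss.conf
  mount -o rw nfs.example.com:/index-host1 /var/opt/sun/comms/jiss
  # Step 8: generate a cluster configuration for this indexing node
  ./configure_etc.pl -C
  # Step 9: copy the cluster file to the web node and enable it there
  scp index-host1.conf web-host1:/opt/sun/comms/jiss/etc/cluster.d/
  ./csearchmgr.sh -A    # run on the web node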

Configuring ISS High Availability

This section contains the generic procedures to configure ISS HA. See Indexing and Search Service Highly Available Example for a sample that walks you through configuring the different components of an ISS HA deployment on Oracle Solaris.

Before You Begin

Make sure that you have performed the following steps prior to beginning the ISS HA configuration:

  1. Install and configure your Messaging Server deployment (most sites already have deployed Messaging Server).
    For more information, see Communications Suite Installation Guide.
  2. Set up your HA NFS.
    Many options exist for this requirement. Choose an NFS that best suits your site's requirements.
  3. Install and configure your Indexing and Search Service deployment.
    For more information, see Installation Scenario - Indexing and Search Service.
    Note the following additional package requirements:
    • ISS Web Host: The web host needs GlassFish Server and Java Message Queue (JMQ).
    • ISS Indexing Host: The indexing host needs JMQ and, if you don't already have a Directory Server set up to perform JNDI lookups, Directory Server (LDAP).
      Note
      When installing ISS through the Communications Suite installer, JMQ is automatically installed for each type of ISS node. You need to install GlassFish Server and Directory Server separately.

To Set Up NFS to Contain the ISS Indexes

Note
In case of indexing host failure, putting the indexes on NFS still provides the ability to perform searches. However, the longer the indexing host is unavailable, the greater the risk that the indexes become out of date with the store.
  1. On the NFS server and the ISS web and index hosts, create iss.user and iss.group with the same uid and gid. On the NFS server, edit the /etc/default/nfs file and make sure that the NFSMAPID_DOMAIN value is accurate.
  2. On all NFS clients (the ISS web and index hosts), edit the /etc/default/nfs file and make sure that the NFSMAPID_DOMAIN value is the value used on the NFS server host.
    By default, the nfsmapid uses the DNS domain of the system. This setting overrides the default. This domain is used for identifying user and group attribute strings in the NFS protocol. Clients and servers must match with this domain for operation to proceed normally.
  3. On the NFS server, enable NFS, create a ZFS pool for each ISS index host, create mount points for the ISS hosts, change the ownership and permissions of these mount points, and share the file systems over NFS.
    1. Use the svcadm command to enable NFS. There must be an entry in the /etc/dfs/sharetab file for the NFS server daemon to start.
    2. Create a ZFS pool for each index host.
    3. Create directories for each index host to be used as mount points on the NFS server.
    4. Change ownership of these directories to iss.user.
    5. Change permission on these directories to 755.
    6. Set the mount points for the ZFS file system on the ISS index hosts.
    7. Share the file systems as NFS file systems so that the NFS clients can mount and access them.
      For security reasons, only the ISS index hosts need read/write access. The ISS web hosts, which would access the file systems for searching purposes, are given read-only access. The web hosts are also assigned anonymous access and iss.user uid access. Anonymous access and iss.user uid access might not be needed, depending on the user that runs the GlassFish Server. Running the GlassFish Server as the iss.user and iss.group eliminates the need for anonymous uid access to the share.
  4. Perform the following steps on the ISS web hosts.
    1. Create directories on each web host to be used as mount points on the NFS server.
    2. Change ownership of these directories to the iss.user.
    3. Mount these directories read-only from the NFS server.
  5. On each ISS index host, create the ISS data directory mount point, change the user and group owner to iss.user:iss.group, and mount it from the NFS server with read-write privileges. Also, create a local file system directory for logs and change the user and group owner to iss.user:iss.group.
    For example:
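
A sketch of these commands, taken from the bco04 host in the example later in this information (the NFS host nc-agile.example.com and the jiss user and group come from that example):

  mkdir -p /var/opt/sun/comms/jiss
  chown jiss:jiss /var/opt/sun/comms/jiss
  mount -o rw nc-agile.example.com:/bco04 /var/opt/sun/comms/jiss
  mkdir -p /var/iss/logs
  chown jiss:jiss /var/iss/logs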

To Configure the Indexing Hosts

Perform the following steps on each ISS indexing host.

  1. Run the ISS setup script.
  2. Configure the cluster setup by responding to the following prompts.
    1. Enable cluster configuration (iss.cluster.enabled):
      Type true.
    2. Type of cluster configuration web or index (iss.cluster.type):
      Type index.
    3. Fully qualified domain name of this system (hostname):
      Type the fully qualified name of this host, for example, bco04.example.com.
    4. Instance name of the installation for an indexing node (instance.name):
      Type the unique instance name used to identify topics and queues for this host, for example, bco04. The instance name is not required to match the mount point name.
    5. Location to store the Lucene indexes (iss.store.dir):
      This parameter specifies the mount point on the NFS file system for indexes. Type /var/opt/sun/comms/jiss/index.
    6. Location of attachment data (iss.data.dir):
      This parameter specifies the mount point on the NFS file system for attachment data. Type /var/opt/sun/comms/jiss/attach.
    7. Location of JISS log files (iss.log.dir):
      You should keep the logs on the local disk. Type /var/iss/logs.
  3. Configure the mail server setup by responding to the following prompt: Comma-delimited list of mail server IPs corresponding to mail.server (mail.server.ip).
    Type the mail server IP address or addresses, for example, 10.0.2.0,10.0.2.1.
    The mail server parameters for this indexing host point to the mail server that it indexes. The User/Group Directory Server information should be identical in every configuration. Add every mail server in the cluster to the mail.server.ip list.
  4. Configure the Java Message Queue setup by responding to the following prompts.
    1. JISS JMQ broker hostname(s) list, that is, host:7676,host2:7677 (imq.host):
    2. Username for JISS JMQ broker (iss.imq.user):
    3. Password for JISS JMQ user (iss.imq.password):
    4. Password for admin user on JISS JMQ broker (iss.imq.admin.password):
  5. Configure the Directory Server setup for JNDI by responding to the following prompts.
    1. JISS Directory Server host list host:port,host2:port2 (ldap.host):
    2. JISS Directory Manager DN; format: cn=Directory Manager (java.naming.security.principal):
    3. JISS Directory Server password (ldap.password):
  6. Configure the service setup by responding to the following prompt: Storeui access method, disk for single machine, http for multi-machine (iss.storeui.access.method)
    • Type disk. (http is for a different type of install.)
  7. Repeat the preceding setup steps for each indexing host.
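
When setup completes, the cluster-related entries in an indexing host's jiss.conf file might resemble the following sketch, assembled from the prompts above (host names, IP addresses, and values are illustrative):

  iss.cluster.enabled = true
  iss.cluster.type = index
  hostname = bco04.example.com
  instance.name = bco04
  iss.store.dir = /var/opt/sun/comms/jiss/index
  iss.data.dir = /var/opt/sun/comms/jiss/attach
  iss.log.dir = /var/iss/logs
  mail.server.ip = 10.0.2.0,10.0.2.1
  imq.host = bco04.example.com:7676
  ldap.host = bco04.example.com:389
  java.naming.security.principal = cn=Directory Manager
  iss.storeui.access.method = disk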

To Configure the Web Hosts

Perform the following steps on each ISS web host.

  1. Run the ISS setup script.
  2. Configure the cluster setup by responding to the following prompts.
    1. Enable cluster configuration (iss.cluster.enabled)
      Type true.
    2. Type of cluster configuration web or index (iss.cluster.type)
      Type web.
  3. Configure the local install settings by responding to the following prompts.
    1. Fully qualified domain name of this system (hostname)
      Type the FQDN, for example, bco01.example.com.
    2. Instance name of the installation for a web node (instance.name):
      Type the unique instance name used to identify topics and queues for this host, for example, bco01.
    3. Location to store the Lucene indexes (iss.store.dir):
      Accept the default (/var/opt/sun/comms/jiss/index).
    4. Location of attachment data (iss.data.dir)
      Accept the default (/var/opt/sun/comms/jiss/attach).
    5. Location of JISS log files (iss.log.dir):
      This needs to be on a local disk, for example, type /var/iss/logs.
  4. Configure the mail server setup by responding to the following prompt: Comma-delimited list of mail server IPs corresponding to mail.server (mail.server.ip).
    Type the mail server IP address or addresses, for example, 10.0.2.0,10.0.2.1.
    The User/Group Directory Server information should be identical in every configuration. Add every mail server in the cluster to the mail.server.ip list.
  5. Configure the GlassFish Server settings by responding to the following prompts.
    1. Directory location of the Application Server (appserv.dir)
      The default is /opt/SUNWappserver.
    2. Application Server web port (appserv.web.port)
      The default is 8080.
    3. Application Server admin port (appserv.admin.port)
      The default is 4848.
    4. Application Server domain name for deployment (appserv.domain)
      The default is domain1.
    5. Application Server admin user (appserv.admin.user)
    6. Application Server admin password (appserv.admin.password)
  6. Configure JMQ settings by responding to the following prompts.
    1. JISS JMQ broker hostname(s) list, that is host:7676,host2:7677 (imq.host)
      For example, type bco01.example.com:7676.
    2. Username for JISS JMQ broker (iss.imq.user)
    3. Password for JISS JMQ user (iss.imq.password)
    4. Password for admin user on JISS JMQ broker (iss.imq.admin.password)
  7. Configure the service setup by responding to the following prompt: Storeui access method, disk for single machine, http for multi-machine (iss.storeui.access.method)
    • Type disk. (http is for a different type of install.)
  8. Repeat for each web host.
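
Similarly, the cluster-related entries in a web host's jiss.conf file might resemble the following sketch (values are illustrative):

  iss.cluster.enabled = true
  iss.cluster.type = web
  hostname = bco01.example.com
  instance.name = bco01
  iss.store.dir = /var/opt/sun/comms/jiss/index
  iss.data.dir = /var/opt/sun/comms/jiss/attach
  iss.log.dir = /var/iss/logs
  appserv.dir = /opt/SUNWappserver
  appserv.web.port = 8080
  appserv.admin.port = 4848
  appserv.domain = domain1
  imq.host = bco01.example.com:7676
  iss.storeui.access.method = disk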

To Generate and Import Cluster Configuration Files

Each ISS index node that is part of the cluster needs a configuration file that is generated on the index node and copied to the ISS web nodes. This configuration file tells a web node where the index node's files are, how to connect to it for account state updates, and how to search its indexes. These files enable the web nodes to perform searches.

  1. On the first index host, use the configure_etc.pl -C command to generate a copy of the Indexing and Search Service iss_base/etc/jiss.conf file named after the host, for example, bco04.conf.
  2. Copy this index-host1.conf file to the /opt/sun/comms/jiss/etc/cluster.d directory on the web hosts.
  3. On the next index host, use the configure_etc.pl -C command to generate the corresponding file named after that host, for example, bco29.conf.
  4. Copy this index-host2.conf file to the /opt/sun/comms/jiss/etc/cluster.d directory on the web hosts.
  5. Perform the following steps on the web hosts:
    1. Update the iss.store.dir and iss.attach.dir parameters in the configuration files to point to the web host's mount of each indexing node's NFS share.
      • index-host1.conf
        instance.name = <index-host1>
        imq.host = <index-host1>.<domain>:7676
        iss.imq.user = jmquser
        iss.imq.password = <password>
        ldap.host = bco04.example.com:389
        java.naming.security.principal = cn=Directory Manager
        ldap.password = <password>
        java.naming.security.authentication = simple
        # These must be set manually:
        iss.store.dir =  /<index-host1>/index
        iss.attach.dir = /<index-host1>/attach
        
      • index-host2.conf
        instance.name = <index-host2>
        imq.host = <index-host2>.<domain>:7676
        iss.imq.user = jmquser
        iss.imq.password = <password>
        ldap.host = <index-host2>.<domain>:389
        java.naming.security.principal = cn=Directory Manager
        ldap.password = <password>
        java.naming.security.authentication = simple
        # These must be set manually:
        iss.store.dir =  /<index-host2>/index
        iss.attach.dir = /<index-host2>/attach
        
  6. Set the owner and access permissions on the *.conf files.
    A consolidated sketch of this procedure follows.
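
A minimal shell sketch of the generate-and-copy procedure, assuming index host bco04, web hosts bco01 and bco22, the jiss user from the example later in this information, and 644 permissions (an assumption):

  # On the index host: generate the cluster configuration file
  ./configure_etc.pl -C                 # produces bco04.conf
  scp bco04.conf bco01:/opt/sun/comms/jiss/etc/cluster.d/
  scp bco04.conf bco22:/opt/sun/comms/jiss/etc/cluster.d/
  # On each web host: edit iss.store.dir and iss.attach.dir in the copied
  # file, then set ownership and permissions
  chown jiss:jiss /opt/sun/comms/jiss/etc/cluster.d/*.conf
  chmod 644 /opt/sun/comms/jiss/etc/cluster.d/*.conf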

To Start Cluster Search Services

  • On the ISS web hosts that are running the Cluster Search Service, start the services by using the csearchmgr.sh command, as in the following sketch.
    See the csearchmgr.sh command for more information.
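
A sketch, assuming the default installation directory and the csearchmgr.sh -A usage shown in To Add an Additional Web Host:

  cd /opt/sun/comms/jiss/bin
  ./csearchmgr.sh -A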

To Index Users on Indexing Hosts

  • Use the issadmin.sh --bootstrap command to bootstrap users on the message store hosts.
    See issadmin.sh Usage for more information.
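
A minimal sketch, run on each indexing host (additional options may be required for your deployment; see issadmin.sh Usage):

  cd /opt/sun/comms/jiss/bin
  ./issadmin.sh --bootstrap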

To Verify Users on Web Hosts

  1. On each ISS web host, log in as the bootstrapped user.
    In this example, log in to bco01 and bco22.
  2. Perform a search by using the RESTful interface.
    The search defaults to the default mail host. Thus, you might need to change the hostname parameter in the URL if the user resides on a different mail host. For example, on bco01.example.com the search URL might resemble the following for user c1:

    You would then change the username and hostname fields in the URL to the following for user u1:

To Add an Additional Web Host

  1. Repeat the installation for the new node.
  2. Copy in the index node configuration files.
  3. Run csearchmgr.sh -A.
  4. Add the new node to the load balancer.
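
A sketch of steps 2 and 3, assuming an existing web host bco01 holds the current configuration files:

  scp 'bco01:/opt/sun/comms/jiss/etc/cluster.d/*.conf' /opt/sun/comms/jiss/etc/cluster.d/
  cd /opt/sun/comms/jiss/bin
  ./csearchmgr.sh -A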

To Remove a Web Host

  1. Remove the node from the load balancer.
  2. Run csearchmgr.sh -D.

Indexing and Search Service Highly Available Example

This example shows how to configure an ISS HA deployment consisting of the following hosts:

  • Two front-end web hosts
  • Two index hosts
  • Two message store hosts
  • One NFS host

Assumptions

This example assumes that you have already deployed Messaging Server and ISS (in non-HA mode), and have a host that can serve as the NFS server.

Note the following additional package requirements:

  • ISS Web Host: The web host needs GlassFish Server and Java Message Queue (JMQ).
  • ISS Indexing Host: The indexing host needs JMQ and, if you don't already have a Directory Server set up to perform JNDI lookups, Directory Server (LDAP).
    Note
    When installing ISS through the Communications Suite installer, JMQ is automatically installed for each type of ISS node. You need to install GlassFish Server and Directory Server separately.

Setting Up NFS to Contain the ISS Indexes

  1. On the NFS host, and ISS web and index hosts, create iss.user and iss.group with the same uid and gid.
  2. On all NFS client hosts (the ISS index and web nodes), edit the /etc/default/nfs file and make sure that the NFSMAPID_DOMAIN value is the same as on the NFS server.
    NFSD_SERVERS=1024
    NFSMAPID_DOMAIN=example.com
    
  3. Set up the ZFS pools on the NFS server and configure the shares on each web and index host.
    In the following:
    • bco04 and bco29 are ISS index hosts.
    • bco01 and bco22 are ISS web hosts.
    • nc-agile is the NFS host.
    • nc-agile.example.com:

      svcadm enable svc:/network/nfs/server:default
      mkdir -p /bco04 /bco29
      chown jiss:jiss /bco04 /bco29
      chmod 755 /bco04 /bco29
      zfs create pool/bco04
      zfs create pool/bco29
      zfs set mountpoint=/bco04 pool/bco04
      zfs set mountpoint=/bco29 pool/bco29
      share -F nfs -o rw=bco04.example.com,ro=bco01.example.com:bco22.example.com,anon=100 /bco04
      share -F nfs -o rw=bco29.example.com,ro=bco01.example.com:bco22.example.com,anon=100 /bco29

    • bco01.example.com and bco22.example.com:

      mkdir -p /bco04 /bco29
      chown jiss:jiss /bco04 /bco29
      mount -o ro nc-agile.example.com:/bco04 /bco04
      mount -o ro nc-agile.example.com:/bco29 /bco29

    • bco04.example.com:

      mkdir -p /var/opt/sun/comms/jiss
      chown jiss:jiss /var/opt/sun/comms/jiss
      mount -o rw nc-agile.example.com:/bco04 /var/opt/sun/comms/jiss
      mkdir -p /var/iss/logs
      chown jiss:jiss /var/iss/logs

    • bco29.example.com:

      mkdir -p /var/opt/sun/comms/jiss
      chown jiss:jiss /var/opt/sun/comms/jiss
      mount -o rw nc-agile.example.com:/bco29 /var/opt/sun/comms/jiss
      mkdir -p /var/iss/logs
      chown jiss:jiss /var/iss/logs

Running the setup Script on Indexing Hosts

Perform the following steps on each ISS indexing host.

  1. Run the ISS setup script.
  2. Configure the cluster setup.
  3. Configure the mail server setup.
    The mail server parameters for this indexing host point to the mail server that it indexes. The User/Group Directory Server information should be identical in every configuration. Add every mail server in the cluster to the mail.server.ip list.
  4. Configure the Java Message Queue setup.
  5. Configure the Directory Server setup.
  6. Configure the service setup.
  7. Repeat for each ISS indexing host.

Running the setup Script on Web Hosts

Perform the following steps on each ISS web host.

  1. Run the ISS setup script.
  2. Configure the cluster setup.
  3. Configure the local install settings.
  4. Configure the Mail Server settings.
    The User/Group Directory Server information should be identical in every configuration. Add every mail server in the cluster to the mail.server.ip list.
  5. Configure GlassFish Server settings.
  6. Configure JMQ settings.
  7. Configure the service setup.
  8. Repeat for each ISS web host.

Generating and Importing Cluster Configuration Files

  1. On the first index host (this example uses bco04.example.com), generate the cluster configuration file.
  2. On the next index host (this example uses bco29.example.com), do the same.
  3. On the web hosts (this example uses bco01.example.com and bco22.example.com):
    1. Update iss.store.dir and iss.attach.dir in the configuration files.
      • bco04.conf
      • bco29.conf
  4. Set the owner and access permissions on the *.conf files.
    A consolidated sketch of this procedure follows.
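
A consolidated sketch for this example; the location of configure_etc.pl, the edited iss.store.dir and iss.attach.dir values, and the 644 permission are assumptions based on the generic procedure earlier in this information:

  # On bco04.example.com and bco29.example.com:
  ./configure_etc.pl -C        # produces bco04.conf / bco29.conf
  scp bco04.conf bco01:/opt/sun/comms/jiss/etc/cluster.d/
  scp bco04.conf bco22:/opt/sun/comms/jiss/etc/cluster.d/
  # On bco01 and bco22, edit the copied files so that, for example,
  # bco04.conf contains:
  #   iss.store.dir = /bco04/index
  #   iss.attach.dir = /bco04/attach
  chown jiss:jiss /opt/sun/comms/jiss/etc/cluster.d/*.conf
  chmod 644 /opt/sun/comms/jiss/etc/cluster.d/*.conf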

Starting Cluster Search Services

  • On the web hosts that are running the Cluster Search Service (in this example, bco01.example.com and bco22.example.com), start the services as in the following sketch.
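
A sketch, using the csearchmgr.sh -A usage shown earlier:

  cd /opt/sun/comms/jiss/bin
  ./csearchmgr.sh -A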

Other useful csearchmgr.sh commands:

cd /opt/sun/comms/jiss/bin
./csearchmgr.sh -l (list cluster search service entries)
./csearchmgr.sh -D (remove all cluster search services)
./csearchmgr.sh -a -n <name> (add cluster search service for name)
./csearchmgr.sh -d -n <name> (delete cluster search service for name)

Indexing Users on Indexing Hosts

  • Run the issadmin.sh --bootstrap command on each indexing host (in this example, bco04.example.com and bco29.example.com) to bootstrap users on the message store hosts, as in the following sketch.
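
A minimal sketch (additional options may be required for your deployment; see issadmin.sh Usage):

  cd /opt/sun/comms/jiss/bin
  ./issadmin.sh --bootstrap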

Verifying Users on Web Hosts

  1. On each web host, log in as the bootstrapped user.
    In this example, log in to bco01 and bco22.
  2. Perform a search.
    The search defaults to the same mail host. Thus, you might need to change the hostname parameter in the URL if the user resides on a different mail host. For example, on bco01.example.com the search URL might resemble the following for user c1:

    You would then change the username and hostname fields in the URL to the following for user u1:

Troubleshooting Indexing and Search Service High Availability

General Differences Between Indexing Hosts and Web Hosts

The following table shows the two main differences, other than host name, in the jiss.conf file between the indexing hosts and web hosts. The installer should automatically configure these parameters based on the iss.cluster.enabled parameter and the type of node being configured.

jiss.conf File Differences Between Indexing and Web Hosts

Parameter Name                 Indexing Host                      Web Host
java.naming.factory.initial    com.sun.jndi.ldap.LdapCtxFactory   com.sun.jndi.fscontext.RefFSContextFactory
iss.accountstate.dst.name      AccountState.instance.name         AccountState

The naming factory change tells the web host to use file-based JNDI lookups that point to the local host's JMQ broker. The account state change funnels all account state updates through a single topic to the GlassFish Server WAR files (rest, storeui).

Troubleshooting Web Hosts

  1. Verify that the Cluster Search Services are running.
  2. Check the Cluster Search Services log files.
  3. Check the JMQ connections, as in the sketch following this list.
    For each running Cluster Search Service, you should see the following:
    • 1 consumer to the SearchTopic topic
    • 1 consumer to the Index.instance.name queue
    • 1 consumer per configuration in iss.cluster.d to AccountState.instance.name on the indexing host
    The GlassFish Server WAR files produce the following connections to the JMQ broker running on the web host:
    • 1 consumer per configuration in iss.cluster.d to AccountState
    • 512 (or iss.rest.proxypool.size) producers to SearchTopic, assuming that at least one RESTful search has been issued (otherwise 0)
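
A hedged sketch of these checks, assuming the default installation directory; the broker host, port, and admin user passed to imqcmd (the JMQ administration utility) are illustrative:

  # List the cluster search service entries
  cd /opt/sun/comms/jiss/bin
  ./csearchmgr.sh -l
  # List connections on the web host's JMQ broker (prompts for a password)
  imqcmd list cxn -b webhost.example.com:7676 -u admin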

Troubleshooting Indexing Hosts

  1. Check the log files:
    <iss.log.dir>/iss-indexsvc.log.0
    
  2. Check the JMQ connections on the indexing node, as in the sketch following this list.
    You should see the following:
    • 1 consumer for the AccountState.instance.name topic for each cluster search service that is started, plus one for jmqconsumer.
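
A hedged sketch, with an illustrative broker host, port, and log location; imqcmd list dst shows destinations with their consumer counts:

  # Destinations and consumer counts on the indexing node's broker
  imqcmd list dst -b indexhost.example.com:7676 -u admin
  # Recent indexing service log entries
  tail -50 /var/iss/logs/iss-indexsvc.log.0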

Uninstall Indexing and Search Service High Availability

To uninstall Indexing and Search Service, run setup -u on each node.
