
Building a Central Authentication Server
This document explains the steps necessary to install the Apache Tomcat and httpd servers, the mod_auth_cas Apache 2 module, the Java 1.6.x runtime environment, and the JA-SIG CAS 3.3.5 software, and to get it all to peacefully co-exist, either as a standalone server or as part of a clustered high-availability solution. It is probably not without its faults (formatting and style aside); email me (user: swball; domain: uwaterloo.ca) should you find any.
Setup Overview
The server on which all of this software is installed is running RHEL Server 5. This Linux distribution includes a full Apache Tomcat 5 software suite as part of the base RPM repository, which was lovely to see, as staying with vendor-supplied software in packaged format is nice on many levels. However, we ultimately gave up on using RHEL's (which seems to be JPackage's) Tomcat bundle due to many unresolvable issues, which we explain further in this document, and instead opted to retrieve and install unpackaged software directly from each project's website.
The basic steps involved in establishing a standalone CAS service:
- Download all necessary software and unpack it into appropriate locations;
- Edit the Tomcat server.xml file to disable the 8080 listener;
- Enable LDAP functionality in CAS software;
- Configure the httpd AJP connector to forward CAS requests to the application server;
- Build, install, and configure the mod_auth_cas module for httpd;
- Start httpd and tomcat and make sure everything is working.
Required Non-Packaged Software
We have had success running the following combination of software, which, after the author finishes typing this sentence, will likely be obsolete.
- apache-tomcat-5.5.26.tar.gz
- jdk-6u6-linux-i586-rpm.bin
- jdk-6u17-linux-i586-rpm.bin
- cas-server-3.3.5-release.tar.gz
- apache-maven-2.0.9-bin.tar.gz
Installing Tomcat and Building LDAP-enabled CAS software
- Download all of the above software into a common location, and then move it into a build directory:
mv apache*gz jdk*bin cas-server*gz /usr/local/build
- Create a tomcat user:
useradd -c "Tomcat application user" -d /usr/local/tomcat tomcat
- Unpack tomcat into /usr/local/tomcat, change ownership of all files and prepare to configure it to run with CAS:
tar zxvf /usr/local/build/apache-tomcat-5.5.26.tar.gz -C /usr/local
cd /usr/local && mv apache-tomcat-5.5.26 tomcat && chown -R tomcat:tomcat tomcat && cd tomcat/conf
- Add server.xml to your favorite version control system, and open it in an editor, making the following modifications:
- Find the Connector for port 8080 and comment it out:
<!-- <Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" /> -->
- Install the following init script, or something that accomplishes the same thing:
cat > /etc/init.d/tomcat <<'EOF'
#!/bin/bash
#
# tomcat        This shell script takes care of starting and stopping
#               the tomcat subsystem.
#
# chkconfig: - 86 14
# description: tomcat server
# processname: tomcat

# See how we were called.
case "$1" in
  start)
        runuser -s /bin/bash - tomcat -c 'JAVA_HOME=/usr/java/default /usr/local/tomcat/bin/startup.sh'
        ;;
  stop)
        runuser -s /bin/bash - tomcat -c 'JAVA_HOME=/usr/java/default /usr/local/tomcat/bin/shutdown.sh'
        ;;
  restart)
        runuser -s /bin/bash - tomcat -c 'JAVA_HOME=/usr/java/default /usr/local/tomcat/bin/shutdown.sh' && \
        runuser -s /bin/bash - tomcat -c 'JAVA_HOME=/usr/java/default /usr/local/tomcat/bin/startup.sh'
        ;;
  *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
esac
EOF
chmod 755 /etc/init.d/tomcat
- Shoehorn the init script into the runlevels:
chkconfig --add tomcat
chkconfig --level 2345 tomcat on
chkconfig --list tomcat
- Install the JDK:
cd /usr/local/build
chmod u+x jdk-6u6-linux-i586-rpm.bin
./jdk-6u6-linux-i586-rpm.bin
Accept the license, and wait for the rpm installation to finish. Probably a good idea to set:
export JAVA_HOME=/usr/java/default
This path is a symlink that the JDK installer creates pointing to the first installation in /usr/java/jdk_<ver>.
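A quick way to confirm the symlink chain and the JDK it points at (assuming the installer created the default link as described):
ls -l /usr/java/
/usr/java/default/bin/java -version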
- Unpack CAS and Maven:
tar zxfv cas-server-3.3.5-release.tar.gz
tar zxfv apache-maven-2.0.9-bin.tar.gz
- Enter the CAS software directory:
export CAS_HOME=/usr/local/build/cas-server-3.3.5 && cd $CAS_HOME/cas-server-webapp
- Delve deep into the cas-server-webapp directory:
cd $CAS_HOME/cas-server-webapp/src/main/webapp/WEB-INF
- Add deployerConfigContext.xml to your favorite version control system, and open it in an editor, making the following modifications:
- Add this bean at the bottom, before the </beans> tag:
<bean id="contextSource" class="org.springframework.ldap.core.support.LdapContextSource"> <property name="urls"> <list> <value>ldaps://ads.uwaterloo.ca/</value> </list> </property> </bean>
- Within the property "authenticationHandlers", comment out the following bean used for testing the CAS service:
<!-- <bean class="org.jasig.cas.authentication.handler.support.SimpleTestUsernamePasswordAuthenticationHandler" /> -->
- Add the following bean within the "authenticationHandlers" property:
<bean class="org.jasig.cas.adaptors.ldap.FastBindLdapAuthenticationHandler" > <property name="filter" value="%u@ads.uwaterloo.ca" /> <property name="contextSource" ref="contextSource" /> </bean>
Note that your "filter" value is going to be different from ours in at least one respect. See the CAS documentation for other LDAP options.
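Note that the FastBind handler ships in the separate cas-server-support-ldap module, which the stock cas-server-webapp pom.xml does not pull in. You will likely need to add a dependency along these lines to $CAS_HOME/cas-server-webapp/pom.xml before building (the artifactId and version here are our best guess for this release -- check the CAS documentation if the build cannot resolve it):
<dependency>
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server-support-ldap</artifactId>
    <version>3.3.5</version>
</dependency>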
- Build the war file:
cd $CAS_HOME && ../apache-maven-2.0.9/bin/mvn -Dmaven.test.skip=true package install
- Install it:
cp $CAS_HOME/cas-server-webapp/target/cas.war /usr/local/tomcat/webapps
- Start the httpd and tomcat servers (httpd installation and AJP setup are covered in the AJP Configuration section later in this document):
/etc/init.d/httpd start && /etc/init.d/tomcat start
- Use lsof to verify that a java process is in fact listening on port 8009/tcp.
lsof -i | grep LISTEN | grep java
Tomcat and JBoss Clustering
We followed the configuration steps from http://www.ja-sig.org/wiki/display/CASUM/Clustering+CAS and they were pretty accurate. This configuration was used to deploy two CAS 3.2.1 servers on the same subnet hiding behind a Cisco IOS load balancer. The following configuration steps, repeated here for the sake of consistency, should be performed on each server at the same time. Please note that this documentation may not be valid for newer versions of CAS.
- Enter the appropriate CAS software directory:
export CAS_HOME=/usr/local/build/cas-server-3.2.1 && cd $CAS_HOME/cas-server-webapp/src/main/webapp/WEB-INF
- Add web.xml to version control and open in an editor, adding the following beneath the first </context-param> tag:
<distributable />
- Head over to the Tomcat configuration directory:
cd /usr/local/tomcat/conf
- Edit server.xml, enabling the multicast section (scroll down to around line 300, and you'll see the default cluster configuration -- we just chose to add our own):
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true">
    <Membership className="org.apache.catalina.cluster.mcast.McastService"
                mcastAddr="239.255.0.1"
                mcastPort="45564"
                mcastFrequency="500"
                mcastDropTime="3000"
                mcastTTL="1" />
    <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
              tcpListenAddress="auto"
              tcpListenPort="4001"
              tcpSelectorTimeout="100"
              tcpThreadCount="6" />
    <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
            replicationMode="synchronous"
            ackTimeout="15000"
            waitForAck="true" />
    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;" />
    <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener" />
</Cluster>
You may have to chat with your network admin about ensuring that multicast is supported on your network.
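A quick way to see whether the membership heartbeats are actually reaching a given host is to watch for them with tcpdump (eth0 is just an example interface; substitute your own):
tcpdump -ni eth0 host 239.255.0.1 and udp port 45564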
- Go to the iptables configuration directory:
cd /etc/sysconfig
- Taking care that iptables is checked out of your version control, open it in an editor, and add the following lines to the appropriate sections of your firewall, where X.X.X.X is the IP of the other CAS server.
-A INPUT -d 224.0.0.1/4 -j ACCEPT
-A UDPINPUT -d 224.0.0.1/4 -p udp -m udp --dport 45564 -j ACCEPT
-A UDPINPUT -d 224.0.0.1/4 -p udp -m udp --dport 48866 -j ACCEPT
# X.X.X.X being the IP of the other CAS server -- port 4001 is how Tomcat communicates session data to the
# other host, and as there were also some other higher-numbered ports in use (as indicated by iptables DROP
# logs), likely by JBoss for ticket cache replication, we decided to fully open communications between the hosts
-A TCPINPUT -s X.X.X.X/32 -p tcp -m tcp -j ACCEPT
- Restart iptables:
/sbin/service iptables restart
At this point you've told Tomcat that the CAS application is distributable, and configured Tomcat to run as a cluster, performing node discovery via multicast and sending replication data over 4001/tcp. Once these steps have been performed on both servers, if successful, you'll see in your catalina.out log:
May 7, 2008 3:05:51 PM org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://X.X.X.X:4001,catalina,X.X.X.X,4001, alive=115058]
On the second server you should see the session information being passed along by the first server to have been started:
May 7, 2008 3:05:53 PM org.apache.catalina.cluster.session.DeltaManager getAllClusterSessions
WARNING: Manager [/cas], requesting session state from org.apache.catalina.cluster.mcast.McastMember[tcp://X.X.X.X:4001,catalina,X.X.X.X,4001,
alive=117065]. This operation will timeout if no session state has been received within 60 seconds.
May 7, 2008 3:05:54 PM org.apache.catalina.cluster.session.DeltaManager waitForSendAllSessions
INFO: Manager [/cas]; session state send at 5/7/08 3:05 PM received in 225 ms.
Now on to the JBoss ticket cache replication.
- Go to the CAS software directory:
cd $CAS_HOME/cas-server-webapp/src/main/webapp/WEB-INF/spring-configuration
- Add ticketRegistry.xml to version control, and open in an editor, changing:
<bean id="ticketRegistry" class="org.jasig.cas.ticket.registry.DefaultTicketRegistry" />
to look like:
<bean id="ticketRegistry" class="org.jasig.cas.ticket.registry.JBossCacheTicketRegistry">
    <property name="cache" ref="cache" />
</bean>
<bean id="cache" class="org.jasig.cas.util.JBossCacheFactoryBean">
    <property name="configLocation" value="classpath:jbossTicketCacheReplicationConfig.xml" />
</bean>
- Change directories again:
cd $CAS_HOME/cas-server-integration-jboss/src/test/resources
- Add the JBoss cache configuration file to version control, and open in an editor, commenting out the following lines in jbossTestCache.xml:
<!-- <depends>jboss:service=TransactionManager</depends> -->
<!-- <attribute name="TransactionManagerLookupClass">
     org.jboss.cache.DummyTransactionManagerLookup</attribute> -->
- In the same file, find the section that looks like this:
<UDP mcast_addr="228.1.2.3" mcast_port="48866"
Change the fields in this block to the following:
mcast_addr="239.255.0.2" ip_ttl="1"
and add the following field, where X.X.X.X is the IP of the system you are configuring:
bind_addr="X.X.X.X"
Again, this information may vary depending on how your network is configured to support multicast traffic.
- Save and close the file, and then copy it into its working location with a different name:
cp jbossTestCache.xml /usr/local/tomcat/common/classes/jbossTicketCacheReplicationConfig.xml
- Change directories again:
cd $CAS_HOME/cas-server-webapp
- Open pom.xml in an editor, adding the following snippet beneath the ldap one we configured earlier:
<dependency>
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server-integration-jboss</artifactId>
    <version>3.1</version>
    <scope>runtime</scope>
</dependency>
- Open /etc/init.d/tomcat in an editor, adding the following variable declaration near the top:
export CATALINA_OPTS=-Djava.net.preferIPv4Stack=true
- Rebuild the CAS war file:
cd $CAS_HOME && ../apache-maven-2.0.9/bin/mvn package install
This will take a few minutes to complete. If you need to skip tests for whatever reason, pass the following as an argument to mvn:
-Dmaven.test.skip=true
- Copy the war file into place:
cp $CAS_HOME/cas-server-webapp/target/cas.war /usr/local/tomcat/webapps
- Start the httpd and tomcat servers, and check the Tomcat log to see what is happening:
/etc/init.d/httpd start && /etc/init.d/tomcat start && tail -f /usr/local/tomcat/logs/catalina.out
You should now see the following entries in the log, hinting that the JBoss ticket cache replication is working:
2008-05-07 15:05:54,734 INFO [org.jasig.cas.util.JBossCacheFactoryBean] - <Starting TreeCache service.>
-------------------------------------------------------
GMS: address is X.X.X.X:33676
-------------------------------------------------------
The Apache configuration steps we provide later in this document rewrite and forward http(s) traffic for a single CAS server; we now have to reconfigure this aspect of each server so that it presents itself as the hostname of the load balancer, since clients will no longer be reaching the servers except through the load balancer. YMMV on these points, depending on the load balancing solution you have at hand. For the sake of brevity, on each of the servers we now have to (a sketch of the resulting directives follows this list):
- Change the RedirectMatch directive in ssl.conf to forward to the SLB hostname instead of that of the server itself;
- Change the RewriteRule in httpd.conf to forward to the SLB hostname;
- Change the ServerName in httpd.conf to that of the SLB hostname;
- Install an SSL keypair generated for the SLB hostname into /etc/pki/tls and modify the SSLCertificate{Key,}File directives in ssl.conf accordingly;
- Restart httpd services on both hosts
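As a rough sketch of what these changes end up looking like (cas.uwaterloo.ca standing in here for whatever hostname your load balancer answers to -- substitute your own):
# httpd.conf
ServerName cas.uwaterloo.ca
RewriteEngine On
RewriteRule ^/.*$ https://cas.uwaterloo.ca/cas [L,R]
# ssl.conf, inside the SSL VirtualHost
SSLCertificateFile /etc/pki/tls/certs/cas.uwaterloo.ca.crt
SSLCertificateKeyFile /etc/pki/tls/private/cas.uwaterloo.ca.key
RedirectMatch / https://cas.uwaterloo.ca/cas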
For the Cisco SLB, we found that we had to disable sticky sessions. When we authenticated to CAS through the SLB, took down the specific server to which we had authenticated, and then reloaded the SLB session, we found that we were still being routed to the downed server. Since our SLB doesn't provide a way to probe the services in its server farms and stop routing to them if a certain service fails to return expected data, a sudden service failure will still be visible through the load balancer; with sticky sessions disabled at least clients can still reach the active server, albeit after a few failed connections. That said, our Nagios monitoring system can probe the cluster and alert upon failure, at which point admins can disable the affected machine in the SLB configuration and perform diagnostics.
You should be able to access the SLB hostname via your web browser and reach one of the CAS servers. Login to it, and then check your browser cookies, and you should see a cookie called CASTGC with a value that resembles TGT-1-kOorRC3byiPrLTGh9LpYgYxrtfwvJAeOIreapJeDvSRuxtni3kNz-server02, where server02 is one of your two CAS servers. Stop CAS and httpd on server02, head back to your browser and reload the page; with our Cisco SLB there seems to be a delay of several seconds as the load balancer apparently tries to contact the downed service, and being unable to, routes you to the other server where your session and cache information have been replicated. (Apparent) success!
Clustering Problems
We have encountered two serious issues with this CAS cluster, both concerning re-establishing one of the cluster members after taking it offline to simulate a failover situation. After stopping the Tomcat/JBoss and httpd services on one of the machines and then restarting them a few moments later (say 30-60 seconds), expecting that the system will retrieve both session and ticket caches from the running system, we see a state transfer problem at the JBoss level:
PropertyAccessException 1: org.springframework.beans.MethodInvocationException: Property 'configLocation' threw exception; nested exception is
java.lang.RuntimeException: org.jboss.cache.CacheException: Initial state transfer failed: Channel.getState() returned false
We discovered that the JBoss ticket cache replication subsystem uses a socket opened with SO_KEEPALIVE to communicate with other JBoss ticket caches. Once the initial connection has been made and data transferred, the link remains open instead of being torn down and another link getting established when the need to transfer data arises. The keepalive timer will begin to run as soon as the initial JBoss connection is made; you can view this by running netstat with the -o option. The Linux default for the TCP networking setting tcp_keepalive_time is 7200 seconds -- two hours! After the keepalive timer on the socket runs out, the system will send probes to the remote system over the link -- the value of tcp_keepalive_probes is the number of probes -- and the interval at which these probes will be sent is set in tcp_keepalive_intvl in seconds. If the remote system does not respond to the probes, then the OS will close the socket and inform the application. Unless we adjust tcp_keepalive_time to something reasonable, or tell the JBoss subsystem not to reuse the same socket, then the running system will keep waiting for its now-defunct JBoss connection. We can set the tcp_keepalive_time value to something more reasonable:
sysctl -w net.ipv4.tcp_keepalive_time=60
echo "net.ipv4.tcp_keepalive_time=60" >> /etc/sysctl.conf
Now, the longest you should have to wait before bringing a downed cluster member back into operation is 60 seconds. Further testing will reveal how well this configuration change actually works.
On a related note, http://www.mail-archive.com/jboss-user@lists.jboss.org/msg65396.html suggests increasing InitialStateRetrievalTimeout. Another JBoss config found in http://www.nabble.com/JbossTicketCache-td15276812.html#a15278099 has the value increased from the default 15000ms (15 seconds) to 120000ms (2 minutes). We should set this to the longest period of time we should have to wait to retrieve the initial state from a running cluster member: 60 seconds.
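With a JBoss Cache 1.x-style TreeCache configuration, that would mean an attribute roughly like the following in jbossTicketCacheReplicationConfig.xml (60000 ms matching the keepalive value above; check the attribute name against your JBoss Cache documentation):
<attribute name="InitialStateRetrievalTimeout">60000</attribute>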
The second issue with reintegrating a failed cluster member is with the Tomcat session replication. The log entries on the restarted server:
May 8, 2008 2:51:09 PM org.apache.catalina.cluster.session.DeltaManager getAllClusterSessions
WARNING: Manager [/cas], requesting session state from org.apache.catalina.cluster.mcast.McastMember[tcp://X.X.X.X:4001,catalina,X.X.X.X,4001,
alive=120606]. This operation will timeout if no session state has been received within 60 seconds.
May 8, 2008 2:52:09 PM org.apache.catalina.cluster.session.DeltaManager waitForSendAllSessions
SEVERE: Manager [/cas]: No session state send at 5/8/08 2:51 PM received, timing out after 60,031 ms.
And on the running server:
May 8, 2008 2:51:09 PM org.apache.catalina.cluster.tcp.ReplicationTransmitter sendMessage
SEVERE: Unable to send replicated message to member [org.apache.catalina.cluster.mcast.McastMember[tcp://X.X.X.X:4001,catalina,X.X.X.X,4001,
alive=416]], has only senders for []
May 8, 2008 2:51:09 PM org.apache.catalina.cluster.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://X.X.X.X:4001,catalina,X.X.X.X,4001, alive=11]
This seems to happen when the two servers are passing replication information between each other, and one of the servers is stopped; the running server keeps the connection over 4001/tcp open for a short period of time, and if the downed server is restarted while the running server is still in the process of tearing down the interrupted connection, then the restarted server will not be able to retrieve replication data from the running server. The TCP connection appears to go into TIME_WAIT for a period of 30 seconds once replication updates have been made. The solution for this seems to be to not stop and restart the clustered services on a downed host within 30 seconds.
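Before restarting the downed member, it is easy to check on the running server whether the old replication connection is still lingering in TIME_WAIT:
netstat -ant | grep 4001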
AJP Configuration
AJP stands for `Apache JServ Protocol', and the AJP Connector is used to seamlessly integrate Apache HTTP and Tomcat Server installations. We decided to go this route instead of having Tomcat serve its own content because:
- httpd can handle static requests, while leaving the Java component to the Tomcat server;
- configuring SSL for httpd is a hair simpler than for Tomcat;
- being able to redirect arbitrary URL locations to the Java component is desirable;
The configuration steps are as follows:
- Install httpd and friends, and go to the httpd config directory:
yum install httpd mod_ssl
cd /etc/httpd/conf.d
- Add the following to the proxy-ajp.conf file:
ProxyPass /cas ajp://localhost:8009/cas
- Make the following changes to the ssl.conf file:
- Modify the following directives accordingly:
SSLCertificateFile /etc/pki/tls/certs/hostname.crt
SSLCertificateKeyFile /etc/pki/tls/private/hostname.key
- add to <VirtualHost:443>
RedirectMatch / https://cas-dev.uwaterloo.ca/cas
- Change dir again:
cd ../conf
- Add to <VirtualHost:80> in httpd.conf:
RewriteEngine On
RewriteRule ^/.*$ https://cas-dev.uwaterloo.ca/cas [L,R]
- Save and close the edited files, and restart the web server, verifying a secure service is correctly offered on 443/tcp.
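A couple of quick checks from the server itself (the second should return a 302 redirect to the CAS location configured above):
echo | openssl s_client -connect localhost:443 2>/dev/null | grep -E 'subject=|issuer='
curl -kI https://localhost/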
mod_auth_cas Configuration
We have set up mod_auth_cas as a part of our testing. Here are the steps to follow to build, install, and configure it:
- Fetch the code and install:
cd /usr/local/build
svn co https://www.ja-sig.org/svn/cas-clients/mod_auth_cas/trunk mod_auth_cas
cd mod_auth_cas/src
apxs -i -c mod_auth_cas.c
- Insert the first line into httpd.conf, and check to ensure that the second line is present:
LoadModule auth_cas_module modules/mod_auth_cas.so
LoadModule authz_user_module modules/mod_authz_user.so
- Add to /etc/httpd/conf.d/mod_auth_cas.conf:
<IfModule mod_auth_cas.c>
    CASVersion 2
    CASDebug On
    CASCertificatePath /etc/pki/tls/certs/ist-ca.pem
    CASLoginURL https://cas-dev.uwaterloo.ca/cas/login
    CASValidateURL https://cas-dev.uwaterloo.ca/cas/serviceValidate
    CASCookiePath /var/tmp/cas/
</IfModule>
- Fetch the CA root certificate from IST-CA:
cd /etc/pki/tls/certs
wget -O ist-ca.pem http://ist.uwaterloo.ca/security/IST-CA/cacert.pem
- This mod_rewrite code is used to test the mod_auth_cas module on the CAS server itself; we poke a hole for /test because the default is to redirect all requests to /cas where the AJP connector picks it up:
# RedirectMatch / https://cas-dev.uwaterloo.ca/cas
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/test
RewriteCond %{REQUEST_URI} !^/cas
RewriteRule ^/.*$ https://cas-dev.uwaterloo.ca/cas [L,R]
- Enable .htaccess files in the <Directory "/var/www/html"> container in httpd.conf:
AllowOverride AuthConfig
- Create a test directory and put something into it:
mkdir /var/www/html/test
echo "<html>Hello, world</html>" > /var/www/html/test/index.html
- Add to /var/www/html/test/.htaccess:
AuthType CAS
require user your_userid
- Create temp directory for CAS cookies:
mkdir -m 700 /var/tmp/cas
chown apache:apache /var/tmp/cas
- Restart web server:
service httpd restart
- Open a web browser to https://cas-dev.uwaterloo.ca/test; you should be redirected to the CAS login page, and your URL should be
https://cas-dev.uwaterloo.ca/cas/login?service=https://cas-dev.uwaterloo.ca/test
- Authenticate correctly and you will be taken to the protected page
mod_authnz_ldap Configuration
Using pre-established ADS groups to control who can authenticate to and view web pages is handy. Here's how we do it using mod_authnz_ldap:
- Ensure the following module is being loaded in your httpd.conf:
LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
- Ensure that AllowOverride AuthConfig is set should you want to put the directives in a .htaccess file, or alternatively add them to a VirtualHost.
- You can determine the appropriate ldap-group(s), as used below, by binding to ADS, searching for a few different users, and finding common memberOf attributes (an example search is sketched after this list).
- Add the following to your htaccess/httpd.conf file:
AuthBasicProvider ldap
AuthLDAPBindDN "CN=to,a,bind,user"
AuthLDAPBindPassword bindpass
AuthLDAPURL ldap://ads.uwaterloo.ca/DC=ads,DC=uwaterloo,DC=ca?sAMAccountName?sub?(objectClass=*)
AuthLDAPGroupAttributeIsDN on
require ldap-group CN=UWdir-HR-staff,OU=Is A,OU=Security Groups,OU=Administration,DC=ads,DC=uwaterloo,DC=ca
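To hunt for candidate groups, an ldapsearch along these lines works (the bind DN and sample userid are placeholders; ldapsearch comes from the openldap-clients package):
ldapsearch -x -H ldap://ads.uwaterloo.ca -D "CN=to,a,bind,user" -W \
  -b "DC=ads,DC=uwaterloo,DC=ca" "(sAMAccountName=someuser)" memberOf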
Please note that mod_authnz_ldap is only available in httpd 2.2 and later; otherwise you'll have to use mod_auth_ldap. Configuration steps are forthcoming.
Java Keystore Configuration
Much of the time spent configuring this software in the beginning was dedicated to enabling SSL services for Tomcat on port 8443. This firstly requires that $CATALINA_HOME/conf/server.xml be edited and the connector below
<!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
be uncommented to enable it. The Java keystore is a file that contains at minimum a private and public key pair, and usually the CA certificate if the public key is signed by an authority. The guide located at http://tomcat.apache.org/tomcat-5.0-doc/ssl-howto.html is helpful.
The steps below were those that actually worked for us, but only using the version of keytool found in the Sun Java 1.5 JVM (the GCJ software's version, which is supposed to be functionally identical, was failing on the last step below, as noted in the Java Problems section later in this document):
keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/local/tomcat/.keystore
keytool -certreq -alias tomcat -file certreq.csr -keystore /usr/local/tomcat/.keystore
<have a certificate authority work wonders with the request>
keytool -import -trustcacerts -file cacert.pem -alias root -keystore /usr/local/tomcat/.keystore
keytool -import -trustcacerts -file server.crt -alias tomcat -keystore /usr/local/tomcat/.keystore
keytool -list -v -storetype jks -keystore /usr/local/tomcat/.keystore
As keytool doesn't handle private keys well, if you have a keypair generated by your CA, you can also tell Tomcat to use the PKCS12 format:
openssl pkcs12 -export -chain -CAfile cacert.pem -in publickey.pem -inkey privatekey.pem -out /usr/share/tomcat5/.keystore
Note the following carefully:
- If the keystore resides under $CATALINA_HOME and is named .keystore with a password of `changeit' (no quotes), then you do not have to specify:
keystore="/path/to/keystore" keystorePass="password"
in the HTTPS Connector section of the Tomcat server.xml file.
- If you decide to use a PKCS12 keystore as noted, you will have to specify keystoreType="PKCS12" in the same Connector.
See the aforementioned Tomcat SSL-HOWTO guide for more detail, and remember that Google is your friend. Making friends with this program is unusually complicated, though luckily there is much information available if you look hard enough.
After we decided that using httpd as the front-end was best, we had a Java keystore file with a private key contained within that needed to come out. Here's how that works:
wget http://mark.foster.cc/pub/java/ExportPriv.java
javac ExportPriv.java
java ExportPriv <keystore file> <alias> <password>
You can redirect standard output to the final resting place of the private key (e.g. /etc/pki/tls/private/my.key)
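For example (the keystore path, alias, and password here match the defaults used earlier in this document; adjust to suit):
java ExportPriv /usr/local/tomcat/.keystore tomcat changeit > /etc/pki/tls/private/my.key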
Java Problems
The following were show-stoppers:
- The GCJ keytool program was not 100% compatible with the Java keytool, and as such, we could not get a SSL keystore fully working. From catalina.out:
SEVERE: Exception trying to load keystore /usr/share/tomcat5/.keystore java.security.KeyStoreException: JKS
This output is similar to:
# keytool -list -storetype JKS -keystore .keystore
Provider fully qualified class name: blah
keytool error: java.security.KeyStoreException: JKS
- Installing the Sun JDK 1.5.0 and the jpackage compat RPM (used to link the Sun java software into the /etc/alternatives tree) brought about this error written to catalina.out:
sun.misc.InvalidJarIndexException: Invalid index
        at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:769)
        at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:682)
        at sun.misc.URLClassPath.getResource(URLClassPath.java:161)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:192)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
[...]
We surmise that this might be a result of trying to use the Sun 1.5.0 JVM when all of the tomcat5 dependencies were built/configured using the GNU Compiler for Java software that comes as the default Java environment (due to licensing restrictions with including a vendor supplied JVM). Unfortunately we don't have enough Java experience to know how to fix this, and Google wasn't up-front with pertinent information.
- Installing the Sun 1.4.2 JDK from RPM and then attempting to install a compat RPM (such as was done for 1.5.0) was soon abandoned after the revision numbers of the JDK and the compat rpm were different, and the sysadmin did not feel like trying to make it all kosher.
- Foregoing a Java keystore in favour of httpd performing SSL encryption was the KISS solution, and permitted us to continue with the RHEL supplied packages; it worked, but we came across problems with the Spring LDAP libraries apparently expecting a JVM more fully-featured than the GNU version. I will post the specific errors if and when I can find them.
- The software we are running was all fetched directly from project websites, so we have to keep an eye out for updates and apply them manually to the test, and then later the production system. Having a vendor like Red Hat look after software maintenance gives us more time to work on exciting things.
Useful Links
- http://www.ja-sig.org/wiki/display/CASUM/Home -- CAS server documentation
- http://www.ja-sig.org/wiki/display/CASC/mod_auth_cas -- CAS Apache module homepage
- http://www.nabble.com/Getting-started:-CAS-Server-3.0.7-+-LDAP-(AD)-Authentication-t3649868.html -- working example of the Apache ant build process (useful only for CAS 3.0 installation)
- http://www.weiqigao.com/blog/2007/01/14/tomcat_5_on_fedora_core_6_in_five_easy_steps.html -- this guy's blog is a good resource on setting up Tomcat