Clustering
Here are some steps necessary to enable clustering...
Passwords
When deploying in Production Mode, WebLogic will ask for a username and password before starting.
To automate startup, create a "security" folder in the server directory of the domain. Inside it, create a "boot.properties" file.
Example:
mkdir -p ./servers/AdminServer/security
echo "username=weblogic" >> ./servers/AdminServer/security/boot.properties
echo "password=weblogic123" >> ./servers/AdminServer/security/boot.properties
Don't worry: on the next startup, WebLogic encrypts the username and password in this file.
Node Manager
Steps to create a WebLogic Systemd startup script...
Properties File
A file "nodemanager.properties" will be automatically created when you run the script "./startNodeManager.sh".
./bin/startNodeManager.sh
Hit Ctrl+C to kill the process. Now we can edit the "nodemanager.properties" file.
Change Listen Address
Change the listen address from 'localhost' to a reachable IP address.
# ListenAddress=localhost
ListenAddress=0.0.0.0
Disable Start Script
In WebLogic 12.1.1 (and higher), the Node Manager has been modified to use the "startWebLogic.sh" script to launch WebLogic. I prefer the old technique because it allows creating custom startup parameters for each server.
# StartScriptEnabled=true
StartScriptEnabled=false
Systemd Startup Scripts
Create a file called "/etc/systemd/system/nodemanager.service".
[Unit]
Description=WebLogic Node Manager Service

[Service]
Type=simple
# Note that the following three parameters should be changed to the correct paths
# on your own system
WorkingDirectory=/home/oracle/wsc72/user_projects/domains/replicated_domain
ExecStart=/home/oracle/wsc72/user_projects/domains/replicated_domain/bin/startNodeManager.sh
ExecStop=/home/oracle/wsc72/user_projects/domains/replicated_domain/bin/stopNodeManager.sh
User=oracle
Group=oinstall
KillMode=process
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
You can now use the following commands to start / stop nodemanager:
sudo systemctl enable nodemanager
sudo systemctl start nodemanager
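To confirm the Node Manager came up cleanly, you can check it with the standard systemd tooling (a sketch; unit name matches the file created above):

sudo systemctl status nodemanager
sudo journalctl -u nodemanager -f

The journalctl command follows the Node Manager's console output as it is captured by the journal, which is handy when diagnosing startup failures.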
Admin Console
Let's create a startup script for the Admin Console as well...
Create a file called "/etc/systemd/system/weblogic.service".
[Unit]
Description=WebLogic Admin Console Service

[Service]
Type=simple
# Note that the following three parameters should be changed to the correct paths
# on your own system
WorkingDirectory=/home/oracle/wsc72/user_projects/domains/replicated_domain
ExecStart=/home/oracle/wsc72/user_projects/domains/replicated_domain/bin/startWebLogic.sh
ExecStop=/home/oracle/wsc72/user_projects/domains/replicated_domain/bin/stopWebLogic.sh
User=oracle
Group=oinstall
KillMode=process
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
You can now use the following commands to start / stop the Admin Console:

sudo systemctl enable weblogic
sudo systemctl start weblogic
Admin Console
Now we need to make some configuration changes to the domain. The best way to do this is from the Admin Console.
Create Machines
In the WebLogic Admin Console, create 'machines' to logically represent each physical server (VM or bare-metal) the WebLogic Managed Server will be running on.
Assign the "Node Manager Listen Address". This will be the IP Address that the Admin Console will use to contact the node manager. (It can't be 0.0.0.0.)
Create Dynamic Clusters
Troubleshooting
Here are a few 'gotchas' discovered over the years...
Multicast
Use of multicast for clustering is more efficient than 'unicast', but it can lead to some configuration heartache.
If you get a multicast-related error at startup, solve the problem by telling WebLogic to prefer the IPv4 stack.
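For example, you can set the standard JVM flag java.net.preferIPv4Stack before starting the server (a sketch; this assumes your startup scripts honor the JAVA_OPTIONS variable, as the standard WebLogic domain scripts do):

# Tell the JVM to use the IPv4 stack for multicast traffic
export JAVA_OPTIONS="$JAVA_OPTIONS -Djava.net.preferIPv4Stack=true"

Alternatively, the same -D flag can be added to JAVA_OPTIONS inside the domain's setDomainEnv.sh so it applies to every server in the domain.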
You can use the Multicast Monitor to check that heartbeat messages are being sent or received. Run this on any or all WebLogic servers:
java weblogic.cluster.MulticastMonitor <multicast_address> <multicast_port> <domain_name> <cluster_name>
Example:
source setDomainEnv.sh
java weblogic.cluster.MulticastMonitor 239.192.0.0 7001 replicated_domain BEA_ENGINE_TIER_CLUST
Node Manager