Thursday, October 25, 2012

How can you timeout invocation of external endpoints from BPEL

The SyncMaxWaitTime setting applies to synchronous process invocations when the process has a breakpoint in the middle. If there are no breakpoints, the entire process is executed by the client thread. If there is a breakpoint, a new thread is spawned to continue processing after the break. See the Oracle documentation on SyncMaxWaitTime for more details.

To explicitly set timeouts for the endpoints invoked from within BPEL, configure the following reference binding properties on the composite:

<reference name="HWService">
  <interface.wsdl interface="writeHW_ptt"/>
  <binding.ws port="helloWS">
    <property name="oracle.webservices.httpReadTimeout" type="xs:string" many="false">10000</property>
    <property name="oracle.webservices.httpConnTimeout" type="xs:string" many="false">10000</property>
  </binding.ws>
</reference>

 
The property "oracle.webservices.httpReadTimeout" specifies how long (in milliseconds) to wait for the target service to process the request, while "oracle.webservices.httpConnTimeout" specifies how long to wait while establishing a connection to the external service.

For asynchronous invocations, use a Pick activity with an onAlarm branch to configure invocation timeouts.
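For example, a minimal Pick that races the asynchronous callback against a 30-second alarm might look like the sketch below (the partner link, port type, operation, and variable names are placeholders, not from a real composite):

```xml
<pick>
  <!-- normal path: the asynchronous callback arrives in time -->
  <onMessage partnerLink="HWService" portType="ns1:HWCallback"
             operation="onResult" variable="callbackMsg">
    <empty/>
  </onMessage>
  <!-- timeout path: no callback arrived within 30 seconds -->
  <onAlarm for="'PT30S'">
    <empty/>
  </onAlarm>
</pick>
```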

Friday, October 19, 2012

Oracle Traffic Director : Extract Private Key to Decrypt and View SSL Snoop Data

Oracle Traffic Director (OTD) is Oracle's recently released software load balancer, based on iPlanet Web Server. It is a fast, reliable, and scalable layer-7 software load balancer that you can deploy as the entry point for all HTTP and HTTPS traffic to the application servers and web servers in your network. It leverages the NSS shared DB for storing the private key and certificates used for SSL encryption, so if you want to decrypt captured SSL traffic using the private key, you first need to extract it. The steps are as follows:

For some reason, the pk12util that comes with the OTD installation did not work for me, so I had to move cert9.db and key4.db onto my Windows machine and follow the steps below:
  1. Download the NSS Tools for Windows from here: NSS_Tools_x86_from_NSS_3.12.7 Tools.zip into C:\
  2. Copy key4.db and cert9.db to the "C:\Users\nj\keys" folder
  3. From a command prompt, execute: c:\pk12util.exe -o C:\Users\nj\keys\cert.p12 -d sql:C:\Users\nj\keys -n "<>" (the certificate nickname shown under SSL >> Server Certificates in the OTD Admin Console)
  4. When prompted for a password, enter <>
  5. This should create cert.p12 under the keys folder
  6. Use OpenSSL to execute: openssl pkcs12 -in cert.p12 -out private.key -nocerts -nodes
  7. When prompted for a password, enter <>
  8. The private.key file should now be created in the folder


Thursday, October 18, 2012

Cisco VPN error : The VPN Client was unable to setup IP Filtering

If you get the error "The VPN Client was unable to setup IP Filtering" when trying to use the Cisco AnyConnect client, here is the solution:
  1. Save the file "BFE.reg" locally and execute it by double-clicking
  2. Click Start > Run > regedit
  3. Browse to "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BFE\Parameters\Policy"
  4. Right-click "Policy" and select Permissions
  5. In the "Permissions for Policy" window, select Advanced
  6. Unselect "Include inheritable permissions from this object's parent"
  7. Select Add from the Windows Security popup box
  8. Remove Users and CREATOR OWNER
    • Select the Add button
    • Enter "NT Service\BFE" and select OK
    • Give the object the following Allow permissions: Query Value, Set Value, Create Subkey, Enumerate Subkeys, Notify, and Read Control
    • Select OK to close all of the dialogs
  9. Reboot Windows
  10. Connect with AnyConnect to test the connection

Weblogic setting wrong protocol in WSDL (Load Balancer terminating SSL)

In many architectures, SSL is terminated at the hardware load balancer for performance reasons, and internal traffic uses plain HTTP for communication.

Client ---[HTTPS] --> Hardware LB (SSL termination) --- [HTTP] --> WLS (WebService)

The client typically fetches the WSDL for the webservice hosted on WLS and uses the endpoint in the WSDL to invoke the webservice. So the calls for fetching the WSDL flow as follows:

Client (https://lbhost:lbhttpsport/URI?wsdl) ---> Hardware LB (http://wlshost:wlshttpport?wsdl) ---> WLS (sets the endpoint in the WSDL to frontendhost:frontendhttpport if a frontend host is configured, otherwise returns http://lbhost:lbhttpsport/URI)

Please note that the endpoint in the WSDL has the http protocol, whereas the client only ever calls the LB over the https protocol. WLS sets the protocol to http because the request was received on http, and there is no way for WLS to tell that the original request was made over https.

To solve the issue, set an extra header "WL-Proxy-SSL: true" at the load balancer so that WLS knows the request originally arrived over https. You also need to enable the "WebLogic Plug-In Enabled" flag on the WLS managed server.
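As an illustration, if the tier in front of WebLogic is Apache or Oracle HTTP Server with the WebLogic proxy plug-in, the equivalent setting is the WLProxySSL directive (the location, host, and port below are placeholders):

```
<Location /myapp>
  SetHandler weblogic-handler
  WebLogicHost wlshost
  WebLogicPort 7001
  # Tells WLS that the client-facing request arrived over https
  WLProxySSL ON
</Location>
```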

Weblogic Service Migration (Issues and Workarounds)

Pinned services, such as JMS-related services, the JTA Transaction Recovery Service, and user-defined singleton services, are hosted on individual server instances within a cluster; for these services, WebLogic Server supports failure recovery with service migration. There is a lot of documentation and blogging on this topic, so in this post I only want to cover two issues you may face during the service migration setup:

Issue 1: If you have multiple clusters within a domain and you have set up service migration (database leasing) for only some of them, you may find that the other clusters' members start throwing errors such as:


" #### <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <> <> <1340771329481> 'WseeJmsModule'
java.lang.IllegalArgumentException: Cannot add Singleton Service M_MS1 (migratable) as SingletonServicesManager not started. Check if MigrationBasis for cluster is configured."
 
Workaround: Configure database leasing for all the clusters in the domain. There is no need to configure full service migration for the other clusters, just the cluster-level migration basis.
 
Issue 2: If you are using a multi data source (MDS) for your service migration, everything may appear to work, but behind the scenes the service migration framework pins itself to the first datasource in the MDS list and does not fail over to the other datasource if the first one goes down. You can easily check whether this is the case with the following database query:
 
select username, gv$sqlarea.inst_id, sql_text, gv$sqlarea.executions, gv$sqlarea.first_load_time
from gv$session, gv$sqlarea
where gv$session.sql_id = gv$sqlarea.sql_id and username = '<db_username>';
 
You will see that all the SQL statements that update the ACTIVE table are issued against the same RAC instance/datasource. If you shut down the RAC instance/datasource that the service migration framework is pinned to, no further updates happen to the ACTIVE table. Please note that the server periodically renews its lease by updating the timestamp in the lease table. By default, a migratable server renews its lease every 30,000 milliseconds, the product of two configurable ServerMBean properties:

HealthCheckIntervalMillis, which by default is 10,000.
HealthCheckPeriodsUntilFencing, which by default is 3.
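The 30,000 ms renewal interval is simply the product of those two defaults, made explicit in this small sketch:

```python
# Lease renewal interval = HealthCheckIntervalMillis * HealthCheckPeriodsUntilFencing
health_check_interval_millis = 10_000   # ServerMBean default
health_check_periods_until_fencing = 3  # ServerMBean default

renew_interval_millis = health_check_interval_millis * health_check_periods_until_fencing
print(renew_interval_millis)  # 30000
```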

 
But no new sessions are created once the first datasource in the MDS configuration goes down. There will be many exceptions in the managed server logs, but neither does the migration happen nor are the managed servers able to secure a lease.
 
Workaround: There are a couple of workarounds for this issue:
1) Use a TNS connect string for the datasource instead of an MDS
2) This is a reported bug (9365773); ask Oracle Support for a patch to fix the issue.
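To illustrate workaround 1, a single generic datasource can point at both RAC nodes through a TNS-style connect descriptor in the JDBC URL (the hostnames, port, and service name below are placeholders):

```
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=myservice)))
```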
 
Finally, here are some debug parameters useful for logging the service migration internals:
 
-Dweblogic.StdoutDebugEnabled=true
-Dweblogic.log.LoggerSeverity=Debug
-Dweblogic.log.LogSeverity=Debug
-Dweblogic.debug.DebugServerMigration=true
-Dweblogic.debug.DebugSingletonServices=true
-Dweblogic.debug.DebugUnicastMessaging=true
-Dweblogic.debug.DebugServerLifeCycle=true
-Dweblogic.slcruntime=true
-Dweblogic.slc=true
 
Please note that both of the above issues were observed in WebLogic 10.3.4 and 10.3.5 and may have been fixed in later WebLogic releases.

Tuesday, July 17, 2012

What is T3, soadirect, sb? When to use them?

A WebLogic installation provides a range of standalone clients for accessing WebLogic Server applications, from simple command-line utilities that use standard I/O to highly interactive GUI applications built with the Java Swing/AWT classes.

T3: RMI communications in WebLogic Server use the T3 protocol to transport data between WebLogic Server and other Java programs, including clients and other WebLogic Server instances. A server instance keeps track of each Java Virtual Machine (JVM) with which it connects, and creates a single T3 connection to carry all traffic for a JVM.  

SOADIRECT: The SOADIRECT transport provides native connectivity between Oracle Service Bus and Oracle SOA Suite service components. Oracle SOA Suite provides a "direct binding" framework that lets you expose Oracle SOA Suite service components in a composite application, and the Oracle Service Bus SOA-DIRECT transport interacts with those exposed services through the SOA direct binding framework, letting those service components interact in the service bus layer and leverage the capabilities and features of Oracle Service Bus.

SB Transport: SOA Suite can invoke Oracle Service Bus proxy services with an SB transport binding, including the transaction and security context using the direct binding reference.
You can google for more details on each of them. All of the above internally use RMI over t3 for communication. But I could not find any specific internals about how t3 works and, most importantly, how to load balance these using a software/hardware load balancer. In this blog, I will share the details I got from the experts:

1. Can you load balance t3 connections?

You cannot explicitly load balance t3 connections. So even if you put a hardware/software load balancer in front of a SOA/OSB/WebLogic cluster, it will not help. Yes, the initial connection request goes through the load balancer (only as part of connection establishment, i.e., as an alternative to DNS round-robin), but after the initial connection is established, it stays for the lifetime of the server or client. Once a t3 connection is established, traffic consistently routes back to the same server directly without going through the load balancer. If the connection fails, the load balancer does not attempt to re-establish it; responsibility for failover and reconnection remains with the t3 protocol itself.

2. What happens when the client/SOA/OSB requests a t3/soadirect/sb connection?

In general, t3 takes care of its own load balancing (stateless stubs will round-robin calls; stateful stubs will be sticky, with appropriate failover), but requires unfettered access to establish connections to the target servers. Further, once a client has made a connection to one node in the cluster, it learns about all cluster members (regardless of the contents of the URL used to connect); and is actively notified of the member health so it will rarely try to contact a server that is not running. T3 is a stateful protocol; once a connection has been established, the LB cannot re-route it. Responsibility for failover and reconnection must remain with the RMI stub.

3. What happens if the connection is dropped?

Once the channel (t3) is established, the two endpoints will issue regular heartbeats and keep the connection alive.

4. How can you load balance your t3 connections?

You cannot load balance them, so it is recommended not to use these transports unless you need to propagate transactions, since t3 provides transactional capabilities. You may find links claiming that soadirect/sb/t3 is faster than regular http/ws, but I would recommend not believing it until you test it in your own environment.

Bottom line: Use sb/soadirect/t3 only if you need to propagate transactions between end systems; otherwise, don't use it!!

Tuesday, July 10, 2012

SOA, OSB and Weblogic Performance Tuning

I have compiled this document with references to various links/documents/guides illustrating performance tuning for SOA, OSB and Weblogic infrastructure:
  • Performance Tuning Guide
     [http://docs.oracle.com/cd/E23943_01/core.1111/e10108/toc.htm]
  • Tuning Your SOA Infrastructure for Performance and Scalability OOW 2011[http://www.oracle.com/technetwork/middleware/soasuite/learnmore/odtugsoaperformancetuningsession-427186.pdf]
  • SOA Suite Performance Tuning Presentation OOW 2010 [http://www.oracle.com/technetwork/middleware/soasuite/soaperformancetuning-176286.pdf]
  •  [http://www.oracle.com/technetwork/database/features/availability/soa11gstrategy-1508335.pdf]
  • Oracle SOA Suite 11g WP on DB-RAC Configuration [http://www.oracle.com/technetwork/database/focus-areas/availability/maa-fmw-soa-racanalysis-427647.pdf]
  • JVM Memory Monitoring, Tuning, Garbage Collection, Out of Memory, and Heap Dump Analysis For SOA Suite Integration 11g (Doc ID 1358719.1)
  • Oracle Fusion Middleware Performance and Tuning for Oracle WebLogic Server 11g [http://download.oracle.com/docs/cd/E21764_01/web.1111/e13814.pdf]
  • Oracle Fusion Middleware Performance and Tuning Guide (This one include SOA and OSB) [http://download.oracle.com/docs/cd/E21764_01/core.1111/e10108.pdf]

Friday, July 6, 2012

Oracle Service Registry Installation Issue (Error 500 Internal Service Error)

If you try to install OSR 11g in a domain that has OSB or SOA managed servers configured, you will run into one of the following errors:

java.lang.LinkageError: Class javax/xml/namespace/QName violates loader constraints
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
 at weblogic.utils.classloaders.GenericClassLoader.defineClass(GenericClassLoader.java:343)
 Truncated. see log file for complete stacktrace

org.idoox.wasp.WaspInternalException: java.lang.RuntimeException: Updates to config files not supported
 at com.systinet.wasp.WaspImpl.boot(WaspImpl.java:399)
 at org.systinet.wasp.Wasp.init(Wasp.java:151)
 at com.systinet.transport.servlet.server.Servlet.init(Unknown Source)
 at weblogic.servlet.internal.StubSecurityHelper$ServletInitAction.run(StubSecurityHelper.java:283)
 at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
 Truncated. see log file for complete stacktrace
Caused By: java.lang.RuntimeException: Updates to config files not supported
 at com.idoox.config.xml.XMLConfigurator.updateConfigFile(XMLConfigurator.java:720)
 at com.idoox.config.xml.XMLConfigurator.prepareConfigFile(XMLConfigurator.java:570)
 at com.idoox.config.xml.XMLConfigurator.init(XMLConfigurator.java:423)
 at org.idoox.config.Configurator.init(Configurator.java:65)
 at com.systinet.wasp.WaspImpl.boot(WaspImpl.java:359)
 Truncated. see log file for complete stacktrace

The reason for this issue is a conflict among the classpath jars. The solution is to replace the snippet in setDomainEnv.sh/cmd where the POST_CLASSPATH variable is built, as below (this sample is from SOA+OSR; the same applies to OSB+OSR):

if NOT "%SERVER_NAME%"=="osr_server1" (
 set POST_CLASSPATH=%SOA_ORACLE_HOME%\soa\modules\oracle.soa.fabric_11.1.1\oracle.soa.fabric.jar;%SOA_ORACLE_HOME%\soa\modules\oracle.soa.fabric_11.1.1\fabric-runtime-ext-wls.jar;%SOA_ORACLE_HOME%\soa\modules\oracle.soa.adapter_11.1.1\oracle.soa.adapter.jar;%SOA_ORACLE_HOME%\soa\modules\oracle.soa.b2b_11.1.1\oracle.soa.b2b.jar;%POST_CLASSPATH%
 set POST_CLASSPATH=%DOMAIN_HOME%\config\soa-infra;%SOA_ORACLE_HOME%\soa\modules\fabric-url-handler_11.1.1.jar;%SOA_ORACLE_HOME%\soa\modules\quartz-all-1.6.5.jar;%POST_CLASSPATH%
 set POST_CLASSPATH=%COMMON_COMPONENTS_HOME%\modules\oracle.xdk_11.1.0\xsu12.jar;%BEA_HOME%\modules\features\weblogic.server.modules.xquery_10.3.1.0.jar;%SOA_ORACLE_HOME%\soa\modules\db2jcc4.jar;%POST_CLASSPATH%
 set POST_CLASSPATH=%UMS_ORACLE_HOME%\communications\modules\usermessaging-config_11.1.1.jar;%POST_CLASSPATH%
 set POST_CLASSPATH=D:\SOA\installations\11g6\tempInstallation\Oracle_SOA1\soa\modules\oracle.soa.common.adapters_11.1.1\oracle.soa.common.adapters.jar;%POST_CLASSPATH%
 set POST_CLASSPATH=%COMMON_COMPONENTS_HOME%\soa\modules\commons-cli-1.1.jar;%COMMON_COMPONENTS_HOME%\soa\modules\oracle.soa.mgmt_11.1.1\soa-infra-mgmt.jar;%POST_CLASSPATH%
) else (
 set POST_CLASSPATH=""
)

Basically, you need to remove all of these jars from the POST_CLASSPATH for the OSR managed server. This is a known bug [Patch 9499508], as reported on Oracle Support.

Wednesday, June 20, 2012

Managing Database growth: SOA 11g Running Purge Scripts -> A Step by Step Approach

Recently, a step-by-step document was published for using the purge scripts bundled with SOA 11.1.1.4 and later versions; it can be found here [ID 1345957.1].


Also, the composite state values have always been confusing; a consolidated document explaining them can be found here [ID 1362028.1].

Monday, June 4, 2012

Best Practices for Coherence Portable Object Format (POF)

POF objects are indexed, so it is possible to quickly traverse/navigate the binary (serialized form) to a specific element for extraction or update using the indices, without deserializing the entire binary. Out of the box, Coherence provides the SimplePofPath class, which can navigate a POF value based on integer indices. In the simplest form, all you need to do is provide the index of the attribute that you want to extract/update. Some best practices are listed below:
  • Order your reads and writes: start with the lowest index value in the serialization routine and finish with the highest. When deserializing a value, perform reads in the same order as writes.
  • Use the smallest possible integers for indexing the objects
  • Non-contiguous indexes are acceptable but must be read/written sequentially
  • Reserve index ranges for subclasses: indexes are cumulative across derived types, so each derived type must be aware of the POF index range reserved by its superclass
  • Do not re-purpose indexes: to support Evolvable, it is imperative that attribute indexes are not re-purposed across class revisions
  • Label each index with the attribute it represents (e.g., via named constants)
  • Assign the lowest indexes to the attributes used most frequently in Filters and Extractors
  • Enable POF object references if your objects have circular/nested references (a feature available only in 3.7.1+)
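To see why ordered, sparse integer indexes let you navigate to one attribute without deserializing the whole object, here is a toy sketch in Python. This is only an illustration of the idea, not the real Coherence POF wire format:

```python
import struct

def encode(fields):
    """fields: {int index: bytes}, written in ascending index order."""
    out = bytearray()
    for idx in sorted(fields):
        value = fields[idx]
        out += struct.pack(">II", idx, len(value))  # fixed header: index, length
        out += value
    return bytes(out)

def extract(buf, wanted):
    """Walk the headers and skip values until the wanted index is found."""
    pos = 0
    while pos < len(buf):
        idx, length = struct.unpack_from(">II", buf, pos)
        pos += 8
        if idx == wanted:
            return buf[pos:pos + length]
        pos += length  # skip the value without decoding it
    raise KeyError(wanted)

# Sparse, non-contiguous indexes are fine as long as they are written in order.
blob = encode({0: b"Alice", 1: b"Engineer", 7: b"Boston"})
print(extract(blob, 7))  # b'Boston'
```

Because every field is prefixed with its index and length, a reader can jump over fields it does not care about, which is exactly what index-based extractors exploit against the serialized binary.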

Thursday, May 3, 2012

Coherence Extend vs TCMP member

Coherence*Extend provides a lot more operational flexibility at a slight performance cost, because the communication protocol changes from TCMP to TCP and adds an extra hop. Here are some of the advantages of using C*Extend:

- Client application  and coherence servers can run different coherence versions and can be upgraded independently
- Client and server configuration can be managed independently
- Data Access can be secured more efficiently
- Client application and servers can reside across subnets and WANs

Some of the disadvantages of using C*Extend:
- An extra hop through the proxy
- Extra JVMs and hardware to run and manage the proxies, including load balancing across proxies, memory allocation for proxies, the number of proxies, and so on
- Increased latency due to the extra hop and the use of TCP compared to TCMP

I would recommend using C*Extend if:

1) the client application(s) may join and/or leave the Coherence cluster frequently (a common reason is a client application performing frequent long GCs)
2) the client application(s) do not reside in the same subnet as the Coherence servers
3) the client application(s) are not tightly bound at the application layer to the Coherence cluster
4) the client application(s) and Coherence servers are managed by two different groups and may have different upgrade schedules
5) data access security is required between the client application(s) and the Coherence servers
