Increasing the number of threads working on a queue in the messaging system did not solve the performance problem?
Why not try Adapter Parallelism?
Below are three strategies to work around the problem:
1) Create additional communication channels with a different name and adjust the respective sender/receiver agreements to use them in parallel (see the sketch after this list).
2) Add a second server node that will automatically run the same adapters and communication channels as the first server node. This does not work for polling sender adapters (File, JDBC, or Mail), since the adapter framework scheduler assigns only one server node to a polling communication channel.
3) Install and use a non-central Adapter Framework for performance-critical interfaces to achieve better separation of interfaces.
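To make strategy 1 concrete, the following plain-Java sketch shows the basic idea of spreading messages over several identically configured channels in round-robin fashion. In PI the assignment is of course done via the sender/receiver agreements, not in application code; the Channel interface and the channel names are hypothetical.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustration of strategy 1: spread load over several identically configured
 * communication channels. In PI the routing is configured via sender/receiver
 * agreements; this round-robin dispatcher only models the idea in plain Java.
 */
public class ParallelChannelDispatcher {

    /** Minimal stand-in for a PI communication channel (hypothetical). */
    interface Channel {
        void send(String payload);
    }

    private final List<Channel> channels;          // e.g. ORDERS_OUT_01, ORDERS_OUT_02, ...
    private final AtomicInteger next = new AtomicInteger();

    ParallelChannelDispatcher(List<Channel> channels) {
        this.channels = channels;
    }

    /** Picks the next channel in round-robin order so the channels are used in parallel. */
    void dispatch(String payload) {
        Channel target = channels.get(Math.floorMod(next.getAndIncrement(), channels.size()));
        target.send(payload);
    }
}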
Some of the most frequently used adapters and the possible options are:
Polling Adapters (JDBC, Mail, File):
At the sender side, these adapters use the Adapter Framework Scheduler, which assigns one server node to do the polling at the specified interval. Only this one server node in the J2EE cluster polls, so no parallelization can be achieved and scaling via additional server nodes is not possible. Parallel polling would not help anyway: since the channels would execute the same SELECT statement on the database or pick up files with the same file name, parallel processing would only result in locking problems. To increase the throughput of such interfaces, reduce the polling interval so that no large backlog of data builds up. If the volume is still too high, consider creating a second interface, for example one that polls the data from a different directory or database table to avoid locking.
At the receiver side, the adapters work sequentially on each server node by default. For JDBC, for example, only one UPDATE statement is executed per Communication Channel (independent of the number of consumer threads configured in the Messaging System), and all other messages for the same Communication Channel wait until the first one has finished. This avoids blocking situations on the remote database, but it can in turn cause blocking situations for whole adapters, as discussed in section Avoid Blocking Caused by Single Slow/Hanging Receiver Interface. To allow better throughput for these adapters, you can configure the degree of parallelism at the receiver side: in the Processing tab of the Communication Channel, enter the number of messages to be processed in parallel by the receiver channel in the field “Maximum Concurrency”. For example, if you enter the value 2, two messages are processed in parallel on one J2EE server node. Whether these statements actually execute in parallel at database level of course depends on the nature of the statements and the isolation level defined on the database; if all statements update the same database record, database locking will occur and no parallelization can be achieved.
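To illustrate the receiver-side setting, the following sketch mimics “Maximum Concurrency” = 2 with plain JDBC: at most two UPDATE statements run at the same time, and whether they really execute in parallel still depends on the database locks, exactly as described above. The JDBC URL, credentials, table and statement are placeholders for the example, not part of the adapter.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Rough model of a JDBC receiver with "Maximum Concurrency" = 2:
 * a pool of two workers executes the UPDATE statements, so at most two
 * messages are processed in parallel per server node.
 */
public class JdbcReceiverConcurrencyDemo {

    private static final int MAX_CONCURRENCY = 2;   // corresponds to the channel setting

    public static void main(String[] args) {
        List<Integer> messageIds = List.of(1, 2, 3, 4, 5);
        ExecutorService pool = Executors.newFixedThreadPool(MAX_CONCURRENCY);

        for (int id : messageIds) {
            pool.submit(() -> {
                // One connection per task keeps the example simple; a real adapter pools connections.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:example://dbhost:1234/stage", "user", "secret");
                     PreparedStatement stmt = con.prepareStatement(
                        "UPDATE staging_table SET status = 'PROCESSED' WHERE msg_id = ?")) {
                    stmt.setInt(1, id);
                    stmt.executeUpdate();   // parallelism at DB level still depends on row locks
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}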
JMS Adapter:
The JMS adapter uses a push mechanism on the PI sender side by default, which means the data is pushed by the sending MQ provider. Each Communication Channel establishes one JMS connection per J2EE server node, and on each connection messages are processed one after the other, so processing is sequential per connection. Since there is one connection per server node, scaling via additional server nodes is an option. Alternatively, the JMS sender channel can be operated in polling mode (refer to SAP Note 1502046): you specify a polling interval in the PI Communication Channel and PI becomes the initiator of the communication. The Adapter Framework Scheduler is used here as well, which implies sequential processing; but in contrast to JDBC or File sender channels, the JMS polling sender channel allows parallel processing on all server nodes of the J2EE cluster, so scaling via additional J2EE server nodes remains an option.
The JMS receiver side supports parallel operation out of the box. Only some small parts of message processing (the pure sending of the message to the JMS provider) are synchronized, so no actions are necessary to enable parallel processing on the JMS receiver side.
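The per-connection behaviour on the sender side can be pictured with plain JMS: one connection and one session per server node, and the provider hands messages to the listener one at a time, i.e. sequentially per connection. How the ConnectionFactory is obtained (typically via JNDI from the MQ provider) is left out, and the queue name is a placeholder.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

/**
 * Sketch of the default JMS sender behaviour: one connection/session per
 * server node, and the provider delivers messages to the listener one at a
 * time, i.e. sequentially per connection. The ConnectionFactory lookup and
 * the queue name are assumptions for the example.
 */
public class JmsSenderNodeSketch {

    public static void startNodeConsumer(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();        // one connection for this node
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("PI.INBOUND.QUEUE");     // placeholder queue name

        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            try {
                if (message instanceof TextMessage) {
                    // onMessage is called serially for this session -> sequential processing
                    process(((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();   // begin delivery; more parallelism comes from more server nodes
    }

    private static void process(String payload) {
        System.out.println("Processing " + payload);
    }
}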
SOAP Adapter:
The SOAP adapter is able to process requests in parallel. The SOAP sender side has in general no limitation on the number of requests it can execute in parallel; the limiting factor is the number of FCA threads available to process the incoming HTTP calls. On the receiver side, the parallelism depends on the number of threads defined in the Messaging System and on the ability of the receiving system to cope with the load.
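The effect of a bounded worker pool in front of the SOAP sender (which is what the FCA threads provide) can be reproduced with the JDK's built-in HTTP server: the size of the executor caps how many incoming calls are handled in parallel, no matter how many clients send requests. Port, path and pool size are arbitrary example values, not PI settings.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;

/**
 * Minimal illustration of how a fixed pool of worker threads (analogous to
 * the FCA threads in front of the SOAP sender adapter) limits the number of
 * HTTP requests processed in parallel.
 */
public class BoundedHttpWorkerDemo {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Only 5 requests are processed concurrently; further requests queue up.
        server.setExecutor(Executors.newFixedThreadPool(5));

        server.createContext("/soap-endpoint", exchange -> {
            byte[] body = "<response>ok</response>".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}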
RFC Adapter:
The RFC adapter offers parameters to adjust the degree of parallelism by defining the number of initial and maximum connections to be used. The initial threads are allocated directly from the application thread pool and are therefore not available for any other tasks, so the number of initial connections should be kept minimal. To avoid bottlenecks during peak times, the maximum connections can be used. A bottleneck is indicated by the following exception in the audit log:
com.sap.aii.af.ra.ms.api.DeliveryException: error while processing message to remote system: com.sap.aii.af.rfc.core.client.RfcClientException: resource error: could not get a client from JCO.Pool: com.sap.mw.jco.JCO$Exception: (106) JCO_ERROR_RESOURCE: Connection pool RfcClient … is exhausted. The current pool size limit (max connections) is 1 connection.
Increasing the maximum connections should be done carefully, since these threads are taken from the J2EE application thread pool; a very high value can cause a bottleneck on the J2EE engine and therefore major instability of the system. As per the RFC adapter online help, the maximum number of connections is restricted to 50.
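The initial/maximum connection trade-off can be modelled with a simple bounded pool: a caller that cannot obtain a connection within the timeout fails with a pool-exhausted error, which is what the JCO_ERROR_RESOURCE exception above reports. The RfcStyleConnectionPool and RfcConnection types and the sizing are placeholders; this is not the JCo API.

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

/**
 * Simplified model of the RFC adapter's connection handling: only "maximum
 * connections" clients can be served at once, and a caller that cannot obtain
 * a connection in time gets a pool-exhausted error (compare the
 * JCO_ERROR_RESOURCE exception quoted above). Not the JCo implementation.
 */
public class RfcStyleConnectionPool {

    static class RfcConnection { /* placeholder for a real RFC client connection */ }

    private final Semaphore permits;

    RfcStyleConnectionPool(int initialConnections, int maxConnections) {
        // The real adapter opens the initial connections eagerly and takes them
        // from the J2EE application thread pool, so keep that value small.
        this.permits = new Semaphore(maxConnections, true);
    }

    RfcConnection acquire(long timeoutMillis) throws InterruptedException {
        if (!permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("Connection pool is exhausted (max connections reached)");
        }
        return new RfcConnection();
    }

    void release(RfcConnection connection) {
        permits.release();
    }
}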
IDoc_AAE Adapter:
The IDoc adapter on Java (IDoc_AAE) was introduced with PI 7.3. On the sender side, the parallelization depends on the configuration mode chosen: in Manual Mode the adapter works sequentially per server node, while for channels in Default Mode it depends on the configuration of the inbound Resource Adapter (RA). Via the parameter MaxReaderThreadCount of the inbound RA you can configure how many threads are globally available for all IDoc adapters running in Default Mode; this determines the overall parallelization of the IDoc sender adapter per Java server node. Currently the recommended maximum number of threads is 10. The receiver side of the Java IDoc adapter works in parallel by default.
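The effect of MaxReaderThreadCount can be pictured as one fixed-size pool per Java server node that all IDoc sender channels in Default Mode share, so the parameter caps the overall parallelism. The IdocPacket type is invented for the sketch; the real limit is set on the inbound Resource Adapter, not in code.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Illustration of MaxReaderThreadCount: one global pool of reader threads
 * shared by all IDoc_AAE sender channels in Default Mode on a server node.
 * With a pool size of 10 (the recommended maximum), at most 10 IDoc packets
 * are read in parallel, no matter how many channels exist.
 */
public class IdocReaderPoolSketch {

    private static final int MAX_READER_THREAD_COUNT = 10;   // inbound RA parameter

    // One shared pool per Java server node, used by every Default Mode channel.
    private static final ExecutorService READER_POOL =
            Executors.newFixedThreadPool(MAX_READER_THREAD_COUNT);

    record IdocPacket(String channel, String payload) { }

    static void submit(IdocPacket packet) {
        READER_POOL.submit(() ->
                System.out.println("Reading IDoc packet from channel " + packet.channel()));
    }
}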
The table below gives a summary of the parallelism for the different adapter types.