Channel: SCN : Blog List - Process Integration (PI) & SOA Middleware

Twitter [Receiver] Adapter for HCI


In this blog I am going to focus on how to access real-time social media information using the Twitter [Receiver] adapter, which was released on 2015-10-24 as part of SAP HANA Cloud Integration [HCI].


Introduction


Twitter is a social networking and micro-blogging service that enables its users to send and read messages known as tweets. Tweets are text-based posts of up to 140 characters displayed on the author’s profile page and delivered to the author’s subscribers who are known as followers.


The Twitter adapter provides an easy way of integrating a flow of tweets. Once configured correctly with Twitter credentials, all that is necessary is to implement the Twitter listener, instantiate the adapter and handle the objects received.


We can use the Twitter receiver adapter to extract information from the Twitter platform (which is the receiver platform) based on certain criteria such as keywords or user data. We can also perform a Twitter search on a schedule and publish the search results within messages.


If you only need to communicate with Twitter as the currently authenticated user, you can obtain the OAuthAccessToken and OAuthAccessTokenSecret directly from your application's page on Twitter. The OAuthAccessToken and OAuthAccessTokenSecret are listed under the OAuth Settings in the Your Access Token section.

 

For authenticated operations, Twitter uses OAuth - an authentication protocol that allows users to approve an application to act on their behalf without sharing their password. The connection works as follows: the tenant logs on to Twitter using an OAuth authentication mechanism and searches for information based on the criteria configured in the adapter at design time. OAuth allows the tenant to access someone else's resources (those of a specific Twitter user) on behalf of the tenant.

 

In order to use OAuth authentication/authorization with Twitter you must create a new Application on the Twitter Developers site. Follow the directions below to create a new application and obtain consumer keys and an access token.

 

  • Click on the Register an app link and fill out all required fields on the form; set Application Type to Client and, depending on the nature of your application, select Default Access Type as Read & Write or Read-only, then submit the form. If everything is successful you'll be presented with the Consumer Key and Consumer Secret. Copy both values to a safe place.
  • On the same page you should see a My Access Token button in the sidebar (right). Click on it and you'll be presented with two more values: Access Token and Access Token Secret. Copy these values to a safe place as well.


Twitter receiver adapter features

 

Adapter Type: Twitter

Message Protocol: HTTPS


Operations


The Twitter adapter provides Send Tweet, Search, and Send Direct Message operations. To access a Twitter account you can choose from the following operations and settings:


  • Send Tweet - Allows you to send content to a specific user timeline.
  • Search - Allows you to search Twitter content by specifying keywords under the filter settings.
  • Send Direct Message - Allows you to send messages to Twitter (write access, direct message).
  • Page Size - Specifies the maximum number of tweets per page.
  • Number of Pages - Number of pages to consume.


Authentication methods


The Twitter adapter supports basic authentication and OAuth based on a shared secret. Make a note of the following parameters while configuring your Twitter application; you need to specify the same values in the receiver Twitter adapter (an illustrative snippet follows at the end of this section).

 

We must specify both the OAuthClientId [Consumer Key] and OAuthClientSecret [Consumer Secret] to connect to an OAuth server.

 

  • Consumer Key

       OAuth requires you to register your application. As part of the registration, you will receive a client Id, sometimes also called a consumer key, and a client secret. In the adapter you specify an alias by which the consumer (tenant) that requests Twitter resources is identified; this alias refers to the OAuthClientId / Consumer Key.

  • Consumer Secret

       OAuth requires you to register your application. As part of the registration you will receive a client Id and a client secret, sometimes also called a consumer secret. In the adapter you specify an alias by which the shared secret is identified (the secret that is used to define the token of the consumer/tenant).

  • Access token

       The OAuthAccessToken property is used to connect using OAuth. The OAuthAccessToken is retrieved from the OAuth server as part of the authentication process. It has a server-dependent timeout and can be reused between requests.


       The access token is used in place of your username and password. It also protects your credentials by keeping them on the server. In order to make authorized calls to the Twitter API, your application must first obtain an OAuth access token on behalf of a Twitter user.

  • Access Token secret

       The OAuthAccessTokenSecret property is used to connect and authenticate using OAuth. The OAuthAccessTokenSecret is retrieved from the OAuth server as part of the authentication process. It is used with the OAuthAccessToken and can be used for multiple requests until it times out.
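
The receiver adapter handles the OAuth handshake internally; you only enter the four values above in the channel configuration. Purely as an illustration of how these four credentials work together (this snippet uses the open-source Twitter4J library and is not part of the adapter; the keys and the search term are placeholders), a standalone search could look like this:

import twitter4j.Query;
import twitter4j.QueryResult;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class TwitterSearchExample {
    public static void main(String[] args) throws Exception {
        // The same four OAuth values that the Twitter receiver adapter expects
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setOAuthConsumerKey("<Consumer Key>")
          .setOAuthConsumerSecret("<Consumer Secret>")
          .setOAuthAccessToken("<Access Token>")
          .setOAuthAccessTokenSecret("<Access Token Secret>");

        Twitter twitter = new TwitterFactory(cb.build()).getInstance();

        // Search for tweets containing a keyword, similar to the adapter's Search operation
        QueryResult result = twitter.search(new Query("SAP HCI"));
        for (Status status : result.getTweets()) {
            System.out.println("@" + status.getUser().getScreenName() + ": " + status.getText());
        }
    }
}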

 

Twitter Adapter Documentation & Download Links

 

https://proddps.hana.ondemand.com/dps/d/preview/93810d568bee49c6b3d7b5065a30b0ff/2015.10/en-US/frameset.html?6dc953d97322434e9f2a5acdc216844d.html

 

http://scn.sap.com/community/pi-and-soa-middleware/blog/2016/03/02/integrating-hci-with-twitter--part-1

 

  http://scn.sap.com/community/pi-and-soa-middleware/blog/2016/03/02/integrating-hci-with-twitter--part-2


Auto Client - Tool from SWIFT for your Payment File Integration with Bank


Hello Everyone,

 

This blog is about Auto-client, a software tool from SWIFT used to connect to the Alliance Lite2 server. It is installed on one of the servers in the client's premises and routes the payment transactions from the client's SAP system to various banks.

 

Auto-client is an optional tool used to connect to the SWIFT Alliance Lite2 server. Alliance Lite2 will in turn route the files to the banks through the secure SWIFTNet network.

 

Auto-client can be installed on the C drive of any Windows-based PC; the default folder on a 64-bit Windows PC is %Program Files (x86)%\SWIFT\Alliance Lite2. Inside the Alliance Lite2 folder there is a folder called files, which in turn contains four sub-folders: Emission, Reception, Archive and Error.

 

The payment files such as MT100, SEPA, MT103 etc. generated by the SAP BCM/ECC system must be dropped into the Emission folder. In the SAP ECC/BCM system the files are placed in an AL11 folder; they can be picked up via an NFS connection by the SAP PI File sender channel, or an SFTP connection can be enabled for a more secure way of communication. SFTP is highly recommended. In SAP PI a normal pick-and-drop interface can be created. An NFS connection needs to be enabled between the SAP PI server and the files folder of Auto-client.

 

Auto-client continuously polls the Emission folder (just like our good old File sender channel), and just as we can set the polling interval in our File sender channel, we can set it for Auto-client too. This timer can be set by editing the AutoClient.properties file present under the following path: %Program Files (x86)%\SWIFT\Alliance Lite2\config. EmissionTimerInMillis is the parameter for setting the polling interval used to send the files from the Auto-client directories to Alliance Lite2.

 

Once the files are successfully uploaded to the Alliance Lite2 server, they are archived to the Archive folder under the files directory. What if a file was not uploaded successfully to the Alliance Lite2 server; what would happen to it? Does it stay in the Emission folder itself? If it stays in the Emission folder, how would I know what is happening and why it was not picked up? A good series of questions. The files which resulted in an error are moved to the Error folder under the main files directory.

 

Once the file reaches the Alliance Lite2 server and is sent from Alliance Lite2 onto the SWIFTNet network successfully, you will receive a successful acknowledgement from the Alliance Lite2 server. If there was any error in uploading the files onto the SWIFTNet network, a negative acknowledgement is sent from Alliance Lite2. These messages are popularly known as ACK/NAK messages. These files are dropped in the Reception folder present under the main files directory. Similar to the polling interval we have for the Emission folder to upload files to the Alliance Lite2 server, there is a parameter called ReceptionTimerInMillis which controls how often files are downloaded from the Alliance Lite2 server into the Auto-client Reception folder.
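
For illustration only (the parameter names are the ones mentioned above; the values and any other entries in the real file are installation-specific), the relevant part of AutoClient.properties under %Program Files (x86)%\SWIFT\Alliance Lite2\config might look like this:

# Polling interval for uploading files from the Emission folder to Alliance Lite2 (milliseconds)
EmissionTimerInMillis=30000
# Polling interval for downloading ACK/NAK and response files into the Reception folder (milliseconds)
ReceptionTimerInMillis=30000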

 

ACK/NAK messages tell us the status of the payment sent on the SWIFTNet network to the bank. But the actual status of the payment - processed or not processed - can only be given by the bank. The payment response from the bank is received on the SWIFTNet network by the Alliance Lite2 server, and from the Alliance Lite2 server the response files are downloaded by Auto-client into the Reception folder based on the timer set for the parameter ReceptionTimerInMillis.

 

If you have made it this far reading everything above, congratulations and thank you. Throughout the blog you might have been wondering how the files dropped in the Emission folder of Auto-client are actually sent to the Alliance Lite2 server. Auto-client is just a normal piece of software installed under Program Files on a Windows PC. What is the magic that sends the files to Alliance Lite2, and from Alliance Lite2 to the banks on the SWIFTNet network? What is it? How come? Please do not break your head; the answer is very simple.

 

Auto-client uses a secure USB token, which is shipped by the SWIFT team to the client who procures the SWIFT license. This USB token must be plugged into one of the USB ports and the unique password set for it must be provided to get Auto-client running. Unless the correct credentials are provided, Auto-client does not start and hence the exchange of files won't happen. Please note, this password needs to be provided only once to get Auto-client running, not for each and every payment file sent to the bank.

 

Okay, the most curious question about Auto-client and Alliance Lite2 is finally answered. But now the question arises regarding the security of the payment files exchanged between Auto-client and Alliance Lite2: how is this communication secured? Again, the answer is very simple. The Auto-client USB token is a tamper-proof hardware security module. This module digitally signs and authenticates every communication with the Alliance Lite2 server using a strong 2048-bit PKI certificate that resides on the token.

 

Auto-client offers a two-system landscape: a Live system and a Test system. It is highly recommended to keep two separate servers, one for the Live system and another for the Test system. You will have two different USB tokens, one per system. The Test system can be used to send files to the bank's test system.

 

I hope you have made it to the end of this blog and liked the information provided.

 

Regards,

Nitin Deshpande

FILE LOOKUP IN SAP PI USING UDF


This is my first post on SCN. There are a few threads on file lookup, but I faced some challenges when going through them, so I thought I would create a simple step-by-step document on the topic.

 

PI has two standard types of lookup available in the ESR (JDBC and RFC lookup). But we can achieve two more types of lookup in PI to fulfill our needs: File and SOAP lookup.

 

A file lookup is done when we want to map a value against an incoming payload value. In this scenario the file may be maintained on some other server. We do a lookup to read the value corresponding to the incoming field and map that value to the target payload.

 

To achieve this functionality we need to write a small UDF that fulfills our requirement.

 

Step 1: Create a UDF and import the two classes mentioned in the screenshot.

           java.net.URL

           java.net.URLConnection

imp1.PNG

 

 

 

Step 2: Copy and paste the code below into your UDF.

(You need to replace the username, password, server, directory path and filename in the code accordingly.)

 

String custId = "";

try {

    // Connect to the lookup file on the FTP server
    // (besides java.net.URL and java.net.URLConnection, the java.io stream/reader classes are needed)
    URL url = new URL("ftp://<UserName>:<Password>@<IPAddress>/<pathtodirectory>/<filename.txt>");

    URLConnection con = url.openConnection();
    InputStream lookupStream = con.getInputStream();

    InputStreamReader reader = new InputStreamReader(lookupStream);
    BufferedReader buffer = new BufferedReader(reader);

    // Read the file line by line; each line has the form <key>=<value>
    String read;
    while ((read = buffer.readLine()) != null) {
        String temp = read.substring(0, key.length());
        if (key.equals(temp)) {
            custId = read.substring(key.length() + 1, read.length());

            // Use equals() for string comparison, not != (which compares references)
            if (!"00".equals(custId)) {
                int num = Integer.parseInt(custId);
                num = num + 2;
                custId = Integer.toString(num);
            }
        }
    }

    buffer.close();

} catch (Exception e) {
    return e.toString();
}

return custId;

 

Step 3: Create a file on the external server with input and output values separated by "=".

flkp1.JPG
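
For illustration, a lookup file matching the logic of the UDF above (keys on the left, numeric values on the right; the actual content used here is the one shown in the screenshot) could look like this:

1001=20
1002=00
1003=35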

 

 

Step 4: We can verify our code through Display Queue in the message mapping by giving an input in the source field.

The screenshot below shows that we are getting the output as expected.

flkp2.JPG

 

 

This is the simplest way of doing a file lookup in SAP PI through UDF.


Note: File Lookup is not encouraged by SAP.


I hope you all like this post!

HCI Content Monitoring Tools


Before you can process messages with an integration flow (iflow), you need to deploy this iflow to the HCI runtime node. When you complete the iflow modelling and invoke the ‘deploy’ command, the following steps are performed:

  • The iflow is checked for correctness
  • The iflow is sent to the tenant management node (tmn node)
  • The iflow is converted into an executable program (iflow bundle) on tmn node
  • The iflow bundle is distributed from tmn to runtime nodes (iflmap node)
  • The iflow bundle is started on iflmap nodes

 

The following tools can be used for tracking the iflow deployment process:

  • Console – shows client logs
  • Deployed Artifacts – provides a list of deployed artifacts on the HCI tenant
  • Tasks View – provides the status of deployment tasks
  • Component Status View – displays the various statuses of the deployed iflows over time
  • Tail Log – provides access to the tail end of the server logs

 

Most of these tools are currently available in eclipse only.

 

 

Console

When you deploy an iflow, it is first checked for correctness. The results of these checks can be found in the console. Make sure that there are no validation errors.

 

How to access the console

Window -> Show View -> Console

 

mt-console.png

 

 

Deployed Artifacts

Deployed artifacts view shows a list of iflows and other artifacts deployed on a tenant. When you deploy an iflow, it should appear in the list of deployed artifacts in DEPLOYED state. If after a couple of minutes this does not happen, this indicates a possible deployment error.

 

How to access the Deployed Artifacts view

  1. Open Node Explorer view
  2. Double click on tenant
  3. Switch to Deployed Artifacts

 

mt-deployed-artifacts.png

 

 

Tasks View

Tasks View shows the status of various tasks running on the server including deployment tasks.

When you deploy an iflow, the following task is executed: “Build and deploy ‘YOUR_IFLOW’ Project”. For successful deployment this task must be completed with status “SUCCESS”. If this is not the case, select the failed task and check the task trace for more details.

 

How to access the Tasks View:

Window -> Show view -> Tasks View

 

mt-tasks-view.png

 

 

Component Status View

In Component Status view you can check the current status of a node’s components. After successful deployment, an iflow should appear in the component status view of a runtime node with runtime status “started”.

If the status is not “started” you can invoke context menu “Show Error Details” for additional info. You can also continue error analysis by looking into the tail log.

 

If you want to restart an iflow, you can do so by clicking on “Restart” button in component status view. This is the easiest way to restart iflows triggered by a Timer with “Run Once” option. The other way would be to redeploy the iflow.

 

How to access Component Status view:

  1. Select IFLMAP node in the Node Explorer
  2. Open Component Status View
  3. Use a filter to narrow down list of components
  4. Check your component’s status

 

mt-csv.png

 

 

Tail Log

Each HCI node has its own server log which can be used for error analysis. Using the Tail Log view, you can download the most recent part of the log. You can also specify how many records should be downloaded (in KB).

 

In case of deployment issues you should check the tail log of both TMN and IFLMAP nodes.

 

How to access Tail Log:

  1. Switch to Node Explorer
  2. Select IFLMAP node
  3. Switch to or open the Tail Log view (Window -> Show view -> Tail Log)
  4. Specify the size of the log to be downloaded
  5. Click on Refresh

 

mt-log.png

 

 

Summary

Iflow deployment consists of several steps performed on the tenant management and runtime nodes. Using tools described above, you can monitor these steps and search for a root cause in case of deployment issues.

 

You can use the following checklist to make sure your iflow is deployed successfully:

  • In Console: there should be no validation errors
  • In Deployed Artifacts: your iflow should have deploy status “DEPLOYED”
  • In Tasks View: a task “Build and deploy ‘YOUR_IFLOW’” should have status “SUCCESS”
  • In Component Status View: your iflow should have the runtime status “started”
  • In Tail Log: there should be no errors and no stack-traces, neither in the TMN nor in the IFLMAP tail log

Monitoring your integration flows


HCI offers several possibilities to monitor message processing. Here we will give an overview of the available tools.

 

  • Message Monitoring – provides an overview of processed messages
  • Tail Log – provides access to the server log file
  • Message Tracing – provides access to the message payload

 

In addition to the monitoring tools, you can enhance your iflow to persist additional information for later analysis. You can achieve this using the following HCI features:

  • MPL Attachments – provides an API to store data in the message processing log
  • Data Store – an iflow component to persist arbitrary data in the HCI database

 

Most of the monitoring tools are currently available in eclipse only.

 

 

Message Monitoring

Use the Message Monitoring view to check the status of recently processed messages. In case you have a lot of messages, you should specify a time period and/or integration flow to monitor in order to narrow down the search results.

 

How to Access Message Monitoring

  1. Open Node Explorer
  2. Double-click on your tenant (root node)
  3. Switch to the Message Monitoring view
  4. Specify a search filter
  5. Click on the ‘Search’ button
  6. Select a message of interest
  7. Check the Log for more details

 

mm1.png

 

 

 

 

Tail Log

When you search for the root cause of a message processing issue, you can check the server log for more details. Each HCI node has its own server log. Since message processing only happens on IFLMAP nodes, we are only interested in the IFLMAP node logs. Using the “Tail Log” view you can download the most recent part of the log. You can specify how big this part should be in kilobytes.

 

How to Access Tail Log

  1. Switch to Node Explorer
  2. Select IFLMAP node
  3. Switch to or open Tail Log view (Window -> Show view -> Tail Log)
  4. Specify size of the log to be downloaded
  5. Click on Refresh

 

mm2.png

Note: By default, only errors are written to the server log. However, some components log useful information with a lower severity level. For example, you can dump SOAP request and response messages to the server log by increasing log level of org.apache.cxf.* packages to INFO.

 

How to dump SOAP envelopes to the log

1. In the main iflow properties, enable the ‘Enable Debug Trace’ option

 

mm3.png

2. Open a ticket for the Cloud Operations team on component LOD-HCI-PI-OPS to change the log level. Example:

 

Dear Colleagues,

 

Please set the log level for the following loggers:-

 

TMN url: https://v0XXX-tmn.avt.us1.hana.ondemand.com

Application: IFLMAP

Tenant Name: v0XXX

Logger name: org.apache.cxf.*

Set Logger Level to DEBUG

 

Thanks and regards,

 

3. Execute scenario and check log for SOAP envelopes

 

mm4.png

 

 

 

Message Tracing

Use message tracing when you need to access the payload of processed messages. When activated, the message tracing gathers payload and headers on each step during iflow execution. You can access this information later from the message monitoring.

 

When you want to use message tracing, you first need to activate it on your tenant. The following steps are required:

  1. Activate message tracing on your tenant via ticket to cloud ops team
  2. Enable tracing in your iflow
  3. Provide your user with authorizations to access traces

 

More details can be found in HCI documentation:

https://cloudintegration.hana.ondemand.com/PI/help

  • Designing and Managing Integration Content

    • Activating Tenant and Integration Flow Tracing

 

How to access the message payload

  1. Open Message Monitoring
  2. Select a message
  3. Click on ‘View Trace’ button
  4. Click on a yellow envelope once the iflow gets opened
  5. Check message payload

 

mm5.png

 

 

mm6.png

 

 

MPL Attachments

HCI keeps a message processing log for every processed message. The log contains the basic information: status, error message, start/end timestamps for iflow components.

Using a custom script (Groovy or JavaScript), you can put additional entries into the log.

 

There are two possibilities:

  • properties
  • attachments

 

Properties are simple name/value pairs written into the log.

In the following example, the “payload” property contains a JSON structure:

 

mm7.png

 

Attachments are complete documents which are ‘attached’ to the message processing log. You can store the entire payload or another document as an attachment.

 

In the following example the same JSON payload is stored as a “weather” attachment.

mm8.png

Clicking on "weather" link opens an attachment viewer

mm9.png

 

Example Iflow:

mm10.png

 

write_mpl.gsh

 

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    // Read the current message body as a string
    def payload = message.getBody(String.class);

    // Get the message processing log for this message
    def messageLog = messageLogFactory.getMessageLog(message)

    // Write the payload as a simple name/value property ...
    messageLog.setStringProperty("payload", payload)

    // ... and also attach it to the log as a complete document
    messageLog.addAttachmentAsString("weather", payload, "application/json");

    return message;
}

 

More details on scripting in HCI documentation:

https://cloudintegration.hana.ondemand.com/PI/help

  • Designing and Managing Integration Content
    • Designing Integration Content With the SAP HCI Web Application
      • Configure Integration Flow Components
        • Define Message Transformers
          • Define Script

 

 

 

Data Store

Another way to persist the message payload for further analysis is to use the data store. The primary goal of the data store is to enable asynchronous message processing. You can write messages into the data store from one iflow and read them back from another one.

With some limitations you can also use the data store to persist messages for monitoring purposes.

The main limitation: if a message goes into FAILED state, no data is committed to the data store for it. For example, if you do a SOAP call in your iflow and this call fails, normally the iflow goes into FAILED state. In such a case, no data will be written to the data store. Only if the iflow finishes in COMPLETED state is the data committed to the database, where it can be accessed afterwards.

 

Example iflow:

mm11.png

 

You can access the data in the data store in two ways:

  1. Using another iflow with SELECT or GET operation
  2. Using eclipse data store viewer

 

How to access eclipse data store viewer

  1. Double-click on your tenant in Node Explorer
  2. Switch to “Data Store Viewer” tab
  3. Select the data store
  4. Inspect entries, use “Download Entry” context menu command to download the content

 

 

mm12.png

Note: your user needs the AuthGroup.BusinessExpert role on the TMN application of your tenant in order to access data store entries using the eclipse data store viewer. Raise a ticket on LOD-HCI-PI-OPS to get this authorization.

 

More details on the Data Store in HCI documentation:

https://cloudintegration.hana.ondemand.com/PI/help

  • Designing and Managing Integration Content
    • Defining Data Store Operations

 

 

Summary

In this blog we walked through the available monitoring features of HCI. Using the out-of-the-box message monitoring you can search for processed messages and check their status. In the tail log you can find additional low-level information, e.g. error messages or SOAP envelopes. Using message tracing you can access the payload of processed messages. Using the MPL attachments API you can enhance the message log with additional information. Using the Data Store is another way to persist message payloads for monitoring purposes.

 

 

 

 

 

 



How to skip header record from simple Plain2XML (file type is .csv,SFTP Sender adapter) without using any UDF at message mapping level ?


Hello All,

 

In this blog I am going to explain how to skip the header record in a simple Plain2XML scenario (file type .csv, SFTP sender adapter).

 

Input file structure at Sender adapter side :

 

SFTP Input file.png

 

I did not use any UDFs; I am just using standard node functions to remove the first line from the file.

 

 

Mapping doc.png

 

The first record is suppressed at message mapping level:

message mapping1.png

Hope this will be useful.

 

For the above requirement I am referring to the thread below.

How to skip header record from simple Plain2XML (file type is .csv,SFTP Sender adapter)

 

Thank you

Handling Mass Data Upload for Value Mapping which can't be easily handled with NWDS and VMR interface


I had a scenario where I needed to handle more than 10,000 values in Value Mapping, which was a very tedious task. Entering such a large number of values manually in the Integration Directory was not feasible; it would have taken months. I then tried the Value Mapping Replication (VMR) interface available in the Basis component, but it was also not that efficient. I also tried uploading directly with NWDS by creating a CSV file for the Value Mapping, but that fails when there is any "," (comma) in a key or value.
So this option was not helpful for me.
So this option was not helpful for me.

 

On the other hand, we knew that doing lookups against a database or file server would hurt the interface's execution time if the number of lookups is high.

Then I thought: why can't we keep these values in the ESR itself, in some file format, and read them directly from those files, which would be much quicker than any other option? There is only one place where any kind of file can be uploaded in SAP PI, i.e. the Imported Archive. An Imported Archive is normally used for Java or XSLT mappings, but it provides an option to upload files in .zip format, which gives us a loophole through which we can upload any kind of file into the ESR after zipping the files together.

 

It was amazing when I managed to do a lookup among more than 50K values in milliseconds. So I thought I would share this concept with all of you, because I searched the whole of SCN and the SAP documentation for a way to handle this problem and came back empty-handed.

 

I will explain the step-by-step procedure to handle any number of values in key-value pair format in the ESR and to easily do a lookup through a small UDF.

 

Step 1: Create text files containing key-value pairs separated by a space or "=" as shown in the screenshot below.
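
For illustration (the keys and descriptions below are made up; the real content is in the screenshot), a file for one segment could look like this:

A1=Incoming code A1 description
A2=Incoming code A2 description
A3=Incoming code A3 description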


 

Step 2: Now create all the files that you want to use for the lookup in text format and zip them together.

      (I had a requirement to transform the incoming input values of 20 EDI segments into their standard actual values, as shown in the figure above, so I created a different file for each segment. If you want, you can merge all files together and upload a single text file.)

          I created 21 files as per the requirement and zipped them together as below:

 

 

Now I have a zipped file that contains all the key-value pairs. We can upload this file into an Imported Archive without any issue.

 

Step 3: Create an Imported Archive named ValueMapping and import the .zip file into it.


 

 

Now you can see all the text files in the Imported Archive, as shown in the screenshot below.

 

 

Click on any file to see its content as below:

 

 

Now Save and Activate your imported Archive.

 

Step 4: Now assign your Imported Archive in your message mapping on the Functions tab under Archives Used, as shown below:


 

Step 5: Now we will create a simple UDF which takes two inputs: the first is the key for which I want the description, and the second is the file in which I want to look up values.

   (If you create only one file then you can pass just one input to the UDF and write the file name directly in the UDF.)

 

Step 6: Copy and paste the code below into your UDF:


//public String FileLookup(String key, String filename, Container container) throws StreamTransformationException{

String returnString = "";

try {
    // Read the lookup file directly from the Imported Archive on the classpath
    InputStream lookupStream = this.getClass().getClassLoader().getResourceAsStream(filename);
    InputStreamReader reader = new InputStreamReader(lookupStream);
    BufferedReader buffer = new BufferedReader(reader);

    // Each line has the form <key>=<value>
    String read;
    while ((read = buffer.readLine()) != null) {
        String temp = read.substring(0, key.length());
        if (key.equals(temp)) {
            returnString = read.substring(key.length() + 1, read.length());

            // Use equals() for string comparison, not != (which compares references)
            if (!"00".equals(returnString)) {
                int num = Integer.parseInt(returnString);
                num = num + 2;
                returnString = Integer.toString(num);
            }
        }
    }

    buffer.close();

} catch (Exception e) {
    returnString = e.getMessage();
}

return returnString;

//}

 

Step 6: Now we will create one more UDF for trimming the fixed extra text that always comes back when we use the lookup code.

               There is only one input for this UDF: we pass the output of the lookup UDF into it and it returns the actual value. If you are a bit confused, you will get a clear picture once you use Display Queue on these UDFs.

 

Below is the code for the trimValue UDF (input parameter of the UDF: value).

 

if (value.length() > 0) {
    // Strip the fixed leading text (first 19 characters) and the trailing character
    // that the lookup UDF returns around the actual value
    String str = "";
    str = value.substring(19, value.length() - 1);
    return str;
} else {
    return "";
}

 

Now our UDFs are ready for testing

 

Step 7: Now I will pass a key that is available in the DE_365.txt file and our output will be the actual value for this key.

             I have shown every input and output using Display Queue, which explains everything clearly, and now you can understand why I wrote the trimValue function.


 

 

Now we can compare the key value available in our text file:



This does not hurt the performance or execution time of the message mapping, as we are maintaining the lookup files in the ESR as an ESR object.

 

Value Mapping has a constraint on the length of the target field (it can't be more than 300 characters), but here you can pass more than that as we are maintaining the values in a text file.


Hopefully this will solve most of the problems related to large Value Mapping data maintenance. You can upload millions of entries without much effort in ZIP format.

 





Rapid Static Java Source Code Analysis with JLin in NWDS


Intro

This blog contains a brief outlook on a tool named JLin that is embedded in SAP NetWeaver Developer Studio and used for static Java source code analysis. Some customers actively use JLin during development and the early quality assurance phase, but others don't have JLin in their development toolbox. Together with the lack of SCN discussions about using JLin for source code quality analysis, this makes me think that either the tool is obvious and straightforward for Java developers in the SAP world, or it is not very common in the SAP developer community and is undeservedly neglected and underestimated. If you are part of the first group and already an active user of JLin, further reading may not be exciting for you, but if you are interested in figuring out how you can leverage NWDS built-in features on the way to Java source code quality improvement, let's move forward...

 

JLin is an integral part of the NWDS distribution: it comes with a pre-configured default set of checks, it doesn't require any extra infrastructure components for assessing source code, and it is highly integrated with other NWDS/Eclipse tools for results processing and presentation (all tasks are performed by JLin purely within NWDS, which also means the tool can be used for offline standalone analysis). Altogether, these features make the tool a great choice for rapid analysis of Java source code that can be conducted right from NWDS in very few clicks.

 

It shall be noted that JLin is in no way a replacement for well-known, commonly used and mature static source code analysis products like SonarQube, FindBugs, PMD, Checkstyle and others, and shall not be matched against them. SonarQube and its equivalents provide a complete solution and infrastructure for static source code analysis, focused on centralized management and governance of this process and on the provisioning of sophisticated features, which include (but are not limited to) central storage of source code check templates and inspection results, capabilities for historic analysis and visualization, extensibility of existing patterns and development of custom checks, and support for a variety of programming languages. This also means such tools must be installed and properly configured before they can be effectively utilized, which in turn requires some effort. In contrast to this, JLin is a lightweight tool that comes with default check patterns (which can be enabled/disabled, but cannot be extended), it employs a basic Eclipse-based user interface and does not provide rich aggregated analysis and visualization capabilities, and it only supports analysis of code written in Java. But in many general cases it can be used either completely out of the box or with minimum adjustment - as a consequence, its initial setup becomes a matter of a few minutes.

 

Having said that, I encourage developers to differentiate between use cases for JLin and the dedicated general-purpose static source code analysis tools mentioned earlier, and to consider JLin a complementary tool for fast preliminary analysis of source code quality. It can be included as an optional step in the quality assurance process for developed Java applications, followed by (preferably) mandatory extensive analysis of submitted development by means of the static source code analysis infrastructure that is used across the organization.

 

JLin is described in detail in SAP Help - refer to Testing Java Applications with JLin - SAP Composition Environment - SAP Library. Even though the help materials for JLin are placed in the SAP Composition Environment space, the tool is generic and can be used for the vast majority of Java developments related to other areas, PI/PO being one of them.

 

On high level, process of JLin usage consists of following principal steps:

  1. (Optional) Define JLin initial (general) preferences and properties like optional export of results to an external file, priority threshold, etc.;
  2. (Optional) Define custom JLin variant containing selected checks from a list of checks shipped with NWDS and their priorities;
  3. Create JLin launch configuration for an assessed Java project using default or custom JLin variant;
  4. Run created JLin configuration;
  5. Review and analyze JLin tests results.

 

 

JLin initial preferences and variant configuration

A default JLin configuration already contains the commonly used major checks, but if you need to enable or disable some checks, or change their priority, then in NWDS go to menu Window > Preferences. In the Preferences window, select Java > JLin:

Preferences - JLin.png

Here, it is possible to check default variant configuration or create a custom one.

 

For a created variant, it is possible to customize general properties such as exporting results to an XML file (which is helpful for further analysis of JLin results outside of NWDS, especially for machine parsing and processing), the priority threshold, etc.

 

The central aspect of JLin variant configuration is the selection of tests that form the basis of the JLin variant and the scope of checks that will be applied to the examined source code:

JLin variant.png

The set of JLin checks is shipped by SAP as a part of NWDS and cannot be extended. In case of uncertainty regarding any specific check, it is possible to get a description of the test scope and useful explanatory notes (available from the context menu of a check):

JLin check - description.png

For selected checks, it is possible to adjust their priority based on individual estimation of severity and impact of issues detected in the source code and corresponding to those checks:

JLin check - options.png

 

 

JLin tests launch configuration

If a JLin launch configuration for the analyzed project hasn't been created yet, then in NWDS go to menu Run > Run Configurations. In the Run Configurations window, select JLin and create a new launch configuration for it (either from the context menu or using the corresponding button in the window). In the newly created JLin launch configuration, specify the examined source code base (for example, a Java project) and the JLin variant (default or the one created in the previous step) that has to be used for the analysis:

Launch configuration - JLin.png

 

 

JLin tests execution and results analysis

After initial preferences are configured and launch configuration is prepared, we are ready to execute JLin checks, which can be done from the same window that was used to configure launch configuration on the previous step.

 

JLin outputs check results and general statistics about the executed JLin test to the NWDS Problems view:

JLin results - overview.png

For every issue found, JLin creates a problem marker that can be explored in more detail - the corresponding Java resource can be opened and analyzed, an Eclipse task can be created for that marker, and a description of the issue (retrieved based on the definition of the check that was not passed) can be viewed by selecting JLin test description from the context menu of the respective marker:

JLin results - marker in source code.png

Similarly to other problem markers in Eclipse, it is possible to use filters in Problems view in order to focus analysis on specific examined Java resources or issue types by selecting View menu > Configure Contents in Problems view and specifying required filter options:

JLin results - filter.png

 

As can be seen from the above, in just a few steps we accomplished the JLin configuration and conducted a basic static source code analysis for a selected Java project - with absolutely no need for additional environment setup, external servers or an Internet connection. Already this preliminary code check highlighted several issues and produced generic recommendations for their elimination, which can be addressed before the developed source code reaches further quality assurance phases.


Creating a custom HCI Adapter - Part 1


Introduction

Recently I did some work in SAP HANA Cloud Integration (HCI) and started to fiddle with the available adapters.  Although they are capable and cater for most needs, you might find yourself in a situation where you need to create your own adapter.

 

For those wondering - adapters are used to connect external systems to a HCI tenant. An adapter encapsulates the details of connectivity and communication.  An integration designer will make use of the design tool and choose an appropriate adapter from the available options.  The HCI documentation lists all the available adapters.

 

This blog is the result of my experience which I would like to share.  It is a step-by-step guide with screenshots and examples.  Also read the SAP documentation on creating HCI adapters for more information.



 

What we'll do

I will show you how to create a new adapter in HCI, deploy the adapter and test that it is working.  The adapter will not be based on an existing Camel component; that is the topic of a future blog.  We'll create the "echo" adapter that mocks the endpoints by inserting a dummy message.

 

Part 1 - Create a Camel component

Part 2 - Create the HCI adapter

Part 3 - Create integration flow using new adapter and see it working

What you'll need

 

All code used for this blog is available here: GitHub - nicbotha/hci-echo-adapter

 

Part 1 - Create a Camel component

 

Camel components are extension points used for integration. A Camel component creates endpoints that understand the many protocols, data formats, APIs, etc.  Remember, at runtime HCI uses Camel for the mediation and routing of messages, so we can make use of this extension point and add our own component.  The outcome of this part is a Camel component.

 

Step - Create a Camel component using Maven

 

Open a command prompt and run the following maven command:

 

mvn archetype:generate \
  -DarchetypeGroupId=org.apache.camel.archetypes \
  -DarchetypeArtifactId=camel-archetype-component \
  -DarchetypeVersion=2.12.3 \
  -DgroupId=my.domain \
  -DartifactId=echo

-DarchetypeVersion is the most interesting because it determines the camel-core version in the pom file (which you can change later).  But make sure it is not greater than the Camel version on your tenant.

 

You will be prompted for a name and a scheme.  Your component has to have a unique URI and should not match any existing components.

 

The output of this step should be a success message looking like:

mvn-arch-success.png

Step - Create an Eclipse project

 

Staying in the same command prompt navigate to where your new pom file is and run the following maven command:

 

mvn eclipse:eclipse \
-DdownloadSources \
-DdownloadJavadocs

Lines 2 and 3 are optional but I always like to pull the docs and source into my projects

 

The output of this step should be a success message same as previous step.

Step - Import project into Eclipse

 

I'm not going into detail here as you surely know how to import an existing project into Eclipse.  What I'd like to point out is the generated code and project structure.  After importing you should have a project as below.

  1. A unit test is already created and you can run it.  It creates a camel route and assert that at least 1 message is present.
  2. The main code contains component, endpoint and consumer/producer.
  3. This file is used to map the URI scheme to the component (see the example after the screenshot below).

 

eclipse-project.png
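
For reference, item 3 refers to the standard Camel service descriptor. Assuming the groupId my.domain and scheme echo chosen earlier (and a generated component class named echoComponent), the file src/main/resources/META-INF/services/org/apache/camel/component/echo would contain something along these lines:

# maps the "echo" URI scheme to the component implementation class
class=my.domain.echoComponent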

Step - Define a URI parameter

 

You can specify URI parameters that will be passed to the component at runtime using the @UriParam annotation.  We will add a parameter to allow a user to specify the delay of the consumer.  Later you will see how this parameter is used during design time in the editor allowing the designer to configure a delay time.

 

Edit the echoEndPoint.java and add:

 

public class echoEndpoint extends DefaultEndpoint {

    @UriParam
    private long delay;

    ....

    public long getDelay() {
        return delay;
    }

    public void setDelay(long delay) {
        this.delay = delay;
    }
  • the @UriParam annotation defines a URI parameter
  • the delay field holds the configured value

 

Edit the echoConsumer.java and add:

 

public class echoConsumer extends ScheduledPollConsumer {

    private final echoEndpoint endpoint;

    public echoConsumer(echoEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.endpoint = endpoint;
        setDelay(this.endpoint.getDelay());
    }
  • setDelay(this.endpoint.getDelay()) - the delay value is set using the parameter that was passed in the URI to the endpoint.

 

Step - update the test

 

The unit test will not pass at this stage as the endpoint now expects a parameter in the URI.  Let's modify the echoComponentTest.java by adding this expected parameter:

 

protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        public void configure() {
            from("echo://foo?delay=1")
                .to("echo://bar")
                .to("mock:result");
        }
    };
}
  • the delay=1 parameter was added to the from URI

Step - build and test

 

All that remains is to build the project and ensure the tests pass.  Open a command prompt and navigate to where the project pom is.  Run the following Maven command:

 

mvn clean install

The output of this step should be a success message looking like:


part1-end.png

Conclusion

 

In Part 1 we have created a working Camel component.  At this point the consumer expects a parameter and will add a dummy message to be routed.  In the next part we will create the HCI Adapter project.

 

Creating a custom HCI Adapter - Part 2


Overview

 

Part 2 - Create the HCI adapter

 

In Part 1 of this series we created the Camel component.  Now we need to wrap that component in a HCI adapter project. We will create the echo adapter using the echo component created in part 1.  The outcome of this part is a HCI adapter deployed on a tenant.

 

Step - Create the adapter project

The first thing to do is to create the adapter project.  For this you will need to open Eclipse and select File > New > Other...  Expand the SAP HANA Cloud Integration section and select Adapter Project.

eclipse-wiz1.png

 

Click Next.

eclipse-wiz2.png

Fill in the required fields and click Finish.

 

Step - Generate metadata

At this step you should have a project looking similar to:

adap-1.png

The project itself contains a few empty folders and a metadata.txt file.  Next we want to copy the echo-*.jar from the echo camel component project into the component folder of the echo adapter project.  Once done your adapter project should look like:

adap-2.png

The next action is to generate the component metadata.  You can read all about what component metadata is in the online help.  But it is basically the place where you set up the info required by the editor at design time.


Right click on the echo-adapter project and select Generate Component Metadata

 

adap-3.png

Next, right click on the echo-adapter project and select Execute Checks.  The output of this step should be:

 

adap-4.png

Step - Modify metadata.xml

 

Expand the metadata folder and you will see a newly generated file named metadata.xml.  Open this file.  Notice that the generation tool inspected our echo component and generated appropriate XML content from it.  What we will do now is modify it a bit by removing unnecessary elements.

 

There are only two parameters we want users of our adapter to specify: first part URI and delay.  We can remove the others as the adapter will use default values.  Once done your metadata.xml should look like:

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><ComponentMetadata ComponentId="ctype::Adapter/cname::me:echo/version::1.0.0" ComponentName="me:echo" UIElementType="Adapter" IsExtension="false" IsFinal="true" IsPreserves="true" IsDefaultGenerator="true" MetadataVersion="2.0" xmlns:gen="http://www.sap.hci.adk.com/gen">    <Variant VariantName="Echo Component Sender" gen:RuntimeComponentBaseUri="echo" VariantId="ctype::AdapterVariant/cname::me:echo/tp::echo/mp::echo/direction::Sender" MetadataVersion="2.0" AttachmentBehavior="Preserve">        <InputContent Cardinality="1" Scope="outsidepool" MessageCardinality="1" isStreaming="false">            <Content>                <ContentType>Any</ContentType>                <Schema xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></Schema>            </Content>        </InputContent>        <OutputContent Cardinality="1" Scope="outsidepool" MessageCardinality="1" isStreaming="false">            <Content>                <ContentType>Any</ContentType>                <Schema xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></Schema>            </Content>        </OutputContent>        <Tab id="connection">            <GuiLabels guid="f20e0d5e-6204-42b8-80db-7dc179641528">                <Label language="EN">Connection</Label>                <Label language="DE">Connection</Label>            </GuiLabels>            <AttributeGroup id="defaultUriParameter">                <Name xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">URI Setting</Name>                <GuiLabels guid="fef82a37-793e-45d2-9c62-261a5a2987fa">                    <Label language="EN">URI Setting</Label>                    <Label language="DE">URI Setting</Label>                </GuiLabels>                <AttributeReference>                    <ReferenceName>firstUriPart</ReferenceName>                    <description>Configure First URI Part</description>                </AttributeReference>            </AttributeGroup>            <AttributeGroup id="echoEndpoint">                <Name xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">Echo Endpoint</Name>                <GuiLabels guid="7a914dcf-ab5e-4dfc-a859-d4709bd7402d">                    <Label language="EN">Echo Endpoint</Label>                    <Label language="DE">Echo Endpoint</Label>                </GuiLabels>                <AttributeReference>                    <ReferenceName>delay</ReferenceName>                    <description>Configure Delay</description>                </AttributeReference>            </AttributeGroup>               </Tab>    </Variant>    <Variant VariantName="Echo Component Receiver" gen:RuntimeComponentBaseUri="echo" VariantId="ctype::AdapterVariant/cname::me:echo/tp::echo/mp::echo/direction::Receiver" IsRequestResponse="true" MetadataVersion="2.0" AttachmentBehavior="Preserve">        <InputContent Cardinality="1" Scope="outsidepool" MessageCardinality="1" isStreaming="false">            <Content>                <ContentType>Any</ContentType>                <Schema xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></Schema>            </Content>        </InputContent>        <OutputContent Cardinality="1" Scope="outsidepool" 
MessageCardinality="1" isStreaming="false">            <Content>                <ContentType>Any</ContentType>                <Schema xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></Schema>            </Content>        </OutputContent>        <Tab id="connection">            <GuiLabels guid="8c1e4365-b434-486a-8ec8-2a8fd370f77f">                <Label language="EN">Connection</Label>                <Label language="DE">Connection</Label>            </GuiLabels>            <AttributeGroup id="defaultUriParameter">                <Name xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">URI Setting</Name>                <GuiLabels guid="d1004d88-8e54-4f46-a5e7-e6e4d0dca053">                    <Label language="EN">URI Setting</Label>                    <Label language="DE">URI Setting</Label>                </GuiLabels>                <AttributeReference>                    <ReferenceName>firstUriPart</ReferenceName>                    <description>Configure First URI Part</description>                </AttributeReference>            </AttributeGroup>               </Tab>    </Variant>    <AttributeMetadata>        <Name>firstUriPart</Name>        <Usage>false</Usage>        <DataType>xsd:string</DataType>        <Default></Default>        <Length></Length>        <IsParameterized>true</IsParameterized>        <GuiLabels guid="eaa8b525-3e51-4d18-81ca-16ec828567dc">            <Label language="EN">First URI Part</Label>            <Label language="DE">First URI Part</Label>        </GuiLabels>    </AttributeMetadata>    <AttributeMetadata>        <Name>delay</Name>        <Usage>false</Usage>        <DataType>xsd:long</DataType>        <Default></Default>        <Length></Length>        <IsParameterized>true</IsParameterized>        <GuiLabels guid="7630e90d-5982-403b-a217-6a9be329ee04">            <Label language="EN">Delay</Label>            <Label language="DE">Delay</Label>        </GuiLabels>    </AttributeMetadata></ComponentMetadata>

To ensure all is good run the Execute Checks again.


Step - Deploy


So at last we can deploy our adapter to a HCI tenant.  Right click on the echo-adapter project and select Deploy Adapter Project


adap-5.png

In the Deploy Integration Content dialog select the Tenant you want to deploy to and click OK


adap-6.png


After which you should see the confirmation below.


adap-7.png


Test - Validate successful deployment


Double click on the tenant in the Node Explorer view and then open the Deployed Artifacts view.  You might need to refresh the view by clicking on the yellow arrows; if the deployment was a success, the adapter will display as follows.


adap-8.png

Also, let's ensure it started on the worker node by first selecting the node in the Node Explorer and then opening the Component Status View.

 

adap-9.png

 

Conclusion

 

In Part 2 we have created a HCI adapter and deployed it to a HCI tenant.  At this point the echo adapter is available for use.  In the final part we will create an integration flow to use the adapter and ensure the endpoints are working.

Creating a custom HCI Adapter - Part 3


Overview

 

Part 3 - Create integration flow using new adapter and see it working

 

You made it!  In Part 1 of this series we created the Camel component.  In Part 2 we created the HCI adapter and deployed it to our tenant.  All that is left to do is to use the new adapter in an integration flow and test that the endpoints work as expected.

 

Step - Create an Integration project

 

We will create a basic HCI integration flow using our echo adapter as sender.  To do this let's first create a new integration project called echo-integration.  The outcome should be:

part3-1.png

Step - Modify the channel

Select the sender channel (line between Sender and Start) and select the Channels tab

part3-2.png

On the Adapter Type click Browse. The choose adapter dialog appears and shows all the available adapters.  Select the echo adapter.

part3-3.png

After selection you will notice the Transport Protocol and Message Protocol fields contain values.  These values are directly related to the metadata.xml generated in Part 2.  If you have a look at the sender Variant element in metadata.xml you will see the VariantId attribute, and this is where the transport protocol and message protocol are specified.

 

<Variant VariantName="Echo Component Sender" gen:RuntimeComponentBaseUri="echo" VariantId="ctype::AdapterVariant/cname::me:echo/tp::echo/mp::echo/direction::Sender" MetadataVersion="2.0" AttachmentBehavior="Preserve">

Select the Adapter Specific tab and enter the following values

  • First URI Part = foo
  • Delay = 5000

part3-4.png

 

Finish the integration flow by configuring the receiver channel.  Again you can use the echo adapter, this time set the First URI Part to bar.

 

Step - Deploy and test

 

Deploy the echo-integration project to your tenant and monitor the messages.  If everything worked you will see something like this

 

part3-5.png

Conclusion

 

And that is it, we are done.  Thank you for reading and I hope you found this information useful.  Goodbye from a sunny Melbourne!

Place double quotes for records in message mapping level using UDF

$
0
0

Hi All,

 

Requirement: If the data contains special characters such as # or , we need to enclose that data in double quotes at mapping level. Please find a snapshot of the requirement below.

 

Mapping.png

 

For the above requirement I have written a small Java UDF:


udf1.png
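
For reference, this is a minimal sketch of what such a single-value UDF could look like; the method name addQuotes and the exact condition are assumptions, since the original UDF is only shown in the screenshot above:

// Hypothetical single-value UDF (the actual UDF in the screenshot may differ).
// Wraps the value in double quotes if it contains a special character such as # or ,
public String addQuotes(String input, Container container) throws StreamTransformationException {
    if (input != null && (input.indexOf("#") >= 0 || input.indexOf(",") >= 0)) {
        return "\"" + input + "\"";
    }
    return input;
}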

 

Assign that UDF in the message mapping.


mapping2.png

 

Check the below output.

opmapping.png

Adapter Module: ExceptionCatcherBean

$
0
0

Adapter Module: ExceptionCatcherBean

 

Use

 

 

 

When a database error occurs in a synchronous proxy-to-JDBC scenario, the error information is not transferred back to the sender system. Unless the end user has access to the SAP PI monitors, he or she only receives a PARSING GENERAL exception without any further information.

 

 

error_1.png

 

You use the ExceptionCatcherBean module to wrap the CallSAPAdapter module execution, catching any module exception and generating a new ModuleException object with the error information.

 

 

error_2.png

 

 

All the information is then transferred back to the sender system.

 

 

sxmb_moni_error.png

 

 

Deployment

 

Enterprise Java Bean Project: ExceptionCatcher-ejb

Enterprise Java Bean Application: ExceptionCatcher-ear

 

Integration

 

The module can be used in any Sender Adapter.

 

Activities

 

This section describes all the activities that have to be carried out in order to configure the module.

 

Entries in processing sequence

 

Remove the CallSAPAdapter module and insert ExceptionCatcherBean as shown in the picture below.

 

config.PNG

 

Entries in the module configuration

 

The adapter module does not expect any parameters.

 

Audit Log

 

The execution process can be followed in the audit log generated per message.

 

 

Code


import java.util.Hashtable;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.CreateException;
import javax.ejb.Local;
import javax.ejb.LocalHome;
import javax.ejb.Remote;
import javax.ejb.RemoteHome;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.sap.aii.af.lib.mp.module.Module;
import com.sap.aii.af.lib.mp.module.ModuleContext;
import com.sap.aii.af.lib.mp.module.ModuleData;
import com.sap.aii.af.lib.mp.module.ModuleException;
import com.sap.aii.af.lib.mp.module.ModuleHome;
import com.sap.aii.af.lib.mp.module.ModuleLocal;
import com.sap.aii.af.lib.mp.module.ModuleLocalHome;
import com.sap.aii.af.lib.mp.module.ModuleRemote;
import com.sap.engine.interfaces.messaging.api.Message;
import com.sap.engine.interfaces.messaging.api.MessageDirection;
import com.sap.engine.interfaces.messaging.api.MessageKey;
import com.sap.engine.interfaces.messaging.api.PublicAPIAccessFactory;
import com.sap.engine.interfaces.messaging.api.auditlog.AuditAccess;
import com.sap.engine.interfaces.messaging.api.auditlog.AuditLogStatus;
import com.sap.engine.interfaces.messaging.api.exception.MessagingException;
@Stateless(name = "ExceptionCatcherBean")
@Local(value = { ModuleLocal.class })
@Remote(value = { ModuleRemote.class })
@LocalHome(value = ModuleLocalHome.class)
@RemoteHome(value = ModuleHome.class)
@TransactionManagement(value=TransactionManagementType.BEAN)
public class ExceptionCatcherBean implements Module {
private AuditAccess audit; 
private MessageKey key;
@PostConstruct
public void initialiseResources() {
try {
audit = PublicAPIAccessFactory.getPublicAPIAccess().getAuditAccess();
} catch (Exception e) {
throw new RuntimeException("Error in initializeResources: " + e.getMessage()); }
}
@Override
public ModuleData process(ModuleContext context, ModuleData inputModuleData) throws ModuleException {
ModuleData outputModuleData = inputModuleData;
key = getMessageKey(inputModuleData);
try {
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
Context ctx = new InitialContext(env);
Object adapterObj = ctx.lookup("localejbs/CallSapAdapter");
if (adapterObj != null) {
try {
ModuleLocalHome adapterModule = (ModuleLocalHome) adapterObj;
ModuleLocal moduleLocal = adapterModule.create();
outputModuleData = moduleLocal.process(context, inputModuleData);
} catch (ModuleException e) {                       
throw new ModuleException((MessagingException) e.getCause());
} catch (CreateException e) {
audit.addAuditLogEntry(key, AuditLogStatus.ERROR, "Error found while trying  ModuleLocal instance" );
throw new ModuleException(e);
}
}
else { 
audit.addAuditLogEntry(key, AuditLogStatus.ERROR, "Unable to find adapter module.");
throw new ModuleException("Unable to find adapter module.");
}
}
catch (NamingException e) {
audit.addAuditLogEntry(key, AuditLogStatus.ERROR, "NamingException found: " + e.getMessage()); throw new ModuleException(e);
}
return outputModuleData;
}
private MessageKey getMessageKey(ModuleData inputModuleData) throws ModuleException {
MessageKey key = null;
try {
Object obj = null;
Message msg = null;
obj = inputModuleData.getPrincipalData();
msg = (Message) obj;
if (msg.getMessageDirection().equals(MessageDirection.OUTBOUND))
key = new MessageKey(msg.getMessageId(), MessageDirection.OUTBOUND);
else key = new MessageKey(msg.getMessageId(), MessageDirection.INBOUND);
}
catch (Exception e) {
throw new ModuleException("Unable to get message key",e);
}
return key;
}
@PreDestroy public void releaseResources() {
}
}

XPath expressions and Groovy script in HANA cloud integration

$
0
0

Since we started developing in HANA Cloud Integration we came to appreciate the integration pattern Content Modifier more and more.

It is a very powerful way of creating variables that can be used both inside (using properties) and outside the iFlows (using headers). These variables can contain a single value but also a complete xml message.

 

There are many ways in which the variables you want to use can be set. Just look at the list of types and you will see:

  • Constant
  • XPath
  • Expression
  • Property
  • External parameter
  • Local variable
  • Global variable

 

One of the types we have used extensively during our developments is the XPath expression.

This is an easy way of reading the value of an element, e.g. /Order/LineItems/Number/text() returns the value/text in <Number>, or of counting the number of occurrences of an element, e.g. count(/Order/LineItems) returns the number of <LineItems> elements in a message.

 

Using these XPath expressions in the Content Modifier the results can easily be stored in variables and used throughout the iFlows.

 

Some examples from real messages coming from Salesforce:

xpath_expressions.png

 

There are however limits to the usage of XPath expressions in the Content Modifier.

 

Case 1: XPath returning multiple values

In one of the iFlows we received a message containing multiple UUID elements, but we only needed the UUIDs belonging to the element AddressInformation and not the UUIDs in the other parts of the message.

Take a look at this message (it is part of the response from the QueryBusinessPartnerIn web service on ByDesign):

 

<n0:BusinessPartnerByIdentificationResponse_sync xmlns:n0="http://sap.com/xi/SAPGlobal20/Global" xmlns:prx="urn:sap.com:proxy:K7F:/1SAI/TAS37D7D57087649686562C:804">
  <BusinessPartner>
    <UUID>00163e06-fdd4-1ed5-a0bf-81adb3476f60</UUID>
    <InternalID>3000009166</InternalID>
    <CategoryCode>2</CategoryCode>
    <CustomerIndicator>true</CustomerIndicator>
    <SupplierIndicator>true</SupplierIndicator>
    <LifeCycleStatusCode>2</LifeCycleStatusCode>
    <Organisation>
      <CompanyLegalFormCode>32</CompanyLegalFormCode>
      <FirstLineName>Test Customer</FirstLineName>
    </Organisation>
    <AddressInformation>
      <UUID>00163e06-fdd4-1ed5-a0bf-81adb347cf64</UUID>
      <AddressUsage>
        <AddressUsageCode>XXDEFAULT</AddressUsageCode>
      </AddressUsage>
      <Address>
        …
      </Address>
    </AddressInformation>
    <AddressInformation>
      <UUID>00163e06-fdd4-1ed5-a0bf-81adb348af65</UUID>
      <AddressUsage>
        <AddressUsageCode>XXDEFAULT</AddressUsageCode>
      </AddressUsage>
      <Address>
        …
      </Address>
    </AddressInformation>
  </BusinessPartner>
</n0:BusinessPartnerByIdentificationResponse_sync>

As you can see there are three UUID elements, one directly underneath <BusinessPartner> and two underneath <AddressInformation>.

 

Retrieving the required values of the UUIDs is quite easy with an XPath expression, using the expression /n0:BusinessPartnerByIdentificationResponse_sync/BusinessPartner/AddressInformation/UUID/text()

will result in:

Text='00163e06-fdd4-1ed5-a0bf-81adb347cf64'

Text='00163e06-fdd4-1ed5-a0bf-81adb348af65'

 

However, when you use this XPath in a header or property variable of the Content Modifier it will only put the first value, i.e. '00163e06-fdd4-1ed5-a0bf-81adb347cf64', in the variable ignoring the second one completely.

We have discussed this with Holger Kunitz from Product Management Process and Network Integration at SAP and he confirmed that the Content Modifier works this way.

 

Since we needed to have both values, we turned to Groovy scripting. After experimenting a bit we found that using XMLSlurper was the easiest way to get both values; it only took a couple of lines in Groovy:

 

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
def Message processData(Message message) {
    def xml = message.getBody(String.class)
    def completeXml = new XmlSlurper().parseText(xml)
    def AddressUUIDs = completeXml.BusinessPartner.AddressInformation.'**'.findAll{ node -> node.name() == 'UUID' }*.text()
    message.setProperty("AddressUUIDs", AddressUUIDs)
    return message
}

As you can see in the script we use XMLSlurper to put the body of the message into the variable completeXML so we can then use the function findAll to find the UUIDs under the path we have specified, in this case completeXML.BusinessPartner.AddressInformation.

This way the values of UUID that are found are placed in one string separated by a comma:

[00163e06-fdd4-1ed5-a0bf-81adb347cf64, 00163e06-fdd4-1ed5-a0bf-81adb348af65].

This string is put into a property variable called AddressUUIDs and the message is returned.

This last statement is very important because without it the variable will not be set in the message.

 

Now we had both values in the property and we could query them later on our iFlow.

 

Case 2: Namespace challenges

We have also used XMLSlurper in a case where we received data from Salesforce and wanted to use an XPath expression but were unable to do so because Salesforce used a default namespace xmlns=urn:partner.soap.sforce.com.

 

At first we did not understand why a simple XPath expression was not working with this namespace until we tested it with an online XPath test tool.

 

We wanted to read the session ID from the following Salesforce login response using //sessionId:


<loginResponse xmlns="urn:partner.soap.sforce.com" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <result>
    <metadataServerUrl>https://salesforce_url</metadataServerUrl>
    <passwordExpired>false</passwordExpired>
    <sandbox>true</sandbox>
    <serverUrl>https://salesforce_url</serverUrl>
    <sessionId>Hliushlasdh32847o31bfaefbB(AS9hfadoif</sessionId>
    <userId>userId</userId>
    <userInfo>
      ….
    </userInfo>
  </result>
</loginResponse>


The online tool gave the following error:

The default (no prefix) Namespace URI for XPath queries is always '' and it cannot be redefined to 'urn:partner.soap.sforce.com'.

 

We realized that HCI was probably having the same problems with the namespace and therefore used the XMLSlurper again, which did not have this problem.

 

We hope that this small bit of Groovy can help you too in your developments.

 

Frank Bakermans, YourIntegration

Martin Jaspers, Q for IT BV

Pass Through/Bypass Scenario in SAP PI

$
0
0

There are requirements where we need to pass files through with the help of SAP PI interfaces. It is very simple to achieve this functionality in PI.

To achieve this scenario we don't require ESR objects. We have to use a dummy sender/receiver interface and namespace.

 

Below are the steps for creating this scenario:

 

Step 1: Create a File/SFTP sender channel to pick up the file from the source directory.

 

 

Step 2: Create a receiver file channel to place the file in the target directory.


 

Step 3: Create an ICO with a dummy sender and receiver interface.



 

 

 

Step 4: Now place a test file in the source directory, so that the channel picks it up and sends it to the target directory.

 

 

Step 5: Now start your sender communication channel; you can see in the communication channel log that the file is successfully picked up and placed in the target directory.



 

You can see the file in the target directory, and a successful message log in the message monitoring tab, as below:

 

In target directory you can see your output file.

This is a simple and basic scenario, but it can be helpful for newbies.

 

Regards,

Rahul


Monitoring and storing Fault Messages through RFC Lookup

$
0
0

Hi Guys,

 

This is my first blog on SCN, so I am just sharing a piece of functionality which we can use for storing fault messages in database tables.

 

Hope this can be helpful for beginners. Appreciate your feedback :-) in case it was helpful.

 

 

Summary


In SAP NetWeaver Process Integration (PI) 7.1, there is now graphical support for RFC mapping lookups in message mappings, instead of creating a user-defined function with the relevant lookup code as in previous releases (e.g. XI 3.0/PI 7.0).


1.   Overview

 

This document explains the RFC mapping lookup in message mappings, which removes the need to create a user-defined function. This feature significantly simplifies the creation of such mapping lookups and saves the PI developer time. The request is sent through a WS call.

Using the RFC lookup, the RFC is called, the response is generated, and the response from the SOA is stored in an ABAP Z table (ZPI_FM_ERROR). This is helpful for monitoring the fault error messages and reduces the manual effort of searching for each message. Here we have a synchronous scenario

WS  <-------> Soap


RFC Lookup is a small UDF which helps to execute an RFC query against any SAP system and fetch the output of the RFC inside the UDF. This method is generally very helpful when we want to execute an RFC function module in SAP to get further details based on the data passed to the mapping.

This UDF can be used to connect to any SAP system and execute the RFC statements.
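
As an illustration, this is a minimal sketch of what such a lookup UDF could look like using the standard PI mapping lookup API (com.sap.aii.mapping.lookup); the business system name, channel name, RFC name and request XML are placeholders, and the response parsing is omitted:

// Assumed imports (declared in the UDF's import section):
// com.sap.aii.mapping.lookup.*, java.io.ByteArrayInputStream, java.io.InputStream
public String rfcLookup(String messageId, Container container) throws StreamTransformationException {
    // Placeholder request XML for an RFC-enabled function module; adjust to your RFC definition.
    String rfcXml = "<ns0:Z_STORE_FM_ERROR xmlns:ns0=\"urn:sap-com:document:sap:rfc:functions\">"
                  + "<MESSAGEID>" + messageId + "</MESSAGEID>"
                  + "</ns0:Z_STORE_FM_ERROR>";
    RfcAccessor accessor = null;
    try {
        // Placeholder business system and the RFC receiver channel created in the Integration Directory.
        Channel channel = LookupService.getChannel("BS_TargetSystem", "PIRFCLookUp");
        accessor = LookupService.getRfcAccessor(channel);
        InputStream is = new ByteArrayInputStream(rfcXml.getBytes("UTF-8"));
        XmlPayload request = LookupService.getXmlPayload(is);
        Payload response = accessor.call(request);   // the response payload could be parsed here if needed
        return "OK";
    } catch (Exception e) {
        throw new StreamTransformationException("RFC lookup failed: " + e.getMessage());
    } finally {
        if (accessor != null) {
            try { accessor.close(); } catch (Exception e) { /* ignore errors on close */ }
        }
    }
}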

 

 

 

2.   Prerequisites and Requirement

 

  • The RFC channel to be used for the lookup must be configured and activated in the Integration Directory.
  • The definition of the RFC structure used for the lookup must already be imported into the ES Repository as an imported archive.

 

Here we are going to use ZPI_FM_ERROR. The BAPI takes MessageID, ErrorType, ErrorMessage, ErrorCode, FieldName, InterfaceName and HostSystemId as input, and the data is stored in an ABAP table. Create the communication channel for the RFC receiver and select the adapter type as RFC. Provide the R/3 system details such as application server name, system number, logon user, password, client and language.

 

RFC Table

 

Field Name        Length                 Occurrence

messageId         36                     1..1
errorType         Possible values:       1..1
                  ValidationError (V)
                  BusinessError (B)
                  systemError (S)
errorMessage      1000
ErrorCode         20
fieldName
timestamp                                1..1
interfaceName     30
Host systemId     20

 

 

 

Capture1.PNG

 

 

 

RFC Lookup Use case:

 

 

  • UDF uses the channel (PIRFCLookUp) to access the SAP System
  • UDF passes the RFC XML message to the channel
  • Channel executes the RFC and returns the results to the UDF.
  • UDF can retrieve the output content.

 

 

Capture2.PNG

 

 

 

 

 

 

 

Source Code:


Global UDF Name: getHeaderFields
Input Parameter constant Value:REF_TO_MESSAGE_ID,
RECEIVER_NAME
Output: Stores the result from the RFC call to ABAP table

1.Logic to get header fields

  • MessageID
  • InterfaceName

 

public String getHeaderFields(String param, Container container) throws StreamTransformationException {

    java.util.Map map = container.getTransformationParameters();

    String headerField = "";

    if (param.equals("REF_TO_MESSAGE_ID"))
        headerField = (String) map.get(StreamTransformationConstants.REF_TO_MESSAGE_ID);
    else if (param.equals("RECEIVER_NAME"))
        headerField = (String) map.get(StreamTransformationConstants.RECEIVER_NAME);

    return headerField;
}

 

 

2. Logic to dynamically store sub-string up to 255 characters


public String subString(String input, int start, int length, Container container) throws StreamTransformationException {

    // Extracts the substring of an input String
    // start  - position index for start of string
    // length - length of substring

    int startPosition = start;
    int endPosition = start + length - 1;
    String output = "";

    if (startPosition < 0) {
        output = "";
    } else if (startPosition >= 0 && startPosition < input.length()) {
        if (endPosition < input.length()) {
            output = input.substring(startPosition, endPosition + 1);
        } else if (endPosition >= input.length()) {
            output = input.substring(startPosition, input.length());
        }
    } else if (startPosition >= input.length()) {
        output = "";
    }

    return output;
}



Source Code Reference snapshots:-


Capture3.PNG

 

Capture4.PNG

 

Message Mapping


Select the source message and target message type.

A message mapping parameter must be created within the Signature tab. It should be an 'Import' parameter and have Category 'Adapter' and Type 'RFC'. The name of the parameter should be the RFC channel name which we have created in Integration Directory. In our case it is ‘LOOKUP_CHANNEL’.

 

Capture5.PNG

 

 

Steps:-


  1. In mapping functions, under conversions, select RFC Lookup.
  2. Double click on RFC Lookup. Select the RFC which we have imported. We can now see the request and response message elements.
  3. Select and double click on the request fields and response fields. The fields which we have selected then appear below in the RFC structure.


Note:-

Check that the RFC appears in green and that the RFC call has both an input and an output (refer to the screenshot below), i.e. input fields must be mapped to the output field.


Capture6.PNG

 

 

 

Operational Mapping


Click on Binding and select the binding parameter as “LOOKUP_CHANNEL”, i.e. the parameter used/passed in the binding of the mapping program above.



Capture7.PNG


Capture8.PNG

 

 

 

To test, go to the Test tab and then select the value for LOOKUP_CHANNEL


Capture9.PNG



 

4.  Results:


Test Results stored in ABAP table:-


Capture10.PNG

 

 

Note:

 

Click on Binding in the operation mapping so that the mapping parameters are assigned to it; otherwise it will error out during testing. Also check in the interface determination whether the RFC lookup parameters are passed or not.


Capture13.PNG

Capture12.PNG

 

 

5.  Bottleneck Issues/Challenges:-


1. Error Message:

 

MAPPING">RUNTIME_EXCEPTION</SAP:Code> <SAP:P1>Thrown: java.lang.NullPointerException: while trying to invoke the method java.lang.String.length()of a null object loaded from local variable 'guid'at com.sap.guid.GUID.parseHexGUID(GUID.java:104RuntimeException&quot; survenue lors du mappage

 

         Solution:


This error was seen in the QA environment: no parameters were assigned in the interface determination, so a null value was passed, because the ID transports had been missed. Once the interface determination object and the RFC communication channel object were transported to the QA environment, this was resolved.

 

2. Change in requirement for handling/storing data in ABAP table up to 256 characters only (rest will be truncated) rather than 1000 characters.

 

Challenges:-


During testing it was found that we cannot pass more than 256 characters to the RFC lookup table, as it accepts data as String type with a predefined length of 256, so all remaining characters will be truncated.

 

Possible Solution to fix this requirement:-


Since the maximum length of data passed via RFC is limited to 256 characters (i.e. positions 0-255), we may need to use the table parameter of the BAPI to resolve this. We can map our table parameter in PI with line type TDLINE and write a function in PI that automatically splits a paragraph of text into multiple lines of 132 characters and passes them to the BAPI table parameter, where they can be recombined in SAP.
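
As a rough illustration of that approach, the following is a minimal sketch of a helper UDF that splits a long text into 132-character lines; the UDF name, the separator used between lines and the way the lines are then mapped to the TDLINE table parameter are assumptions:

// Hypothetical helper UDF: splits the input into 132-character chunks suitable for TDLINE lines.
public String splitIntoTdLines(String input, Container container) throws StreamTransformationException {
    if (input == null) {
        return "";
    }
    int lineLength = 132;
    StringBuffer result = new StringBuffer();
    for (int i = 0; i < input.length(); i += lineLength) {
        int end = Math.min(i + lineLength, input.length());
        result.append(input.substring(i, end));
        if (end < input.length()) {
            result.append("\n"); // separator; a queue-based UDF could instead return one value per line
        }
    }
    return result.toString();
}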

 

 

 

Regards

S Tomar

How To Transport SAP XI/PI/PO Objects from DEV SLD-->QA SLD-->PRD SLD

$
0
0

Introduction:


This blog describes in detail how to transport ESR/ID objects across different SLDs for DEV, QA and PRD, instead of having a single central SLD, or a common SLD for DEV & QA and a separate SLD for PRD.

SAP PI interfaces which involve integrating with SAP ECC, SAP SRM, SAP SLC, SAP SUS etc. will have different business systems and clients in their respective DEV, QA and PRD environments.


ESR objects are not affected when transported to different environments; however, ID objects which involve Web AS ABAP/Java business systems should automatically pick up the business system related to the respective environment while moving from DEV-->QA-->PRD, to avoid reconfiguring/recreating ID objects in both PI QA and PI PRD. The image below shows the landscape setup.

 

1.jpg

Let’s discuss the steps involved to achieve this functionality.

 

Step 1: Create technical systems of type Web AS ABAP for the DEV SAP systems in the PI DEV SLD, either manually or by registering them using the RZ70 tcode, and create the respective business systems

 

2.jpg

 

Step 2: Follow step 1 and create technical systems and business systems of type Web AS ABAP for the QA SAP system in the PI QA SLD and for the PRD SAP system in the PI PRD SLD


Step 3: Log in to the PI QA SLD and configure a unidirectional SLD synchronization pointing to the PI DEV SLD to get all the DEV SAP technical systems into the PI QA SLD. We can also create them manually or using RZ70. These include the technical systems of the SAP systems and the native PI technical system, as per the screen below

 

3.jpg

 

 

Step 4: Select the PI DEV technical system (highlighted in the screenshot above), click on Add New Business System, select the role of the business system as Integration Server from the drop-down and provide the pipeline URL as per the screens below


4.1.jpg

4.2.jpg

 

Step 5: Create business systems for the respective technical systems and assign the related integration server BS_DPICLNT100

 

5.jpg

 

Step 6: Create two different Groups GR_DEV and GR_QA in PI QA SLD and assign the respective Integration Server from drop down

 

6.jpg

 

Step 7: Select the DEV business system, make sure the group GR_DEV is assigned, and click the Add/Change Target push button to assign the transport targets as per the screens below

 

7.jpg

7.1.jpg

 

Step 8: Repeat the same above steps in PI PRD SLD by using QA technical and business systems

 

Please note that for AEX/SAP PO, as there is no ABAP stack, the pipeline URL will be different for the Integration Server business system. So instead of creating the business systems manually, do an export and import across the different SLDs.

 

I hope this blog will clear doubts for many PI consultants who are struggling to understand the concept of Groups and transport targets in SLD.

 

Happy Learning!!!

Step by step to enable debugging for ABAP Mapping class at run time in PI

$
0
0

Hi All,

 

Requirement: I want to enable debugging for validating data at ABAP mapping level.

mp1.png

 

Step 1: Get the input payload (in my case I am getting the input payload from SXMB_MONI).

 

mp2.png

 

Step 3: Go to SE24 and select the ABAP class.

mp3.png

Click Execute Method and set an external debugging breakpoint as shown below.

mp4.png

 

Step 4: Go to the SXI_MAPPING_TEST tcode and provide all the details.

mp5.png

Then upload the input payload and click the Execute button.

mp6.png

 

Now debugging is enabled for the ABAP class.

mp7.png

 

For the above requirement I am referring to the blog below.

How to test ABAP mapping when transforming messages 1..N

 

Hope this will be helpful.

 

Thank you,

Narasaiah T

Outbound IDoc: Status '03'. Absolute Success? Not Necessarily!

$
0
0

Intro

In outbound IDoc scenarios (mediated via SAP PI/PO or used in point-to-point integration), it is not unusual to witness a situation where the support team or an application consultant checks a generated IDoc in the sender ABAP system (for example, SAP ERP) using the standard built-in IDoc monitoring tools (such as transactions WE02, WE05 or BD87), examines the IDoc's status records, verifies that its last status is '03' ("Data passed to port OK") and reports that the IDoc was successfully sent by the system. In such a situation it can also be the case that the receiver system has not received any corresponding inbound IDoc. As a result, it is commonly stated that the IDoc is lost or, if SAP PI/PO is used as the middleware system in the scenario, it is claimed that processing in PI/PO failed. But the described situation does not imply loss of the transmitted IDoc and doesn't necessarily mean something is broken in PI/PO - it can mean the IDoc hasn't actually left the sender system and has never been sent out by it. Below in this blog, I would like to provide a few notes regarding the meaning of IDoc status '03' and show how the depicted hypothetical case can be analyzed and further root cause analysis performed.

 

 

Status '03': looking under the hood

As a demo for this blog, I'm going to use an outbound IDoc scenario where the IDoc is sent out via tRFC - probably one of the most commonly faced communication techniques in SAP outbound IDoc integration. On a high level, processing in a sender system involves three layers: application, ALE and communication:

Flow diagram.png

In the most basic scenario, the last step of outbound IDoc processing in ALE layer is dispatching of the IDoc to a port - if this step is successful, status '03' is assigned to the IDoc. After the IDoc is sent to a port, by default, ALE layer does not have visibility regarding further processing steps occurring in communication layer, and their outcome.

 

Now let us assume a delay or error occurred during processing in communication layer - this can happen due to various reasons: sender system overload and lack of CPU or RFC resources causing delay in processing outbound RFC call, failure to establish RFC connection to a receiver system due to misconfiguration, network issues or receiver system unavailability, failure on receiver side during acceptance and handling of inbound RFC call. By default, RFC errors are not propagated back to ALE layer and IDoc remains persisted with status '03'. Status '03' is status indicating success and is depicted using green semaphore colour in monitoring tools, since precisely speaking, ALE layer succeeded with its mission and accomplished all required tasks, finally dispatching the communication IDoc to a selected port.

 

In order to illustrate this situation, let us send two outbound IDocs - the first one is processed and delivered to a receiver system successfully, whereas before sending the second IDoc, I "broke" the scenario so that IDoc fails when being delivered to a receiver system.

 

Checking IDoc monitor (here, I used transaction WE02), we can see both IDocs are assigned status '03':

IDoc monitor - default.png

But checking the tRFC outbound calls (transaction SM58), we can see one error that occurred during processing of the second IDoc:

RFC error.png

To ensure that this is really the call corresponding to the second IDoc, we can explore the content of the RFC call (which will contain the IDoc content) or, alternatively, we can quickly verify the transaction ID: the IDoc status record entry for status '03' indicates the created RFC transaction ID (tab "Sts details", field "1st parameter" in transaction WE02), which matches the one observed in the tRFC outbound monitor above (field "Transaction ID" in transaction SM58):

IDoc status record 03.png

As it can be seen, just using IDoc monitoring tools, it is not obvious if the IDoc reached a receiver system or if it got stuck in communication layer of a sender system.

 

 

IDoc ALE / RFC status reconciliation

SAP systems are equipped with a standard tool that can be used to improve transparency of status of corresponding RFC transaction correlated with the dispatched outbound IDoc, from ALE layer. This can be accessed using transaction BD75 or program RBDMOIND. There are materials describing this functionality (for example, refer to Setting Dispatch Status to Dispatch OK - IDoc Interface/ALE - SAP Library and SAP Note 1157385). Even though functionality is available in SAP systems, it is not commonly familiar and used by support teams when troubleshooting errors in IDoc scenarios.

 

Using the mentioned transaction, it is possible to verify if RFC transactions containing outbound IDocs, were processed successfully, or if they are pending processing, in process of execution or in error status in RFC layer. Transaction is not intended to be used for non-RFC port types.

 

General logic of the transaction is as following:

  • Retrieve IDocs with status record '03' and non-empty transaction ID, which were last updated on or after date specified on a selection screen of the transaction;
  • For selected IDocs, identify corresponding transaction IDs;
  • Retrieve tRFC outbound entries for identified transaction IDs;
  • Check if RFC entry is found for the IDoc and proceed accordingly:
    • If RFC entry is not found for the IDoc (meaning that IDoc was processed successfully in RFC layer and was delivered to a receiver system), add IDoc status entry for status ‘12’ ("Dispatch OK") to the IDoc,
    • If RFC entry is found for the IDoc (meaning IDoc transmission failed, or is still being processed, or waiting in RFC layer and was not yet delivered to a receiver system), no IDoc status record update occurs and the IDoc remains with status '03'.

 

Here is a view of the IDoc monitor containing the same two IDocs after execution of transaction BD75 - as seen, the first IDoc (which was successfully delivered to the receiver system) got status '12', whereas the second IDoc (which failed due to communication errors) remains in status '03':

IDoc monitor - after BD75.png

Details of status record for status '12' of the first IDoc provide additional information, where it can be seen that the status was added by already mentioned program RBDMOIND (called by earlier executed transaction BD75):

IDoc status record 12.png

 

Optionally, it is possible to enable display of not sent IDoc numbers by using corresponding check box on a selection screen of the transaction. If done so, list of IDoc numbers that were not yet delivered to a receiver system, will be generated in an output.


It shall be noted that the described functionality is only relevant for ports of RFC type. Thus, if a port of another type is used and tRFC layer is not involved, transaction BD75 will not yield to an illustrated outcome and IDocs will remain in status '03'.

 

 

Performance considerations

You may be wondering why the described reconciliation mechanism is not enabled out of the box. Reason for this is possible performance impact on a sender system: as mentioned earlier, when being executed, transaction BD75 accesses RFC and IDoc tables. In case of inefficient or time consuming access to these tables (for example, if there are many obsolete RFC entries, or proper cleanup / housekeeping of RFC and IDoc layers is missing, or in case of global database performance issues), performance of this transaction may be degraded. Execution of this transaction at a time of high load on RFC and/or IDoc layer may also cause potential performance issues.

 

Periodic execution of program RBDMOIND (in case IDoc ALE / RFC status reconciliation is to be conducted on a regular basis) shall also be performed with caution. Intention of making reconciliation near real time and corresponding too frequent execution of program RBDMOIND (e.g. triggered by scheduled background job) may only increase risk of negative performance impact highlighted above for individual manual execution of transaction BD75.



Lessons learned

As illustrated in this blog, the green semaphore colour of the IDoc status in the IDoc monitoring tools shall be interpreted correctly - failure to do so may cause misleading steps in further root cause analysis and result in delays in identifying the actual issue and resolving it.


If an outbound IDoc has status '03' and is due to transmission over RFC port, before checking target system, it is recommended to start root cause analysis from looking into RFC layer of a sender system: in case of tRFC, transaction SM58 shall be checked, in case of qRFC (IDoc queue processing) - transaction SMQ1. For systematic mass verification, the described transaction BD75 / program RBDMOIND can be used to facilitate and complement this process.


If an outbound IDoc has status '03' and is due to transmission over RFC port, if there is no corresponding RFC entry for it in RFC layer (assuming there was no manual interference and RFC entry was not deleted manually), but execution of transaction BD75 doesn't result in the IDoc status update to '12', there can be purely internal technical error in IDoc update process (for example, IDoc object couldn't be locked).

 

If an outbound IDoc has status '03' and used port is not of RFC type, transaction BD75 is not helpful in status reconciliation. In such scenarios, it is worth checking monitoring tools and logs of corresponding used communication layer.

Using existing Camel component for Gmail adapter in HCI

$
0
0

In a previous post I explored how to create a custom HCI adapter.  That is good if you have some very specific needs but most often you can just reuse an existing Camel component.  There are many Camel components already (just check which ones are compatible with your HCI tenant version).

 

 

The aim is to retrieve a Gmail profile but really you can also get messages, labels etc if you want.  I've decided to use the GoogleMail component.  The process is relatively easy:

  • Create OSGi bundle for dependent libraries
  • Create the HCI adapter and modify the metadata.xml
  • Create a test integration project to see how it works.


Create OSGi bundle

If the Camel component has dependencies on other 3rd party libraries then they need to be deployed with the adapter.  The way to do it is to create an OSGi bundle that contains all the dependent libraries.  Later this OSGi bundle will be imported into the adapter project.

 

Check dependencies

The easiest way to see the dependencies is to open the pom.xml in Eclipse and click the Dependency Hierarchy tab.  Below are the dependencies the camel-google-mail jar has:

 

g001.png

 

Create a new project

Follow the HCI documentation for this part (I'll just run through it here with a few screenshots).

 

Instructions and corresponding screenshots:
  • In Eclipse, go to File > New > Project.
  • In the New Project wizard, search for Plug-in Development and select Plug-in from Existing JAR Archives.
  • Choose Next.
g002.png
  • If you have the JAR in the current workspace, then choose Add and select the JAR file from the JAR selection dialog.
  • If you do not have the JAR in your current workspace, then choose Add External and select the required JAR file from your local system.
  • Choose Next.
g003.png
  • In the Plug-in Project Properties dialog, specify a name in the Project name field.
  • In the Execution Environment drop-down list box, select any Java version that is less than or equal to JavaSE-1.7.
  • In the Target Platform, select an OSGi framework option.
  • Choose Finish.
g004.png
  • Select the MANIFEST.MF file of your project, and go to Runtime tab.
  • Choose Add.
  • Select the required export packages.
  • Choose OK and save the MANIFEST.MF file.
g005.png

 

  • Right Click on the project and choose Export.
  • In the Export wizard search for plug-in Development and choose Deployable plug-ins and fragments.
  • Choose Next.
g006.png
  • Choose Browse and select the directory where you want the plug-in to be generated.
  • Choose Finish.
g007.png

 

Create the HCI adapter

The next step is to create a new HCI adapter project.  Open Eclipse and select File > New > Other...  Expand the SAP HANA Cloud Integration section and select Adapter Project.

 

g008.png

click Next

g009.png

click Finish

 

Now we need to do a bit of copy/paste.

  • Copy the google-mail-library.x.jar that was exported in the previous step and paste it in the gmail-adapter\libs folder.
  • Locate the camel-google-mail-x.jar (central repo or in your local maven repo) and paste it to the gmail-adapter\component folder.

 

Lastly right click the gmail-adapter project and select Generate Component Metadata.  The resulting project should look like this:

 

g010.png

 

Modify metadata.xml

Modify the metadata.xml file by adding the <Variant> element.  Make sure your metadata.xml looks like below:

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ComponentMetadata ComponentId="ctype::Adapter/cname::apache:google-mail/version::1.0.0" ComponentName="apache:google-mail" UIElementType="Adapter" IsExtension="false" IsFinal="true" IsPreserves="true" IsDefaultGenerator="true" MetadataVersion="2.0" xmlns:gen="http://www.sap.hci.adk.com/gen">
  <Variant VariantName="Get Profile" gen:RuntimeComponentBaseUri="google-mail" VariantId="ctype::AdapterVariant/cname::apache:google-mail/tp::https/mp::none/direction::Receiver" IsRequestResponse="true" MetadataVersion="2.0" AttachmentBehavior="Preserve">
    <InputContent Cardinality="1" Scope="outsidepool" MessageCardinality="1" isStreaming="false">
      <Content>
        <ContentType>Any</ContentType>
        <Schema xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></Schema>
      </Content>
    </InputContent>
    <OutputContent Cardinality="1" Scope="outsidepool" MessageCardinality="1" isStreaming="false">
      <Content>
        <ContentType>Any</ContentType>
        <Schema xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></Schema>
      </Content>
    </OutputContent>
    <Tab id="connection">
      <GuiLabels guid="8c1e4365-b434-486a-8ec8-2a8fd370f77f">
        <Label language="EN">Connection</Label>
        <Label language="DE">Connection</Label>
      </GuiLabels>
      <AttributeGroup id="defaultUriParameter">
        <Name xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">URI Setting</Name>
        <GuiLabels guid="d1004d88-8e54-4f46-a5e7-e6e4d0dca053">
          <Label language="EN">URI Setting</Label>
          <Label language="DE">URI Setting</Label>
        </GuiLabels>
        <AttributeReference>
          <ReferenceName>firstUriPart</ReferenceName>
          <description>Configure First URI Part</description>
        </AttributeReference>
        <AttributeReference>
          <ReferenceName>userId</ReferenceName>
          <description>Gmail User Id</description>
        </AttributeReference>
      </AttributeGroup>
      <AttributeGroup id="googleMailEndpoint">
        <Name xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">Google Mail Endpoint</Name>
        <GuiLabels guid="7a914dcf-ab5e-4dfc-a859-d4709bd7402d">
          <Label language="EN">Google Mail Endpoint</Label>
          <Label language="DE">Google Mail Endpoint</Label>
        </GuiLabels>
        <AttributeReference>
          <ReferenceName>clientId</ReferenceName>
          <description>Client Id</description>
        </AttributeReference>
        <AttributeReference>
          <ReferenceName>clientSecret</ReferenceName>
          <description>Client Secret</description>
        </AttributeReference>
        <AttributeReference>
          <ReferenceName>accessToken</ReferenceName>
          <description>Access Token</description>
        </AttributeReference>
        <AttributeReference>
          <ReferenceName>refreshToken</ReferenceName>
          <description>Refresh Token</description>
        </AttributeReference>
      </AttributeGroup>
    </Tab>
  </Variant>
  <AttributeMetadata>
    <Name>firstUriPart</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="d78fa735-e35e-49f1-95cb-a82b075615be">
      <Label language="EN">First URI Part</Label>
      <Label language="DE">First URI Part</Label>
    </GuiLabels>
  </AttributeMetadata>
  <AttributeMetadata>
    <Name>userId</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="71bcdd72-6b26-4423-b684-76d1ef626ca1">
      <Label language="EN">Gmail User Id</Label>
      <Label language="DE">Gmail User Id</Label>
    </GuiLabels>
  </AttributeMetadata>
  <AttributeMetadata>
    <Name>clientId</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <AttributeBehavior>SecureAlias</AttributeBehavior>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="71bcdd72-6b26-4423-b684-76d1ef626ca1">
      <Label language="EN">Client Id</Label>
      <Label language="DE">Client Id</Label>
    </GuiLabels>
  </AttributeMetadata>
  <AttributeMetadata>
    <Name>clientSecret</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <AttributeBehavior>SecureAlias</AttributeBehavior>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="dbf6df69-7666-46e5-9aaf-074d2fc0a080">
      <Label language="EN">Client Secret</Label>
      <Label language="DE">Client Secret</Label>
    </GuiLabels>
  </AttributeMetadata>
  <AttributeMetadata>
    <Name>accessToken</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <AttributeBehavior>SecureAlias</AttributeBehavior>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="b8748d93-076c-46cb-b694-c84f6cd3095c">
      <Label language="EN">Access Token</Label>
      <Label language="DE">Access Token</Label>
    </GuiLabels>
  </AttributeMetadata>
  <AttributeMetadata>
    <Name>applicationName</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="c17aa8da-765e-417c-987f-b06f7aabd4bf">
      <Label language="EN">Application Name</Label>
      <Label language="DE">Application Name</Label>
    </GuiLabels>
  </AttributeMetadata>
  <AttributeMetadata>
    <Name>refreshToken</Name>
    <Usage>false</Usage>
    <DataType>xsd:string</DataType>
    <Default></Default>
    <Length></Length>
    <AttributeBehavior>SecureAlias</AttributeBehavior>
    <IsParameterized>true</IsParameterized>
    <GuiLabels guid="5373e7c4-cd24-40b8-957b-bc37f32376f7">
      <Label language="EN">Refresh Token</Label>
      <Label language="DE">Refresh Token</Label>
    </GuiLabels>
  </AttributeMetadata>
</ComponentMetadata>

Again, the parameters are secured.  To ensure everything is in order, right-click the project and select Execute Checks.  You should see the following in the console:

 

[9/05/16 3:49 PM] Checks executed successfully

And finally do a local build.  Right-click the project and select Build Adapter Project.  You should see the following in the console:

 

[9/05/16 3:51 PM] Build completed

Create a test integration project

Create a new integration project called google-mail-integration.  The outcome should be as follows:


g011.png

 

Select the Channels tab and click on Browse.  If all went well you will see the google-mail adapter as an option in the dialog.  Select the google-mail option.

 

g012.png

Select the Adapter Specific tab and enter details as required.  As you can see, I am using secured parameters.

g013.png

 

You can also use some other endpoint prefix; see Apache Camel: GoogleMail.
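
For orientation, this is roughly what the channel boils down to as a plain Camel route; the endpoint prefix users/getProfile, the option names and the property placeholders are assumptions based on the camel-google-mail URI scheme, so verify them against the component version on your tenant:

import org.apache.camel.builder.RouteBuilder;

public class GoogleMailProfileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:poll?period=60000")                      // trigger the call periodically
            .to("google-mail://users/getProfile"
                + "?userId=me"
                + "&clientId={{gmail.clientId}}"
                + "&clientSecret={{gmail.clientSecret}}"
                + "&accessToken={{gmail.accessToken}}"
                + "&refreshToken={{gmail.refreshToken}}")
            .log("Gmail profile: ${body}");                  // the profile is returned in the message body
    }
}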

 

 

Deploy the HCI adapter

Deploy to your HCI tenant and check it deployed successfully.

 

g014.png

 


Missing dependencies will throw an exception similar to this:

org.osgi.service.subsystem.SubsystemException: org.osgi.service.resolver.ResolutionException: Unable to resolve /tmp/inputStreamExtract2522187002793448101.zip/camel-google-mail-2.16.2.jar: missing requirement org.apache.aries.subsystem.core.archive.ImportPackageRequirement: namespace=osgi.wiring.package, attributes={}, directives={filter=(&(osgi.wiring.package=com.google.api.client.http.javanet)(version>=0.0.0))}, resource=/tmp/inputStreamExtract2522187002793448101.zip/camel-google-mail-2.16.2.jar

 

Go back and ensure the missing lib is in the OSGi bundle.


Deploy the integration project

Deploy the google-mail-integration project to your tenant and monitor the messages.  If everything worked you will see something like this

 

g015.png

Conclusion

So it is much easier to use an existing Camel component than to create your own from scratch.  And although this example does not do anything significant, it shows how to create an adapter using existing Camel components.
