
NWA Admin Doc for PI Developers


SAP PI NetWeaver Administrator tasks for PI developers.

 

 

Start/Stop Adapter Services

  1. http://hostname:port/nwa
  2. Go to Operation Management -> Systems -> Start & Stop -> Java EE Services -> XPI Adapter: *
  3. At the bottom, there is a push button to start or stop the service.


Adapter service Properties

  1. http://hostname:port/nwa
  2. Go to Configuration Management -> Infrastructure -> JAVA system properties -> Services
  3. Select the adapter service XPI Adapter: *
  4. Choose "Extended Details" at the bottom to display the properties of the adapter

 

Certificate Key store

  1. http://hostname:port/nwa
  2. Go to Configuration Management  -> Certificates and Keys
  3. Select the keystore view and choose Import Entry.
  4. The Entry Import dialog appears.
  5. In Select Entry Type, choose the entry type and browse to the location of the exported entry. Here you have three choices, depending on the type of entry you want to import:
    • X.509
    • PKCS#12 Key Pair
    • PKCS#8 Key Pair

  6. Choose Import.

 

Adapter JAVA consumer Thread

1.      http://hostname:port/nwa

2.      Go to Configuration Management -> Infrastructure -> JAVA system properties -> Services

3.      Select the service “XPI Service: AF Core”

4.      Choose “messaging.connectionDefinition” Property

The "name=global" entry defines global template settings. These defaults can be overwritten individually for each adapter type by adding additional configuration entries, according to the following syntax:

(name=<AdapterTypeIdentifier>, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=<a>, pollAttempts=<b>, Send.maxConsumers=<c>, Recv.maxConsumers=<d>, Call.maxConsumers=<e>, Rqst.maxConsumers=<f>),


The example below adds a new property set for the RFC Adapter. It is appended after the global AFW entry:

(name=global, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=5, Recv.maxConsumers=5, Call.maxConsumers=5, Rqst.maxConsumers=5), (name=RFC_http://sap.com/xi/XI/System, messageListener=localejbs/AFWListener, exceptionListener=localejbs/AFWListener, pollInterval=60000, pollAttempts=60, Send.maxConsumers=8, Recv.maxConsumers=8, Call.maxConsumers=12, Rqst.maxConsumers=12)


 

JMS/JDBC Connection Properties

1.      http://hostname:port/nwa

2.     Go to Configuration Management  -> Infrastructure -> JAVA system properties -> Services

3.      Select the service “XPI Service: Messaging System”

4.      Choose the following properties to verify the JDBC connection:

 

    • messaging.connections
    • messaging.connectionParams
    • messaging.jdbc

 

5.      Choose the following property to verify the JMS connection:

  • messaging.jms.providers

 

JAVA Adapter Queue Parallelism

1.      http://hostname:port/nwa

2.      Go to Configuration Management  -> Infrastructure -> JAVA system properties -> Services

3.      Select the service “XPI Service: Messaging System”

4.      Add or modify the property "queueParallelism.maxReceivers" based on the maximum Java consumer thread settings.

 

RFC Destinations

 

1.      http://hostname:port/nwa

2.      Go to Configuration Management -> Security Management -> Destinations

The available destinations appear in the Destinations List. If you select a destination, its details are shown in the lower pane.

3.      To create a new destination, choose Create.

The General Data screen appears.

4.      Enter the following information in the corresponding fields:

Hosting system: <system where the destination is located>

Destination Name: <Name>

Destination Type: RFC

5.      Choose Next.

The Connection and Transport Security Settings screen appears.

6.      Enter the parameters for the connection to the ABAP server (hostname and system number or system ID and logon group if load balancing is used). If the destination is a registered RFC server program, enter the corresponding gateway's hostname and service.

7.      If you use SNC to secure the connection, then enter the SNC parameters in the SNC section (active/inactive, quality of protection, and the target server's SNC name).

8.      Choose Next.

The Logon Data screen appears.

9.      Enter the authentication information to use. You can use either a predefined technical user or the current user for the connection. If you use a technical user, enter the user's data in the corresponding fields. If you use the current user, then specify whether a logon ticket or an assertion ticket should be used for authentication.

10.  If you need to access the ABAP repository, then enter a destination that contains the corresponding connection information in the Repository Connection section.

11.  Choose Next.

The Specific Settings screen appears.

12.  If the destination uses a pooled connection, select Pooled Connection Mode in the Pool Settings and enter the pool connection parameters accordingly.

13.  If a SAProuter is used for the connection, then enter the SAProuter connection information in the Advanced Settings.

14.  Save the data.

 

Log Viewer

The Log Viewer allows you to view all log and trace messages that are generated in the whole SAP NetWeaver system landscape. These log records help you monitor and diagnose problems.

To access the tool,

1.      http://hostname:port/nwa

2.      Go to Troubleshooting  -> Logs and Traces  ->  Log Viewer


JCO RFC Provider

The JCo RFC Provider Service processes ABAP to Java requests, and dispatches the calls to Java applications. So, seen from an ABAP system, it provides an RFC destination. Technically, the service is based on the JCo (SAP Java Connector). In order to receive calls from ABAP, JCo Servers are started and registered at the gateways of the ABAP systems. The configuration of these JCo servers is done here.

To access the tool,

1.      http://hostname:port/nwa

2.      Configuration Management -> Jco RFC Provider


Message Prioritization on the ABAP Stack

Log on to your Integration Server, call transaction SMQ2, and execute. If you are running ABAP proxies on a separate client on the same system, enter ‘*’ for the client. Transaction SMQ2 provides snapshots only, and must therefore be refreshed several times to get viable information.

It’s highly recommended to use message prioritization on the ABAP stack to prevent delays for critical interfaces.


Heap Dumps Analysis

1.      http://hostname:port/nwa

2.      Go to Troubleshooting  -> Advanced Troubleshooting  -> Heap Dump Analysis

 

Generating Heap Dumps

1.      Choose Generate Heap Dump. A dialog window appears.

2.      Select a server process (node) and choose OK.

3.      The new heap dump is displayed in the table.


     Archiving Heap Dumps

      • Select the relevant heap dump.
      • Choose Archive.
      • A confirmation dialog window appears. Choose OK.
      • In the Archive Size column, a progress bar is displayed showing the archiving progress in percent (%).

     Downloading Heap Dumps

      • Select the relevant heap dump.
      • Choose Download.
      • Save the downloaded content in your local file system


Removing Heap Dumps and Heap Dump Archives

1.      Select the relevant heap dump.

2.      Choose the Remove or Remove Archive button, accordingly.

3.      A confirmation dialog window appears

House-Keeping

1.      If your file system runs out of space because of too many heap dumps, a Delete column appears with a red indicator.

2.      To delete old and obsolete heap dumps, select them and choose Remove.

 

Analyzing Thread Dumps

  1. http://hostname:port/nwa
  2. Go to Troubleshooting  -> Advanced Troubleshooting  -> Thread Dump Analysis

Triggering Thread Dumps

     For a Long Running Thread

1.      In the Availability and Performance work center, choose System Overview.

2.      Go to Threads and see if a long running thread is detected.

3.      If yes, from the Long running context menu, choose Trigger Thread Dump.

4.      The Thread Dump Analysis tool is opened.

5.      Choose Generate Thread Dump.

6.      From the dialog window, select Only Server Processes with Long Running Threads and choose OK.

7.      The archive file appears in the table. It has the Contains Red Threads column selected.

On a Particular Server Process

1.      Choose Generate Thread Dump.

2.      Choose Custom, select a server process and then choose OK.

3.      The archive file appears in the table.

On All Server Processes

1.      Choose Generate Thread Dump.

2.      Choose All Server Processes and then choose OK.

3.      The operation may take several minutes depending on the number of server processes.

The archive file appears in the table.


Analyzing Thread Dumps

1.      To see and resolve the problem, select the relevant thread dump and choose Download.

2.      Save the ZIP file to your local file system.

3.      Use the Eclipse Memory Analyzer tool to open the ZIP file and to analyze the dump.

4.      As a solution, you can decide whether to stop the application or to write a customer message.


Removing Thread Dumps

1.      Open the Thread Dump Analysis function.

2.      From the table, select the thread dump you want to delete.

3.      Choose Remove.


HCI: Developing custom OAuth 2.0 authentication in iFlows


Introduction

OAuth 2.0 is becoming a common authentication method for accessing REST-based services (e.g. Concur, Google, SFDC). Unfortunately, different organizations might have different implementations of OAuth 2.0, and thus there is a lack of a standard approach to it. According to the blog Authenticating from HANA Cloud Integration, HCI currently only supports the Client Credentials grant type for OAuth 2.0.

 

Unlike PI, HCI fortunately has flexible pipeline processing based on Apache Camel, which allows for easy inclusion of intermediate request-reply calls before the message reaches the final target.

 

In this blog, I will share how we can take advantage of HCI's flexible iFlow model to design a solution using a custom OAuth 2.0 authentication that is not yet natively supported by HCI. For the example, I will implement Concur's OAuth 2.0 Native Authorization Flow to retrieve an access token prior to accessing Concur's REST API. This is analogous to Option 3 of the following article detailing the support for Concur's OAuth 2.0 in PI's REST adapter.

PI REST Adapter - Connect to Concur

 

 

Component Details

As HCI is a cloud solution with automatic rolling updates, these steps are valid for the following versions and may change in future updates.

Below are component versions of the tenant and Eclipse plugins.

HCI Tenant Version: 2.8.5

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

iFlow Design

Below is the design of the iFlow for this example. For simplicity's sake, it uses a start timer to trigger the flow once it is deployed. The target recipient is an HTTP server that logs the message posted to it.

oauth_iflow.png

 

Following is an overview of the various sections involved in the flow, which will be elaborated further.

  • Step 1 - Retrieve the user credentials and execute call to Concur token endpoint to retrieve the token
  • Step 2 - Extract the access token value from response of previous call and formulate the HTTP header for authentication using OAuth
  • Step 3 - Execute call to Concur REST API
  • Step 4 - Send REST API response to HTTP target

 

 

Design & Configuration

In this section, I will elaborate further on the design and configuration of each step.

 

Step 1

In order to retrieve an OAuth 2.0 access token from Concur, we first need to perform an HTTP GET call to its token endpoint. This token endpoint expects basic authentication, which consists of the Base64-encoded value of the credential (userID:password) in the HTTP header. Additionally, the consumer key also needs to be passed in the HTTP header.

 

In order to achieve this, we first make use of the User Credentials artifact to securely store and deploy the user ID and password. For more details on that, refer to the following blog on how to configure and deploy the artifact. For this example, I have deployed the credentials under the artifact name ConcurLogin.

Building your first iFlow - Part 4: Configuring your credentials

 

Within the iFlow, this credential will be retrieved using a Groovy script by utilizing the SecureStoreService API.

script1.png

 

Below is the logic of the Groovy script. It retrieves the user credential from the ConcurLogin artifact, encodes the login credentials in Base64, and finally sets the two required HTTP header fields (note that I have deliberately modified the actual value of the consumer key).

 

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import javax.xml.bind.DatatypeConverter;
import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;

def Message processData(Message message) {
    def service = ITApiFactory.getApi(SecureStoreService.class, null);
    def credential = service.getUserCredential("ConcurLogin");
    if (credential == null) {
        throw new IllegalStateException("No credential found for alias 'ConcurLogin'");
    }
    String user = credential.getUsername();
    String password = new String(credential.getPassword());
    def credentials = user + ":" + password;
    def byteContent = credentials.getBytes("UTF-8");
    // Construct the login authorization in Base64
    def auth = DatatypeConverter.printBase64Binary(byteContent);
    message.setHeader("Authorization", "Basic " + auth);
    message.setHeader("X-ConsumerKey", "xxxx");
    return message;
}

 

Subsequently, the message is processed by a Request-Reply step that calls the token endpoint using the following channel configuration.

token_channel.png

 

Step 2

After the call to the token endpoint, the token is provided in an XML response as shown below.

token.png

 

We will then use a Content Modifier step to extract the token value via XPath and store it in a property.

extract.png
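
If preferred, the same extraction could also be done with a short Groovy script instead of a Content Modifier. Below is a minimal sketch, assuming the token endpoint returns an XML body with a Token element directly under the root (the element name is illustrative and must be adjusted to the actual response structure); it stores the value in the same "token" property used in the next step.

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    // Read the XML reply returned by the token endpoint
    def body = message.getBody(java.lang.String) as String;
    def xml = new XmlSlurper().parseText(body);
    // "Token" is an assumed element name - adjust it to the actual response
    def token = xml.Token.text();
    // Store the token as an exchange property, just like the Content Modifier step does
    message.setProperty("token", token);
    return message;
}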

 

Access to Concur's REST API requires the OAuth token to be specified in the HTTP header as shown below:

Authorization: OAuth <token_value>

Once we have the token value, it will be used in the following Groovy script logic to set the HTTP header.

 

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    // Get the OAuth token value from the properties
    def map = message.getProperties();
    def token = map.get("token");
    // Remove the consumer key from the previous header
    map = message.getHeaders();
    map.remove("X-ConsumerKey");
    // Set the OAuth authorization credentials
    message.setHeader("Authorization", "OAuth " + token);
    return message;
}

 

Step 3

Once the OAuth token has been set in the HTTP header, the subsequent call to Concur's REST API is configured as an HTTP receiver channel. In the example below, I configure the channel to use Concur's Extract API to retrieve an extract file.

restchannel.png

 

Step 4

Finally, the response of the REST API call is sent to a target HTTP logging system. For the example, I have used the tool described in the following blog.

Testing: Test Tools...Part 1 *HTTP *

 

 

Testing Results

Once the iFlow has been completed and deployed, the interface will be triggered and the outcome can be seen on the POST Test Server log. As shown below, we were able to retrieve the extract file using Concur's Extract API utilizing a custom OAuth 2.0 authentication.

 

output.png

 

Conclusion

As shown above, HCI's flexible pipeline allows us to build quite complex iFlows. Together with custom Groovy scripts, it offers a lot of possibilities for designing customized solutions. In this example, I have shown how it can be used to build a custom OAuth 2.0 authentication that is not yet natively supported by HCI. These days, more and more services are using two-step approaches involving tokens and session IDs, and having such a flexible pipeline comes in very handy to tackle those integration requirements.

SAP Teched and the Integration future


I want to share the insight I have gained into the ever-changing world of SAP integration while participating at SAP TechEd 2015. SAP integration has changed a lot over the last couple of years, and it is constantly evolving. As someone who has been working with SAP for over a decade, I was able to see trends that will definitely change the way we do integration for our clients.

I want to give you not just a recap of what I’ve seen and heard at TechEd, but also more information on how to deal with these new trends, and how to implement the changes.

In this video, I tried to summarize the most important topics of TechEd 2015. New technologies, and thus new challenges, are on their way. Integration specialists need to stay updated and focused in order to implement the SAP integration strategies that are best for their companies.

Trends

Without further ado, here are the things that I have considered of greatest value at this year’s conference — from the perspective of an integration consultant. There are 3 important changes that will transform the way we do integration:

Cloud strategy

A lot more integration will be going on inside the cloud. If the business decides they want some specific integration, we will build cloud applications. Companies will still have their on-premises SAP systems, so there won’t be any need for changing those systems just yet, but you will need to interact with a hybrid strategy.

Different speeds of innovation

This issue gets interesting especially when we talk about API management, where interacting with our consumers (and the different methods of interaction) becomes the focal point.

Companies want to interact more with their customers. In order to do that, they need new methods of interaction, and new tasks may be needed to achieve this, such as pulling in central data or other relevant work.

Big Data

This is probably a more extensive topic for those who work in data analytics and related fields. From an integration perspective, the Internet of Things is of great importance. We need to integrate, to find out how to put data from multiple devices into our HANA, Hadoop or Spark systems, so that someone else can analyze it — this is our job, as integration consultants. It’s not our job to figure out what should be done with the data, but we need to make sure that the data is available to the right people.

We'll also see higher volumes, frequencies, and numbers of concurrent connections, precisely because of these trends. A lot more interaction with the data is necessary.

On the mobile front, we’ll have more integration with people in our network who might want to do business with us, so this will lead to more users and integrations, because we need to expose the data that we have in our SAP ECC or S/4HANA systems to all the potential customers.

Protocols

These are the main trends I have used to create this round-up. SAP is betting quite a lot on OData. Partners are allowed to create OData services, which can be consumed by mobile apps, browser-based apps, enterprise software, or cloud and social software.

MQTT (Message Queuing Telemetry Transport), an Internet of Things protocol, is a lot like MQ or JMS, but much of the handling functionality has been removed, so it is a much leaner protocol, which is going to make interaction easier for companies. It is best suited for edge devices; smart gateways will communicate with them. One example is Wind River, which collects all of the data before it is posted to SAP.

REST (Representational State Transfer), an architectural style for working with sets of data, is important when integrating with the outside world, because it's easier for mobile apps and third parties to use; it seems to be becoming the standard we are moving towards.

SAP Process Orchestration

Regarding Process Orchestration, we have a few functionalities to discuss. The REST-based adapter has already been delivered, and they have also been developing the OData adapter.

There has been some improvement on the BPMN (Business Process Model and Notation) front as well: you can create tasks and task APIs. You can also generate and use an SAPUI5 interface with the click of a button, which makes it easier to create online apps. The current version does not support Fiori and line items, but it is a nice way to start.

PowerDesigner, an SAP tool that is able to describe the processes that are happening in your organization, should be easier to use. It may be used for developing and documenting processes. Data can be exported in BPMN format, then the needed processes can be enhanced and curated in Developer Studio.

I think Integration Advisor has been a much-discussed topic for a while now. It is relevant because it provides users with an easier way of interacting with multiple suppliers. Usually, the process of onboarding B2B suppliers can be tedious, especially if you lack a predefined format that you wish to use. With this B2B add-on you might get some more instances that are relevant to your development.

Some improvements have been made to the B2B add-on, but since I haven’t been using it, I’m not that familiar with what needed to be improved. The Integration Advisor should be mentioned here, because it may enable you to deploy and develop applications faster, while also allowing you to integrate with B2B integrations more efficiently.

I think we will see more of ETL (Extract-Transform-Load) tools, especially when dealing with edge device integration.

NetWeaver 7.5

It’s a shame that TechEd takes place during the fall instead of the spring. I think there will be a lot more changes. I’m really looking forward to SAP NetWeaver 7.5. Some of its highlights are the following:

One of the most important topics is Java 8. This is really great because there have been enhancements in Java, and it will enable the use of the newest JDBC/JMS adapters. There could be issues with backward compatibility for custom functions and adapters.

With Eclipse Luna 4.4 you don't have to install and run the NetWeaver Developer Studio ZIP file; you can use standard Eclipse and update the plugins. This is the same as you do with HCI at the moment.

The UI5 generator should also be better now: it should be able to support line items, and the generated tasks should be fully Fiori-compliant.

A great feature is that you should be able to run HANA Cloud Integration content locally. You wouldn’t be able to work with HCI flows because they are structured in a slightly different way, but you could take the existing SAP content from HCI, download it, create a zip file or an executable compiler file, upload it to your on-premises PO system, and then use the content locally. Because it is currently somewhat limited, I believe it will be enhanced with other features as well.

HCI adapters and Camel: you would be able to use the Camel adapters on the local system, and that means you would be able to download these adapters and run them locally. Right now, after downloading an adapter, you have to start from scratch. With the Camel add-on, you get a lot of predefined adapters, so you can easily select the ones you want and enhance them if needed.

While Operational Process Intelligence (OPINT) is not an important product just yet, it will probably become more significant in the future. Now it shows the real-time status of the existing data. It will be enhanced with more intelligence; the creators want to enable you to view the real-time status of the process, while also increasing the amount of interaction with the processes in an intelligent way. They also talk about smarter processes — you would be able to interact with HANA Cloud Platform services, which need certain requirements to be met for different scenarios. This could offer more insight into what is currently happening inside the processes, and show whether there are any delays.

You can see the rest of this post at https://picourse.com/sap-teched-from-an-integration-perspective/ which covers IoT, APIs, the keynote, and other relevant information.

HCI: Using EGit for version management of Eclipse-based Integration Projects


Introduction

One of the most frustrating aspects of working with HCI is the buggy development environment of the Eclipse-based Integration Designer plugin.

 

While in general some of the bugs are just minor annoyances (error pop-ups that can be "cured" by a restart), the most disastrous case is when a perfectly working iFlow no longer works after some changes. Even though the processing logic of the iFlow is reverted back to the original, it sometimes will still not work. Sometimes iFlow changes will cause local checks to fail with non-descriptive errors. Sometimes the changes will pass local checks but will fail during deployment to the tenant, again with non-descriptive errors in the tail log. It took a while to understand what was happening, as it occurs in a random fashion.

 

As I was new to HCI, I tended to develop the iFlows in a stage-by-stage manner: include certain functionality, save, deploy and test it out before enhancing it with further functionality. Initially, the errors caused me to think that some of the logic I was trying to implement was not possible or not supported, even though the errors did not specifically mention that. However, I found out that if I developed the same logic all at once in a new iFlow, it would work!

 

This led me to the conclusion that "iFlow files can be corrupted when making changes!"

 

In my experience, corruption tends to happen more often when objects or connectors are deleted in the iFlow. Although iFlows are designed using a graphical layout editor, the content is actually saved as an XML file. Therefore my guess is that when some editing is done via the graphical iFlow editor, the underlying XML is not adjusted correctly, causing corruption of the generated file.

 

Furthermore, HCI does not come with native version management, therefore changes are not easily reversible.

 

After much frustration at getting my iFlows corrupted from time to time and having to rebuild them from scratch, I decided to use Git for version management of my HCI Integration Projects. In this blog, I will share with you how the EGit plugin within Eclipse can be used to configure the Git version management system for local development of HCI iFlows. This is similar to the approach I've blogged about below on using EGit for Java development in PI/NWDS.

Using EGit for Java source code management in NWDS

 

 

Component Details

Below are component versions of the Eclipse plugins that I was using. Hopefully future versions would provide a more stable iFlow editor.

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

Prerequisite

In order to use EGit, the EGit plugin needs to be installed in the Eclipse environment. For those using Luna SR2 (4.4.2), it should already come together with EGit 3.4.2 as shown below.

egit_luna.png

 

 

Initial Configuration

Before using EGit in Eclipse, some initial configuration needs to be performed first. Please follow the same steps as indicated in Initial Configuration section of the following blog.

Using EGit for Java source code management in NWDS

 

 

Manage Integration Project with EGit in Eclipse

Following are the steps to put an HCI Integration Project under EGit's version management.

 

Step 1 - Create Git Repository

Change to Git perspective - Window > Open Perspective > Other ... > Git

perspective.png

 

Click the Create a New Git Repository button.

new.png

 

Specify the location for the new Git repository.

create.png

 

Once the repository is created, it will be listed under the Git Repositories tab.

repo.png

 

 

Step 2 - Import Integration Project into Git Repository

Next we want to import/share the HCI Integration Projects in the newly created Git Repository. Switch back to the Integration Designer perspective.

 

Right click on the project, and select Team > Share Project.

share.png

 

Select Git from the sharing type.

type.png

 

Select the recently created repository from the dropdown list for Repository.

configure.png

 

Step 3 - Commit contents to be tracked

After the project has been shared in the Git repository, we can proceed to commit the contents that are to be tracked.

 

Right click on the project, and select Team > Commit.

 

Provide a message for the commit and select the files that are to be tracked - in general I will select all source codes (iFlow, XSD, WSDL, Scripts) and project configuration files.

commit.png

 

Now that the contents have been committed to the Git repository, I am free to modify the contents of the iFlow (or any of the committed objects) without worrying whether any changes would render the iFlow useless.

 

Each time I've made changes to the iFlow and verified that it's working, the commit step is repeated to save a version of that to the repository.

 

Step 4 - Reverting iFlow to previous version

Let's say some changes have been made to the iFlow and it becomes corrupted, causing errors during local check or deployment. We can revert to a previous version by right-clicking the iFlow object in the Project Explorer and selecting Replace With.

 

We can either select HEAD revision (which is normally the last committed version) or Commit... for some other version from the commit history.

replace.png

 

Conclusion

As shown, by using EGit as the version management system for HCI Integration Projects, we can avoid the potentially disastrous effects of changes rendering an iFlow corrupted and useless. There are also other benefits to using EGit, since there is no native version management system for HCI objects that are developed locally in Eclipse. Besides iFlows, the other objects in an Eclipse HCI Integration Project can be tracked too, like WSDLs, XSDs, Groovy scripts and even Message Mappings, which cannot be easily "version-managed" by other methods.

 

I'd definitely recommend those working on HCI with Eclipse to use EGit if there is no other version management tool in place.

SAP HCI - Security FAQ and Checklist!


In this blog I want to talk about SAP HCI-related security questions that customers frequently ask.


SAP HANA Cloud Integration, or SAP HCI as most call it, enables you to connect your cloud applications quickly and seamlessly to other SAP and non-SAP applications (on-cloud or on-premise).

 

As more and more customers started using SAP HANA Cloud Integration for process integration, lots of questions were asked around security and connections. Setting up a secure connection between a customer system and the integration platform (which is based on SAP HANA Cloud Platform) also requires the cooperation of experts at SAP and on the customer's side.


We as a team (Piyush Gakhar, Patrick Kelleher and myself) have come up with an SAP HCI Security FAQ and checklist.


Hope this helps you as you work on your customer project!

(Note: For terminology you could refer to the SAP HCI Operations Guide)


So here we go...

 

Section-1 FAQ

1)      How do you add new users and authorizations when the customer gets the SAP HCI tenant? Who is authorized to add new users?

When SAP provisions a tenant, admin rights are given to the customer's S-user ID as mentioned in the order form during contract signing. This admin user can go to the HANA Cloud Platform cockpit, add further admins and users, and assign them roles and authorizations. By default, SAP HCI uses the SAP Cloud Identity provider. Hence all users must have valid S-user or P-user IDs, which can be requested/generated from the SAP Service Marketplace or SAP Community Network.

2)      Where are all the roles and authorizations that can be assigned to users documented?

Please look at https://cloudintegration.hana.ondemand.com/PI/help > Operating SAP HCI > User Management for SAP HCI > Managing Users and Roles Assignments > Defining Authorizations

 

3)      How do you contact SAP HCI Cloud Operations support for tenant provisioning and security-related issues or information?

An incident can be raised on component LOD-HCI-PI-OPS.

4)      Are CA-signed certificates mandatory for transport-level authentication, and for which scenarios are CA-signed certificates needed?

Please refer to the table matrix available in Section 2, "Checklist for Transport Level Security", of this document.

5)      Where can I find the list of CAs approved by SAP?

Please look at https://cloudintegration.hana.ondemand.com/PI/help > Operating SAP HCI


6)      What if the customer wants to use a CA that is not present in the list?

The customer needs to create a ticket under component LOD-HCI-PI-OPS and attach the root certificate of the CA that you would like SAP to evaluate. SAP will go through the security guidelines and provide a response. If approved, the CA will be added to the trusted CA list.

7)      While getting certificates signed by a CA, we have multiple systems and want to use the same signed certificate for the different systems. Can we put * in the Common Name field (e.g. *.xxxxx.com) when getting our certificates signed? Is this allowed by SAP?

Technically, SAP supports a wildcard in the CN field for certificate-based client authentication only, but the recommendation is to use the full host name in the CN field for both inbound and outbound scenarios. For HTTPS outbound, as SAP manages the CA-signed key pairs, SAP uses the full host name in the CN field.

8)      Can I use self-signed certificates for HTTPS certificate-based client authentication (also referred to as dual authentication)?

No, self-signed certificates are not supported for transport-level security.

9)      For which scenarios are self-signed certificates supported? Can I use them for message-level encryption and signing?

Yes, you can use self-signed certificates for message-level encryption and signing; however, SAP recommends using CA-signed certificates.


10)      Who maintains and manages the keystore? Can control be given to the end customer?

As of today, the SAP Cloud Operations team manages the keystore for customers. The customer cannot manage the keystore and known-hosts file (the known-hosts file is required for SFTP connectivity). As of today, the only exception is the HCI developer P4EAD edition, where partners can manage the keystore and known-hosts file themselves for test tenants.

11)      What is the procedure for using certificates for message-level encryption and signing?

The customer can use the certificates present in the keystore provided by the SAP Cloud Ops team, as the keystore is managed by SAP. If a customer wants to use its own key pair for some reason, the customer needs to raise a ticket to the SAP Cloud Ops team (LOD-HCI-PI-OPS) and SAP will qualify the request. There are different ways in which a customer can sign and encrypt HCI message content (for example PGP, X.509, etc.), covered in the online documentation here.

 

12)      Does a customer need to make any special requests when connecting to an SFTP/SMTP server?

By default, the following ports are opened: a) SSH - 22, b) SMTP - 25. The customer needs to create a ticket on LOD-HCI-PI-OPS with the hostname of the SFTP/SMTP server to enable access in the firewall.


13)      Does a customer need to request HTTP(S) port openings separately for outbound connectivity?

By default, 443 and all HTTP ports > 1024 are opened. In case of new port requests, or if the customer faces any difficulty, the customer can raise a request at LOD-HCI-PI-OPS.


14)      What is the IP address range of the HCI landscape that a customer needs to configure in their own firewall for inbound connections (IP whitelisting)?

Please refer to the documentation at https://cloudintegration.hana.ondemand.com/PI/help > Virtual System Landscapes


15)      Where can customers find details on SAP data centers and security?

Details are available on the SAP website under SAP Data Centres Information (refer to section 2 for security).


16)      What is the SAP Cloud Connector (or HANA Cloud Connector) and is it mandatory?

The Cloud Connector is a complementary offering to SAP HCI. SAP Cloud Connector needs to be installed on premise and is an integral component of HCP. It acts as a reverse proxy and creates a secure tunnel to the customer's own HCP/HCI account. SAP HCI can route calls via SAP Cloud Connector for HTTP-based protocols (e.g. SOAP, OData, IDoc XML, etc.). SCC is the preferred mode of communication for HCP customers. However, it is not mandatory, as the customer may use other reverse proxy software, e.g. SAP Web Dispatcher.

 

Section-2 Checklist for Transport Level Security

(Note: Click each of the three pictures to see the entire list.)

Inbound.jpg

Outbound.jpg

InboundOutbound.png

Hope this helped!

 

Cheers,

Sunita

Value Mapping Replication for Mass Data Using NWDS in Java Stack SAP PO


Dear All,

 

There are many ways to upload mass data for value mapping, but the easiest is to create the value mapping using NWDS: just browse to the CSV file and the values will be replicated to the cache monitor. Before uploading the file, we should follow the predefined structure for value mapping.

 

Here you'll find a little example of how you can do a value mapping replication for mass data using NWDS. I hope you enjoy it and find it useful.


Just follow the steps below.


1) Prepare the CSV file for your values in the format below, defining the sender and receiver agency and the sender and receiver schema.

CSVFile.JPG

2) Now we need to upload this CSV file using NWDS (I am assuming readers are familiar with NWDS).


Open NWDS --> connect to the Integration Directory --> from the menu select Process Integration --> Value Mapping --> Import Value Mapping; it will then display the screen below.


NWDS.VMR.JPG

Then just browse to the CSV file which you prepared, and it will show:


VMR Group1.JPG

3) Let's check whether these values are updated in the cache.


http://<host>:<port>/rwb --> Cache Monitoring



Cache.JPGinput VMR.JPG

4) Here I am applying value mapping for Country and Name; one more value mapping table for Country is shown below.

name.JPG

5) Here is the output of the file:

output VMR.JPG

 

Can you see? It's easy. Now you can create more examples yourself and build stronger value mappings in your interfaces.

 

 

 

Best Regards

Umesh Reddy

Changing attachment name or description of PI payload


For one of my requirements, I had to rename the attachment payload name and description of a file (zip/jpeg/pdf etc.), which is written in the manifest file. I'm sure there are standard classes such as PayloadSwapBean to make this happen. If you are familiar with creating custom adapter modules, it can be done this way as well.

 

Developers who are not familiar with creating custom adapter modules can follow this blog: How to create SAP PI adapter modules in EJB 3.0

 

 

Requirement,


Changing the name from "attachment-1" to the desired name


1.PNG



Manifest log file contains name and description


2.PNG



Here is a piece of code to apply,



public ModuleData process(ModuleContext mc, ModuleData md) throws ModuleException {
    AuditAccess audit = null;
    try {
        audit = PublicAPIAccessFactory.getPublicAPIAccess().getAuditAccess();
    } catch (MessagingException e2) {
        e2.printStackTrace();
    }
    Message message = (Message) md.getPrincipalData();
    MessageKey key = new MessageKey(message.getMessageId(), message.getMessageDirection());
    // Name of the attachment to look up, taken from the module parameter "payloadname"
    String payloadValue = mc.getContextData("payloadname");
    Payload attachment = message.getAttachment(payloadValue);
    // audit.addAuditLogEntry(key, AuditLogStatus.SUCCESS, "Attachment ->" + attachment.getInputStream());
    try {
        // Rename the attachment and update its description in the manifest
        attachment.setName("VODAFONETR-attachment1");
        attachment.setDescription("VODAFONETR-attachment");
        md.setPrincipalData(message);
    } catch (InvalidParamException e) {
        audit.addAuditLogEntry(key, AuditLogStatus.ERROR, "InvalidParamException ->" + e.toString());
    }
    return md;
}

 

 

Applying the custom adapter module to the SOAP sender adapter channel:

 

 

5.PNG

 

 

Here is the result,

 

 

Manifest log file,

 

4.PNG

 

 

 

3.PNG

 

 

 

 

 

Hope this helps anyone with similar requirements.

Multi Mapping with Dynamic FileName and Dynamic Folder using Variable Substitution


Introduction

As we all know, if we use multi mapping we cannot set a dynamic file name and folder with Dynamic Configuration, because the same header is shared by all the messages. Recently there have been many threads asking about this requirement. If the fields used in variable substitution are part of the target payload, then a dynamic file name or folder is not a problem; it is only when those fields are not part of the target payload that we cannot normally use variable substitution for a dynamic file name and folder. In this blog I will show you how to achieve this using variable substitution.

 

Approach

We will add a separate node under the target structure to hold the fields used in variable substitution, which are not part of the target file. We will use the content conversion parameters below for this node, and the file adapter will then ignore this record at runtime.
CCContentConversion.png

Scenario

The scenario: we get one message with multiple orders and need to generate multiple files on the target side, with each file containing a single order. We need to create dynamic folders based on the plant field in the source message, and a dynamic file name based on the IDoc number.


Design

Below are the sender and receiver data types used in this scenario. For simplicity I have created my own data type for the IDoc. I have added a separate node called 'File' under the target structure; FileName (used for the dynamic file name) and plant (used for the dynamic folder) are fields under this node. These fields will be used in the variable substitution of the receiver file channel's directory and file name.

ORDERS05.pngStockOrder.png

As we are creating multiple files from a single IDoc, we need to change the occurrence of the receiver message to unbounded in the signature part of the mapping.

mapping_signature.png

Below is the mapping between the source message and the target message; all fields are simple field-to-field mappings. We receive multiple IDOC nodes in the source message, and I am creating multiple StockOrder nodes on the target side. DOCNUM (IDoc number) and WERKS are not passed to the target file, but we need these fields to set the dynamic file name and folder, so we map these two fields under the File node in the target structure.

mapping.png

Below is the FileName field mapping: the IDoc number concatenated with the '.txt' extension.

fileNameMapping.png

Configuration

Below is the receiver file communication channel configuration; the target directory and target file name use variables created under the variable substitution section.

CCtarget.png

Below is the content conversion for the receiver file. As we don't need to send the fields under the File node to the target file, we use the parameters below to ignore these fields in the target file.

CCContentConversion.pngOrderHeader.png

OrderItem.png

Below are the 'fname' and 'plant' variables which we used in the target directory and file name (an illustrative example of such entries follows the screenshot).

CCAdvanced.png
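
For reference, such variable entries point into the target payload as a comma-separated list of element names and occurrences. The lines below are only an illustrative sketch, assuming the target message root element is StockOrder (the exact path depends on your message structure):

fname   payload:StockOrder,1,File,1,FileName,1
plant   payload:StockOrder,1,File,1,plant,1

The Target Directory and File Name Scheme then reference these variables as %plant% and %fname% respectively.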

Testing Results

Below is the input payload. There are three IDocs: two with the same plant (3204) and one with plant 3205, so at the end we expect two dynamic folders.

inputPayload.png

Below are the four messages in the message monitor: one from the sender to the messaging system, and three from the messaging system to the receiver (as per our multi mapping, the single message is split into three messages).

messageMoni.png

We can see IDoc number and the plant number are mapped under File node in the target payload.

payload.png

We can see in the audit log the variables plant and IDoc number are replaced at runtime.

auditlog.png

As expected, the two folders are created under the target directory.

floders.png

We can find two files under the 3204 directory (the source message contains two IDocs with the same plant 3204).

3204.png

Below is the content of one of the files. We can see that the IDoc number and plant values are not part of the file; it contains only the header and items of the order.

FileContent.png

And one file under 3205 folder.

3205.png

References

A new approach: Multi-mapping Dynamic Configuration using a generic custom module

Multi-mapping with Dynamic Configuration - SOAP loopback approach

 

Conclusion

With this approach we can still achieve a dynamic file name and folder even when we use multi mapping. I hope this will be helpful.


HCI: Using Eclipse WSDL Editor for SOAP-based integration


Introduction

For SOAP-based integration, SOAP receiver channels are used to consume web services, whilst SOAP sender channels are used to expose HCI as a web service. When the target system is a SOAP web service, we can easily implement a passthrough interface in HCI and reuse the target system's WSDL in the SOAP sender channel of HCI's iFlow. Currently HCI only supports the SOAP adapter as a sender channel to expose synchronous web service interfaces. So if the target system is not SOAP-based (e.g. REST or OData), we will need to manually define the WSDL for the sender side of the iFlow in order to expose the service as a SOAP web service via HCI.

 

For those who are used to PI/PRO development, this is normally achieved by creating a sender Service Interface using Data Type (and Message Type) defined with the built-in Data Type Editor.

 

However, the current HCI development tool in Eclipse does not come with an easy-to-use Data Type Editor. For those who have a working PI/PRO installation, Service Interface definitions can be imported from PI/PRO. However, for those who do not have PI/PRO, the WSDL for the sender will have to be manually created using other XML tools like XMLSpy, Oxygen, etc.

 

Fortunately, Eclipse also comes with its own native WSDL Editor. In this blog, I will share the steps on how the WSDL Editor can be used to generate a WSDL to be used in the sender SOAP channel of an HCI iFlow.

 

 

Component Details

Below are the component versions of the Eclipse plugins, which still do not have a native Data Type Editor. Hopefully SAP will port NWDS's ESR Data Type Editor to a future version of HCI.

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

Creating WSDL for Sender SOAP Channel

For the example, we will create a WSDL for a synchronous interface. This WSDL will be configured in the sender SOAP channel of the iFlow. It will have the following structure.

 

Request

Segment/Field Name        Occurrence
OrderKeys                 1 - unbounded
> orderNo                 1
> orderDate               0 - 1

 

Response

Segment/Field Name        Occurrence
OrderDetails              0 - unbounded
> OrderName               1
> OrderID                 1
> ItemCount               1

 

 

Step 1 - Create a new WSDL file

Right click on the wsdl folder and select New > Other...

new.png

 

Select WSDL File from the wizard, and provide a name for the file.

wizard.png

 

In the options screen, specify additionally the namespace and prefix, and accept the rest of the default values.

options1.png

 

A skeleton WSDL file will be created and it will be opened in the WSDL Editor's Design view as shown below.

skeleton1.png

 

Step 2 - Rename Operation

Operation is similar to a Service Interface's operation in PI/PRO. Highlight the default NewOperation value generated by the wizard and rename it accordingly as shown below. The names for the input and output parameters will be updated automatically.

operation1.png

 

Step 3 - Define request structure

Define the request structure as per the above table.

 

Click the arrow next to the input parameter. An inline schema editor will be opened. By default, the input parameter is just a single string field named in.

def_req.png

 

Change the properties of the input parameter as shown below. For Type, select New, to create an inline Complex Type using any arbitrary name (Key is used in the example).

req_type.png

 

The input parameter will be updated as shown below. Basically it means that the input parameter is of Key complex type which can occur 1 or more times.

new_req.png

 

Next we proceed to define the structure for the Key complex type. In order to do this, click the Show schema index view button at the top left.

schema_view.png

 

An overview of the schema with all elements, types will be displayed. To edit the complex type Key, double click on it.

key.png

 

It will bring us to the definition of Key. Here we can add additional elements to Key by right-clicking and selecting Add Element.

add_elem.png

 

Add the first field orderNo of type xsd:string as shown below.

add_elem2.png

 

Repeat for the second field so that the final definition of Key is as shown below.

key_def.png

 

Step 4 - Define response structure

Repeat the steps above for the response structure per the definition table.

 

For the final outcome, the output parameter is named OrderDetails of complex type OrderDetail and occurs 0 or more times.

resp1.png

 

Similarly, the OrderDetail complex type is defined as follows with three mandatory fields.

detail_def.png

 

With this, we complete the definition of the WSDL file.

 

Step 5 - Import WSDL into PI to verify (Optional)

As an optional step, we can import the WSDL into PI as an external definition to view and verify that the structures have been defined correctly.

pi_view.png

 

 

Using WSDL in iFlow

Once the WSDL has been fully defined, it can be included in the SOAP sender channel of the iFlow.

soap_sender1.png

 

After the iFlow development is completed and deployed, the actual WSDL for the HCI web service can be downloaded from the tenant. Select the IFLMAP node from the Node Explorer. Switch to Services in the Properties tab. Select the corresponding endpoint of the HCI iFlow, right-click and select Download WSDL > Standard.

wsdl.png

 

This final WSDL that is downloaded will be very similar to the WSDL created for the sender channel, except it will contain the endpoint to the service on the HCI tenant.

 

 

Additional Info

The WSDL Editor can also be used to create WSDL files for asynchronous interfaces. For these, just delete the output parameter from the skeleton WSDL created by the wizard.

async.png

 

 

Conclusion

As shown above, we can utilize Eclipse's built-in WSDL Editor to assist us in defining WSDL files for SOAP-based interfaces. It is relatively easy to use, and more importantly free, compared to other license-based XML editors like XMLSpy or Oxygen. We can also work with it within the same development environment as HCI iFlows without needing to launch another external tool.

 

Ideally it would be great if SAP ports the NWDS ESR Data Type Editor to HCI, but in the meantime we can at least rely on Eclipse's editor.

 

 

Reference

Introduction to the WSDL Editor - Eclipsepedia

When is the File Modification Check supported in Sender File adapter channel?


The File Modification Check feature prevents the File Adapter from processing incomplete files. Sometimes you may find that the File Modification Check is not available in your sender file adapter channel. Here I will explain when this feature is supported.

 

For the sender file adapter channel, there are two transport protocols, File System (NFS) and File Transfer Protocol (FTP), and two message protocols, File and File Content Conversion.

 

The File Modification Check feature has been available in NFS mode since XI 3.0 SP11 / PI 7.0, when no file content conversion or file split is used. There is a parameter named 'Msecs to Wait Before Modification Check' in the advanced mode.

 

This setting causes the File Adapter to wait a certain time after reading, but before sending a file to the Adapter Engine. If the file has been modified (which is basically determined by comparing the size of the read data with the current file size of the input file) after the configured interval has elapsed, the adapter aborts the processing of the file and tries to process the file again after the retry interval has elapsed.

 

12Capture.PNG


For lower releases, this feature is not supported in FTP mode. If you enter a value in the 'Msecs to Wait Before Modification Check' field when configuring the sender FTP adapter channel, it will have no effect. However, since 7.31 SP18 / 7.40 SP13, the File Modification Check feature is supported in FTP mode as well. For more details, please check SAP Note 2188990 - File Modification Check in FTP mode for File Adapter.

 

If the option "Msecs to Wait Before Modification Check" is not available for the settings you would like to use (for example, FCC, file split, or FTP mode in lower releases), the following algorithm (to be implemented in your application) may be used to ensure that the File Adapter only processes completely written files (a short sketch of this pattern follows the list):

 

  1. Create the file using an extension, which does not get processed by the File Adapter, e.g., ".tmp"
  2. Write the file content
  3. Rename the file to its final name, so the File Adapter will notice its existence and pick it up
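
A minimal Groovy sketch of this write-then-rename pattern, assuming the sender channel only picks up files matching *.xml (paths and content are purely illustrative):

// Write the data to a temporary name that the sender channel's file name scheme does not match
def tmp = new File("/interface/out/order_4711.xml.tmp")
tmp.text = "<Order><Number>4711</Number></Order>"

// Rename to the final name only after the content is completely written,
// so the File Adapter never picks up a half-written file
tmp.renameTo(new File("/interface/out/order_4711.xml"))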

 

Related Notes:

SAP Note 821267 - FAQ: XI 3.0 / PI 7.0 / PI 7.1 / PI 7.3 File Adapter

SAP Note 2188990 - File Modification Check in FTP mode for File Adapter

SAP Note 1713305 - Msecs to Wait Before Modification Check option missing

 

Related Docs:

Configuring the Sender File Adapter

Configuring the Sender FTP Adapter

Migration of Classical and ICO scenarios to IFlows


Introduction

We all know we can migrate dual-stack classical scenarios to integrated configuration objects (ICOs) using the migration tool. From PI 7.31 SP16 or PI 7.4 SP11 onwards we can also migrate classical scenarios to IFlows using the migration tool, and if you have any integrated configuration objects in your system you can also convert them to IFlows in Eclipse. In this blog I will show you how to do these conversions.

 

Migration of Classic Scenario to IFlow using Migration Tool

I have the below classical scenario in my dual-stack PI system.

classic.png

Open the migration tool in the target PI system (pimon -> Configuration and Administration -> Common -> Migration Tool). Click on Scenario Migration.

migration tool.png

 

Select the source and target systems, enter the user names and passwords, and click Next.

sorce_target.png

Search for the scenario which you want to migrate.

search.png

In the next screen, select the 'Migrate to IFlow' check box.

scenario matcher.png

In the next screen, review the names that will be created in the target system. If you want, you can change the names; then click Next.

iflow.png

Click on the Create button to start the conversion.

create.png

In the next screen you can see the log related to the conversion process.

result.png

Open Eclipse and connect to your PO system, open the Process Integration Designer perspective, and you will find the change list related to the IFlow which we created.

changelist.png

After we activate the change list, the IFlow appears under Integration Flows.

iflow floder.png

Below you can find the IFlow which we created.

iflow diagram.png

 

Migration of ICO scenario to IFlow using Eclipse

If you have any ICOs in your target system (PI single stack or PO system), we can convert these ICOs to IFlows in Eclipse. I have the below ICO in my PO system.

ICO.png

Open the Eclipse Process Integration Designer perspective and connect to the PO system. Go to Process Integration -> Generate Integration Flows -> Integrated Configuration as shown below.

menu.png

The next screen will show all the integrated configurations in the system; select the ICO which you want to convert to an IFlow.

convert.png

In the next screen, change the names (such as the IFlow name) and click Finish.

iflowname.png

In the log you can see whether the conversion was successful or had any errors.

iflowlog.png

Conclusion

Using the migration tool we can convert a classical configuration to an IFlow, and using Eclipse we can convert an ICO to an IFlow. I hope this helps.

Mapping Lookups to a Centralized Key-Value Store System - Integration of SAP PI/PO with Redis


Intro. What is a centralized key-value store system and why to introduce it?

In SAP PI/PO, mapping lookups are commonly used to retrieve values stored in an external backend system during mapping execution. Sometimes a lookup in a backend system involves execution of complex logic before a value can be derived and returned, but in some cases the lookup simply accesses a key-value store or database, where, for a provided key, the corresponding stored value is returned. A list of country codes and their names as defined in the ISO 3166 standard, the mapping of various codes in one system to corresponding codes in another system (e.g. for material groups and categories, customer groups, account types, etc.), or the mapping of an IP address to a host name or of a user account name to a first name / last name are good examples of key-value store applications.
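
To make this concrete, a lookup against such a store is typically a single get call for a key. Below is a minimal Groovy sketch using the open-source Jedis client for Redis; the host, port and key naming convention are purely illustrative and not part of any specific implementation.

import redis.clients.jedis.Jedis

// Connect to the central key-value store (host and port are illustrative)
def jedis = new Jedis("keyvaluestore.example.com", 6379)
try {
    // Resolve a source system's material group code to the target system's code (key name is illustrative)
    def targetCode = jedis.get("materialGroup:SRC:1000")
    println(targetCode ?: "no mapping maintained")
} finally {
    jedis.close()
}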

 

In diverse heterogeneous landscapes, key-value store functionality is implemented and spread across a variety of backend application systems, sometimes leading to duplication of stored information and potentially to its inconsistency across the landscape. A solution to this problem has been found in the configuration of centralized key-value stores, which hold and persist key-value pairs published by different application and technical infrastructure systems (literally, by any system that originates information about a key-value pair) and act as a single provider of these key-value pairs to consumer systems in a generic way. Common requirements for key-value stores are:

  • high availability,
  • scalability,
  • consistent data persistence,
  • high performance and robustness in accessing key-value pairs by provider and consumer systems,
  • secure access to key-value pairs,
  • availability of generic APIs that can be consumed or utilized in technologically heterogeneous landscape,
  • capability of bulk upload of key-value pairs.


In order to fulfil these requirements, key-value store solutions commonly rely heavily on in-memory and NoSQL technologies, providing capabilities for distributed operations and clustering. Libraries developed in different programming languages, implementing key-value pair access and key-value store system management APIs, facilitate usage and smooth integration of such solutions in an enterprise infrastructure. In contrast to other NoSQL based solutions targeted at column stores (like Apache Cassandra) or document stores (like MongoDB), key-value store solutions, as their name implies, are specifically designed for storing key-value pairs - making operations for maintaining and querying key-value pairs extremely robust (compared to implementing the same with a classic relational database or a NoSQL solution of another type).


Key-value store systems bring some extra effort to an IT infrastructure - the most noticeable being

  • necessity of replication of key-value pairs from a source system to a key-value store system as well as keeping them up-to-date and consistent,
  • necessity of maintaining and supporting additional system (which is a key-value store system) in an IT landscape,
  • necessity of ensuring a key-value store system fault tolerance and high availability (since as any other centralized system, it becomes a single service provider for many other enterprise systems).


On the other hand, there are several significant advantages that a key-value store system brings:

  • reduced workload related to non-primary and incidental functionality. For example, provider systems no longer need to handle and process key-value pair lookup requests and can allocate the released resources to their primary tasks and core functions,
  • a single endpoint and service provider for all key-value pair lookup requests. There is no need to look for a specific backend system to query this data from, since it is already accessible from a single storage. This also results in decoupling of the consumer system executing the lookup from the key-value pair source system (if the source system is not available, the consumer system will still be able to execute the lookup and query the required information),
  • diverse integration capabilities. Sometimes implementation of lookups to specific backend systems may be challenging due to a lack of interoperability between consumer and provider systems (e.g. libraries available for the provider system are not suitable for the consumer system, or communication protocols are not supported by both systems). This constraint is effectively resolved by the variety of libraries shipped for key-value store solutions.

 

In this blog, I would like to describe the experience of using one such solution for mapping lookups in SAP PI/PO. Redis has been chosen as the key-value store system. Redis is an open source NoSQL key-value store with a very light resource footprint; it is flexibly scalable and provides really fast concurrent access to consistently stored information, which is achieved by the usage of in-memory storage in combination with periodic replication to disk storage for consistent long-term persistence. Writing "really fast" here, I refer to request processing times of a millisecond or less, and request processing rates of tens of thousands of requests per second. In order to get an idea about the robustness of Redis, I encourage getting familiarized with the summary of benchmark tests done by the Redis team (available at How fast is Redis? – Redis).

 

For this demo, a Redis server has been installed on a local laptop. Even without thorough performance tuning, the following benchmarks could be achieved for SET and GET requests:

Redis benchmark statistics for SET and GET.png

The Redis ecosystem provides a variety of libraries in several programming languages as well as additional tools for administering, monitoring and querying Redis.

 

 

A look at the Redis database content used in the demo

For this demo, a Redis server with one database has been used. Several key-value pairs were uploaded to the Redis database for test purposes. Discussion of Redis server installation and configuration is out of scope of this blog - detailed technical documentation can be found on the Redis official web site.

 

For the uploaded key-value pairs, the following naming convention for keys was used (following the naming convention recommendations provided by the Redis team): <object type>:<object id>:<field>, where the object type represents the location of an object in a hierarchical structure (structural levels being delimited with colons, multi-word structural level / object type names being separated with dots), the object id is used for unique identification of a looked up object, and object fields represent object attributes (multi-word field names being separated with dots). In general, it is good practice to establish clear naming conventions for keys stored in a key-value store system during its design or before its usage, and then always follow them, so that consumer systems can easily derive key names when they need to address queries to the key-value store system. This helps reduce future maintenance efforts and gain more benefits from using a central key-value store system.

 

In the test Redis database, 4000 key-value pairs were mass uploaded: 2000 sample objects, each having 2 attributes / fields (type and text). For example, the value of the text attribute of the sample object with ID 00001 can be retrieved by getting the value of the key "test:vadim:sample.object:00001:text":

Redis database.png
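For a quick programmatic check of the uploaded content, a minimal sketch using the Jedis Java client (introduced in the next section) could look as follows; the class name, host name and port are assumptions for a locally running server and are not taken from the demo setup.

import redis.clients.jedis.Jedis;

public class RedisQuickCheck {
    public static void main(String[] args) {
        // assumption: Redis runs locally on the default port 6379
        Jedis jedis = new Jedis("localhost", 6379);
        try {
            // the key follows the naming convention <object type>:<object id>:<field>
            String text = jedis.get("test:vadim:sample.object:00001:text");
            System.out.println("Text of sample object 00001: " + text);
        } finally {
            jedis.close();
        }
    }
}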

For the sake of the mapping lookup demonstration, the SAP PO system will execute a lookup of the object text by object ID.

 

 

Implementation of lookup to Redis (baseline option): Usage of UDF / function library utilizing Redis client library

The described option for accessing Redis and querying key-value pairs within a mapping lookup is based on a custom developed mapping function that implements the necessary interoperability with a Redis server, utilizing one of the commonly available Java client libraries for Redis. Below is a summary of the required steps:


  • Download Java client library for Redis

For this demo, I used the Jedis library (xetorthio/jedis · GitHub) as one of the commonly used and recommended client libraries for Redis. An archive with the JAR file of the library can be downloaded from Releases · xetorthio/jedis · GitHub. Please note that this is not the only available Redis client library - an extensive list of client libraries can be found on the Redis official web site.


  • Import a JAR file of a library into a PO system as an imported archive in ESR

Imported archive.png

 

  • Implement a lookup function as a part of a function library

Since the idea is to make the function re-usable in different mappings whenever a key-value pair lookup is required, a user defined function is not an option here, because it does not provide cross-mapping re-usability. With this in mind, a function library has been chosen.


The function implements simple logic: it establishes a connection to a given Redis server by host name and port, queries the value for a given key, and returns a default value if the key-value pair wasn't found:

Function library - function.png
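For readers who prefer text to a screenshot, here is a hedged, minimal sketch of what such a function library method could look like; the method and parameter names are illustrative and not necessarily those used in the screenshot above. The Jedis class comes from the imported archive, and its import is declared in the import instructions section mentioned below.

public String getValueFromRedis(String redisHost, String redisPort, String key, String defaultValue, Container container)
        throws StreamTransformationException {
    // illustrative re-implementation, not the exact function shown in the screenshot
    Jedis jedis = null;
    try {
        jedis = new Jedis(redisHost, Integer.parseInt(redisPort));
        String value = jedis.get(key);
        // fall back to the provided default value if the key was not found
        return (value != null) ? value : defaultValue;
    } catch (Exception e) {
        throw new StreamTransformationException("Redis lookup failed for key " + key + ": " + e.getMessage(), e);
    } finally {
        if (jedis != null) {
            jedis.close();
        }
    }
}

Opening and closing a connection per call keeps the sketch simple; in a productive function, connection pooling (for example via JedisPool) would be worth considering.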

Make sure that the used Jedis classes are specified in the import instructions section and that the imported archive is specified in the used archives section:

Function library - instructions.png

Function library - archives.png


  • Make use of the developed function of a function library in a message mapping

In the message mapping, the target field "Text" is filled with the looked up text value for the given object ID.

Some explanatory notes and comments:

  • concatenation is done beforehand in order to compose a key name compliant with the key naming convention discussed above. This is where we benefit from agreed, clear naming conventions for key names, which simplify and unify the mapping logic required for key name generation,
  • Redis server connectivity details (such as host name and port) are exposed as parameters of the message mapping (here, REDIS_HOST and REDIS_PORT, correspondingly) in order to make the message mapping more flexible and reduce maintenance efforts in case the Redis server connectivity details change in the future.

Both points mentioned above have nothing to do with the primary subject of this blog, but they are nice to have, so they are worth mentioning here.

Mapping - graphical definition.png

Mapping - text definition.png

 

 

Mapping lookup runtime test

The test was executed by means of the standard message mapping test functionality.

 

Corresponding required custom mapping parameter values were specified:

Mapping - test, parameters.png

After providing the source message payload and triggering a mapping test, it can be seen that the target message was constructed successfully and the result of the lookup query addressed to the Redis server was obtained:

Mapping - test.png

Redis has built-in functionality for collecting log information about processed requests and capturing timestamps of their execution. For example, this can be helpful when evaluating the executed queries and their latency while running the mapping lookup test:

Redis CLI monitor.png

 

 

Implementation of lookup to Redis (alternative): Usage of mapping lookup API utilizing HTTP based communication

As an alternative to the approach described above, where a 3rd party Redis client library needs to be imported into the SAP PI/PO system, it is possible to send requests to a Redis server over HTTP. Out of the box, Redis does not provide this capability and cannot accept and handle HTTP requests. To make this possible, another 3rd party solution can be used - the lightweight HTTP server Webdis (Webdis, an HTTP interface for Redis with JSON output) - which acts as an HTTP proxy interface to Redis native commands and supports HTTP based communication with output (response) in JSON format. Please note that Webdis does not expose RESTful services as a proxy layer for accessing a Redis server; it is meant to be an HTTP proxy to Redis which exposes an HTTP interface to external callers. As a result, when dealing with this kind of communication, it is not accurate to state that we get true use of the REST architectural style; it is more appropriate to refer to it as HTTP based communication, which may resemble REST in some aspects, but which is not 100% REST compliant.

 

After Webdis is installed, configured and started, HTTP calls can be sent to its listener endpoint - for example, GET requests containing lookup queries. Below is a sample HTTP request that leads to exactly the same lookup request to the Redis server as the one used in the test earlier. Note that the Redis command (GET) and the looked up key (test:vadim:sample.object:00100:text) are passed in the URL, and the received response is in JSON format:

Webdis - HTTP GET request.png

With this in mind, we can make use of Webdis + Redis by querying it, e.g. using an HTTP or REST adapter in SAP PI/PO. From the SAP PI/PO perspective, this is seen as a normal HTTP lookup query, with no Redis specifics besides the URL pattern that should comply with the one expected by Webdis.

 

This can be achieved from a user defined function / a function of a function library or from a Java mapping program by means of the SAP standard Lookup API, which is part of the mapping API of a SAP PI/PO system. Usage of the Lookup API is a common way of calling lookup functionality through a communication channel of an arbitrary adapter type. In this blog, I will not focus on detailed implementation aspects of this approach, since the Lookup API is well documented in the SAP Help JavaDoc and described in SAP Help (Implementing Lookups Using SystemAccessor - Managing Services in the Enterprise Services Repository - SAP Library). In addition, there are helpful SCN materials that demonstrate usage of the Lookup API with practical examples and code snippets.
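As an illustration only, a lookup through a communication channel could be sketched roughly as shown below; the party/service/channel names are assumptions, the request payload is a placeholder, and the exact handling depends on the receiver channel configured towards Webdis (imports from com.sap.aii.mapping.lookup and java.io are assumed).

// hedged sketch of an HTTP lookup via the standard Lookup API; names are illustrative
Channel channel = LookupService.getChannel("", "BC_REDIS", "CC_HTTP_Webdis_Lookup");
SystemAccessor accessor = null;
try {
    accessor = LookupService.getSystemAccessor(channel);
    // request payload as expected by the receiver channel configuration (assumption)
    String request = "<lookup/>";
    Payload lookupRequest = LookupService.getXmlPayload(new ByteArrayInputStream(request.getBytes("UTF-8")));
    Payload lookupResponse = accessor.call(lookupRequest);
    // the JSON returned by Webdis can then be read from lookupResponse.getContent() and parsed
} finally {
    if (accessor != null) {
        accessor.close();
    }
}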

 

 

Outro. Which integration option to choose?

Currently Redis doesn't provide any generic means of querying its database - for example, communication mechanisms involving plain HTTP or more advanced techniques based on REST. Instead, one needs to make use of Redis specific client libraries to consume data from a Redis database (for example, to make lookups) or to maintain data in it. This causes additional development overhead, commonly leading to the necessity of (minor) custom development and a potential increase of technical debt.

 

On the other hand, Redis is positioned as a very robust key-value store. Hence, the introduction of intermediate infrastructure components like HTTP proxies / interfaces may increase the end-to-end lookup query response time and diminish the original performance related benefits of using Redis. Even though such HTTP interfaces make interoperability with Redis more generic (the consumer being agnostic to Redis client libraries), the performance implications of their usage should be thoroughly considered.

 

As a result, I don't see a unique, clear and unambiguously correct answer to the question of which integration option for Redis is the right one - as seen, it is a trade-off between unification of interoperability / communication mechanisms and performance. If in the future Redis provides performant HTTP based mechanisms for querying its database, this will definitely be a promising feature to look at. Until then, it is advisable to consider performance requirements: if the performance requirement is critical and the performance KPI is challenging, then the approach involving usage of a Redis client library is definitely the option to go for; otherwise HTTP based interfacing is a nice and convenient alternative.

Using EGit to quickly create an Adapter Module's EJB & EAR project


Introduction

Creating an Adapter Module for PI takes a bit of effort, as we have to create the EJB and EAR projects in NWDS and set up all the necessary build paths and deployment descriptor files.

 

As I started to use EGit/Git to manage my NWDS projects, I've come to find different ways it can enhance my workflows. One such area is in setting up template EJB & EAR projects for a custom adapter module. There are already other approaches, like the blog below, that use templates for adapter module projects.

Simple Steps to build an Adapter Module using EJB,EAR template

 

The benefit of using EGit is that the projects will be automatically version-managed by Git once the template is cloned into NWDS.

 

Below are the steps on how to quickly set up an adapter module project using template content from GitHub.

 

 

Prerequisite

The steps below are based on NWDS 7.31. EGit needs to be installed and configured first as detailed in Installation and Initial Configuration sections of the following blog.

Using EGit for Java source code management in NWDS

 

 

Steps

Step 1 - Clone Git repository from GitHub to NWDS

Go to the Git repository via the link below, and select Copy to clipboard.

engswee/pi-module-template · GitHub

git_copy.png

 

In NWDS, switch to Git perspective - Window > Open Perspective > Other > Git Repository Exploring

Click the Clone Git Repository button

clone.png

 

A window will pop up with pre-populated value of the GitHub URL details. Accept the default value and click Next.

clone1.png

 

It will load the GitHub branch details. Click Next again.

clone2.png

 

Finally, it will prompt for a destination of the local Git repository. Browse to the appropriate folder and select Finish.

clone3.png

 

The contents will be downloaded from GitHub to the local Git repository. The screenshot below shows the new (arbitrarily named) Git repository new-module.

git_repo.png

 

Step 2 - Import EJB and EAR projects

The Git repository already contains the template EJB and EAR projects for a custom adapter module. This content needs to be imported into NWDS's Project Explorer.

 

Right click on the Git repository and select Import Projects.

import.png

 

Select Import existing projects and click Next.

import1.png

 

Select both EJB and EAR projects and click Finish.

import2.png

 

After the import has completed, switch to the Java EE perspective and both the EJB and EAR projects will be listed in the Project Explorer.

project.png

 

Step 3 - Refactor project contents

Now just refactor the project contents according to the development naming conventions and requirements.

 

Rename EAR project.

refactor_ear.png

 

Update provider name in application-j2ee-engine.xml in EAR project.

refactor_vendor.png

 

Rename EJB project.

refactor_ejb.png

 

Rename package in EJB project.

refactor_package.png

 

Rename EJB 3.0 Session Bean.

refactor_bean.png

 

Update the names in the ejb-j2ee-engine.xml deployment descriptor. Ensure that this matches the name of the session bean above.

refactor_j2ee.png

 

Update the names in the ejb-jar.xml deployment descriptor. Ensure that the name and package path match the above changes.

refactor_jar.png

 

Step 4 - Commit initial content to Git (Optional)

Once all the changes have been done, optionally commit an initial version to the Git repository.

commit1.png

 

With this, the EJB and EAR projects are now ready for further development according to project requirements.
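To give an idea of where development typically continues from here, below is a hedged, minimal sketch of an adapter module bean body; the package, class and parameter names are illustrative assumptions, and the EJB session bean plumbing/annotations provided by the template are omitted for brevity.

package com.example.module; // illustrative package name

import com.sap.aii.af.lib.mp.module.Module;
import com.sap.aii.af.lib.mp.module.ModuleContext;
import com.sap.aii.af.lib.mp.module.ModuleData;
import com.sap.aii.af.lib.mp.module.ModuleException;
import com.sap.engine.interfaces.messaging.api.Message;

public class SampleModuleBean implements Module {

    public ModuleData process(ModuleContext moduleContext, ModuleData inputModuleData)
            throws ModuleException {
        try {
            // the principal data is the XI message passing through the adapter
            Message msg = (Message) inputModuleData.getPrincipalData();
            // read a module parameter configured on the channel (illustrative parameter name)
            String param = moduleContext.getContextData("sampleParameter");
            // ... custom payload processing would go here ...
            inputModuleData.setPrincipalData(msg);
            return inputModuleData;
        } catch (Exception e) {
            throw new ModuleException(e.getMessage(), e);
        }
    }
}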

 

 

Conclusion

With these simple and easy steps, we can quickly establish a new custom adapter module project and dive right into the design and development details. We no longer need to remember the details of setting up the project configuration correctly, which can be quite repetitive (and which I also find hard to remember at times, with so many screens involved).

 

Once the Git repository is cloned, with just a few refactoring steps, the project is ready for development. Additionally, it benefits from already being tracked by Git.

Adapters for SAP HCI: Integration with MS Dynamics CRM and RabbitMQ


Introduction

This blog describes integration scenarios between Microsoft Dynamics CRM Online, Microsoft Azure Cloud Service Bus and RabbitMQ with SAP HCI. These scenarios show that non-SAP integration scenarios can easily be implemented using third party adapters for SAP HCI. The Advantco AMQP adapter for SAP HCI and the MS DynCRM adapter for SAP HCI were developed based on the SAP HCI ADK.

 

 

 

Scenarios

In the outbound scenario, SAP HCI polls customer data from a RabbitMQ queue via the AMQP adapter and creates Accounts in Microsoft Dynamics CRM via the DYNCRM adapter. Upon changes to the Accounts in Dynamics, a plugin pushes the changes to an Azure Cloud Service Bus topic. An AMQP sender channel acts as a subscriber to the topic to receive the changes from Dynamics.

 

Advantco_TechEd2015.jpg

 

 

HCI Integration Flows

The screenshots below show some of the configuration steps for these scenarios in SAP HCI.

 

Integration package

pic2.png

 

Outbound scenario
pic3.png

 

Inbound Scenario

pic7.png

 

DYNCRM channel configuration

pic4.png

 

AMQP channel configuration

pic5.png

Length limitation for parameterized mappings


Introduction

Usage of a Parameterized Mapping Program is a common design technique that increases the possible applications of a mapping program as well as its flexibility. Below are some of the use cases of parameterized mapping programs.

  • Enable different constant values for different environments (instead of hard-coding the values in the mapping based on an environment identifier)
  • Enable changing of a mapping value via a configuration change to an Integration Directory object instead of a design change to an ESR object
  • Reuse of the same mapping program in different integration scenarios

 

The actual value of each parameter is configured in the Interface Determination step in the Integration Directory. Although the configuration tool (whether Swing or NWDS) allows configuring a value of any length, one of the lesser known facts is that there is a length limit to the configured value. Although the limit is not listed in SAP's online library, as far as I recall, the limit was 255 on a PI 7.11 dual stack system.

 

Recently, I've come across some issues during deployment of an iFlow that contained a parameterized mapping. Searching for the error on SCN did not reveal much, but after some troubleshooting it was traced back to the length limitation of parameterized mapping values. It seems that on a single stack system (PO 7.4), the limit is even shorter, at 98 characters.

 

In this blog, I will share my experience of troubleshooting this issue and verifying the length limitation of parameterized mappings.

 

 

Troubleshooting the Issue

Below are the symptoms of the issue.

 

During deployment of the iFlow from NWDS, the runtime cache failed to update.

deploy_fail.png

 

Checking the trace by right-clicking and selecting View Deployment Trace does not reveal further information.

trace.png

 

Therefore, I checked the cache status instead and it shows that there is an error with Interface Determination object.

cache.png

 

By right-clicking and selecting More Details, it reveals the error log for the issue.

 

log.png

 

In short, the following error indicates that there was an issue with the parameter related to blank padding.

Failed to set the parameter 6 of the statement >>UPDATE XI_AF_CPA_PAR_MAP SET INF_DETER_ID=?, ALL_IN_ONE_ID=?, PARAM_TYPE=?, PARAM_CATEGORY=?, PARAM_NAME=?, PARAM_VALUE=? WHERE OBJECT_ID=?<<: Cannot assign a blank-padded string to a parameter with JDBC type>>VARCHAR<<.

 

The configured value of the parameter did contain white spaces in between, but it was less than 255 characters, so that prompted me to guess that the length limit might have changed.

 

In order to confirm this, I reduced the length of the configured value iteratively (1 character at a time) until the iFlow could be deployed. From this iteration, I found that deployment would only succeed when the value at position 99 was non-blank.

 

fail.png
succ.png

 

 

Verifying the Limitation

To further verify the limitation, I created a simple integration scenario consisting of a parameterized message mapping. An import parameter is added to the mapping and used in the following mapping logic.

 

logic.png
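For reference, the same check could also be done with a simple UDF that just echoes the parameter into a target field - a hedged sketch, assuming an import parameter named PARAM bound in the mapping signature (the function name is illustrative):

public String echoParameter(String input, Container container) throws StreamTransformationException {
    // reads the value of the import parameter as seen at runtime;
    // its length reveals whether the configured value was truncated
    String paramValue = container.getInputParameters().getString("PARAM");
    return paramValue;
}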

 

When the mapping is executed in the ESR with an input value longer than 99 characters for the parameter, the full value of the parameter is accessible during testing.

map_result.png

 

The iFlow is then configured with the following input value for the parameter, which is longer than 99 characters.

param.png

 

When the interface is executed end-to-end, the target payload is truncated after position 98. This confirms that even though the parameter can be configured with a value of any length, the actual value is truncated during deployment, and it is the truncated value that is used at runtime.

truncate.png

 

 

Conclusion

With this simple troubleshooting and verification exercise, I can conclude that on a PO 7.4 single stack system:

  • Parameterized mapping parameters have a length limit of 98 characters, even if the configured value is longer than that
  • If the configured value is longer than 98 characters, deployment of the iFlow/ICO will fail if there is a blank space at the 99th position

 

I am not sure if this limitation is the same on other systems, i.e. dual stack or different database systems. However, the approach described here can be used to verify the limitation on any particular system.

 

In conclusion, when designing integration scenarios with parameterized mappings, take into consideration the potential length of the configured parameter values so that they do not cause any issues during deployment or runtime.


To send data from request message payload to response message payload for Async Sync scenario


Hi guys,

Recently, I came across a scenario where we needed to send some data from the request payload to the response payload in an Async-Sync bridge scenario. The approach I followed is described below:

 

Async Sync scenario from ECC (IDoc) to Database (JDBC) [IDoc --> JDBC (Synchronous Insert) --> IDoc]

Request from ECC IDoc to JDBC

JDBC Response to IDoc ECC

We used the Async-Sync bridge modules at the receiver JDBC channel [as an alternative to BPM].

 

Requirement:

Some field values from the outbound request IDoc from ECC needed to be sent back as part of the inbound response IDoc to ECC.

As the JDBC channel only returns the insert_count information as the response to an INSERT statement, we needed to add the required values from the request IDoc to the response IDoc.

 

Solution:

To achieve this, we have implemented the following solution:

  1. The standard Async-Sync bridge modules [RequestResponseBean and ResponseOnewayBean] were inserted in the receiver JDBC channel in the proper sequence.

          1.jpg

  2. In the request message mapping, we added the required values [to be sent back in the response] as dynamic configuration variables to the message header via the setDCKey UDF.

          2.jpg

  3. The dynamic configuration keys which were set at mapping runtime [step 2] were copied into adapter module header variables [with any name of our choice]. This is done via a DynamicConfigurationBean module (DCB1), placed before the RequestResponseBean module starts processing.

                     Parameter name: Key.<num>
                     Parameter value: write <dynamic config namespace> <dynamic config variable name>

                     Parameter name: Value.<num>
                     Parameter value: module.<variable name>

              3.jpg

  4. After the CallSapAdapter module of the JDBC adapter, the adapter module variables set in step 3 were read into the dynamic configuration variables of the response message header. This is done via a second DynamicConfigurationBean (DCB2), placed before the ResponseOnewayBean module gets processed.

                     Parameter name: Key.<num>
                     Parameter value: read <dynamic config namespace> <dynamic config variable name>

                     Parameter name: Value.<num>
                     Parameter value: module.<variable name>

 

     4.jpg

  5. Now we read the dynamic configuration keys of the message header set in step 4 into the response message mapping via the getDCKey UDF, and map the required values from the source request IDoc to the target response IDoc of the scenario.

      5.jpg

The code for the UDFs is given below:

 

// Writes the given key/value pair into the dynamic configuration of the message header
public String setDCKey(String keyName, String keyValue, String dcNamespace, Container container) throws StreamTransformationException {
    AbstractTrace trace = container.getTrace();
    trace.addInfo("Entering UDF: setDCKey");
    try {
        DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
        DynamicConfigurationKey key = DynamicConfigurationKey.create(dcNamespace, keyName);
        conf.put(key, keyValue);
    } catch (Exception ex) {
        trace.addWarning("Exiting UDF: setDCKey with error. Error: " + ex.toString());
        throw new StreamTransformationException("Error in UDF setDCKey. Error: " + ex.toString());
    }
    return keyValue;
}

// Reads the value of the given dynamic configuration key from the message header
public String getDCKey(String keyName, String dcNamespace, Container container) throws StreamTransformationException {
    AbstractTrace trace = container.getTrace();
    trace.addInfo("Entering UDF: getDCKey");
    try {
        DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
        DynamicConfigurationKey key = DynamicConfigurationKey.create(dcNamespace, keyName);
        return conf.get(key);
    } catch (Exception ex) {
        trace.addWarning("Exiting UDF: getDCKey with error. Error: " + ex.toString());
        throw new StreamTransformationException("Error in UDF getDCKey. Error: " + ex.toString());
    }
}

And here are the RequestResponseBean and ResponseOnewayBean module parameters for the Async-Sync bridge:

 

6.jpg

REST Adapter in PI/PO: Enhanced XML/JSON Conversion


One of the features of the SAP standard REST adapter is XML/JSON conversion - which surely makes sense, considering that internal processing in SAP PI/PO is done in XML format on one hand, and that JSON is the de-facto format when dealing with the REST architectural style on the other. Looking into recent SCN forum threads and questions raised about the REST adapter, it can be concluded that generation of JSON output for a processed XML message payload is not always clear and may be misleading. SAP actively enhances the functionality of the REST adapter – customization and feature-enrichment of JSON processing being one of the actively contributed areas. Many of these features have been documented in SAP Help materials, but one quite powerful and flexible functionality – namely, enhanced XML/JSON conversion – was only briefly mentioned in SAP Note 2175218. In this blog, I would like to demonstrate usage of this functionality and provide details about valid parameterization.

 

Internally, the REST adapter makes use of the 3rd party Jettison library for JSON processing tasks. In the standard configuration, the REST adapter relies on the default conversion logic implemented in the Jettison processor, which does not take into consideration payload element properties as defined in the corresponding message type, but has its own optimization and type derivation mechanisms based on the nature of the value of a processed XML element rather than the XSD schema of the processed message. As a result, this conversion may sometimes produce unobvious output – here are a few examples which are commonly faced:

  • If an XML element was defined as an array, but contains only one item in the converted XML payload, the Jettison processor will likely convert it to a non-array type;
  • If an XML element was defined as a String, but contains only a numeric value in the converted XML payload, the Jettison processor will likely convert it to an integer type.

In some use cases, this kind of improper type conversion may be unacceptable – and this is where enhanced XML/JSON conversion parameterization helps solve the problem.

 

The idea behind the enhanced XML/JSON conversion functionality introduced with SAP Note 2175218 is to explicitly instruct the JSON processor on how to treat particular XML elements. Let us examine this functionality based on a practical example.


Below is the definition of the message type used for the response message in a synchronous scenario, where we make use of a REST sender communication channel. As seen, it contains elements of various types, including an array:

Response message type.png

A sample response message in XML format looks like:

XML response.png

Using the standard configuration of the REST sender channel, the JSON formatted response message produced from the XML formatted message given above is:

REST response - default.png

It can be noticed that some element types were interpreted incorrectly - for example:

  • the element “ID” wasn't recognized as a String, but as a number - the Jettison processor treated it as a number, because the element value contains only numeric characters;
  • the element “Properties” wasn't recognized as an array - the Jettison processor treated it as a non-array object with a nested structure, because the element contains only one child entry of the element “Property” (no other sibling “Property” elements).

 

Let’s fix this using enhanced XML/JSON conversion. In the REST sender channel, parameterization for enhanced XML/JSON conversion is done in the table “Custom XML/JSON Conversion Rules”. Below is the configuration which resolves the type and conversion mismatches highlighted earlier:

REST channel - enhanced configuration.png

After executing the interface once again and checking the JSON formatted response message, it can be observed that the JSON output is now produced correctly:

REST response - with XML-JSON conversion enhancement.png

I didn't find details regarding the parameterization in official materials, so let me summarize the acceptable and valid values for the enhanced XML/JSON conversion parameters, together with explanatory notes regarding their usage, below:

 

Field: XML Namespace
Description: XML namespace of the XML element.

Field: Prefix
Description: XML namespace prefix of the XML element.

Field: Name
Description: XML element name.

Field: Type
Description: XML element type. The following types are currently supported: String, Integer, Decimal, Boolean. It makes no difference which notation for the type value is chosen, as long as it is one of those mentioned in the list of valid values. If no value is specified, no specific XML/JSON conversion instructions are applied and the default logic of the Jettison processor is used.
Valid values:
  • String type: string, xs:string, xsd:string
  • Integer type: int, integer, xs:integer, xsd:integer
  • Decimal type: decimal, numeric, float, xs:decimal, xsd:decimal
  • Boolean type: bool, boolean, xs:boolean, xsd:boolean

Field: Array Type
Description: Indicator whether the XML element is an array or not. It makes no difference which notation for the array type indicator value is chosen, as long as it is one of those mentioned in the list of valid values. If no value is specified, the array type indicator is set to "false" by default.
Valid values:
  • Element is an array: 1, true, yes
  • Element is not an array: 0, false, no

Field: Default Value
Description: Value that will be assigned to the JSON element in case XML/JSON conversion for the corresponding XML element fails. For example, in the provided demo, the value of the element “Quantity” will be processed as an integer. If the original value cannot be converted to an integer (for example, because its content is alpha-numeric), then the JSON output will receive the default value for this element, which is “0” in this case. It should be noted that the default value is not verified against the element type specified in the field "Type" - it is treated as a String. In this way, it is possible, for example, to specify the default value "Invalid value" for the element "Quantity" in the provided demo. No error will be issued, neither during activation of the communication channel, nor at runtime during processing of a message by the REST adapter, even though the provided default value mismatches the element type (integer). Having this in mind, attention should be paid to the provided default value and its compliance with the element type for the sake of consistency.
Valid values: any value. The following values are treated specially:
  • "null" (with quotation marks) - interpreted as the String value "null"
  • null (without quotation marks) - interpreted as null
  • "" (just quotation marks) - interpreted as an empty String value

HCI: XML to CSV conversion in HCI


Introduction

HCI provides functionality to convert from XML to CSV and vice versa. Compared to PI, its functionality is relatively rudimentary and can only cater for very simple structures.

 

The online documentation (Defining Converter) only covers the functionality briefly, and there is no other article on SCN covering it.

 

Therefore I tried experimenting with the functionality and this blog covers my experience doing so.

 

 

Component Details

As HCI is a cloud solution with automatic rolling updates, my testing is based on the following component versions of the tenant and Eclipse plugins.

HCI Tenant Version: 2.8.5

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

Example Scenarios

The following restriction is mentioned in the online documentation; therefore I could only test out the two scenarios below.

You cannot use XML to CSV converter to convert complex XML files to CSV format.

 

For simplicity's sake, the iFlows are designed with a timer to trigger the iFlow upon deployment and a Content Modifier to provide static input data to the converter. The output is then sent to an HTTP logging server.

iflow.png

 

Scenario 1 - Structure with single record type

In this scenario, the input payload is defined with a Records root node and unbounded Line nodes.

 

Input Payload

<?xml version='1.0' encoding='UTF-8'?>
<Records>
    <Line>
        <Field1>ABC</Field1>
        <Field2>123</Field2>
        <Field3>XXX</Field3>
        <Field4>567890</Field4>
    </Line>
    <Line>
        <Field1>XYZ</Field1>
        <Field2>456</Field2>
        <Field3>YYYY</Field3>
        <Field4>98765</Field4>
    </Line>
</Records>

 

The data is contained in the repeating Line nodes, so configuration of the converter is as simple as entering the XPath to the Line node, i.e. /Records/Line. The other options are specifying the field separator as well as the column names to be used as the header.

config1.png

 

With this configuration, the conversion's output payload is as follows.

 

Output Payload

Field1,Field2,Field3,Field4

ABC,123,XXX,567890

XYZ,456,YYYY,98765

 

 

Scenario 2 - Structure with header record type and repeating details record type

 

In this scenario, we additionally have a Header node.

 

Input Payload

<?xml version='1.0' encoding='UTF-8'?>
<Records>
    <Header>
        <FieldA>H_ABC</FieldA>
        <FieldB>H_123</FieldB>
        <FieldC>H_XXX</FieldC>
        <FieldD>H_567890</FieldD>
    </Header>
    <Line>
        <Field1>ABC</Field1>
        <Field2>123</Field2>
        <Field3>XXX</Field3>
        <Field4>567890</Field4>
    </Line>
    <Line>
        <Field1>XYZ</Field1>
        <Field2>456</Field2>
        <Field3>YYYY</Field3>
        <Field4>98765</Field4>
    </Line>
</Records>

 

In addition to the configuration above, we can configure the conversion of the "parent" element in the Advanced tab. The configuration is as simple as selecting Include Parent Element and specifying the XPath to the Header node.

config2.png

 

With this additional configuration, the conversion's output payload is as follows.

 

Output Payload

FieldA,FieldB,FieldC,FieldD

H_ABC,H_123,H_XXX,H_567890

 

Field1,Field2,Field3,Field4

ABC,123,XXX,567890

XYZ,456,YYYY,98765

 

A particular point of interest is that the converter automatically includes an additional blank line between the header line and the detail lines.

 

 

Additional Findings/Issues

Besides these simple conversions, during my testing of the function I've come across the following issues.

 

i) Missing enclosure of fields that contain separator

According to RFC 4180 - Common Format and MIME Type for Comma-Separated Values (CSV) Files:-

Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes.

 

However, the converter does not handle this properly. As shown below, Field1 contains a comma, but in the output this field is not enclosed in double quotes. As such, it will potentially cause issues for applications that try to process the CSV content.

 

Input:

<?xml version='1.0' encoding='UTF-8'?>
<Records>
    <Line>
        <Field1>AB,C</Field1>
        <Field2>123</Field2>
        <Field3>XXX</Field3>
        <Field4>567890</Field4>
    </Line>
    <Line>
        <Field1>XYZ</Field1>
        <Field2>456</Field2>
        <Field3>YYYY</Field3>
        <Field4>98765</Field4>
    </Line>
</Records>

Output:

Field1,Field2,Field3,Field4
AB,C,123,XXX,567890
XYZ,456,YYYY,98765

 

ii) Include Parent Element setting still valid even after being unchecked

If Include Parent Element is checked and Path to Parent Element is populated (as shown in Scenario 2's screenshot), the converter still performs conversion for the parent element even if the setting is unchecked later. The workaround is to ensure that Path to Parent Element is cleared prior to unchecking Include Parent Element.

setting.png

 

 

Further Points

Although the scope of this blog is the XML to CSV converter, I also tried out the CSV to XML converter functionality. However, I was unable to get it to work successfully. Again, the example in the online documentation is quite vague and there were no other materials on SCN to assist.

 

The following is the configuration of the CSV to XML converter that was tested.

csv2xml.png

However, at runtime the following error is triggered. I've tried various values for Path to Target Element in XSD but none were successful.

java.lang.IllegalStateException: Element name [DT_HCI_Conversion\Line] not found in provided XML schema file

 

This is similar to the error in the following thread which also remains unanswered.

iflow in HCI having error at csv to xml converter step

 

 

Conclusion

Although HCI comes with built-in functionality for XML to CSV conversion (and vice versa), the functionality is still very limited and buggy. The use case for the converter is restricted to just simple scenarios. As such, until this functionality is enhanced in future updates, more complex conversions will most likely require custom development in the form of custom Groovy scripts.

Example of Sender Rest Adapter in PI 7.4 - PUT Operation


Hi everybody, greetings from Colombia. This is my first blog.

This is a step-by-step example of how to set up a sender REST adapter channel (with the PUT operation) and how to test it using a Chrome plugin and a simple Java class. I won't explain how to build the mapping or the other side of the integration.

 

System: SAP PI 7.4 SP11

 

This example will follow this workflow:

 

JAVA <-> PI RESTAdapter (JSON) <-> PI Mapping (XML) <-> PI JDBCAdapter (XML) <-> Oracle SP

 

Business Case Description

A Java class consumes a REST service published in PI to check whether an ID (nmDni) represents a risk to the company.

 

SETTINGS

 

Message Mapping

The data type must have a field nmDni, named exactly like the parameter that we are going to send in the JSON payload.

blog_message_mapping.PNG

 

Communication Channel

We will be working with the Adapter Type REST in the communication channel.  Follow the next screenshots to set up the channel:

cc_01_header.PNG

General Tab

Here it's important to set the Element Name to the request message type, so that the JSON structure can be recognized in the request.

cc_01_1_general.PNG

Channel Selection Tab

cc_01_2_channel_selection.PNG

REST Resources Tab

Here it's very important to set the JSON element so that it matches the field nmDni defined in the PI data type.

cc_01_3_rest_resources.PNG

Rest Operation Tab

cc_01_4_rest_operation.PNG

 

The last 2 tabs are left empty

  • Operation Determination
  • Error Handling

 

Activate the channel and assign it to the ICO in the Integration Builder.

 

TESTING

 

Ping

By pinging the channel in the Communication Channel Monitor, you can get the endpoint and check the pattern of the parameter that we will use in the testing phase.

cc_10_ping.PNG

 

Chrome Plugin

Download and install the Advanced REST Client plugin for Google Chrome.

Enter the endpoint and select the PUT operation. In the payload section, paste this code:

 

{

"nmDni":"6183"

}

cc_30_1_chrome.PNG

 

Clic the "Send" button and the response should appear in JSON format

cc_30_2_chrome.PNG

 

Java Class

Here is the Java class that will be used to test the REST service. It's important to notice that you need a PI user and password. That user and password are encoded using the Base64 library.

 

 

package com.prueba.rest;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import sun.misc.BASE64Encoder; // internal JDK class, used here only to encode the Basic authentication string

public class PruebaRestMAIN {

    public static void main(String[] args) {

        // PI user and password, Base64-encoded for HTTP Basic authentication
        String name = "pi_username";
        String password = "pi_password";
        String authString = name + ":" + password;
        String authStringEnc = new BASE64Encoder().encode(authString.getBytes());
        System.out.println("Auth string: " + authStringEnc);

        String line;
        StringBuffer jsonString = new StringBuffer();
        try {
            URL url = new URL("http://your_PI_domain:50000/RESTAdapter/riesgosput/1234");

            // escape the double quotes in the JSON string
            String payload = "{\"nmDni\":\"71333\"}";

            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setDoInput(true);
            connection.setDoOutput(true);
            connection.setRequestMethod("PUT");
            connection.setRequestProperty("Authorization", "Basic " + authStringEnc);
            connection.setRequestProperty("Accept", "application/json");
            connection.setRequestProperty("Content-Type", "application/json; charset=UTF-8");

            // send the JSON payload
            OutputStreamWriter writer = new OutputStreamWriter(connection.getOutputStream(), "UTF-8");
            writer.write(payload);
            writer.close();

            // read the JSON response
            BufferedReader br = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            while ((line = br.readLine()) != null) {
                jsonString.append(line);
            }
            br.close();
            connection.disconnect();
        } catch (Exception e) {
            throw new RuntimeException(e.getMessage());
        }

        System.out.println("Respuesta: " + jsonString.toString());
    }
}

 

PI MESSAGE MONITOR

In the PI message monitor you will see 2 lines for each test. If you open the message, you will see the JSON message in the Payload tab:

cc_20_message_monitor.PNG


I hope this helps you on your way to testing the REST adapter in PI.

Have a good day

Lookup of alternativeServiceIdentifier via CPA-cache failed for channel


I know we are not new to the error "Lookup of alternativeServiceIdentifier via CPA-cache failed for channel" on SCN - many blogs and references already exist. I am sharing my experience with this error as I encountered it for the first time. This is also my first blog on SCN.

 

In my project we have a standard function module in the SAP ECC system which communicates with SAP PI (7.1) using RFC.

The error I received in the outbound queue of the ECC system was "Lookup of alternativeServiceIdentifier via CPA-cache failed for channel".



Based on the error, I followed the steps below.

  • I monitored the sender RFC channel in SAP PI RWB.

          The channel was green, hence there was no error with the channel.

  • I checked the corresponding RFC destination in the SAP ECC system with the same Program ID as mentioned in the channel.

The RFC connection test was also fine.

Then I thought it might be a cache issue.

  • I performed a full cache refresh:

http://host:50000/CPACache/refresh?mode=full

Still the same error occurred while executing the program in SAP ECC.


I then referred to SCN threads,

e.g. https://scn.sap.com/thread/175402

  • As per SCN, I got the idea that there could be an issue with the business system, so per the instructions I deleted and recreated the business system with the same name and the appropriate details of the technical system, pointing to the correct ECC system.

I imported the business system into the Integration Directory.

I activated the business system and performed a cache refresh.

This time I assumed the error would be gone, but the same error was still there.

In the meanwhile I created a new channel and sender agreement, but that also did not work.

Now nothing was working my way.


Then I opened the RFC destinations (SM59) in the ECC system again and checked the RFC destination with the same Program ID as in the sender channel of SAP PI.

In my current project landscape we have Development, Quality, Training, Pre-Production and Production environments. This error occurred in the Training environment.

While checking the host configuration of the RFC destination, I found that this particular RFC destination was pointing to the Pre-Production environment.

 

I have shared this blog so that, if anybody comes across such an issue in the future while dealing with multiple environments, they remember to check such simple things at an early stage.
