
"Unexpected State of Changelist" issue in PI resolved


Due to an emergency, a configuration object (ICO) in the production PI Integration Directory was edited directly. While the change list was being activated there was an interruption, which left the change list in an unexpected state: I was not able to either reject or activate it.

 

The Integration Directory object was still hanging in my change list and could not be activated.

 

ChangeList.JPG

Error message displayed while rejecting or activating the change list:

ChangeList_Error.JPG

 

Root Cause:

The error "Unexpected state of Changelist (ID=<#>). Expected open,but given state is releasedLocally" is often caused by a hanging

thread which wasn't able to complete the changelist activation.

 

Solution:

Restarting the system resolved the issue for us.

 

If the issue persists after a restart, run the SQL queries described in SAP Note 1399960 "Database Inconsistency for changelists in PI" and attach the results to an SAP incident.


Building your first iFlow - Part 3: Building the iFlow


In the previous instalment, Building your first iFlow - Part 2: Initializing the iFlow, the iFlow was set up so that it could be started and data retrieved from a remote service. In this exercise the data will be parsed and sent to an email account.

 

To handle the data from the request, a Content Modifier component is used. It will parse the XML and use XPath expressions to extract the relevant parts into variables that will be used in the message.

 

pic.png
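To make the idea concrete, the following standalone sketch shows how XPath expressions pull values out of an XML payload, which is the same mechanism the Content Modifier uses. The payload shape and element names here are hypothetical, since the real structure depends on the service from Part 2:

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload; the actual structure depends on the remote service.
        String xml = "<Weather><Location>Rotterdam</Location><Temperature>12 C</Temperature></Weather>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // The Content Modifier applies expressions like these to fill its variables.
        System.out.println("city = " + xpath.evaluate("//Location/text()", doc));
        System.out.println("temperature = " + xpath.evaluate("//Temperature/text()", doc));
    }
}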

 

After the Content Modifier has processed the message, the message will need to be delivered, so at this stage an "End Message" event is added to the integration process as well.

 

pic.png

Expand the "Others" panel in the palette and select the "Sender" component and place it in the iFlow.


pic.png


The sender component will need to be configured to specify the authentication mechanism that will be used when sending messages to the external service.


Screen Shot 2015-11-04 at 09.18.38 copy.png


Click on the sender node; the properties tab will then be activated with the choice of Basic Authentication or Certificate-Based Authentication.

Since Gmail expects the username and password to be sent using the Basic Authentication scheme, select the "Basic Authentication" option. Without it, the Gmail server may not receive the credentials encoded correctly.


The destination (sender) component and channel will also be required to take the final message and deliver it somewhere; join them together using a message flow connector.

 

pic.png

As before, configure the channel by right-clicking over it and choosing "Configure Channel".

 

pic.png

 

For the outgoing messages the "Mail" adapter will be used, which delivers the message over SMTP. As before, once the adapter has been selected, go to the "Adapter Specific" tab and enter the relevant details.

 

pic.png

 

Here the address of the destination server will be Gmail's: smtp.gmail.com:587 using the STARTTLS protocol, and both the from and to fields will be set to my personal Gmail account. For now the subject and body fields will not be configured.
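For context, what the Mail adapter does with these settings is roughly equivalent to the following JavaMail sketch. The addresses and the password are placeholders; the real credentials come from the artifact set up in the next part:

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class MailSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com");   // destination server as configured above
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.starttls.enable", "true");  // STARTTLS, as in the channel settings
        props.put("mail.smtp.auth", "true");

        Session session = Session.getInstance(props);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("someone@gmail.com"));            // placeholder
        message.setRecipients(Message.RecipientType.TO, "someone@gmail.com"); // placeholder
        message.setSubject("Weather report");  // subject/body are left unconfigured for now
        message.setText("...");

        Transport transport = session.getTransport("smtp");
        try {
            // The username/password pair is what the credential artifact (Part 4) supplies.
            transport.connect("smtp.gmail.com", "someone@gmail.com", "app-password");
            transport.sendMessage(message, message.getAllRecipients());
        } finally {
            transport.close();
        }
    }
}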

 

All that is missing is the credential name, and this will require a little detour, which will be covered in the next posting.

Building your first iFlow - Part 4: Configuring your credentials


In the previous instalment (Building your first iFlow - Part 3: Building the iFlow) the iFlow was created and partially configured. The outbound message now needs to be delivered, but this requires a username and password to be supplied to Gmail so that the email can be created on Google's servers.


The message destination requires a user name and password. Normally this means creating and deploying your email address and Google password as a credential artifact; in the case of Gmail, however, additional steps may be required if the Google account has two-factor authentication enabled.

 

To enable a password to be used with the Email adapter, a Credential Artifact must be created and deployed to protect the credentials from being read by anyone except the HCI server. To begin creating the artifact, go to the Integration Operations perspective.


The perspective can be changed either by clicking on the choose perspective button to the right of the toolbar or by selecting the Other... command on the Open Perspective submenu of the Window menu.

Screen Shot 2015-11-11 at 12.57.47.png

This will present a dialog allowing the choice of Integration Designer or Integration Operations.


Screen Shot 2015-11-11 at 13.34.49.png


Choose Integration Operations and this will open the Node Explorer if it is not visible. Now right-click over the tenant node and select the Deploy Artifacts command.

 

pic.png

 

This will give a choice of artifacts to deploy. In this case the "User Credentials" artifact will be created and deployed to the tenant. These are the credentials that will be used to authenticate with the Google email servers. By using an artifact like this, the username and password can be protected on the server, since they are not made visible to developers, and the credentials can be updated independently.

 

pic.png

 

Click the “Next” button and fill in the values.

 

pic.png

 

For the credentials the Google username and password are used. If the Google account has two-factor authentication enabled, however, an application-specific password will need to be created. If errors are being reported when delivering email, follow the steps below to set up an application-specific password. Application-specific passwords are important since they restrict what the application is allowed to do with your account. In this case the password will be restricted to the email application only, so your other Google properties are not affected.

 

Log into your Google account and choose “My Account"

 

pic.png

 

Select the "Signing in to Google" link and in the "Password and sign-in method" panel, choose "App passwords".

 

pic.png

 

From the list of app passwords, use the "Select app" and "Select device" dropdowns at the bottom.

 

pic.png

 

For "Select App" choose "Mail" and for "Select Device" choose "Other", then use a descriptive name such as "HCI WeatherReport".

 

pic.png

 

and click the "Generate" button.

 

pic.png

 

This will show a 16-character password in groups of four. This is the password that will need to be entered in the password field. Memorize it or at least leave it on the screen, then swap back to Eclipse and enter it in the password and password repeat fields.

 

Once the password is entered in the fields of the Password artifact, click “Finish” and this will deploy the credentials to the tenant.


 

pic.png

 

Verify the credentials have been deployed by clicking on the "Deployed Artifacts" tab and checking that the "HCI WeatherReport Credentials" have been successfully deployed.

 

You can now close the password window on your Google account and log out of Google. The new application-specific password will have been saved.

 

pic.png

 

A tip to remember: right-click over the deployed artifact and choose the "Copy Artifact Name" command to copy the artifact name to the clipboard. This ensures the correct name is preserved.


pic.png

 

Go back to the channel configuration and enter or paste the name of the newly deployed credential name in the “Credential Name” field.

 

pic.png


This completes the configuration of the email channel.

 

The start, transformation and end of the integration have been defined; now join them together with the sequence connector, noting that the "Content Modifier" transformation shows an error since nothing has yet been configured for the transformation.

 

pic.png

 

The integration process is now starting to look complete. In the next instalment the content modifier will be configured as well as the email channel, and the final iFlow deployed and run.

HCI Security: Securing your communications - Part 1


This series of posts will look at common customer configurations when implementing network communication security on the HANA Cloud Integration (HCI) platform. When beginning, it is best to consider the different scenarios and the frames of reference around terms, so that we are clear about the different directions messages will be flowing in.

 

This paper will focus on security protocols in general as well as security landscapes for the on-premise installation and the HANA Cloud infrastructure, specifically the implementation of Basic and Certificate authentication for messages transferred between the HCI Tenant and the Customer Installation, since this is often the most complex portion of the environment to set up.


 

Customer Installation:

Refers to an ERP system running Process Integration or Process Orchestration, but it is not restricted to this; it could just as easily be a third-party landscape sending messages to HCI.

 

 

HCI Tenant:

This is the installation running on the HANA Cloud Platform, subscription-licensed by SAP, who provide the infrastructure to companies and partners. This will be referred to as the HCI Tenant. Each customer will have one or more tenants.

 

 

Inbound Messages:

These are the messages that are being received by the specified server. It will be clear in the documentation which server (HCI Tenant or Customer Installation) is being referred to.

 

Outbound Messages

These are the messages that are being delivered by the specified server. It will be clear in the documentation which server (HCI Tenant or Customer Installation) is being referred to.


 

Certificates and Public Key Infrastructure

 

Fundamental to secure communication on the Internet is the use of public key cryptography and the public key infrastructure (PKI) built around it.

 

Wikipedia defines PKI as:

"A public key infrastructure (PKI) is a set of hardware, software, people, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates[1] and manage public-key encryption. The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, Internet banking and confidential email.”

 

PKI manages certificates, which underpin the SSL/TLS protocol. The certificates use the X.509 standard for identity management and, when used properly, can verify the identity of the server and the client. Though SSL has been replaced by the newer and stronger Transport Layer Security (TLS), the name SSL is still commonly used even though the original protocol has been retired. Throughout this paper there will be references to SSL; treat these as synonymous with Transport Layer Security.

 

 

SSL is based on a hierarchical model of trust in which Certificate Authorities (CAs) are the fundamental entities that both parties involved in an SSL communication must know and trust. If either party does not know or trust them, SSL will not work, since trust cannot be built on what a client or a server claims to be.

 

Certificates by themselves are much like business cards: they provide a means of identity but do not prove the owner of the certificate is who they say they are. The actual proof happens later, when establishing a secure connection, by sending the server data encrypted with the public key held in the supplied certificate and checking whether the server can decrypt it correctly. Since only the owner of the matching private key can decrypt data encrypted with the certificate's public key, a correct decryption proves the server is the owner of the certificate and can be trusted. If the data is not decrypted correctly, the server does not possess the correct private key and so is not the owner of the certificate.

 

The top of the certificate tree is referred to as the Root Certificate and is issued by the Root Certificate Authority (Root CA). A Root CA may issue special certificates that allow other parties, trusted by the root CA, to issue certificates themselves. These are called intermediate certificates and are issued by an Intermediate Certificate Authority. The client certificate will have a path through its intermediate CAs (if there are any) to the Root CA. This means that the Root CA and any intermediate CA certificates will need to be present in the browser, but the Root CA is the only certificate that is implicitly trusted.

 

The terms Root Certificate and Client Certificate will be used throughout this document, so it is important they are well understood. The root certificate is the one everyone trusts and is normally shipped with the browser. The intermediate certificates are trusted only because their trust is guaranteed by the root certificate. Finally, the client certificate, which is your certificate, is the one used in your communications to prove your identity.

 

 

Mathematically speaking, that trust is expressed as a digital signature, created with the issuing CA's private key, bound to the SSL certificate.

 

No matter which tool you use, the process of generating an SSL certificate is fundamentally as follows:

  1. The customer generates a public and private key pair according to the rules of a given encryption algorithm and with a given key size in bits. That key pair will make up the two fundamental pieces of the final SSL certificate (a small code sketch of this step follows the list).
  2. The customer also establishes a subject for the certificate. This will assign an identity to the certificate and is the identity the CA is agreeing to certify. The subject will be a Distinguished Name (DN) and this name is guaranteed to be unique. The format of the name is required to be compliant with RFC 5280:4.1.2.6 and looks similar to the following example:

    E=some.one@sap.com,CN=i00000,OU=SAP Trust Centre,O=SAP AG,L=Walldorf,ST=BW,C=DE

    Here we have attributes for the email address (E), the common name (CN), the organizational unit (OU), the organization (O), the locality (L), the state (ST) and the country (C).
  3. The public key along with the name of the owner who is requesting the signing is sent as a signing request to the CAs in the form of the PKCS#10 Certificate Signing Request (CSR).
  4. The CA validates the information received and signs the CSR as an act of trust on the given subject (typically the distinguished name of the CSR). The signed CSR eventually becomes the digitally signed certificate that is returned to the owner.
  5. The private key and the signed certificate are installed on the server providing SSL services to connecting applications or to servers requesting the caller provide proof of their identity. The certificate (which contains the DN and public key, signed by the CA) can be made public to whoever requests it. Once the owner receives and installs that signed digital certificate, the owner will have a fully functional SSL certificate ready to be deployed on the HTTP server.
    This process of verifying the certificate is known as server authentication. Note that the same process can be applied in reverse if required, which is known as client authentication; if both the server and the client request a certificate exchange, this is termed mutual authentication. If the certificate exchange process seems familiar, it is the basis for Single Sign-On!
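Step 1 above can be reproduced with the standard Java security API. A minimal sketch, with the algorithm and key size chosen only for illustration (real certificate requests are normally produced with tools such as OpenSSL or keytool):

import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class KeyPairSketch {
    public static void main(String[] args) throws Exception {
        // Generate a public/private key pair for a given algorithm and key size (step 1).
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();
        System.out.println("Algorithm: " + pair.getPublic().getAlgorithm());
        System.out.println("Public key encoding format: " + pair.getPublic().getFormat());
    }
}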

 

To complete the PKI overview, let's look at what happens when a client connects to the HTTP server via the SSL port. This initiates a defined sequence to ensure the server is who it claims to be and to provide a secure means of negotiating a shared secret to encrypt the stream.


  1. The handshake begins when a client connects to a TLS-enabled server requesting a secure connection and presents a list of supported cipher suites (ciphers and hash functions).
  2. From this list, the server picks a cipher and hash function that it also supports and notifies the client of the decision.
  3. The server usually then sends back its identification in the form of a digital certificate. The certificate usually contains the server name, the trusted certificate authority (CA) and the server's public encryption key.
  4. The client may contact the server that issued the certificate (the trusted CA as above) and confirm the validity of the certificate before proceeding. Other checks are also performed to ensure the certificate is genuine, such as the domain name matching the domain name in the certificate, and that the certificate has not been revoked, has not expired and is not being used before its issue date.
  5. In order to generate the session keys used for the secure connection, the client either:
    a) encrypts a random number with the server's public key and sends the result to the server (which only the server should be able to decrypt with its private key); both parties then use the random number to generate a unique session key for subsequent encryption and decryption of data during the session, or
    b) uses Diffie-Hellman key exchange to securely generate a random and unique session key for encryption and decryption that has the additional property of forward secrecy: if the server's private key is disclosed in future, it cannot be used to decrypt the current session, even if the session is intercepted and recorded by a third party.
  6. This concludes the handshake and begins the secured connection, which is encrypted and decrypted with the session key until the connection closes. If any one of the above steps fails, the TLS handshake fails and the connection is not created. (A client-side sketch of the handshake follows.)
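The entire sequence can be observed from plain Java in a few lines; a minimal client-side sketch, where the host is an arbitrary public HTTPS endpoint chosen for illustration:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HandshakeSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("www.google.com", 443)) {
            // Runs steps 1-6 above; throws SSLHandshakeException if any step fails.
            socket.startHandshake();
            System.out.println("Protocol: " + socket.getSession().getProtocol());
            System.out.println("Cipher suite: " + socket.getSession().getCipherSuite());
        }
    }
}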

 

In the next posting we will look at some of the typical operations needed to manage security within the HCI environment.

SFAPI Async Job Management


This blog will help you check what data exists for a particular task ID in SuccessFactors. In the compensation integration, after compensation planning is completed in SF, the planned compensation is imported into HCM by running report RH_SFI_IMPORT_COMP_DATA.

 

When we run report RH_SFI_IMPORT_COMP_DATA in HCM, it triggers a list of interfaces and gets the data into SAP HCM from SF. The last interface is SFSFQueryHandlingGetJobResultEmbeddedQueryResponse_Out, which gets the planned compensation details from SF.

 

This SOAP service appears to receive the data in binary form, so we cannot see the content in PI as part of monitoring, and the same applies in ECC.

 

You can find the task ID in the services below.

 

  • SFSFQueryHandlingSubmitJobQueryResponse_Out
  • SFSFQueryHandlingGetJobStatusQueryResponse_Out
  • SFSFQueryHandlingGetJobResultEmbeddedQueryResponse_Out

 

In order to see what data is coming from SF for a particular task ID, we can use the SFAPI Async Job Management tool.

 

Take the completed task ID from monitoring once all three interfaces have executed.


 

2.png

Go to the Tools page and select SFAPI Async Job Management (SFAPI Login Required).

 

 

1.png

 

 

 

11-9-2015 5-05-17 PM.png

 

Enter your task ID here and click on Get Task Status.



11-9-2015 5-08-35 PM.png

 


11-9-2015 5-08-53 PM.png

 

Click on Get Result and you will get the data.

 

11-9-2015 5-03-57 PM.png



Another option is to log into SF Analytics and get the data, but you need the task name there. See: Reg Compensation SFSF integration.



Thanks.


Ref:


SAP and SuccessFactors Talent Hybrid Packaged Integrations: an overview


APIs – SAP Help Portal Page


Integration Add-On for SAP ERP HCM and SuccessFactors HCM Suite – SAP Help Portal Page


Compensation Management – SAP Help Portal Page

 



Success Factors Integration SFAPI/Odata Tools


SFAPI/OData Tools provide features such as a query builder, job status, and entity definitions (data dictionary).

 

 

This was very useful in an SF and HCM integration project. I will show you two functions; the rest you can explore yourself.

 

 

Links to access the SFAPI tools:

https://sfapitoolsflms.hana.ondemand.com/SFIntegration/

https://sfapitoolsflms.hana.ondemand.com/SFIntegration/sfapitools.jsp

 

 

1. Improved OData Query Tool - Graphical OData Query Builder

2. SFAPI Async Job Management (SFAPI Login Required)

 

 

 

11_11-10-2015 3-37-44 PM.png

2_11-10-2015 11-54-48 AM.jpg

Click on option four.

 

1. Improved OData Query Tool - Graphical OData Query Builder

 

This helps you view the entity records, and you have the option of building your own query. Select the entity you want to query; here I am using the User entity.

You will get the list of fields available under the entity. Select your entity and the list of fields to display.



3_11-10-2015 11-58-37 AM.png

 

When you place the cursor on a field, it will show you the field's properties. If Filterable is true, you can select it in the query.

 

 

3_11-10-2015 11-59-19 AM.png

 

 

 

If you place the cursor on a filterable field, it will allow you to enter the data in the operand box.

 

4_11-10-2015 12-01-44 PM.png

 

 

 

You can even group the records using OR and AND conditions.

 

 

6_11-10-2015 12-03-23 PM.png

 

 

5_11-10-2015 12-02-33 PM.png

 

Execute the query and it will show you the list of records with the selected columns.
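Behind the scenes the builder simply issues a standard OData query URL. A hypothetical example of the kind of query generated for the User entity (host, fields and filter values here are placeholders, not taken from the screenshots):

GET https://<sf-api-server>/odata/v2/User?$select=userId,firstName,lastName&$filter=lastName eq 'Smith'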

 

 

 

 

7_11-10-2015 12-07-22 PM.png

 

 

If you click on a user, it will show you the entire data for that user.

8_11-10-2015 12-09-57 PM.png

 

9_11-10-2015 12-16-25 PM.png

 

The JSON message:

 

 

10_11-10-2015 12-16-59 PM.png

 

 

If you want to select the records without any filter, you can do that too.

 

query_11-10-2015 12-05-40 PM.png

 

In the next blog, SFAPI Async Job Management, I will explain SFAPI job management.

 

Regards,

Muni.

 

 

Ref:

 


 

Hands-On – Testing Integration with SuccessFactors OData API

Condition Editor Troubleshoot in NWDS (SAP PO)


When migrating configuration scenarios from SAP PI to SAP PO via the Migration Tool, certain kinds of conditions maintained in Receiver Determinations or Interface Determinations migrate with errors, which appear when you open them in Recipient List or Interface Split steps in NWDS. Here are a few techniques to fix them.

 

1. Namespace prefix:

By default, namespace prefixes do not get migrated to SAP PO. You need to maintain them explicitly in NWDS.

 

PO_NS.jpg
PO_NSD.jpg

 

2. Conditions with predicates "[]" :

You cannot maintain conditions containing "[]" without the EXISTS operator in the NWDS condition editor.

 

PI_PRE.jpg
NWDS_PRE.jpg

Alternatively, you can define this condition using customized Xpath as explained in 3rd point below.


3. Conditions using "Create Xpath containing whitespaces":

There is an option to create your own customized XPath and define your conditions accordingly. For example:

 

PI_WSP.png
NWDS_WHSP.jpg

 

Alternatively, you can maintain this condition as explained in the 2nd point above, like this:

(EXISTS(Xpath./ns:Select_MaterialProperty/row[contains(XMLCONTENT,'ZSAF_EHS_ABS_014_VALUE') and STATUS !="N"]))

 

 

Note:

The above troubleshooting guide was prepared using:

SAP PI:      7.0

SAP PO:    7.4 (Single Stack)

NWDS:      7.3-EHP1-SP16

 

Reference:

Defining Conditions - Process Integration Tools (Eclipse-Based) - SAP Library

 

 

Thanks,

Ambuj

Message Monitoring on Service Operation level


Introduction

 

Service Interfaces in the Enterprise Services Builder can support multiple Service Operations. The Process Integration system will calculate the correct Operation for a message during the Interface Determination step in the message processing pipeline, based on the message type and content.

The monitoring tools in the PI system only provide monitoring functionality at Service Interface level. It's not possible to see which Operation was used for a message. Furthermore, the Operation is not written to the message, so it's very hard to determine which Operation a message relates to when the Service Interface contains multiple Operations.

 

This was changed with SAP Note 2243337, so please make sure to update your Adapter Engine to this patch level before trying the steps below yourself. The Operation and Message Type are now included in the Dynamic Header data of the message. This blog describes how this can be used for monitoring purposes.

 

 

Example Scenario

 

The required steps are illustrated with the help of an example scenario. It's a scenario that uses two Message Types and a Service Interface with two Service Operations in the Enterprise Services Builder (Integration Repository). In this demo scenario we have one Service Interface to order and cancel tickets for an event. That means, the Interface will contain two operations. The scenario contains the following objects:

Objects.png

 

The scenario defines the two Message Types TicketOrder and TicketCancel as request/source message types. They are mapped with Message Mappings to the respective target message types Order and Cancel. The TicketOrder and TicketCancel Message Types are very simple XMLs that contain just a few elements for this demo purpose. We just wanted to have two different message types and use them later in two different Service Operations.

TicketOrderMsgType.png

 

TicketCancelMsgType.png

 

 

 

 

The request Service Interface TicketInterface contains two operations: TicketOrderOperation and TicketCancelOperation which use the two different message types.

 

ServiceInterface.png

 

The rest of the objects in the scenario are not really interesting and are only required to complete the scenario so that we can actually send a message through it. To finish the configuration it also contains a simple File-to-File scenario with an Integrated Configuration in the Integration Directory, without any special configuration.

 

New Data in PI Message SOAP Envelope

 

With the changes from SAP Note 2243337, the Message Type and source Service Operation are now inserted into the DynamicConfiguration header of the message during the Interface Determination step. Before the changes, this information was not available in the message data at all. The new entries are in the SOAP envelope of the PI message in the DynamicConfiguration part. The following screenshot shows one example message for each of the two message types TicketOrder and TicketCancel we defined. The entries in the DynamicConfiguration are highlighted. As you can see, the TicketOrder message used the TicketOrderOperation and the TicketCancel message used the TicketCancelOperation.

 

 

TicketOrder Message:

TicketOrderMessage.png

 

 

TicketCancel Message:

TicketCancelMessage.png

 

The new XML elements are:

  • SourceMessageType: The name of the Message Type. This relates to the Message Type object that was created in the Enterprise Services Builder
  • SourceMessageTypeNS: The namespace of the scenario from the Enterprise Services Builder
  • InterfaceFromOperation: The name of the Service Operation from the Service Interface

 

All three elements use the namespace http://sap.com/xi/XI/Message/30/routing which is important to remember for the next steps.
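To illustrate, such entries typically appear in the message's SOAP envelope roughly as follows. This is a sketch based on the screenshots above, not an exact dump; the surrounding envelope details can vary by release, and the SourceMessageTypeNS value urn:demo:tickets is a placeholder for the scenario namespace:

<SAP:DynamicConfiguration xmlns:SAP="http://sap.com/xi/XI/Message/30/general">
  <SAP:Record namespace="http://sap.com/xi/XI/Message/30/routing" name="InterfaceFromOperation">TicketOrderOperation</SAP:Record>
  <SAP:Record namespace="http://sap.com/xi/XI/Message/30/routing" name="SourceMessageType">TicketOrder</SAP:Record>
  <SAP:Record namespace="http://sap.com/xi/XI/Message/30/routing" name="SourceMessageTypeNS">urn:demo:tickets</SAP:Record>
</SAP:DynamicConfiguration>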

 

Monitor the Operation and Message Type

 

The Service Operation and Message Type data in the DynamicConfiguration as such is not yet very helpful in the monitoring tools; it's only visible in the full message content display. To make it useful, the User-Defined Search (UDS) can be used to index the fields and make them available as search criteria in the Message Monitor. The UDS fields and values can then also appear as their own table columns in the message list and be used for sorting and filtering.

 

Please note: UDS won't work by default for synchronous messages. It only works if logging with payload is enabled for the synchronous scenario.

 

 

So first, new UDS filters and data extractors have to be defined in the PIMON User-Defined Search Configuration. We create one new filter and name it TestDemoScenario (the name doesn't matter) for this ticket scenario (please also make sure the filter is Active!). The filter defines the PI scenario (Sender-Receiver-Interface) that we want to index.

UDSFilter.png

 

After the filter, you have to create search criteria to define which data to extract from the messages. Here it has to extract the new fields from the DynamicConfiguration. For this purpose, the search criterion has to be of type Custom Dynamic Header and it has to contain the name and namespace of the new XML elements.

 

UDSSearchCriteria.png

 

The name of the search criterion can be freely chosen. This is the name that we later have to use in the Message Monitor. Here the name Operation was chosen, because it describes the data exactly. It's possible to also add the other XML elements SourceMessageType and SourceMessageTypeNS as search criteria, if this data is also interesting for monitoring purposes (but it's not required to add them if only the Operation is of interest).

 

UDSSearchCriteria2.png


After finishing the UDS configuration, new messages will be indexed and the data extracted from them. It's possible to see the data in the PIMON Message Monitor. When searching, please switch to the Advanced search filter and make sure to select the User-Defined Search Criteria. Now add the search criteria to the list and give the value * (star). This is required because if UDS attributes are part of the search filter, a new column is added to the search result message list. The following screenshot shows how it should look with the parameter Operation = *. Of course, it's also possible to add the other search criteria (MessageType = *, MsgTypeNS = *) if desired. The important parts are highlighted in the picture:

MessageMonitor.png

 

The message list now contains a column Attribute 'Operation' and it shows the Service Operation of each message.

 

 

I hope this article helps you fulfill the request to monitor and search for Service Operations. Please leave a comment if I can improve anything or if there are open questions.

 



Another comparison between HCI and Boomi: SOAP



 

Given the amount of feedback I received on my first comparison between HCI and Boomi (blog), I decided to compare some more features of both middleware solutions. This time I want to show you a SOAP activity done in Boomi and in HCI. First I will show you how to create the complete iFlow in Boomi; after that I will use the same format to show you how it's done in HCI.

If there are things you would like to see in the future, please let me know in the comments section below!

 

 

Setting up the SOAP client in Boomi

 

First of all we are going to set up the SOAP client. We'll need a connection to the Web service, plus a request and a response profile.

We start in Boomi by creating a new process. We're going with the 'No Data' start shape, because we want to enter our data (the City and Country for our weather request) manually.

 

01 Boomi.png

 

 

To set up the SOAP connection we select the 'Connector' icon and choose 'Web Services SOAP Client'. Leave the Action on EXECUTE and select the plus symbol right after 'Connection'. Here we are going to set up the WSDL and SOAP endpoint URLs.

 

02 Boomi.png

WSDL Url: http://www.webservicex.net/globalweather.asmx?WSDL
SOAP Endpoint Url:
http://www.webservicex.net/globalweather.asmx?



After you set both URLs you can Save and Close this SOAP connection, so we can create the SOAP operation.

 

Click on the plus button next to the operation and, in the operation screen, choose ‘Import’ (the green box, in the upper right of your screen). Make sure you select the Connection we just created (in this case SOAP weather) and choose ‘Next’. If you entered the settings correctly you should get a new screen where you can choose the object type. Go for ‘GetWeather’ and choose ‘Next’.

 

03 Boomi.png

 

04 Boomi.png

We now have the Start shape and the SOAP connection on our process canvas.

 

06 Boomi.png

 

 

Create input data and mapping in Boomi

 

When you analyse the request profile we have just imported, you see the SOAP Web service expects a CityName and a CountryName. This is data we are going to enter directly into the flow, but you could also use other (employee) systems to gather this data, for example. Drag and drop a 'Message' icon onto your canvas. In the 'Message' we enter the CityName followed by the CountryName, separated by an asterisk. Of course you can choose other countries and cities for this iFlow.

 

07 Boomi.png

 

This is the data that we will send over to the Web service. But we need to tell the Web service which value is the CityName and which is the CountryName, and deliver it in a format that the service expects (XML).

To do so, drag and drop a mapping icon onto your canvas. Choose ‘Create new’ and select the icon on the Source side of the mapping (left).

 

08 Boomi.png

 

We want to create a new Flat File, because the data we entered in the ‘Message’ step is formatted in a Flat File.

 

09 Boomi.png

 

We have to create two elements, so choose 'Add Multiple Elements' and select 2. Name the elements so it is clear what they stand for, in this case 'City' and 'Country'.

 

10 Boomi.png

 

On the Destination side of the mapping (right), select the same icon as before, but now choose XML as Profile type. Choose the XML profile we imported in the first step named “Web Services SOAP Client GetWeather EXECUTE request”

 

11 Boomi.png

 

After that you can map the source (left side) to the destination (right side) so the Web service gets the format it expects.

 

12 Boomi.png

 

Finishing up and review results in Boomi

 

We now have a no-data start shape, a message icon, a mapping and a connection. We want to add a 'Stop' icon so we can review our test later on. Make sure your iFlow looks like this:

 

13 Boomi.png

 

Save your iFlow and run it as a test. If all went well you should see five green halos around the icons.

 

14 Boomi.png

 

Select the 'Stop' icon and view the 'Shape Source Data'; your output should look something like this:

 

15 Boomi.png

 

As you can see we received the weather for Rotterdam Airport Zestienhoven in the Netherlands.

 

16 Boomi.png

 

Now that we have done a simple SOAP Web service call in Boomi, let's go over to HCI and see how it works there!

 

 

Setting up the SOAP client in HCI

 

01 HCI.png

 

When starting a new project in HCI we get a sender, a receiver and an Integration Process. As we do not need a sender we can delete it immediately.
We do need an extra receiver to address our SOAP Web service. Add a receiver and drag and drop the 'Service Call' icon onto the canvas. Draw a line from the request/reply button to the receiver so your flow looks like this:

 

02 HCI.png

 

Double-click the dotted line to enter the details. We choose SOAP as the adapter type, HTTP as the transfer protocol and SOAP 1.x as the message protocol.

 

03 HCI.png

 

When going into the Adapter Specific tab we notice that we are missing the 'URL' to the WSDL. We have to import the WSDL into Eclipse (go to http://www.webservicex.net/globalweather.asmx?WSDL, download the page as a .wsdl file and drag it into the "src.main.resources.wsdl" folder in your project).

Now we can point the system to our WSDL and click 'Browse'.


04 HCI.png

 

 

Then double-click on globalweather and in the next screen choose GetWeather (GlobalweatherSoap).

 

 

05 HCI.png

 

The configuration for the SOAP service is now ready:

 

06 HCI.png

 

Create input data and mapping in HCI

 

HCI handles the input a bit differently. Here we don't create a mapping to describe every expected field; instead we enter the XML in a content modifier before sending it out. The response is then exactly the same as we saw in Boomi. Let's drag and drop a 'Content Modifier' onto the canvas and go to the properties tab.

 

07 HCI.png

 

Enter the following XML in your body:

 

<ns2:GetWeather xmlns:ns2="http://www.webserviceX.NET">

<ns2:CityName>Rotterdam</ns2:CityName>

<ns2:CountryName>Netherlands</ns2:CountryName>

</ns2:GetWeather>
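On the wire, the SOAP adapter wraps this payload in a SOAP 1.1 envelope, so the request delivered to the service looks roughly like this (a sketch, not a capture):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ns2:GetWeather xmlns:ns2="http://www.webserviceX.NET">
      <ns2:CityName>Rotterdam</ns2:CityName>
      <ns2:CountryName>Netherlands</ns2:CountryName>
    </ns2:GetWeather>
  </soap:Body>
</soap:Envelope>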

 

Finishing up and reviewing the results in HCI

 

I placed a dummy receiver so we can see the output when tracing this iFlow. To be able to trace the call after deployment we need to change the Trace Configuration. Left-click on an empty space on your canvas and go to the properties tab. Choose Header and Body and save your iFlow.

 

08 HCI.png

(tip: don’t forget to ‘schedule’ your start event!)

 

 

When you successfully deploy your iFlow to your tenant

 

09 HCI.png

You can choose to trace your flow by clicking the ‘View Trace’ button when you select the right row.

 

10 HCI.png

 

When we select the envelope on the right and check the Properties tab, we can see the same data as we saw in Boomi:

 

11 HCI.png

 

 

 

But what about the HCI webUI?

 

At the time of writing this blog there is no option to trace the log in the webUI like you can in Eclipse. Because I wanted to use the log to show the results, I chose to build the flow in Eclipse. For the sake of a good comparison I will also build the flow in the webUI and cover it briefly.

Log in to your webUI HCI environment and choose Design

 

01 web.png

 

Click ‘Create’ and then ‘Add’ Process Integration.

 

02 web.png

 

Name your project and your Process Integration and click on the newly created Process (I named mine SOAP blog 7):

 

03 web.png

 

Choose 'Edit' (bottom right of your screen) and create the same iFlow as we did in Eclipse. When you're done it looks somewhat like this:

 

04 web.png

 

Deploy the iFlow and go to the monitoring section of the webUI.

When the iFlow is set up correctly, you get the status Completed and, in this case, the output by mail:

 

 

Conclusion

 

Despite the missing trace functionality, you can definitely see and feel that the HCI webUI is being developed further and further. The layout has changed, the feel is good and the interface reacts very smoothly. These are things that are missing in Eclipse, where there are some weird and unexplainable errors and the precision of the mouse pointer sometimes drives me crazy. For example, if you just miss the receiver when dragging a new connection it will not connect; the same thing happens when trying to add a static value in a mapping.


Boomi recently upgraded the 'Test run': instead of a maximum of 10 documents/10 MB you now have the ability to test 100 documents/100 MB. I think this is a nice upgrade; the result is that you can test more before really deploying your iFlow.

 

I'm really enjoying working with both Boomi and HCI. If the HCI webUI keeps developing at this pace, I'm sure the future will be bright for HCI.

 

bob@nextmoves.nl

Blog 1: Starting with Hana Cloud Integration? Keep this in mind!
Blog 2: Starting with Hana Cloud Integration? Create a simple integration flow (iFlow)!
Blog 3: Deep dive in Hana Cloud Integration.
Blog 4: Hana Cloud Integration and SuccessFactors Integration Center
Blog 5: Hana Cloud Integration in comparison to Dell’s Boomi.

Blog 6: What does HCI, Rat Verlegh and Breda Beer have in common?

Using SAP Java Idoc Class Library in SAP PI Adapter Modules and allow segment type to be used for Idoc Flatfile to XML conversion


Introduction

We were investigating replacing one of our custom adapter modules, which converts flat files to IDoc messages by calling an ABAP function module on the PI ABAP stack to do the conversion.

 

SAP has created an adapter module which performs the conversion of IDoc flat file messages to XML format.


William Li's excellent blog describes the steps in detail: "How to Use User-Module for Conversion of IDoc Messages Between Flat and XML Formats".


However, we ran into an issue, as our input data doesn't have segment definitions.


The message keeps failing with the error "IDoc segment metadata could not be found for segment name E1EDK01".


I couldn't find any solution documented on SCN for this issue, though some people have raised it in the comments of the above-mentioned blog post.


Problem :


IDOCFlatToXmlConvertor expects segment definitions instead of segment types. As we don't want to use the PI ABAP stack to read the IDoc metadata, we were not sure how to do the segment type to segment definition replacement.


Solution:


Previously we were using JCo calls from within the bean to call an ABAP function module. However, we wanted to avoid reading segment metadata from the PI ABAP stack, primarily because the PI ABAP stack will go away in the future, though we're still on a dual-stack system. After some analysis, we tried using the SAP Java IDoc class library.


Where to get the libraries from:

- The SAP Java IDoc class library and SAP JCo libraries can be downloaded from service.sap.com/connectors.


Refer this page:


https://help.sap.com/saphelp_nwpi711/helpdata/en/48/5a29be412d58d8e10000000a421937/content.htm



Our plan was to create an adapter module which can do some pre-conversion before the standard module IDOCFlatToXmlConvertor is called.


Before we use the SAP Java IDoc class library within the module, let's write a small example which can read IDoc metadata. It's much easier to understand that way and makes the whole process much faster as well. The program will read the IDoc metadata and parse through the whole segment tree. This can later be used to build a list of segment types and segment definitions so that we can replace the segment types with segment definitions.


Java SE Set Up:


  • Create a Java project and add the SAP JCo and SAP Java IDoc class library jars to the Java project
  • Create a helper Java class that pairs a segment type with its segment definition.


We need recursion to traverse the whole segment tree, as any segment has information only about its immediate children.


import java.util.List;
import com.sap.conn.idoc.IDocSegmentMetaData;

// Recursively collect the metadata of a segment and all of its descendants.
public void getMetaData(IDocSegmentMetaData currSeg, List<IDocSegmentMetaData> metaDataList) {
    metaDataList.add(currSeg);

    IDocSegmentMetaData[] children = currSeg.getChildren();
    for (IDocSegmentMetaData currChild : children) {
        getMetaData(currChild, metaDataList);
    }
}

In a Java SE environment the IDocRepository can be obtained as follows for testing.


JCoDestination destination = JCoDestinationManager.getDestination("NSP");
IDocRepository iDocRepository = JCoIDoc.getIDocRepository(destination);

IDocSegmentMetaData rootMetaData =
        iDocRepository.getRootSegmentMetaData("Z_TEST", "Z_TEST_EXT", "7.02", "7.02");



where Z_TEST is the IDoc type and Z_TEST_EXT is the extension type.


Our system is at release level 702, so we can give the release level as 7.02 for both the SAP release and the segment release level.


IDoc Segment Representation


Our idoc in SAP is represented as follows:


2015-11-26 03_48_47-Display extension_ Z_TEST_EXT.png


The IDoc library returns the root segment: it has the definition "ROOT" with the description "General root segment".


2015-11-26 03_58_48-Untitled drawing - Google Drawings.png



and our program displays the information :


ROOT  ROOT

ZTEST  ZTEST000

ZTEST2  ZTEST2000

ZTEST3  ZTEST3000

ZTEST1  ZTEST1000


Using the class library, we're able to retrieve segment type and segment definitions.



Using SAP  Java Idoc class library on PI server.


As the library is present in the runtime environment already, no extra deployment descriptors are needed in the custom adapter module.


Obviously, the call to get the IDocRepository instance is different on the PI server.



ConnectionFactory connectionFactory = (ConnectionFactory) initialcontext.lookup(
        "deployedAdapters/" + sourceJRA + "/shareable/" + sourceJRA);

JRAIDocFactory idocFactory = JRAIDoc.getIDocFactory();

IDocRepository idocRepository = idocFactory.getIDocRepository(connectionFactory);



We need to do a JNDI lookup using the connection factory (refer to William Li's blog mentioned above for the requirements). This is then used to get a reference to the JRAIDocFactory and ultimately the IDocRepository.


Once we have a reference to the IDocRepository, a similar process can be used to retrieve the segment metadata and build a list of segment type and segment definition pairs.


So the custom module parameters required are:

 

sourceJRA

sapRelease

segVersion

 

Once we have the list, the segment type to segment definition translation can easily be carried out.
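For illustration, a minimal sketch of how such a translation could look, assuming metaDataList was filled by getMetaData above and that the metadata API exposes the type/definition pairs printed earlier (the accessor names getType()/getDefinition() and the line handling are assumptions, not the actual module code):

import java.util.HashMap;
import java.util.Map;

// Build a segment type -> segment definition lookup from the collected metadata.
Map<String, String> typeToDefinition = new HashMap<>();
for (IDocSegmentMetaData meta : metaDataList) {
    typeToDefinition.put(meta.getType(), meta.getDefinition());  // assumed accessors
}

// Replace the leading segment-type token of one flat-file line with its definition.
// A real module would respect the fixed-width segment-name field of the flat format.
String translateLine(String line, Map<String, String> typeToDefinition) {
    int end = line.indexOf(' ');
    String segmentType = (end < 0) ? line : line.substring(0, end);
    String definition = typeToDefinition.get(segmentType);  // exact match, so ZTEST vs ZTEST2 is safe
    return (definition == null) ? line : definition + (end < 0 ? "" : line.substring(end));
}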

 

Internally, the system calls IDOCTYPE_READ_COMPLETE to read the metadata information, and the metadata is cached on the PI server, optimising the performance of the custom adapter module. The release appears without decimals, so the release 7.02 we gave in the Java code when reading the root segment metadata on the Java stack will be represented as 702 on the ABAP stack.


Further, our IDoc files don't have a segment hierarchy within them, but that is taken care of by setting RenumberSegments. This information is present in the segment metadata in case it's required.


 

Hopefully this blog will help people gain more familiarity with the Java IDoc class library, and also help with the specific case of allowing IDoc segment types to be used for flat file to IDoc XML conversion using the SAP-provided IDOCFlatToXmlConvertor bean.

 

You can refer to some sample programs at this GitHub link: viksingh/JavaIdocLibrary · GitHub

HCI: Testing Outbound Connections from HCI


Introduction

One of the really handy features in HCI is the ability to test outbound connections from HCI to target systems. Connectivity issues can sometimes be a pain point during integration, and it is an area that I personally always focus on first in an integration project, prior to developing integration scenarios. With this feature in HCI, connections can be tested even before building and testing any Integration Flows.

 

This feature is available from the Eclipse HCI plugin, where it is accessible from the Node Explorer's context menu.

test.png

 

As of the current HCI component versions, it is possible to test SSL, SMTP and SSH connections. In this blog, I will share on how to use this feature.

 

 

Component Details

Below are component versions of the tenant and Eclipse plugins.

HCI Tenant Version: 2.8.5

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

A) SSL Connection

HTTPS (HTTP over SSL) is one of the most common connections these days. However, establishing such a connection is not always straightforward, as it requires the correct certificates to be installed on the client, and possibly the server, in order for the SSL handshake to be successful. Below is one of the common errors that occurs when the SSL handshake fails because the certificate chain of trust cannot be verified.

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

 

Following steps will show how we can test this out and rectify it.

 

Right click on the tenant in Node Explorer and select Test Outbound Connection.

 

Select SSL Connection and click Next.

sslconn.png

 

Enter the hostname and port of the target system. In our example, we will try to establish a connection with Google.

google.png

 

After clicking Run to execute the test, we will receive the following error indicating SSL handshake failure.

error.png

 

This error most likely occurs because the target system's (Google's) root CA certificate is not in the Trusted CA list of HCI's keystore. Every HCI tenant comes with a default system.jks keystore (the password to access this keystore should be provided by SAP during creation of the tenant).

 

To add Google's root CA to the keystore, we need to download this keystore, import Google's Root CA into it and redeploy it into the HCI tenant.

keystore.png

 

The procedure to manage the keystore is comprehensively explained in the following blog by Paul Todd.

HCI First Steps Part 8 - Working with Certificates

 

In short, we will use Keystore Explorer to add the certificate to the downloaded system.jks keystore. Use Keystore Explorer's "Examine SSL" feature to examine Google's URL and view the certificate chain. We only really need to import the Root CA into the keystore.

import.png

 

Once it is imported, save the keystore and redeploy it into the tenant using the "Deploy Artifacts" from the Node Explorer.
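As an alternative to a GUI tool, the downloaded keystore can also be inspected locally with the standard Java KeyStore API; a minimal sketch, where the file path and password are placeholders supplied as arguments:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class ListKeystore {
    public static void main(String[] args) throws Exception {
        // args[0] = path to the downloaded system.jks, args[1] = tenant keystore password
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(args[0])) {
            ks.load(in, args[1].toCharArray());
        }
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " (certificate entry: " + ks.isCertificateEntry(alias) + ")");
        }
    }
}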

 

Now we are ready to retest the SSL connection. Repeat the above steps for "Test Outbound Connection".

retest.png

 

As we can see, the SSL connection is successful now.

 

 

B) SMTP Connection

We can also test SMTP connections to mail servers.

 

Using the same steps as above, choose SMTP Connection instead and enter the SMTP server details. In the example below, it will test the connection to Gmail.

smtp.png

 

After executing Run, we can see the results of the connection test.

smtp_test.png

In our example above, the test is without authentication details. We can further test with authentication credentials. For the Gmail example, the user credentials can be saved and deployed as a User Credentials artifact; refer to the following blog.

Building your first iFlow - Part 4: Configuring your credentials

 

Below are the results of retesting with credentials.

smtp_test2.png

 

 

C) SSH Connection

Although it is possible to test SSH connection to an SFTP server, unfortunately I was unable to get it to work successfully.

 

As I do not have access to an SFTP server, I tried using a public one at Wing FTP Server Online Demo. In order to establish the SSH connection, the host key needs to be maintained in the Known Hosts file. I logged on to the demo SFTP server from a Unix system, accepted the host key and copied the value from ~/.ssh/known_hosts file.

 

I then deployed the Known Hosts artifact into the tenant.

known.png

artifacts.png

 

However, even after deployment, testing the SSH connection fails with the error below.

ssh_error.png

A check in the Tail Log indicates that the keystore is not found even though it is already deployed.

2015 11 26 04:11:13#+00#ERROR#com.sap.cloud.crypto.keystore.service.KeyStoreValueReader##anonymous#EventAdmin Async Event Dispatcher Thread#na#avrhcit#t0311iflmap#web##Keystore with name: 'known.hosts', for tenant: '562d7668-fc93-440e-8785-927921a90522' is not found neither in the cloud (domain db) nor in the local (file) storagecom.sap.cloud.crypto.keystore.api.KeyStoreNotFoundException: Keystore with name: 'known.hosts', for tenant: '562d7668-fc93-440e-8785-927921a90522' is not found neither in the cloud (domain db) nor in the local (file) storage

 

 

Conclusion

As you can see, with the use of this handy feature, connection issues in HCI can be identified and resolved even before the integration flows are developed and tested.

AZURE AD/SHAREPOINT ONLINE INTEGRATION WITH ECC - COMMON OPERATIONS


Hello Everyone,

In this blog, I am going to show you some common SharePoint operations that can be done using the REST API provided by Microsoft.


What is Azure?

“Microsoft Azure is a cloud computing platform and infrastructure, created by Microsoft, for building, deploying and managing applications and services through a global network of Microsoft-managed and Microsoft partner hosted data centers.”

 

What is Azure Active Directory?

“Azure Active Directory (Azure AD) is Microsoft’s multi-tenant cloud based directory and identity management service.”

 

What is SharePoint Online?

Simply put, “SharePoint in the cloud”. SharePoint is often used to store, track, and manage electronic documents and assets. For example, I used it to deliver some documents from ECC to the suppliers. The suppliers would have a separate work area (SPO Site) and read access for the documents shared with them.

Let me explain the common operations involved in AAD and SPO integration with SAP PI. I have done the integration using the Advantco REST adapter. You will need a bit of XML-to-JSON conversion knowledge for the REST receiver adapter.

 

COMMON SHAREPOINT ONLINE OPERATIONS:

  1. Create SPO Site
  2. Update SPO Site
  3. Give permissions for AAD user to SPO site
  4. Create folder in SPO
  5. Upload file into SPO site

 

COMMON AZURE AD OPERATIONS: (Kindly wait for my next blog)

  1. Create AD user
  2. Create AD group
  3. Add AD user to  AD group
  4. Remove AD user from  AD group
  5. Update AD User / Block AD User

Let us look at each one in detail.

 

SPO OPERATIONS:

1. Create SPO Site: If you need to create a new SharePoint site, follow the steps below;

 

  • Resource URL:

https://mysapdev.sharepoint.com/sites/suppliers/_api/web/webinfos/add

 

  • HTTP Headers:

Header Name      Header Value
accept           application/json; odata=verbose
content-type     application/json; odata=verbose
Cookie           Cookie value has to be passed
X-RequestDigest  Digest value has to be passed

 

  • HTTP method : POST
  • SharePoint template (not mandatory; to be created by a SharePoint expert)
  • Know the supported language LCID, e.g. 1033 for English (United States)

 

Below is the required JSON request for creating a site.

{
  'parameters': {
    '__metadata': { 'type': 'SP.WebInfoCreationInformation' },
    'Url': 'Supplier_ABC',
    'Title': 'ABC Supplier',
    'Description': 'REST created web',
    'Language': 1033,
    'WebTemplate': '{4AD5BC8A-5C4D-46E9-F390-1876AFF81CF5}',
    'UseUniquePermissions': false
  }
}

 

Note: I used the GUID of the web template as the name doesn’t work during execution.

Our actual request will be XML, but we have an option to convert the request format from XML to JSON in the REST receiver adapter.
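For example, the PI payload before conversion might be structured like this. This is purely hypothetical; the exact element naming depends on the adapter's XML-to-JSON conversion settings:

<parameters>
  <__metadata>
    <type>SP.WebInfoCreationInformation</type>
  </__metadata>
  <Url>Supplier_ABC</Url>
  <Title>ABC Supplier</Title>
  <Description>REST created web</Description>
  <Language>1033</Language>
  <WebTemplate>{4AD5BC8A-5C4D-46E9-F390-1876AFF81CF5}</WebTemplate>
  <UseUniquePermissions>false</UseUniquePermissions>
</parameters>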

 

Likewise, we have three options for receiving the response:

  1. HTTP Response body: the actual response from the REST service.
  2. Template: our own template for the response XML.
  3. Template for empty HTTP Response body only: our template in case of an empty response from the REST service.

 

2. Update SPO Site: If you need to update the SPO site that you created before, follow the steps below;

 

  • Resource URL:

https://mysapdev.sharepoint.com/sites/suppliers/%SupplierSiteName%/_api/web

 

The %SupplierSiteName% is the value to be replaced by the site name that we want to update.

 

  • HTTP Headers:

Header Name      Header Value
X-HTTP-Method    MERGE
content-type     application/json; odata=verbose
Cookie           Cookie value has to be passed
X-RequestDigest  Digest value has to be passed

 

  • HTTP method : POST
  • " X-HTTP-Method" is used to override the POST method.


Below is the required JSON request for updating a site’s description.

 

        {'__metadata':{'type': 'SP.Web'}, 'Description': 'Updated information'}

 

3.  Give permissions for AAD user to SPO site: If you need to give read/write permission for an AAD user to SPO site/SPO folders, follow the steps below;

 

We have used a site template which creates different site groups (for Reader, Owner, etc.). If a user needs "Read" permission, he has to be added to the Readers group. Giving permissions is a two-step process:

 

     A.      Get the site group ID:

               - Resource URL:

https://mysapdev.sharepoint.com/sites/suppliers/%SupplierSiteName%/_api/web/sitegroups/getbyname('%GroupName%')

 

The %SupplierSiteName% is the value to be replaced by the site name.

The %GroupName% is the value to be replaced by the group name.

               - HTTP Headers:

content-type     application/json; odata=verbose
Cookie           Cookie value has to be passed
X-RequestDigest  Digest value has to be passed

 

               - HTTP method : GET

 

     B.      Add user to the above group:

               - Resource URL:

https://mysapdev.sharepoint.com/sites/suppliers/%SupplierSiteName%/_api/web/sitegroups(%GroupID%)/users

 

The %SupplierSiteName% is the value to be replaced by the site name.

The %GroupID% is the value to be replaced by the GroupID which we got in the first step.

               - HTTP Headers:

content-type     application/json; odata=verbose
Cookie           Cookie value has to be passed
X-RequestDigest  Digest value has to be passed

 

               - HTTP method : POST

 

Below is the required JSON request for adding the user to the group.

 

{"__metadata":{"type":"SP.User"},"LoginName":"i:0#.f|membership|Jeorge.Kaps@mysaptest.onmicrosoft.com"}

 

4. Create folder in SPO: If you need to create folders inside SPO site, follow the steps below;

 

     - Resource URL:

https://mysaptest.sharepoint.com/_api/web/folders

 

  • HTTP Headers:

    content-type     | application/json; odata=verbose
    Cookie           | cookie value has to be passed
    X-RequestDigest  | digest value has to be passed

 

  • HTTP method : POST

Below is the required JSON request for creating the folder.

 

{'__metadata':{ 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/shared documents/Test1' }

 

 

   

5. Upload file into SPO site: If you need to upload files inside a folder, follow the steps below:

            

                - Resource URL:

https://mysaptest.sharepoint.com/_api/web/GetFolderByServerRelativeUrl('/Shared%20Documents/Test1')/Files/add(url='LargeFile_500Mb.zip',overwrite=true)

 

               - HTTP Headers:

                 content-type     | application/json; odata=verbose
                 Cookie           | cookie value has to be passed
                 X-RequestDigest  | digest value has to be passed

 

               - HTTP method : POST

 

NOTE: In my case, the file to be uploaded is sent from ECC as a SOAP attachment. The attachment is swapped into the main payload in PI using the standard module (PayloadSwapBean). The file name and folder name can be set dynamically, and any type of file can be uploaded.
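For completeness, here is a sketch of the upload call in the same style. The body is the raw file content rather than JSON; for genuinely large files, stream the bytes in chunks (e.g. with setChunkedStreamingMode) instead of reading everything into memory as done here. File name and credentials are placeholders.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SpoUpload {
        public static void main(String[] args) throws Exception {
            byte[] payload = Files.readAllBytes(Paths.get("LargeFile_500Mb.zip"));
            HttpURLConnection conn = (HttpURLConnection) new URL(
                "https://mysaptest.sharepoint.com/_api/web/GetFolderByServerRelativeUrl("
                + "'/Shared%20Documents/Test1')/Files/add(url='LargeFile_500Mb.zip',overwrite=true)")
                .openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Cookie", "<cookie value>");
            conn.setRequestProperty("X-RequestDigest", "<digest value>");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(payload); // raw file bytes, no JSON wrapper
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }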


 


Kindly wait for Part 2 of this blog for the common AAD operations. Happy learning :-)

 

Regards,

Stenish Peter. S

HCI: Transferring Integration Package Content from WebUI to Eclipse and back

$
0
0

Introduction

HCI provides the following two different development environments for developing integration content.

  • WebUI - web-based application for online development mode
  • Integration Designer - Eclipse based tool capable of offline local development mode

 

WebUI is suited for fast consumption and deployment of pre-packaged integration content with minimal modifications, whilst the Eclipse IDE is suited for full-fledged development of custom integration content. This allows integration developers to choose the appropriate development environment based on their particular needs or circumstances.

 

However, as of the current tenant version, there is a disparity of features between the two environments. Here are some examples of features available in one environment but not the other:-

  • XSL mappings cannot be created or edited in WebUI
  • Certain Message Routing elements like Multicast or Aggregator cannot be created in WebUI
  • Message Mappings cannot be tested in Eclipse (like how it is tested in NWDS for PI)

 

As such, the developer may need to switch between the two environments. However, there is no quick and easy way to achieve this as of the current tenant version.

 

In this blog, I will share the procedure for exporting integration content (pre-packaged or custom) from the WebUI environment to Eclipse for further editing, and then reimporting it back into WebUI.

 

 

Component Details

As HCI is a cloud solution with automatic rolling updates, future updates may incorporate changes to allow a more seamless transition between the two environments. Therefore these steps are only valid for the following versions and may be invalidated in the future.

Below are component versions of the tenant and Eclipse plugins.

HCI Tenant Version: 2.8.5

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

Step by Step Guide

We will use an example of transferring a pre-packaged integration content from WebUI to Eclipse and back.

 

Step 1 - Export Integration Package from WebUI

Log in to WebUI and go to Discover to select an existing pre-packaged content. Click the Copy to Workspace button.

copy.png

 

Switch to the Design tab. Select the integration package and click Export.

export.png

 

The content will be downloaded as a ZIP file.

download.png

 

 

Step 2 - Extract project from ZIP file

Open the ZIP file in a ZIP file manager (for example, 7-Zip, shown below). The ZIP file will contain several entries. One of the entries, ending with "_content", is listed as a file but is actually a folder.

zip1.png

 

Double click on that file and it will display a list of files and folders similar to an Eclipse project.

zip2.png

 

Extract the contents of the project folder into a local folder on the PC.

extract.png
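If you prefer to script this step, the following Java sketch shows one way to do it with java.util.zip. It assumes (as the 7-Zip behaviour above suggests) that the "_content" entry is itself an embedded ZIP archive, so the inner project is read through a nested ZipInputStream; file and folder names are assumptions.

    import java.io.FileInputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class ExtractHciPackage {
        public static void main(String[] args) throws Exception {
            Path target = Paths.get("extracted-project");
            try (ZipInputStream outer =
                     new ZipInputStream(new FileInputStream("IntegrationPackage.zip"))) {
                ZipEntry e;
                while ((e = outer.getNextEntry()) != null) {
                    if (!e.getName().endsWith("_content")) continue;
                    // The "_content" entry is assumed to be an archive holding
                    // the project, so read it as a nested ZIP stream.
                    ZipInputStream inner = new ZipInputStream(outer);
                    ZipEntry p;
                    while ((p = inner.getNextEntry()) != null) {
                        Path out = target.resolve(p.getName());
                        if (p.isDirectory()) { Files.createDirectories(out); continue; }
                        Files.createDirectories(out.getParent());
                        Files.copy(inner, out); // extract one project file
                    }
                    break; // project found and extracted
                }
            }
        }
    }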

 

 

Step 3 - Import project into Eclipse

Start up Eclipse (which already has the HCI plugins installed). Select File > Import and choose Existing Projects into Workspace.

proj.png

 

Browse to the folder with the extracted contents and select the available project(s).

eclipse.png

 

Complete the import step and the project will be shown in the Project Explorer.

eclipse2.png

 

 

Step 4 - Determine content that needs to be changed

In this example, we will change the following three items. The screenshots below display the content in WebUI prior to any changes. Some of the changes could actually be performed in WebUI, but this example is meant to showcase the before/after details of changes done in Eclipse.

 

a) Content Modifier step with Message Header values

property.png

 

b) XSL mapping step which cannot be viewed/edited in WebUI

xsl.png

 

c) Groovy script

script.png

 

 

Step 5 - Modify content in Eclipse

Now we can modify the above three items in Eclipse.

 

a) Modify the value of one of the message headers

prop2.png

 

b) Create a new XSL mapping and update the mapping step to use this new XSL mapping

xsl2.png

 

c) Update the Groovy script with some comments

script2.png

 

 

Step 6 - Update ZIP file with changed content

We will need to update the ZIP file downloaded in Step 1 with the contents that have changed.

 

Open the ZIP file in the ZIP manager, copy the files that have changed from the Eclipse workspace to the ZIP file. In the example below, the new XSL mapping file is copied into the src\main\resources\mapping folder of the ZIP file. This is repeated for the iFlow file and the Groovy script file into their corresponding folders.

copy.png

 

Once all the relevant files are copied, return to the main folder of the ZIP file. There will be a prompt that the "_content" file was modified; select OK to update it in the archive.

update.png

 

The file will be displayed with the latest details.

zip3.png

 

 

Step 7 - Import ZIP file back into WebUI

Switch back to WebUI. Before importing back the ZIP file, it is advisable to delete the existing package. I've tried importing to overwrite it but the process runs indefinitely.

delete.png

 

Select Import and browse to the ZIP file.

import2.png

 

Once the import is completed, it will be displayed in the Design Overview page.

imported.png

 

 

Step 8 - View and verify the changes in WebUI

Select the Integration Flow and view it. Both changes (a) and (b) above can be viewed.

content_after.png

 

Select the Script element and view the underlying logic. The added comments are displayed.

script_after.png

 

 

Additional Steps for Custom Integration Package

Custom integration packages cannot be exported directly at first.

 

The content will be shown with a Draft version and there will be the following error message when trying to export it from WebUI.

draft.png

 

In order to export it, select the draft Integration Flow to view the diagram and change to Edit mode. Then select Save as version. After saving a version, it can be exported into a ZIP file.

save_version.png

 

 

Conclusion

With this approach, we are now able to export content that was designed in WebUI into Eclipse for further enhancement and reimport it back to WebUI.

ASMA parameters configuration using AttribMapper module

$
0
0

Recently, I came across a pass-through File-to-File scenario where the target filename had to be created as <SenderFileName>_<DateTime>.<extension>

<SenderFileName> - File name picked by sender channel

<DateTime> - Date time in particular format for e.g. "ddMMyyyy"

<extension> - Constant file extension for e.g. .pgp

 

If we use Graphical/Java mapping this can be easily achieved by DynamicConfiguration class.

However, the required scenario was pass-through; we didn't want to end up using a mapping just to change the filename.

 

There are workarounds available using DynamicConfigurationBean, wherein attributes of the message header can be set as message header parameters and then accessed via variable substitution.

The following header attributes can be set by the module: sender_party, sender_service, receiver_party, receiver_service, interface_name, interface_namespace, message_id, message_id_hex. These can be read during variable substitution.

However, this approach is not advisable, as replacing default message headers may cause issues.

 

Then we came across the AttribMapper adapter module.

It comes as part of the Seeburger suite, which is available in our landscape, and it solved our case.

The scenario is achieved by using it in the File receiver channel:


AttribMapperReceiverChannel.PNG


To set an ASMA parameter, the module parameter name has to be given as:

<namespace>/<param>

e.g. http://sap.com/xi/XI/System/File/FileName


An ASMA parameter value can be read as:

@<namespace>/<param>

e.g. @http://sap.com/xi/XI/System/File/FileName



Though AttribMapper comes as part of a third-party add-on, it is perfectly capable of reading and writing all ASMA parameters.

This prompted me to explore AttribMapper further, as not much content is available on SCN.

Below are some of the useful functions available with this module:


getDateTime("ddMMyyyy")

Output: DateTime in Desired format. Provide required format within ""

 

getCurrentTimeInMillis()

Output: java.lang.System.currentTimeMillis()

 

getMessageId()

Output: Message ID for example 9fbe1ff1-9a0d-11d9-8665-cbf10a126331

 

getRefToMessageId()

Output: Reference ID for the synchronous response

 

getConversationId()

Output: Reference ID for processing using queues

 

getCorrelationId()

Output: Reference ID for the asynchronous response

 

getAction()

Output: Interface Action

 

getDeliverySemantics()

Output: EO/EOIO/BE

 

getFromParty()

Output: Sender Party value. If not set then "null"

 

getFromService()

Output: Sender Service. If not set then "null"

 

getMessageDirection()

Output: INBOUND/OUTBOUND

 

getToParty()

Output: Receiver Party value. If not set then "null"

 

getToService()

Output: Receiver Service. If not set then "null"

 

Outputs of these functions can be concatenated with the "&" operator and assigned to the desired ASMA parameters.

For example, a configuration like the following is possible:


AttribMapperReceiverChannelExample.PNG
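In plain text, a configuration like the one in the screenshot could look as follows. This is only a sketch for the filename requirement at the start of this blog; the exact module name, module key, and the quoting of literals should be checked against the Seeburger documentation:

    Module parameter name : http://sap.com/xi/XI/System/File/FileName
    Module parameter value: @http://sap.com/xi/XI/System/File/FileName & "_" & getDateTime("ddMMyyyy") & ".pgp"

With a single module entry like this, the receiver channel writes the file as <SenderFileName>_<DateTime>.pgp, with no mapping involved.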


Please share your thoughts on this.



Reference: AttribMapper - Assign AS2 Filename Dynamically by Prateek Srivastava

Using XPI Inspector to troubleshoot HTTP SSL connections

$
0
0

Introduction

HTTPS (HTTP over SSL) connections are sometimes a bit tricky to establish. There have been quite a number of threads opened on SCN recently regarding SSL related errors. Below are some of the common errors that could occur when trying to establish an outbound SSL connection from PI.

 

Errors
SOAP: Call failed: iaik.security.ssl.SSLCertificateException: Peer certificate rejected by ChainVerifier
SOAP: Call failed: iaik.security.ssl.SSLException: Peer sent alert: Alert Fatal: handshake failure
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

 

SAP's troubleshooting tool, XPI Inspector, is one of the most useful tools for troubleshooting SSL connections. In this blog I will share an example of an SSL related error and how to utilize XPI Inspector to troubleshoot and resolve the issue.

 

 

Prerequisite

XPI Inspector does not come pre-installed in the PI system. To install and use it, please refer to the following SAP Note:-

1514898 - XPI Inspector for troubleshooting XI

 

For more details about XPI Inspector, refer to the following blog:-

Michal's PI tips: XPI inspector - help OSS and yourself

 

 

Step by Step Guide

In this example, we will use a SOAP receiver channel with an HTTPS target URL. For simplicity's sake, we will use SCN as the target system (even though it is not a SOAP web server!).

 

Step 1 - Set up receiver channel

Populate SCN's URL into the SOAP receiver channel's setting. Ensure that the iFlow/ICO is configured correctly and activated.

soap.png

 

 

Step 2 - Launch XPI Inspector and start test

XPI Inspector can be accessed from the following URL of the PI system

http://host:port/xpi_inspector

 

Select Example 11 for Authentication, SSL & PP. Populate the SSL Server URL and any proxy server if necessary.

 

Once everything is ready for testing, click the Start button and then trigger the scenario for the iFlow/ICO in step 1. Once the scenario is completed, click Stop.

xpi_test.png

 

 

Step 3 - Analyse the results

After XPI Inspector has gathered all the logs, it will present a results page. Under section Performed Checks > Verify Remote SSL Server Certificate, the SSL debug logs are shown.

 

In the example below, it shows that the chain verification failed because no trusted certificate was found.

notrust.png

 

In the Performed Checks > Is Remote SSL Server Certificate Trusted section, more details of the certificate and the chain are shown.

notrust2.png

 

The analysis shows that none of the certificates has a CA (Certificate Authority) that is trusted. It means that there is no corresponding certificate for the CA in the TrustedCAs view in NWA's keystore.

 

 

Step 4 - Retrieve the server's Root CA certificate

In order to establish the SSL trust with the server, we need to retrieve the Root CA's certificate and import it into NWA's keystore.

 

First of all, we need to retrieve the certificate. This can be done by entering the HTTPS URL into a web browser and viewing the certificate details. The example shown below is using Google Chrome where we can view the certificate information by clicking on the padlock icon on the browser.

 

certinfo.png

 

After the certificate is displayed, switch to the Certification Path tab, select the top most entry of the path as it represents the Root CA. Click View Certificate to view the CA's certificate.

certpath.png

 

The Root CA certificate is displayed in another window and normally this is a self-signed certificate by the CA itself.

rootca.png

 

Switch to the Details tab and select Copy to File to save a version of the certificate on the local PC. It can be saved in the DER encoded binary X.509 format.

copy.png

 

Alternatively, the certificates are also accessible via the hyperlinks on the XPI Inspector results page.
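Yet another option is to retrieve the chain programmatically. The short Java sketch below (host and port are examples) uses a trust-all manager purely so the handshake gets far enough to read the chain even when the CA is not yet trusted; never use such a trust manager for productive traffic. Note that the last certificate presented may be an intermediate CA, in which case the Root CA certificate still has to be obtained from the issuer.

    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.X509TrustManager;

    public class ShowServerChain {
        public static void main(String[] args) throws Exception {
            // Trust-all manager: for certificate inspection only.
            TrustManager[] trustAll = { new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] c, String a) {}
                public void checkServerTrusted(X509Certificate[] c, String a) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }};
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, trustAll, null);
            try (SSLSocket socket = (SSLSocket) ctx.getSocketFactory()
                    .createSocket("scn.sap.com", 443)) {
                socket.startHandshake();
                // Print subject and issuer of every certificate the server presents.
                for (Certificate cert : socket.getSession().getPeerCertificates()) {
                    X509Certificate x509 = (X509Certificate) cert;
                    System.out.println("Subject: " + x509.getSubjectX500Principal());
                    System.out.println("Issuer : " + x509.getIssuerX500Principal());
                }
            }
        }
    }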

 

 

Step 5 - Import Root CA certificate into NWA's keystore

Now that we have the Root CA, we can import it into the keystore in NWA. The keystore can be accessed in NWA > Configuration > Security > Certificates and Keys.

 

Specifically, we want to import the Root CA certificate into the TrustedCAs view of the keystore. Refer to the following article on importing the certificate. We will only be importing the X.509 certificate.

Adding Certificates to PI

 

After completing the import, the Root CA certificate can be viewed as an entry in NWA.

trustedca.png

 

 

Step 6 - Repeat XPI Inspector test

Now that the Root CA certificate is in place, the XPI Inspector test can be repeated.

 

In the results page, we can see now that the trusted certificate is found and the chain verification is successful. Therefore the SSL handshake completes successfully.

trust1.png

 

trust2.png

 

 

Additional Example

Here is an additional example of an XPI Inspector debug log for another SSL issue. In this case, the SSL handshake failed, but not because the chain verification failed. After the chain verification, the server requested the client to present the client's certificate for authentication. However, the client sent an empty one, causing the handshake to fail.

 

For this scenario, usage of a client certificate for authentication is mandatory, and therefore the resolution is to configure a valid client certificate in the receiver channel.

ssl.png

 

 

Conclusion

As shown, XPI Inspector is very useful for troubleshooting SSL related issues. As a matter of fact, I personally would use it at the beginning of any development that involves an HTTPS connection, before even designing or developing.


So you think you know PI/PO?

$
0
0

Over the last couple of years we have seen dramatic changes in the SAP middleware platform, going from dual stack to single stack to cloud. These changes can cause any SAP-er to sit down and scratch their head. Do I use a Receiver Determination or an Integrated Configuration, or do I iFlow? How do I convert, or do I create? Where, why, and how do I use an iFlow? And by the way, you cannot really take 15 days off for a class in Maui.

 

Join me in a fast-paced 3-day workshop called WNAAEX. This class will teach you how to create an application-to-application scenario from the ground up using the old PI/AEX Swing client, and how to configure, navigate, and create the integration objects using NWDS.

 

This is a 3-day class that will help you bridge the gap from Process Integration to Process Orchestration and HANA Cloud Integration. No deep Java coding is needed, but you do have to have an understanding of our beloved PI/PO. These fundamentals will prepare you for the changes coming in the future.

 

The class will run virtually from Dec. 28th to 30th. For more information please visit:

https://training.sap.com/shop/course/wnaaex-sap-java-integration-objects-classroom-015-us-en/

 

or just drop me an email at j.valladares@sap.com

Step by Step Guide to Setup HCI-DS Agent On Premise

$
0
0

In this blog series, I have captured the screens and the process of setting up HCI-DS. The blog is targeted at first-time users who are new to HCI-DS. At the bottom, I have added links to HCI-DS product details. I started with HCI Data Services Agent 1.0.9 (build 1962), with which I had many issues setting up the datastore connections. With the latest release, Data Services Agent 1.0.10 (build 2102), the connections work much more smoothly and easily.

 

We shall understand the following details in this blog:

  1. Step by Step Guide to Setup HCI-DS Agent On Premise
  2. Understanding HCI Web UI
  3. Extract data from SuccessFactors to FileFormat (.csv) using HCIds

Prerequisites:

  1. Admin access to HCI Web UI
  2. Admin access to HCI DS Agent
  3. Access to SuccessFactors if you want to configure it

 

  1. Setup HCI-DS Agent - You may download the agent in case you don't have it. Please refer to the PAM for server OS requirements.

 

Once you download it, all you need to do is just right click and select “Run as Administrator”. You should be able to see the following screen:

Picture1.png

Ensure you do not close this window. You will be able to see the screen as below. 

Picture2.png

Select the installation path. In my case I have installed it on the D drive and used the standard ports. The username and password should be those of the system on which you are installing HCI-DS. Click Install.

Picture3.png

It will start installing the following components:

- Configuring environment

- Generating certificates

- Configuring internal database

- Create an event to purge SQLA log files

- Creating repositories

- Configuring job server

- Upgrading adapters

- Starting agent service

 

[Installation Log]

D:\HCI Data Services Agent\log\Install_2015.12.09.05.49.36.log

 

[Summary]

Success: Installing files

Success: Configuring environment

Success: Generating certificates

Success: Configuring internal database

Success: Create an event to purge SQLA log files

Success: Creating repositories

Success: Configuring job server

Success: Upgrading adapters

Success: Starting agent service

 

Completed successfully.

Picture4.png

In case of any issues you can look at the installation log file listed above. Now click Finish; HCI-DS then gives you an option to configure the SAP DS Agent. Click Yes.

Picture5.png

At this stage we need to get the agent configuration file from the HCI Web UI. Go to your HCI Web UI URL and log in with your details. Navigate to the Agents tab as below.

Picture6.png

 

Click on New Agent and enter the agent details: a name, and optionally a description. Based on your requirements you may either create a new group or add the agent to an existing group. Click Next.

Picture7.png

In this screen you will see the option to download the configuration file; click on it to download.

Picture8.png

Once the file is downloaded, click on close.

Picture9.png

Now copy the downloaded file, go to the system where you are installing your HCI DS Agent, and paste it on your desktop. Then open the HCI DS Agent configuration screen; you need to enter the HCI Web UI user name (email ID) and password, and select the agent configuration file path.

Picture10.png

Picture11.png

 

Picture12.png

In case you have a proxy, please add the proxy server details and then click Upload. In my case I do not have a proxy system. You will see a pop-up with the following message; click Yes.

 

 

Picture13.png

The Agent should upload and restart successfully.

 

Picture14.png

Picture15.png

 

Once the agent is restarted, go back to your HCI DS Web UI screen; you may still see the red symbol, which indicates that the agent has not connected yet. Click the Refresh button next to "New Agent".

Picture17.png


You should now see the green status with all the version and connection details, as below.

Picture18.png

You have now successfully installed the HCI DS Agent and connected it to the HCI Web UI. In the next blog we shall discuss the HCI Web UI and its operations.

To know more about the HCI Product, you may look at the blog

SAP HANA Cloud Integration for Data Services – an ASUG Webcast

SAP HCI Product RoadMap

 

Regards,

Nagesh

Extract data from SuccessFactors to FileFormat (.csv) using HCIds

$
0
0

In my previous blog (1) and blog (2) we saw how to install the DS Agent on premises and gained a basic understanding of the HCI Web UI. In this blog we shall see the process of creating a datastore for different systems.


I have taken two systems: (1) SuccessFactors as the source and (2) a flat file as the target. I shall explain how to set up a project, connect the systems, and transfer data.


SuccessFactors prerequisites:

  1. Admin access to SuccessFactors
  2. Enable the OData API in SuccessFactors

 

Creating a Datastore for SuccessFactors:

 

First we need to configure the HCI DS Agent for the SuccessFactors adapter. Open ConfigureAgent.bat, go to Configure Adapter, and select the following as below:

1.png

I have removed all the details in Additional Java Launcher Options; this is used for proxy setup, and in my case I do not have a proxy system. Save the settings and exit. The HCI DS Agent should restart successfully.

 

We have now successfully set up the HCI DS Agent to communicate with SuccessFactors. Next we need to set up the SuccessFactors account details. Log in to the HCI Web UI, click on Datastores, and click the + icon to add a new connection.

2.png

Give your connection a name and select the other details as shown in the figure above. The endpoint URL should follow this syntax:

https://xxxxxx.successfactors.com/sfapi/v1/soap?wsdl

3.png

where xxxxxx is your company system identifier. Enter the user name in this format: username (e.g. admin), and enter your password. Leave the remaining fields at their default values.

 

Click on Save and click on Test next to the + icon to test your connections.

4.png

You should see a success confirmation message if everything is good. This message indicates that we are now ready to read data from SuccessFactors. Now click on the Tables tab and then click the Import Objects button. This will populate all the objects in the form of tables, as shown below. Select the objects that you need for mapping and click Import.

7.png

I am selecting 1 object as shown for the demo. Once imported, you should be able to see the table name and its properties as shown below:

12.png

Now you are ready to consume the data from SuccessFactors.

 

Creating a Datastore for Filesystem:

 

Since we need two systems to transfer the data from source to destination, I have picked the simplest option, the file system, as it is easy for a demo. Go to your HCI Agent on-premises system and create a folder. Open the Data Services Agent configuration using the ConfigureAgent.bat file, click on "Configure Directories", and add the folder which you just created, as shown below. In my case I have created a folder named FlatFiles.

 

8.png

Now click the Exit button. The agent will prompt you for a restart; click Yes. Ensure the agent restarts successfully.


Now that you have created a file path in the agent, you can create a datastore in the HCI Web UI. Go to Datastores, create a new datastore, and set the details as shown below:

11.png

Click Save, then Test your connection once you have entered the details. You may also add an SFTP connection in case you have a different system. Since flat files do not have any format or table structure, we need to identify the source table and create the table structure with matching column names and data types. In my case I have picked a table called User from SuccessFactors.

 

Now let's create the fields/columns for the table mapping for SuccessFactors. Select your FileSystem datastore and then click on File Formats as below. Click Create File Format and select Create from Scratch. Enter the table name, the delimiter, and the other details as shown below, and click OK.

15.png

To add a new column, click the Add button in the Columns tab. Enter the name, data type, and length of the column and click Submit.

16.png

Repeat the above step for the other fields that you need. Now that we have created the two datastores, SuccessFactors and FileSystem, we are ready to create a project and map the data for transfer.

 

Creating a New Project.

 

Go to the Projects tab and click New Project.

Enter the project name and an optional description, click Create, and select Save and Create Task. Not to worry if you selected the other option: go back to your Projects screen, select the project that you created, and then click Create Task.

18.png

Enter a name for the task, untick Use Template, and click Next.

19.png

In this screen we have to select the source. Considering that we have no data in the FileSystem, select SFAD and click Next.

20.png

Select the target as FileSystem and click on Save and Define Data Flow as shown below. You may also test the connection while doing this.

21.png

22.png

You should be able to see the screen below. Click on Add Target Object. You will be able to see the Table Name which was created in the DataStores. Select it and click on Create Data Flow.

24.png

Enter the Data Flow details as shown below and click Ok:

25.png

You should be able to see the screen below.

26.png

Target_Query and Larning Activity are the details of the target. We need to map them to a source. Now click on Source Table, drag it to the middle of the screen, and select the table we imported from SuccessFactors.

27.png

You should see the User table on the screen; just connect the pointers from User to Target_Query as shown below:

Now double-click on Target_Query; you should see the input column details and output column details. You can drag and drop each of the line items, or if the names are the same you can just click "Automap by name". Once the fields are mapped you should see the blue link just before the column name, as shown below:

29.png

 

Close the window once the mapping is done. Click Validate to check for errors in your mapping; you should see a validation successful message. We are now ready to run the task we created. Select the task from your project window and click Run Now. In case you need the debug information, select it and click OK.

32.png

You will see the status message; the blue arrow indicates the task is running. Once it is successful, you should see a green square, which indicates the job succeeded. Click Refresh to see the status.

37.png

Please ignore the project name; I had to recreate the project due to an internal technical issue.

 

If we go back to our Folder path, we see the data transferred.

39.png

In case you find a Red Symbol, select the Task and click on View History to see the error report.

 

Hope this document helps you set up a connection and run the tasks. Looking forward to your feedback and queries.

 

Regards,

Nagesh

Understanding HCI Data Store Web UI

$
0
0

In the previous blog, we saw how to set up an HCI DS Agent on premises and connect it to the HCI Web UI. In this blog we shall understand the different operations that we can perform using the HCI Web UI.

 

We shall look into different tabs like:

  1. Getting Started
  2. Dashboard
  3. Projects
  4. DataStores
  5. Agents
  6. Administration
  7. Settings

 

Lets get started.

 

  1. Getting Started:

If you are working on HCI-DS or need some instant help with the process, this is the right place to find a few quick answers.

 

Process At A Glance explains the admin process: how to download the agent, set it up, create a datastore, and import data.

 

Integration development workflow describes the procedure one can follow to schedule the jobs as required.

 

You can click on each of the blocks to get more help on it. There is a five-video series on YouTube which beautifully explains HCI; do refer to it for a quick understanding.

 

1.png

 

 

2. Dashboard

 

As the name indicates, this is the tab which helps an administrator or user see successful and failed tasks. This data is only shown for the production instance, not for sandbox.

2.png

If you switch to Schedules Tab under dashboard, you will be able to see all the details. Here we can see Sandbox and Production details.

3.png

3. Projects

A project is a container which groups related tasks and processes. The relationship between a project, tasks, and data flows is illustrated in the following diagram:

4.png

This is where you create a new project, map the data sources from one system to another, create custom queries, validate them, generate debug files, view error details and history, and test them. We shall look at this in detail while creating a project.

5.png

 

4. DataStore

This is the tab which helps to establish connections to systems like ERP, SFSF, databases, file systems, etc. The buttons on this screen help to create a new connection, test it, and delete it. The System Configuration tab helps to create a default datastore, which is not mandatory. We shall look at creating connections in more detail.

6.png

5. Agents

The SAP Data Services Agent provides connectivity to on-premise sources in your system landscape. It provides metadata browsing functionality for an on-premise system so it can be mapped to the target system. The agent can be set up in multiple ways depending on your needs: it can be set up with multiple agents to transfer large volumes of data, and it can support failover concepts.

 

We already set up a connection from an on-premises system to the HCI Web UI in the previous blog.

7.png

6. Administration

This tab helps to control the user roles in HCI. You can also add a new user and assign roles to them. Once you add the user, as an admin you can trigger a notification email to the user with the account details. There are four roles in HCI, as follows:

 

a. Production Operator

b. Administrator

c. Integration Developer

d. Security Administrator

8.png

The other tabs under Administration are:

Security Log

Cryptographic Keys

Notification


7. Settings:

This displays the user profile tab, where you can configure your preferred display name and display language.

9.png

This completes the blog on understanding the HCI Web UI. In the next blog we shall see how to create Datastore connections.

 

Regards,

Nagesh

HCI: Using PGP message level security in HCI

$
0
0

Introduction

HCI comes packed with a lot of security related features. For message level security, it supports the OpenPGP standard. This is a commonly used standard in emails as well as file-based integrations.

 

In this blog, I will share how to create and deploy the OpenPGP keys, as well as usage examples for PGP encryption and decryption in HCI.

 

 

Component Details

As HCI is a cloud solution with automatic rolling updates, these steps are valid for the following versions and may change in future updates.

Below are component versions of the tenant and Eclipse plugins.

HCI Tenant Version: 2.8.5

Eclipse Plugin Versions: Adapter 2.11.1, Designer 2.11.1, Operations 2.10.0

 

 

Required PGP Software

The online HANA Cloud documentation below details the steps required to create the keys using Gpg4win.

Creating OpenPGP Keys

 

However, the steps there are more directed towards tenants managed by SAP, and some of the steps can be skipped. I found that the steps can be simplified by just following the Generating Key Pairs section of the following Wiki which is used for PGP encryption/decryption in PI.

Generating ASCII Armored PGP Key Pairs - Process Integration - SCN Wiki

 

Both methods require the installation of the Gpg4win tool. Additionally, during the installation, I recommend installing Kleopatra, which comes with Gpg4win. It is a GUI-based certificate manager and unified crypto tool which I will use for the examples in the following sections.

 

 

Creating OpenPGP Keys

Following the steps in the above Wiki, launch the command prompt to execute Gpg4win. Execute the following command:-

gpg --gen-key

 

When Gpg4win is executed for the first time, the secret and public key rings will be created in the following folder.

C:\Users\<user>\AppData\Roaming\gnupg

Enter the following details based on the instructions of the program:-

  • Key type - RSA and RSA (default)
  • Keysize - 2048
  • Validity - key does not expire
  • Real name & email address - <provide own details>
  • Passphrase - <enter passphrase to secure secret key ring>

 

genkey.png

 

Once everything has been entered, the public and secret key pair will be generated.

keys.png
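As a side note, the same key pair can also be generated non-interactively using GnuPG's unattended mode, which is handy for scripted setups. The parameter file below is a sketch with placeholder name, email, and passphrase; depending on the GnuPG version, a pinentry prompt may still appear unless loopback pinentry is allowed.

    Key-Type: RSA
    Key-Length: 2048
    Subkey-Type: RSA
    Subkey-Length: 2048
    Name-Real: HCI Demo
    Name-Email: hci.demo@example.com
    Expire-Date: 0
    Passphrase: <passphrase>
    %commit

Save this as keyparams.txt and run:

    gpg --batch --gen-key keyparams.txt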

 

 

Deploying OpenPGP Keys

In order to use the keys, the keyrings have to be deployed into the HCI tenant.

 

Right click on the tenant in Node Explorer and select Deploy Artifacts.

deploy.png

 

First, select PGP Public Keyring and select the public keyring file that was generated above.

pubkey.png

 

Repeat the above steps for PGP Secret Keyring. This will require the passphrase that was used during generation of the keyring above.

 

Once both keyrings have been deployed, they can be viewed on the Deployed Artifacts tab of the tenant.

artifacts.png

 

 

Usage Example 1 - Encrypting & Signing

For the purpose of the following examples, another OpenPGP key pair has been generated which is used to represent the external partner that HCI will integrate with. This key pair is generated under the name PGP Partner.

 

For the first example, HCI will encrypt and sign the message. The encrypted and signed message will then be transmitted to the partner where it will be verified and decrypted. Below is the required set up in HCI for this scenario.

  • Encryption with partner's public key
  • Signed with own private key

 

To simplify the example, the iFlow is designed with a static content in a Content Modifier and the output message is routed to an HTTP receiver.

iflow1.png

 

The PGP Encryptor function is configured as follows:-

  • Signatures are included in the message
  • Encryption algorithm using AES 256
  • Compression algorithm using ZLIB
  • Output in ASCII Armored format
  • Encryption using PGP Partner's public key
  • Signing algorithm using SHA 256
  • Signing using own private key

encrypt.png

sign.png

 

The Content Modifier is populated with the following static text in the message body.

content1.png

 

After the iFlow is deployed and executed, the following encrypted PGP message is sent to the HTTP receiver.

msg1.png

 

The encrypted message is extracted and saved as a text file. We will then use Kleopatra to decrypt and verify the file.

kleo1.png

 

The result from Kleopatra is shown below. The signature corresponds to the signing configuration in HCI.

kleo_result1.png

 

And the decrypted content matches the original content.

output1.png

 

 

Usage Example 2 - Decrypting & Verifying

The second example is the reverse of the first example. This time round, Kleopatra will be used to simulate encryption and signing of the message by an external partner. The encrypted message will then be decrypted and verified by HCI.

 

Below is the required setup in HCI for this scenario.

  • Verification using partner's public key

 

This example will also use a simplified iFlow setup where the encrypted content is statically configured in the iFlow, and the decrypted output message will be routed to a HTTP receiver.

iflow2.png

 

The PGP Decryptor is configured as follows:-

  • Verification of signatures are mandatory
  • Verification using PGP Partner's public key

 

Note that the decryption key does not need to be specified in the function, as it is determined implicitly from the message content.

decrypt.png

 

To complete this configuration, we first need to create an encrypted message to simulate content from the external partner.

 

The content of the following file will be encrypted and signed using Kleopatra.

input.png

kleo_encrypt.png

 

The output will be in ASCII armored format.

kleo_encrypt2.png

 

The public key representing HCI is selected during encryption by Kleopatra.

kleo_encrypt3.png

 

Subsequently, Kleopatra will sign using the partner's private key.

kleo_encrypt4.png

 

After the content has been encrypted and signed using Kleopatra, it is populated in the message body of the Content Modifier.

kleo_encrypt5.png

 

content2.png

 

Once all configuration is complete, the iFlow is deployed and executed. At the HTTP receiver, the following decrypted output is transmitted, which matches the original content.

output2.png

 

 

Conclusion

As shown, PGP message-level security can be applied relatively easily in HCI. This helps ensure that message content is secured in cloud-based integrations (especially file-based ones).
