Channel: SCN : Blog List - Process Integration (PI) & SOA Middleware

Authenticating from HANA Cloud Integration


Hello Colleagues!

 

When SAP HANA Cloud Integration sends messages to systems, HANA Cloud Integration must be authenticated by the receiving systems. In this blog, we shall see the types of authentication and how to configure them. This blog is a part of the series on Understanding Authentication & Testing Connectivity in HANA Cloud Integration. You can access all the blogs here.

 

You can authenticate from HANA Cloud Integration to other systems using certificate-based authentication, basic authentication, or OAuth 2.0 authentication. HANA Cloud Integration has a uniform way of configuring authentication - through the use of Credential Artifacts.

 

(A small note: The complete OAuth 2.0 specification is not yet supported. We currently support only the client credentials grant type.)
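To make the supported grant type concrete, here is a small Python sketch of what a client-credentials token request looks like on the wire. This is standard OAuth 2.0, not an HCI API; the token URL, client id and secret below are purely illustrative placeholders.

```python
import base64
import urllib.parse

def build_client_credentials_request(token_url, client_id, client_secret):
    """Build the pieces of an OAuth 2.0 client-credentials token request.

    Returns (url, body, headers) for a POST: the client id/secret go into
    an HTTP Basic Authorization header, the grant type into the form body.
    """
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    body = urllib.parse.urlencode({"grant_type": "client_credentials"})
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    return token_url, body, headers

# Hypothetical endpoint, for illustration only.
url, body, headers = build_client_credentials_request(
    "https://example.com/oauth/token", "my-client", "my-secret")
```

The response to such a POST carries the access token that is then presented to the receiving system.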

 

How to configure Authentication Credentials?

 

In the Eclipse tool-set, navigate to the Deployed Artifacts tab of the tenant. Click on the Deploy button and select the type of authentication from the wizard. The procedure is illustrated below:

 

Credential_Deployment.png

 

For example, let's say you want to send messages to an SAP Cloud for Customer application and want to authenticate using the credentials of the Cloud for Customer application. In the Deploy Artifacts wizard, you select the User Credentials artifact and specify a username and password. This credential artifact is then referenced in the receiver channel of the integration flow. A sample configuration is provided below:


sample_configuration.png

 

Conclusion

 

When you want to authenticate from HANA Cloud Integration, always maintain the authentication information in the credential artifacts. An inherent advantage of this mechanism of HANA Cloud Integration is that you have to maintain the credentials of a receiving system only in one artifact; you can refer to the same artifact in multiple integration flows. Further, if you have to change the authentication details - you have to do it only once and need not reconfigure the integration flows.

 

Best Regards,

Sujit


Connectivity tests to HANA Cloud Integration (PI)


Hello Colleagues!

 

In this blog, we shall see the steps to test the connectivity when communicating to SAP HANA Cloud Integration. This blog is part of the series on Understanding Authentication & Testing Connectivity in HANA Cloud Integration. You can access all the blogs here.

 

Let us assume the following scenario:

 

Connectivity_Simplified_Diagram.JPG

You have an SAP ERP system and want to test whether HANA Cloud Integration is reachable from SAP ERP. Before we start on the SAP ERP system, let us test from a Web browser.

 

Testing from a Web Browser

 

I would propose testing from a browser first, because it is the best way to understand what to expect when connecting to HANA Cloud Integration. These are the steps:

 

Prerequisite: You have an SCN user that has the role to communicate with HANA Cloud Integration.

 

Step 1: Create a simple integration flow (SOAP - to - SOAP). Deploy the integration flow, and obtain its endpoint URL.


You need not provide any WSDL for the sender SOAP endpoint. And ensure that you have selected Basic Authentication in the Sender.

dummy_iflow.png

Note the endpoint that has been created for the integration flow. We need this in our next step.


Step 2: Open a Web browser and enter the endpoint URL

 

When asked for authentication, provide the SCN username and password that have been authorized against this tenant. The role to access via basic authentication must also be granted to this user.

browser_entry.JPG.png
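If you prefer to script this test instead of using a browser, note that Basic Authentication is just an Authorization header the browser builds for you. A minimal Python sketch (the username and password are placeholders):

```python
import base64

def basic_auth_header(username, password):
    """HTTP Basic authentication: base64-encode "username:password"."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# A browser does this for you when it prompts for credentials; a script
# would attach the header itself, e.g. with urllib:
#   req = urllib.request.Request(endpoint_url, headers=basic_auth_header(user, pwd))
hdr = basic_auth_header("p1234567", "secret")
```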

Important point to note: Only HTTPS-based communication is possible with HANA Cloud Integration. This means that when I send a request from the Web browser, HANA Cloud Integration presents itself with its certificates. It is important for the client (the Web browser) to recognize these certificates. Therefore, the certificate store of the Web browser must contain the certificate chain of HANA Cloud Integration.

 

Let's take an example to understand this better. Let's say I am using the Google Chrome Web browser to connect to HANA Cloud Integration. When you enter the URL in the browser, you can navigate and check the certificate chain of the HANA Cloud Integration instance. The Web browser must contain the certificate chain of HANA Cloud Integration; otherwise, it cannot establish a trusted connection. See the screenshots below:

 

google_chrome_example.png

 

 

Step 3: Check the Message Monitoring Log

 

If the connectivity is fine, a message shall be sent to the integration flow, and it shall be visible in the message monitoring log.

Dummy_Channel_Message_Monitoring.JPG

 

Testing from an SAP ERP system

 

In principle, testing from an SAP ERP system is similar to testing from a browser.

 

It is easy to test via an HTTP destination created using transaction SM59.

 

  1. Create an HTTP destination in SM59.
  2. Enter the endpoint URL of the integration flow.
  3. Since ERP sends the data to HCI, the ERP system acts as a client. So, the ERP system must recognize the certificate chain of HANA Cloud Integration (if the CA is not already included).
  4. In the SM59 destination, you can provide the credentials to connect to HANA Cloud Integration. Try with basic authentication first, and then with certificates.
  5. Remember to configure the sender components in the integration flows accordingly. This can be done in the STRUST transaction of SAP ERP: select the SSL Client SSL Client (Standard) certificate, export it, and save the file locally as .CER. Import this into the sender component of HANA Cloud Integration.

 

A few screenshots of the procedure described are shown below:

erp_certificate_config.png

 

Conclusion


We have taken a simplified setup to explain the connectivity tests. Our experience with customer implementations shows that most landscapes look like the one below. Nevertheless, the concepts explained remain the same. You must configure the Web dispatcher to talk to HANA Cloud Integration. Further, keep a lookout for network filters and firewalls; they could block the calls to the integration instance.

typical_landscape.JPG

Best Regards,

Sujit

Connectivity tests from HANA Cloud Integration (PI)


Hello Colleagues!

 

In this blog, we shall see the steps to test the connectivity when communicating from SAP HANA Cloud Integration. This blog is part of the series on Understanding Authentication & Testing Connectivity in HANA Cloud Integration. You can access all the blogs here.

 

On the HANA Cloud Integration side, it is quite simple to configure the settings. You have to configure the integration artifact (refer to the blog here). For certificate-based communication, you have to upload the receiving system's public certificate in the HANA Cloud Integration keystore. One way to send messages from HANA Cloud Integration (without a sender component) is to configure a timer and a content modifier step in the Integration flow. You can create payload messages by manually specifying them in the body of the content modifier. The scheduler step can be configured to run once or at multiple intervals.

 

Let's say you have an SAP ERP system; then create an HTTP destination there that can receive the messages. You can check the incoming logs on the SAP ERP system. The same procedure applies to any receiving system.

 

Note: We are currently developing a feature that allows you to ping destinations from HANA Cloud Integration. Once that is available, I shall update this blog.

 

sample_ERP_System.JPG

 

Some recommendations:

 

  1. Ensure that the firewall in the customer landscape can accept messages originating from HANA Cloud Integration. The IP address ranges for the different data centres of SAP HANA Cloud Integration are maintained in the documentation. (Documentation link: https://cloudintegration.hana.ondemand.com/PI/help -> Operating and Monitoring SAP HCI -> Understanding the Basic Concepts -> Virtual System Landscapes)
  2. For outbound HTTP/HTTPS connections, always use port 443.

 

Conclusion

 

Connectivity tests from HANA Cloud Integration are easier to execute. Check the firewall settings on the on-premise systems. You can test using basic authentication first and then move to certificate-based authentication.

 

Best Regards,

Sujit

Blog 5: Content Enricher Pattern in Integration Flows


Hello Integration Community !

 

In this blog, I shall explain the content-enricher pattern of SAP HANA Cloud Integration (HCI-PI) and how you can use it in your integration project.

 

What is the Content Enricher Pattern?

 

From the definition of Enterprise Integration Patterns, a content enricher accesses an external data source in order to augment a message with missing information.

 

Let's take an example from the HR domain. In the SuccessFactors (SFSF) suite, we can obtain information on the employee entity. I get the job location code from the results. But I want to send the complete job location, not just the code, and the job location is stored in a different entity. So, I enrich my data with the job location using a content enricher step.
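As a rough sketch of the enricher idea (outside HCI, with invented field names): look up the location code in a second data source and add the full value to the message.

```python
# Sketch of the content-enricher pattern: augment a message with data
# looked up from a second source. The field names are illustrative, not
# the actual SFSF schema.
def enrich(message, lookup):
    enriched = dict(message)  # keep the original fields
    enriched["job_location"] = lookup.get(message["job_location_code"],
                                          message["job_location_code"])
    return enriched

employee = {"employee_id": "E100", "job_location_code": "LOC-07"}
locations = {"LOC-07": "Walldorf, Dietmar-Hopp-Allee 16"}  # the external source
result = enrich(employee, locations)
```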

 

Likewise, one system may provide us with an order ID, but the receiving system actually requires the customer ID associated with that order.

Content enricher is a very common integration pattern.

 

How to Use the Content Enricher?

 

In HANA Cloud Integration, content enricher is available as a Service Task.

ServiceCall.JPG

SwitchtoContentEnricher.JPG

 

We shall take the following integration flow as an example:

Integration_flow_pattern.JPG

The first call to SFSF shall return the compound employee data. This is how the data looks:

 

 

 

CompoundEmployee.JPG

 

We are interested in enriching the job information, so let us take an expanded look at it. In the expanded view, take a closer look at the location field; we want to enrich that particular field with the exact address details.

 

jobinfo.png

So, to achieve our purpose we shall use the Content Enricher step with the following configuration.

enrich_property.JPG

The lookup message depends on the entity you are querying - here, the address location entity. So, the final (enriched) output shall look like the one below.

FO_Location.JPG

 

Note: The Content Enricher also has another option - Combine. That is a very simple logic of combining the two elements: the employee information and job information query results are combined into one entity. For the same objects we saw above, the result looks like this:

Combine.JPG
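For completeness, a rough sketch of the Combine idea with plain dictionaries (the field names are invented): the two query results are simply placed side by side in one message, with no lookup key involved.

```python
# Combine: no enrichment logic, just both results in one entity.
def combine(employee_info, job_info):
    return {"employee_information": employee_info, "job_information": job_info}

combined = combine({"id": "E100", "name": "Jane"}, {"role": "Engineer"})
```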

 

Conclusion

 

In the current version, the Content Enricher step works well with the SuccessFactors system. For SOAP-based scenarios, the entire payload goes to the look-up system. If you do not want the entire data to go, you would have to employ the data store step and the content modifier step.

 

Best Regards,

Sujit

Blog 6: Splitting messages in Integration Flows


Hello Integrators!

 

In this blog, we shall look into the splitter pattern provided in SAP HANA Cloud Integration. The use of a splitter is quite clear from its name: to break a composite message into a series of individual messages, each containing data related to one item.

 

How is Splitter supported in Integration Flows?

 

In HANA Cloud Integration, a splitter is available as a Message Routing flowstep. When you configure a splitter flowstep, it can appear confusing at first. Lots of options!

 

Splitter_Options.JPG

 

Nevertheless, let us try to understand the Iterating and General splitters first.

A very cool way to understand the workings of the splitter is by configuring an integration flow like the one below (idea courtesy: HCI development team).

Integration_Flow.JPG

In the Content Modifier step, I insert the following payload in the Body. It is a group of orders:

Orders_full.JPG

Each order has details in the following format.

Order_expanded.JPG

 

Now, let us see how the splitter works. In the first splitter, after the content modifier, we have configured a General splitter with the following properties.

SP1.png

 

output1.png

 

The exclusive gateway step routes the message according to the ordernumber.

gateway.JPG

 

Let us go to the second splitter, which has been configured as an Iterating splitter with expression type Token.

config_2.png

This is how the output of the second splitter looks. Notice that the root tags of the incoming payload are not retained and, further, the items have been grouped as per the number provided.

 

output2.png

Note: The Token splitter is mainly used for non-XML payloads. I have illustrated this example so that you can understand how it works. Here, "<item>" (including the angle brackets) forms the "token".
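To see what the token splitter is doing, here is a small Python sketch that cuts a payload at every occurrence of the token and regroups the pieces (a grouping of 2 for illustration). This is a sketch of the concept, not HCI code.

```python
# Iterating splitter with a token: cut the payload at every occurrence of
# the token, then regroup the pieces n at a time.
def token_split(payload, token, group_size):
    pieces = [token + p for p in payload.split(token) if p.strip()]
    return ["".join(pieces[i:i + group_size])
            for i in range(0, len(pieces), group_size)]

payload = "<item>A</item><item>B</item><item>C</item>"
parts = token_split(payload, "<item>", 2)
```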


And if we configure the Iterating splitter with the expression type XPath and the properties below, we get the same output as the previous one. The only difference is that the XML declaration <?xml version="1.0" encoding="UTF-8"?> appears at the beginning of the output message.

SP_3.JPG

 

The fourth splitter is also a General splitter. But I recommend that you configure it as well, so that you can see the difference from the Iterating splitter for the same message. The difference lies in the enveloping signature of the XML. Compare this output with the previous one.

GeneralSplitterOutput.JPG
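Although the screenshots tell the story, the behavioural difference between the two splitters can be sketched in a few lines of Python (the element names are invented for illustration; the real flowsteps need no coding): the General splitter keeps the original envelope around each item, the Iterating splitter emits the bare items.

```python
import xml.etree.ElementTree as ET

def general_split(xml_text, item_tag):
    """Split, but wrap each item in the original root envelope."""
    root = ET.fromstring(xml_text)
    out = []
    for item in root.findall(item_tag):
        env = ET.Element(root.tag)
        env.append(item)
        out.append(ET.tostring(env, encoding="unicode"))
    return out

def iterating_split(xml_text, item_tag):
    """Split and emit each item as-is, without the envelope."""
    root = ET.fromstring(xml_text)
    return [ET.tostring(i, encoding="unicode") for i in root.findall(item_tag)]

doc = "<orders><order>1</order><order>2</order></orders>"
general = general_split(doc, "order")
iterating = iterating_split(doc, "order")
```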

 

Conclusion

 

You use the splitter flowstep in HANA Cloud Integration to break down a composite message. For XML payloads, you would use the General and Iterating-XPath splitters. The Iterating-Token type is used for non-XML payloads. The other variants - IDoc and PKCS - are used in specific scenarios. I shall cover those in another blog.

 

Best Regards,

Sujit

Blog 7: Message Events in Integration Flows


Hello Integrators!

 

In this blog, we shall look at an easy topic: message events. The topic itself is easy to understand. However, I want to provide more contextual information on where you would use the different message events.

 

What are Message events?

 

Message events point to changes in the state of message processing of an integration flow. To make it clearer: when a message comes to an integration flow, the pipeline starts processing the message; when the integration pipeline has to send it to the receiver, it has completed processing the message. Such changes in state are explicitly modelled as events.

 

Which Message events are supported, and where to use them?

 

You can configure all the message events from the Integration palette.

palette.JPG

 

The Start Message and the End Message events are used when HCI receives a message from a Sender and when HCI sends a message to a Receiver. By default, when you create an integration flow, the start and end message events are made available.

start_end.png

The other message events - Error Start and Error End - can be used only within an exception sub-process. This has been explained in detail in the following blog: Blog 4: Modelling Exceptions in Integration Flows (HCI-PI).

ErrorStart_End.JPG

Timer Start is especially useful in scenarios where you have to go and pull data from systems or have to trigger Web services at specified times/intervals. In terms of polling (pulling data from systems), you can currently use it only with a SuccessFactors adapter, as it is a pull-based adapter.

 

The usual pattern of using a timer is a timer followed by a content modifier. That is because a timer does not create a payload in the pipeline. With a content modifier, you can create the request payload that can be sent to the system.

 

timer_pattern1.JPG

 

 

And finally, the Terminate End - this is useful if you want to stop further processing of a message. For example, you can use it in a content router where you have processing defined for specific values in the payload. If the payload does not match those values, you can terminate the processing.

Note: The message monitoring shall show a successful message and not a failed message, because the processing terminated successfully. Terminated messages are not failed messages.

 

TerminatePattern.JPG

Use all these events in your integration projects and let us know your feedback!

 

Best Regards,

Sujit

Blog 8: Message Aggregation Pattern in Integration Flows


Hello Integrators!

 

In this blog, we shall look into the aggregation pattern supported in SAP HANA Cloud Integration. Aggregation is the first stateful pattern supported by HANA Cloud Integration. Stateful means it keeps track of the "state" of the messages and clears that state only when a condition is met. This shall become clearer as you read through the blog.

 

What is the Aggregation Pattern?

 

An aggregator flow step collects and stores individual messages until a complete set of related messages has been received. On successful receipt of the messages, the aggregator publishes a single message (aggregated on some principle). We shall look into the details using an example.

 

Let's say we want to aggregate all incoming messages of the same format into a new message. The messages are of the following format:

XMLFormat.JPG

 

Quite specifically, let's say we receive three separate messages into the Integration Flow.

 

Aggregator_Iflow.JPG

InputMessages.png

 

Now, we want to aggregate the messages according to a predefined condition. Each aggregator is configured by a correlation expression and an aggregation strategy. They are represented in the properties of the Aggregator flowstep.


Aggregation_properties.png

 

The correlation expression defines which messages should be aggregated together. For every incoming message, a correlation ID is created depending on the correlation expression. All messages with the same ID are aggregated together.

 

correlation_explanation.png

The aggregation strategy provides the logic for combining all messages into a single aggregated message. Here, we want to combine all the messages strictly in sequence. In this case, the sequence number must be provided in the incoming message; in our example, it is provided in the field /Mobile/MCode. Further, we shall denote the last message by the field value /Mobile/LastItem = true. This means that as soon as this message is received by the aggregator, the messages have to be grouped and sent as a single message.

 

Further, I shall give a timeout period of one minute. That means the maximum waiting time between two messages should be one minute. If that time period elapses, the aggregator combines all the messages received thus far and sends a single message. Check our settings below.

 

aggregation_explanation.png

So, finally, when the three messages arrive, they are aggregated into one message and sent to the receiver. This is how our final output looks:

FinalMessage.JPG

Two points to note:

 

1. In the message monitoring tool, the logs appear in pairs: one for receiving the message into the aggregator and the other for confirming it into the data store.

MessageMonitoring.JPG

 

2. After the aggregation step, you may want to know whether the aggregation completed based on the expression or whether a timeout happened. This information can be obtained from a header parameter in the integration flow: ${header.CamelAggregatedCompletedBy}.

Its value will be either timeout or predicate. You can use it in an exclusive gateway step.
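The aggregation behaviour described above - correlate, collect, release on predicate or timeout - can be sketched in Python roughly as follows. The field names (OrderNumber, MCode, LastItem) and the completed_by flag are illustrative stand-ins, not HCI APIs.

```python
import time

class Aggregator:
    """Sketch of the stateful aggregation logic."""
    def __init__(self, timeout_seconds=60):
        self.timeout = timeout_seconds
        self.groups = {}     # correlation id -> collected messages
        self.last_seen = {}  # correlation id -> arrival time of last message

    def add(self, msg):
        cid = msg["OrderNumber"]            # the correlation expression
        self.groups.setdefault(cid, []).append(msg)
        self.last_seen[cid] = time.monotonic()
        if msg.get("LastItem") == "true":   # completion condition
            return self._release(cid, "predicate")
        return None                         # keep waiting

    def flush_expired(self):
        """Release groups whose last message is older than the timeout."""
        now = time.monotonic()
        stale = [c for c, t in self.last_seen.items() if now - t > self.timeout]
        return [self._release(c, "timeout") for c in stale]

    def _release(self, cid, completed_by):
        # combine strictly in sequence, ordered by the sequence number
        msgs = sorted(self.groups.pop(cid), key=lambda m: m["MCode"])
        self.last_seen.pop(cid, None)
        # completed_by plays the role of ${header.CamelAggregatedCompletedBy}
        return {"messages": msgs, "completed_by": completed_by}

agg = Aggregator(timeout_seconds=60)
agg.add({"OrderNumber": "1", "MCode": 2, "LastItem": "false"})
agg.add({"OrderNumber": "1", "MCode": 1, "LastItem": "false"})
result = agg.add({"OrderNumber": "1", "MCode": 3, "LastItem": "true"})
```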

 

Further enhancements to the aggregation step are planned. So, keep a lookout for them!

 

Best Regards,

Sujit

Blog 9: Scripting in Integration Flows


Hello Integrators!

 

In this blog, we shall explore a slightly advanced feature of SAP HANA Cloud Integration - scripting.

 

What is Scripting and When to use scripting?

 

By scripting, I am referring to the use of a scripting language (like Groovy or JavaScript) in your integration projects. HANA Cloud Integration provides a rich set of functionality to transform your data - mapping, content modifier, converters, and so on. However, at times you want to perform a more complex task that lies outside of the native functionality provided. For example, say you receive the following incoming payload:

Sample_In.JPG

The payload comes from an HR system that tracks the activities recorded every day. You want to integrate with a time recording system that expects the total number of work hours for each person. The expected payload is in this format:

Samplt_out.JPG

So, we have to parse the entire incoming payload, calculate the number of hours for each person, and map to the final payload format. Achieving this scenario using the native functionality of HANA Cloud Integration could be a little tedious. It can be easily accomplished using custom transformation functions. That is where scripting comes in.
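As an illustration of the kind of custom transformation meant here, the following Python sketch performs the hours calculation. The element names are assumptions about the payload, and in HCI the script itself would be written in Groovy or JavaScript.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def total_hours(xml_text):
    """Sum the recorded hours per person (element names are assumed)."""
    root = ET.fromstring(xml_text)
    totals = defaultdict(float)
    for rec in root.findall("Record"):
        totals[rec.findtext("Person")] += float(rec.findtext("Hours"))
    return dict(totals)

payload = """<Activities>
  <Record><Person>Anna</Person><Hours>3.5</Hours></Record>
  <Record><Person>Anna</Person><Hours>4.5</Hours></Record>
  <Record><Person>Ben</Person><Hours>8</Hours></Record>
</Activities>"""
hours = total_hours(payload)
```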

 

Which Scripting Languages are supported?

 

HANA Cloud Integration supports two scripting languages: Groovy and JavaScript.


  • Groovy is an object-oriented scripting language which provides dynamic, easy-to-use capabilities. It absorbs most of its syntax from Java. Learn more about Groovy on its site here.
  • JavaScript is a dynamic programming language of the Web. Most HTML pages are programmed using it.

 

Both scripting languages are easy to learn and come with a host of resources that you can use in your integration project.


How to use Scripting?

 

In the integration project, you should create the following folders for scripting:

 

script_folder.JPG

 

The src.main.resources.script folder should contain all the scripts. In an integration flow, the script step is available as part of the Message Transformer step. In the context menu of the script step, you can create new scripts or assign existing scripts from the folder.

Script_Palette.JPG

 

When you create a new script, you shall get the default code editor in Eclipse with the following view. Functions to access the message are provided to you by default.

groovy_explained.png

Using External Libraries in Scripting

 

One more cool feature you can use in scripting is the use of external libraries in your project. There are many open-source libraries available to you.

 

Let's say you want to read an XML file in Groovy using an XPath library, and you have decided to use Jaxen. Here is how you proceed in an integration flow project.


Step 1: Import the Jaxen libraries in the src.main.resources.lib folder

jaxen.JPG

 

Step 2: Include the import definitions in the script file of your integration flow

AddingJARs.JPG

Step 3: Modify the processData function with your function logic

codesnippet.JPG
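For readers who want to picture what such a script does, here is a rough Python equivalent of an XPath lookup inside a processData-style function. The stdlib's limited XPath support stands in for Jaxen here, and the element names are invented for illustration.

```python
import xml.etree.ElementTree as ET

def process_data(body):
    """Mirror the script skeleton's shape: message body in, body out."""
    root = ET.fromstring(body)
    # pick every order id; Jaxen would allow much richer XPath expressions
    ids = [e.text for e in root.findall(".//order/id")]
    return ",".join(ids)

out = process_data("<orders><order><id>7</id></order><order><id>9</id></order></orders>")
```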

 

That is it! The procedure to use scripts in integration flows is simple.

 

Conclusion

 

You can utilise a scripting language in your HANA Cloud Integration project for complex transformations. As the language of choice, you have Groovy or JavaScript - and in addition, you can augment them with external libraries. We strongly recommend that you first look at the native functionality supported by the toolset before writing scripts. By their inherent nature, scripts can make integration projects harder to maintain.

 

Best Regards,

Sujit


Blog 10: Importing PI content in Integration Flows


Hello Integrators!

 

In this blog, we shall look at a connecting link between the on-premise Process Orchestration and HANA Cloud Integration. Although it is a goal to run on-premise-based integrations on HANA Cloud Integration, we started with the first step: mappings. In our discussions with customers and partners, we realised that mappings are one of the most important business assets that consultants want to reuse.

 

We support downloading the artifacts from Process Orchestration version 7.10 and above.

 

How to import artifacts from Process Orchestration?

 

Downloading Message Mappings

 

Step 1: Configure the settings to the Enterprise Service Repository (ESR) from the Eclipse tool-set

connection.png

 

Step 2: From the integration project, click on Import PI content. You can import Message mappings, Operation mappings, and WSDLs.

Let's take the example of downloading a message mapping. We want to reuse a message mapping in our HANA Cloud Integration integration flow project. You have to select it in the wizard. The mappings are then downloaded in a HANA Cloud Integration-native format.

mapdownload.png

Check the file format: it has been downloaded as an .mmap file. You can then edit the downloaded mappings using the mapping editors.

 

  • WSDLs/XSDs corresponding to Message Types and Fault Message Types are placed under src.main.resources.mapping folder
  • Other interfaces are placed under src.main.resources.wsdl

 

Downloading Operation Mappings


Operation mappings have a slightly different behaviour. They can currently be downloaded only as a .jar file. The functionality of the operation mapping can be used, but you cannot modify the mapping in HANA Cloud Integration.

 

The imported operation mapping has the following features:

 

  1. If the operation mapping contains a message mapping, the message mapping is downloaded as a .jar under the src.main.resources.mapping package
  2. If the operation mapping contains XSLTs, the files are downloaded as .xsl files under src.main.resources.mapping
  3. Imported source or target WSDLs are not supported in integration flows

 

Note: There are certain restrictions on downloading the mappings. We are removing them over time. The limitations are documented in the guide, so do check the documentation from time to time.

 

Conclusion

 

Process Orchestration customers who are adopting HANA Cloud Integration would do well to reuse their mapping components. Before the start of a project, check whether the interfaces and mappings have already been defined in Process Orchestration. Message mappings are the most easily imported into HANA Cloud Integration; you can also modify and adapt them in the project.

 

Best Regards,

Sujit

SAP HANA Cloud Integration (HCI) - A complementary offering to SAP Process Orchestration (PO)


There is a lot of ambiguity regarding the usage of SAP Process Orchestration and SAP HANA Cloud Integration. We have even heard that HCI is a replacement for PI/PO, which is not true. SAP HANA Cloud Integration (HCI) has been public since 2013, and from time to time we get questions on the difference between SAP HANA Cloud Integration and SAP Process Integration/Orchestration. Let us look at both offerings:

 

SAP Process Orchestration provides an on-premise middleware platform to design, model, execute and monitor business processes, ensuring connectivity to many different business/technical systems and applications (SAP and non-SAP). Along with the options of creating human- and/or system-centric processes, it offers the following products under one umbrella:

 

  • SAP Process Integration (including B2B Add-On and Connectivity Add-On)
  • SAP Business Process Management
  • SAP Business Rules Management

 

SAP HANA Cloud Integration (HCI) is SAP's strategic, secure, cloud-based integration platform with a strong focus on process and data integration across domains (SAP Financial Services Network (FSN), SuccessFactors, Cloud for Customer, Ariba, Business ByDesign, Travel OnDemand, etc.). It provides Process Integration (HCI-PI) and Data Integration (HCI-DS) capabilities.

 

HCI enables you to connect your cloud applications quickly and seamlessly to other SAP and non-SAP applications (on-cloud or on-premise) without extensive coding. This integration-as-a-service solution from SAP can help you integrate your business processes and data in a secure and reliable environment. Another important point to understand is that HCI is not SAP Process Integration on the cloud: it is a new product that runs on the SAP HANA Cloud Platform. HCI is designated as an iPaaS - Integration Platform as a Service.


Picture2.jpg

 

 

 

As both products are from SAP, SAP has provided a way to reuse your existing investments in SAP Process Orchestration: message mappings can be readily used in SAP HANA Cloud Integration. The two solutions are complementary, and several factors decide which solution to use:


1.    Cloud-to-cloud integration: There are many use cases that require integration from one cloud solution to another, e.g. SuccessFactors to SHL, PeopleAnswers or Workforce. These cloud solutions can be SAP or non-SAP. The right choice for this use case is SAP HANA Cloud Integration, as there is no on-premise involvement. Most importantly, the customer has invested in the cloud to get everything as a hosted/subscription model - including integration - to avoid capital expenditure and the development of a technology skill set.


2.    On-premise integration: There are use cases where a customer wants to integrate mainly on-premise systems and applications, SAP or non-SAP. As all the systems/applications to be connected reside in the customer's on-premise landscape, the right technology to use is on-premise middleware, i.e. SAP Process Orchestration.


3.    Cloud to on-premise and vice versa: We also call this the hybrid integration use case. This integration area causes most of the confusion. Let us see the different factors that need to be considered to decide which solution fits best:


    1. If a customer already has PI/PO and wants to leverage it, SAP has introduced the required technical adapters, e.g. the SuccessFactors adapter, the Ariba cXML adapter, etc., to connect to the respective cloud applications. So SAP Process Orchestration can continue as the single middleware in the customer's landscape, covering both integration needs.
    2. If the customer does not have PI/PO, the right choice would be SAP HANA Cloud Integration, as it needs minimal up-front investment. HCI is a multi-tenant solution specially built for cloud integration use cases.
    3. There are many use cases where the customer is on PI but HCI can still be considered for cloud integration. A few examples:
      • The customer is planning a move into the cloud and requires speedy integration of new cloud applications for business innovation, and the pre-packaged content is available only on HCI; then HCI is the right choice.
      • From the perspective of the different LOBs, they want integration bundled within the cloud application to achieve faster results and to keep the different integration use cases separate. So, customers can have one middleware each for cloud and on-premise integration use cases.
      • PI is on an older release that does not have all the technical adapters available with the latest release, and the customer does not want to invest in an upgrade.


Though HCI is already capable of integrating cloud applications via custom integration, in the cloud era a lot of focus is on simplicity and quick configuration and deployment. Pre-packaged content is of utmost importance. As of today (Jan. 2015), a lot of pre-packaged integration content is already available on SAP HANA Cloud Integration:

 

  • SAP Cloud for Customer (C4C) with SAP ERP
  • SAP Cloud for Customer (C4C) with SAP CRM
  • SAP SuccessFactors LMS Curricula with SAP HCM Qualification
  • SAP SuccessFactors HCM Suite Competency with SAP HCM Qualification
  • SAP SuccessFactors HCM Suite Talent Management with SAP HCM
  • SAP SuccessFactors Recruitment Management (RCM) with 3rd party assessment vendor PeopleAnswers
  • SAP SuccessFactors Recruitment Management (RCM) with 3rd party assessment vendor SHL
  • SAP SuccessFactors Employee Central (EC) with 3rd party benefits vendor Benefitfocus
  • eDocument (Electronic Invoicing) solution with government solution in Peru (SUNAT) and Chile (SII)


A lot of other pre-packaged content is under development for Ariba, SAP SuccessFactors (e.g. Employee Central Payroll, Cost Centre, Org integration) and other cloud applications, and is planned for release during the next release cycles.


Also, as SAP HANA Cloud Integration has monthly release cycles, it is worth checking continuously to keep yourself up-to-date on newly released features and upcoming pre-packaged content. I am sure we have enough information available for PO and PI on this SCN space. For HCI, we can refer to the following to get quick information:


Some tips for High Availability setup on PI dual-stack system


1. Naming schema changes for some XI/PI components in SLD

 

For a High Availability (HA) setup, the SLD naming schemas for the Integration Server, Domain and Adapter Engine are a little bit different.

 

  • The SLD names of the Integration Server and Domain are built as is.<cisysnr>.<cihost> and domain.<cisysnr>.<cihost>. The Central Instance host name is taken from the ABAP profile parameters.

 

  • The SLD name of the Adapter Engine contains the name of the J2EE database host. It is built as af.<system_id>.<j2ee_dbhost>. Because several Adapter Engines can be part of the same domain (e.g. the central Adapter Engine plus non-central Adapter Engines), the Adapter Engine name cannot contain the host name of the central instance, but needs a unique name for each Adapter Engine. The value for j2ee_dbhost is taken from the Java profile parameters.

 

  • You can check the profile parameters in the folder /usr/sap/<SID>/SYS/profile.
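The naming schema above can be sketched in a few lines of Java; the system number, SID and host values below are hypothetical examples, not values read from real profiles:

```java
// Sketch of how the SLD names of an HA PI system are derived from the
// profile parameters described above. All input values are made-up examples.
public class SldNames {

    // Integration Server: is.<cisysnr>.<cihost>
    static String integrationServer(String ciSysNr, String ciHost) {
        return "is." + ciSysNr + "." + ciHost;
    }

    // Domain: domain.<cisysnr>.<cihost>
    static String domain(String ciSysNr, String ciHost) {
        return "domain." + ciSysNr + "." + ciHost;
    }

    // Adapter Engine: af.<system_id>.<j2ee_dbhost> -- built from the J2EE
    // database host, not the CI host, so each Adapter Engine name is unique.
    static String adapterEngine(String systemId, String j2eeDbHost) {
        return "af." + systemId + "." + j2eeDbHost;
    }

    public static void main(String[] args) {
        System.out.println(integrationServer("00", "cihost")); // is.00.cihost
        System.out.println(adapterEngine("pid", "dbhost"));    // af.pid.dbhost
    }
}
```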

 

2. Parameter "com.sap.aii.connect.integrationserver.sld.name"

 

The value of the parameter "com.sap.aii.connect.integrationserver.sld.name" mentioned in the HA notes is as follows:

 

        <is sld name>           Default: is.<cisysnr>.<cihost>

 

  • You can configure it under http://<host>:<hport>/dir/start/index.jsp -> Administration -> Exchange Profile -> Parameters -> Connections. You might wonder whether this parameter will still work if you switch from the CI to a DI.

66.PNG

 

  • In fact, the value of this parameter is just an identifier that has no particular meaning for the Java stack; it is only used to avoid many SLD reads. So you should put the host name and instance number of the CI there. The term "CI" is somewhat outdated because, when the ASCS is stripped from the CI, it becomes a plain DI. In any case, one of the DIs is designated central, and this is defined during the system installation. So the CI "knows" it is the central one, and this exact instance is used to define the name of the Integration Server.

 

  • So if the system is installed according to the HA notes, there should be no problems if the CI is offline: the DI knows which CI it belongs to and uses it to recreate the SLD content if necessary.

 

Related Notes:

SAP Note 951910 - NW2004s High Availability Usage Type PI

SAP Note 1052984 - Process Integration >=7.1 - High Availability

 

Related Docs:

Steps for running SAP Netweaver PI on high availability (HA)

New Functionality for Table Switch Procedure - Table Switch Control


In this blog I'd like to give a general introduction to the new functionality for the table switch procedure - Table Switch Control.

 

Firstly, please allow me to explain some details about the switch deletion procedure:

All the messages in your system can be divided into three parts:
Part 1 - messages that are outside the retention period and have a final status, so they can be deleted.
Part 2 - messages that are outside the retention period but do not have a final status, so they cannot be deleted.
Part 3 - messages that are within the retention period, so they cannot be deleted.

 

When the switch procedure starts, there are three steps to deal with these messages:
Step 1 - Messages in Part 1 are deleted logically. The table entries are not physically deleted from the database tables; instead, the flag "Deleted" is set in the master entry.
Step 2 - Messages in Parts 2 and 3, for which the delete flag is not set, are copied to new tables.
Step 3 - The original tables in the database are dropped and then immediately recreated. Messages in Part 1, for which the delete flag is set, are thereby physically deleted.
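The classification into the three parts and the resulting handling can be sketched as follows; this is a simplified illustration of the rules above, not the actual report logic:

```java
// Simplified model of the switch deletion classification: a message's fate
// depends on whether it is still in the retention period and whether it has
// reached a final status. Illustration only, not the actual PI implementation.
public class SwitchDeletionSketch {

    enum Fate { DELETE_LOGICALLY, COPY_TO_NEW_TABLE }

    static Fate classify(boolean inRetentionPeriod, boolean finalStatus) {
        if (!inRetentionPeriod && finalStatus) {
            // Part 1: delete flag is set; physically removed when the
            // original tables are dropped in step 3
            return Fate.DELETE_LOGICALLY;
        }
        // Part 2 (no final status) and Part 3 (in retention): copied in step 2
        return Fate.COPY_TO_NEW_TABLE;
    }
}
```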


Sometimes you might face a critical situation when the table switch procedure is in use. This is most likely because of the following reasons:

• During the execution of the switch, additional table space is required to copy the valid messages to the new master table. Only after the copy job finishes can the original tables be dropped, so the disk space consumption increases considerably after you activate the switch procedure.

• The copy process takes a long time.

• While a switch is pending, no other reorganization job is allowed to start; in particular, neither a deletion job nor an archiving job can run, and therefore the space issue becomes even more critical.

 

Now the Table Switch Control (report RSXMB_TABLE_SWITCH_CONTROL) is available with SAP Note 2106462. With it you are able to get out of this awkward position on your own and to complete a pending switch.

 

Upon start, the report provides detailed information on the current status of a pending switch. By pressing the push buttons with the recycling bin icon, you can perform a physical deletion of inactive messages (either logically deleted messages or messages that have already been copied).

11.PNG

Additionally, this report provides the option to revert the direction of the switch. Reverting the direction is a critical operation, and multiple preconditions must be fulfilled. All of these conditions are checked automatically, and changing the direction is enabled if and only if all conditions are fulfilled:

12.PNG

Just by clicking the execution button you can revert the direction of the switch:

13.PNG

Prerequisites:

The harmonized persistence layer is an indispensable prerequisite for the Table Switch Control report. Harmonization is delivered by SAP Notes 2038403 and 2095113.

 

Related Notes:

SAP Note 872388 - Troubleshooting archiving and deletion in PI
SAP Note 2038403 - Harmonization of PI persistence layer
SAP Note 2095113 - Harmonization of PI persistence layer II
SAP Note 2039256 - PI: How to calculate the remaining messages to be copied during the Switch Procedure
SAP Note 2041299 - PI: How to calculate the time remaining before the Switch procedure completes
SAP Note 2106462 - Kontrolle des Switchverfahrens (Control of the Switch Procedure)

 

Related Docs:

Troubleshooting for Archive and Delete on Integration Engine

Overview of the Switch Deletion Procedure

Storing password in SAP PI modules



 

Setting a user and password inside a module is slightly different from setting normal adapter module parameters, as the text can't be kept in clear text in the module parameters.

 

There are three strategies we can use:

 

1) Use a hard-coded user ID and password in the module. Not a great approach, but sometimes this can be the only feasible option. The advantage, of course, is that there is no risk of locking the user.

 

2) Set them in the communication channel as secure parameters (displayed as asterisks).


Here, the user can be set as a normal string parameter. For passwords, we don't want the password to show up in clear text; hence, the password parameter can be named as follows:

 

  • If the password parameter name starts with pwd, the value is displayed as asterisks when entered and displayed. However, the database folder is unencrypted.
  • If the password parameter name starts with cryptedpassword, the database folder is encrypted, which is more secure.
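The naming convention can be captured in a small helper; the parameter names used in the example are hypothetical, only the prefixes pwd and cryptedpassword come from the convention above:

```java
// Helper expressing the module parameter naming convention described above:
// a "pwd" prefix masks the value in the UI but leaves the database folder
// unencrypted; a "cryptedpassword" prefix gets an encrypted database folder.
public class PasswordParamConvention {

    static boolean maskedInUi(String paramName) {
        return paramName.startsWith("pwd");
    }

    static boolean storedEncrypted(String paramName) {
        return paramName.startsWith("cryptedpassword");
    }
}
```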

 

The advantage is that the values can be configured for each system; the drawback is that if the password is not entered correctly the user can get locked, and finding the communication channel that is locking the user can be time-consuming.

 

3) Set the values in Application Properties. This combines the best of both worlds: we are able to configure values in each environment, and as we configure them in only one location, the chance of accidentally locking the user with incorrect values is reduced.


The values can be modified from NWA. The path is:


NWA_1.png


Configuration Management->Infrastructure->Java System Properties




Steps required to add the configuration capability:

 


a)  Add sap.com~tc~je~configuration~impl.jar to the module EJB project.

Path to get the client library: /usr/sap/<SID>/<instance>/j2ee/cluster/bin/services/configuration/lib/private/sap.com~tc~je~configuration~impl.jar

 

b) Create the file sap.application.global.properties under META-INF. It is essentially a .properties file.

 

EAR_1.png

 

 

Sample content to make User modifiable and appear as clear text:

 

## Comment for user

#? secure = false; onlinemodifiable = true

#% type = STRING;

User =

 

Sample content to make Password modifiable and appear as asterisks when entered in NWA:

## Comment for password

#? secure = true; onlinemodifiable = true

#% type = STRING;

Password =

 

c) Update the module code to read the properties


Sample code will look something like this (to be added in the module code):

 

// Obtain the JNDI context
InitialContext ctx = new InitialContext();

// Access the Application-Configuration-Facade service
ApplicationPropertiesAccess appCfgProps =
    (ApplicationPropertiesAccess) ctx.lookup("ApplicationConfiguration");

// Read the application properties
java.util.Properties appProps = appCfgProps.getApplicationProperties();

if (appProps == null) {
    // perform error handling
} else {
    userID = appProps.getProperty("User");
    password = appProps.getProperty("Password");
}

 

d) Update the application deployment descriptor to indicate the library being used. Add this to application-j2ee-engine.xml:

 

<reference reference-type="hard">

    <reference-target provider-name="sap.com"

target-type="service">

      tc~je~appconfiguration~api

    </reference-target>

</reference>

Reading Messages from PI System


This blog describes how to retrieve information about messages from a PI system. PI has several interfaces which can be used to retrieve PI messages and other data from the system.

 

To read messages from a PI system, there are different solutions for Java and ABAP. They are not released as stable APIs, and we give no assurance that they will never change. However, the APIs are usually stable within a release (and available since NetWeaver 7.0) and can be used to retrieve the data. Depending on whether the messages are retrieved from the Java (Adapter Engine) or ABAP (Integration Engine) stack, different technologies are used: a web service for the Java stack and ABAP function modules for the ABAP stack.

 

Retrieving Messages from Java

We offer a web service to retrieve messages from the Java stack. The web service offers functionality similar to the RWB Message Monitoring tool:

  • Select messages which match a filter. You can filter by message header attributes like Time, Sender Component, Receiver Component, etc.
  • Resend or Cancel Messages
  • Retrieve the payload of a Message

 

This can be done with the web service AdapterMessageMonitoringVi, which is delivered with a standard PI installation and is available in all releases. In 7.10, please check note 1373289 for availability limitations. The web service can be explored and tested with the Web Services Navigator tool of the Web AS Java.

 

 

The web service WSDL URL is available at:
http://<host>:<port>/AdapterMessageMonitoring/basic?wsdl&mode=ws_policy&style=document

The web service also offers three different bindings: basic HTTP authentication, SSL over HTTPS, and HTTPS with client certificate authentication.

The actual web service URL depends on which binding is to be used and can be one of the following:
http://<host>:<port>/AdapterMessageMonitoring/basic?style=document
https://<host>:<httpsport>/AdapterMessageMonitoring/ssl?style=document
https://<host>:<httpsport>/AdapterMessageMonitoring/clientCert?style=document
In most cases the basic binding can be used if no special security requirements must be met.

 

The web service offers a set of operations/methods for different purposes. Some interesting methods for message monitoring are:

  • getMessageList: Get a list of messages. Input is a search filter similar to the filter in RWB Message Monitoring; the result contains the messages with their header data (without the message payload) and status.
  • getMessagesByKeys: Similar to getMessageList, but the search can only be done by a list of message keys.
  • getMessageBytesJavaLangStringBoolean: Retrieve the payload of a message.
  • getMessageBytesJavaLangStringIntBoolean: Retrieve the payload of a specific version of a message.
  • getLogEntries: Read the Audit Log of a message (see note 1814549 for required SPs and patch levels).

 

 

The web service also contains other methods, e.g. cancelMessage and resendMessage, which can be used for message manipulation but are not required for monitoring purposes. In newer releases like 7.30 and 7.31, the web service AdapterMessageMonitoringVi contains many more methods for advanced monitoring (e.g. for User-Defined Message Search), which are out of scope for this document.

 

Method getMessageList

The method getMessageList can be used to search for messages which match a given filter. This corresponds to the search functionality in RWB Message Monitoring.

 

The method has two input parameters:

  • filter: A structured data type to give the search filter
  • maxMessages: Integer value to limit the number of search results. It should always be provided to protect against high memory consumption and out-of-memory situations.

 

The input structure for parameter filter contains many attributes of PI messages which are used to search for specific messages.

 

Important filter fields are:

  • archive: search in the Message Archive or in the Database. Use false to search in the Database
  • direction: Sender or Receiver direction. Valid values are “INBOUND” or “OUTBOUND”
  • fromTime: The start time for the date/time selection
  • toTime: The end time for the date/time selection. Finds messages which were processed between fromTime and toTime.
  • interface: The message interface name and namespace
  • messageIDs: A message ID for searching for a specific message. If a message ID is given in this field, all other filter attributes are ignored. Must be in the format of a 36 character length guid like “5ba9192c-fa6c-11e0-ca61-00001001a6a3”
  • onlyFaultyMessages: If set to true, only messages which had an error during processing are in the result. This does not mean that the current message status is "Error"; it can be a "Successful" message which at some point had an error.
  • protocol: The message protocol. Should be “XI”
  • qualityOfService: The message Quality of Service. Valid values are “EO”, “BE” and “EOIO”
  • receiverInterface: Receiver interface name and namespace
  • receiverName: Receiver Component name
  • receiverParty: Receiver Party name
  • senderInterface: Message sender Interface name and namespace
  • senderName: Sender Component name
  • senderParty: Sender Party name
  • status: The message status. Valid values are "success", "toBeDelivered", "waiting", "holding", "delivering", "systemError" and "canceled". Only one status is possible per web service call; it is not possible to send a combination of two or more statuses at a time.
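As a sketch, a client can collect just the filter fields it needs and leave the rest unset; the map below mirrors the field names listed above, while the real call goes through the generated proxy type (not shown):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Build a getMessageList filter with only a few fields set; fields that are
// omitted are ignored during the search. Plain-map sketch, not the proxy type.
public class MessageListFilterSketch {

    static Map<String, String> errorFilter(String fromTime, String toTime) {
        Map<String, String> filter = new LinkedHashMap<>();
        filter.put("archive", "false");      // search the database, not the archive
        filter.put("fromTime", fromTime);
        filter.put("toTime", toTime);
        filter.put("status", "systemError"); // only one status per call
        return filter;
    }
}
```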

 

Unused filter fields can be left empty so that they are not considered during the search. An example of a valid filter which searches for messages with error status in a certain time interval can look like the following screenshot:

 

 

Only fromTime, toTime and status are provided here. All other filter attributes are left empty and thus ignored.

 

The result of the method getMessageList contains a structure with all search results and, for each result, the message header information. An important field in the result structure is messageKey, because this field contains the value which is required as input for the other methods getMessagesByKeys, getMessageBytesJavaLangStringBoolean and getMessageBytesJavaLangStringIntBoolean.

 

The following picture shows a part of the result:

 

The result contains an array of AdapterFrameworkData with one entry for each message that matched the filter.

 

Method getMessagesByKeys

This method works similarly to getMessageList. It returns a list of messages and their header attributes, but without the message payload. The only difference between the two methods is that its input parameter is just a list of message keys.

 

These message keys have to be in the following format:

<guid>\<direction>\<node>\<QoS>\<seqNr>\
(Please note the backslash “\” at the end. It’s important to add it!)

 

  • guid: the message id
  • direction: the message direction. Can be “INBOUND” or “OUTBOUND” (without quotation marks)
  • node: the server node id
  • QoS: the quality of service. Can be “EO”, “BE” or “EOIO” (without quotation marks)
  • seqNr: the sequence number for EOIO messages

 

An example for a valid message key is like this:

5ba9192c-fa6c-11e0-ca61-00001001a6a3\INBOUND\268543650\EO\0\

 

An invalid message key would be:

5ba9192c-fa6c-11e0-ca61-00001001a6a3\\\EO\\

(All five parts of the message key need to be filled.)


Please note: Only the fields guid and direction really matter here. All other fields can have "random" values (but the values must stay within the parameter type; e.g. the server node has to be an integer, QoS has to be one of "EO", "BE", "EOIO", etc.).
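The key format can be captured in a small helper which reproduces the valid example above:

```java
// Build an AdapterMessageMonitoringVi message key in the format
// <guid>\<direction>\<node>\<QoS>\<seqNr>\  (note the trailing backslash).
public class MessageKey {

    static String build(String guid, String direction, long node,
                        String qos, int seqNr) {
        return guid + "\\" + direction + "\\" + node + "\\"
                + qos + "\\" + seqNr + "\\";
    }

    public static void main(String[] args) {
        // reproduces the valid example key from above
        System.out.println(build("5ba9192c-fa6c-11e0-ca61-00001001a6a3",
                "INBOUND", 268543650L, "EO", 0));
    }
}
```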

Methods getMessageBytesJavaLangStringBoolean and getMessageBytesJavaLangStringIntBoolean

The two methods getMessageBytesJavaLangStringBoolean and getMessageBytesJavaLangStringIntBoolean can be used to retrieve the payload of a message. Both methods work very similarly; only the second method has an additional parameter for the message version (the first method always returns the latest message version).

 

 

The other input parameters are:

  • archive: search in the Message Archive or in the Database. Use false to search in the Database
  • messageKey: The message key to identify the message. The message key format is described above under method getMessagesByKeys. The message key can be obtained from the result of method getMessageList from result field messageKey
  • version: The number of the message version to read. Use -1 to read the newest version, or a number greater than or equal to 0 to read older versions. E.g. if there are 4 message versions, valid values are 0, 1, 2 and -1. Please note that 3 is not a valid value in this example, since -1 has to be used to read the newest message version.
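The version rule (-1 for the newest version, 0 up to n-2 for the older ones) can be expressed as a quick validity check; this is only an illustration of the rule, not an API call:

```java
// Validity check for the version parameter: -1 addresses the newest of the
// stored message versions, older versions are numbered 0 .. n-2.
public class MessageVersionRule {

    static boolean isValid(int version, int storedVersions) {
        if (version == -1) {
            return true; // newest version
        }
        return version >= 0 && version <= storedVersions - 2;
    }
}
```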

 

The result of this method contains a byte array with the serialized payload of the PI message. It can be deserialized and afterwards processed; for example, it can (depending on the payload type) be parsed as an XML document to extract elements of the XML. In the code this can look similar to this:
// call web service to get data
byte[] msgBytes = adapterMessageMonitoringWS.getMessageBytesJavaLangStringBoolean(…);
// parse result
SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
parser.parse(new ByteArrayInputStream(msgBytes), new MySAXHandler());

Method getLogEntries

This method can be used to retrieve the Audit Log entries of a PI message. It has five input parameters:

  • messageKey: The message key identifying the message (format as described above)
  • archive: Search in the Message Archive or in the Database. Use false to search in the Database
  • maxResults: Maximum number of results. The results are sorted by date/time in descending order (newest first)
  • locale: The locale for the log entry texts
  • olderThan: Only return Audit Log entries that are older than this timestamp. Can be used together with the parameter maxResults to browse through a long list of audit log entries without overloading the web service client. If this parameter is not provided, the result starts with the newest log entries.

 

The return value contains a list of the Audit Log entries for the message, with the timestamp, severity, text and some other data.

 

 

 

Retrieving Messages from ABAP

There are two important function modules to read the messages from the ABAP stack:

  • SXMB_GET_MESSAGE_LIST
  • SXMB_GET_XI_MESSAGE: retrieve the payload of a PI message

Both function modules are remote-enabled and can be called externally, e.g. via JCo from a Java application. The functionality of the function modules is similar to that of the Java web service methods.

 

Function module SXMB_GET_MESSAGE_LIST

The function module can be used to search for messages which match a given filter. This corresponds to the search functionality in RWB Message Monitoring or SXMB_MONI.

 

Input and output parameters of the function module:

 

Input parameters:

  • IM_FILTER: Type SXI_MESSAGE_FILTER; a filter to search for messages which fulfill certain criteria
  • IM_MESSAGE_COUNT: Type INT4; the maximum number of results. It should always be provided to protect against high memory consumption and out-of-memory situations.

 

The data type SXI_MESSAGE_FILTER  for the filter has the following important fields:

  • FROM_TIME: TIMESTAMPL; UTC Time Stamp in Long Form (YYYYMMDDhhmmssmmmuuun) The start time for the selection interval.
  • TO_TIME: TIMESTAMPL; UTC Time Stamp in Long Form (YYYYMMDDhhmmssmmmuuun) The end time for the selection interval.
  • OB_PARTY: SXI_PARTY; XI: Communication Party; Sender Party
  • OB_PARTY_TYPE: SXI_PARTY_TYPE; XI Partner: Identification Schema
  • OB_PARTY_AGENCY: SXI_PARTY_AGENCY; XI Partner: Agency
  • OB_SYSTEM: AIT_SNDR; Sending System
  • OB_NS: RM_OIFNS; Outbound/Sender Interface Namespace
  • OB_NAME: RM_OIFNAME; Outbound/Sender Interface Name
  • IB_PARTY: SXI_PARTY; XI: Communication Party; Receiver Party
  • IB_PARTY_TYPE: SXI_PARTY_TYPE; XI Partner: Identification Schema
  • IB_PARTY_AGENCY: SXI_PARTY_AGENCY; XI Partner: Agency
  • IB_SYSTEM: AIT_RCVR; Receiving System
  • IB_NS: RM_IIFNS; Inbound/Receiver Interface Namespace
  • IB_NAME: RM_IIFNAME; Inbound/Receiver Interface Name
  • MESSAGE_IDS: SXMSCGUID_T; Character Format Message GUID Table. A message ID for searching for a specific message. If a message ID is given in this field, all other filter attributes are ignored. Must be in the format of a 32 character length guid like “5ba9192cfa6c11e0ca6100001001a6a3”
  • QUALITY_OF_SERVICE: SXMSQOS; Integration Engine: Quality of Service. Valid values are “BE”, “EO” and “EOIO”
  • CLIENT: SYMANDT; Client ID of Current User
  • STATUS_TYPE: SXI_STAT_TYPE; XI: Type of a Status. A number for a status group. Status groups pool together single message status values of the Integration Engine into groups with a semantically equal meaning. Valid numbers are:

01           Successful
03           Scheduled
05           Application Error
06           System Error
10           Branched
12           Waiting
19           Manually Modified
16           Retry
21           Canceled with Errors
30           Waiting for Confirmation
50           Log Version

  • STATUS: SXMSPMSTAT; Integration Engine: Message Status. Instead of a status group (with parameter STATUS_TYPE) now a single status can be given for the message search. For a list of all available status, please see contents of table SXMSMSTAT via transaction SE16.
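For convenience, a calling application can keep the status groups above in a client-side lookup table; the authoritative values remain those in the PI system:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Client-side lookup of the Integration Engine status groups listed above.
public class StatusGroups {

    static final Map<String, String> GROUPS = new LinkedHashMap<>();
    static {
        GROUPS.put("01", "Successful");
        GROUPS.put("03", "Scheduled");
        GROUPS.put("05", "Application Error");
        GROUPS.put("06", "System Error");
        GROUPS.put("10", "Branched");
        GROUPS.put("12", "Waiting");
        GROUPS.put("16", "Retry");
        GROUPS.put("19", "Manually Modified");
        GROUPS.put("21", "Canceled with Errors");
        GROUPS.put("30", "Waiting for Confirmation");
        GROUPS.put("50", "Log Version");
    }
}
```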

 

Output parameter:

The function module returns the search result in the output parameter EX_MESSAGE_DATA_LIST. This structure contains a list of results of type SXI_MESSAGE_DATA with the header information for all found PI messages.

 

Function module SXMB_GET_XI_MESSAGE

This function module can be called with the message ID (taken from the result of SXMB_GET_MESSAGE_LIST) and it returns the payload of the message. The result can be deserialized and further processed (e.g. parsed as XML document).

 

Input and output parameters of the function module:

 

 

Input parameter:

  • IM_MSGKEY: SXMSMKEY; XI: Message-Id as returned in the result of SXMB_GET_MESSAGE_LIST
  • IM_ARCHIVE: SXMSFLAG; 1=read from archive, 0 = read from database; should be 0 to read from the database.
  • IM_VERSION: SXMSLSQNBR; message version number; leave empty to get the latest version

 

Output parameter:

  • EX_MSG_BYTES: XSTRING; the bytes of the message

Some tips and tricks for adapter modules lifecycle management


Recently I was involved in migrating a lot of adapter modules from a PI 7.0 system to a PI 7.4 environment. Creating a custom adapter module is not too complicated, but like anything else, we need to be clear about all the components involved and their implications. There is a very good SDN document about the steps to create one. However, in my experience, keeping the following points in mind makes the development process much smoother.


1. Understanding the descriptor files:

 

Most of the issues with adapter modules occur because one of the descriptors is wrong. Let's go through them.

 

a) ejb-j2ee-engine.xml: This file is in the EJB project. It links the EJB name with the JNDI name. Hence, if you are getting a "module not found" error, most likely you need to check your jndi-name.

 

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<ejb-j2ee-engine

  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

  xsi:noNamespaceSchemaLocation="ejb-j2ee-engine.xsd">

  <enterprise-beans>

  <enterprise-bean>

  <ejb-name>ToUpperCase</ejb-name>

  <jndi-name>ToUpperCase</jndi-name>

  </enterprise-bean>

  </enterprise-beans>

</ejb-j2ee-engine>

 

b) ejb-jar.xml: This is in the EJB project as well. It has the actual class name, EJB interface names, bean type, etc. The key thing here is to check the ejb-name and the class name. The home, local, remote and local-home names always stay the same (the standard SAP values). People often just accept the default names generated for these interfaces, and this causes a lot of errors.

 

 

 

<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" id="ejb-jar_ID" version="2.1" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/ejb-jar_2_1.xsd">
  <enterprise-beans>
  <session>
  <icon/>
  <ejb-name>ToUpperCase</ejb-name>
  <home>com.sap.aii.af.lib.mp.module.ModuleHome</home>
  <remote>com.sap.aii.af.lib.mp.module.ModuleRemote</remote>
  <local-home>com.sap.aii.af.lib.mp.module.ModuleLocalHome</local-home>
  <local>com.sap.aii.af.lib.mp.module.ModuleLocal</local>
  <ejb-class>com.demo.ToUpperCasebean</ejb-class>
  <session-type>Stateless</session-type>
  <transaction-type>Container</transaction-type>
  </session>
  </enterprise-beans>
</ejb-jar>

 

c) application-j2ee-engine.xml: This file is in the EAR project and stays the same unless the project needs additional libraries.

 

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<application-j2ee-engine

  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

  xsi:noNamespaceSchemaLocation="application-j2ee-engine.xsd">

  <reference

  reference-type="hard">

  <reference-target

  provider-name="sap.com"

  target-type="service">engine.security.facade</reference-target>

  </reference>

  <reference

  reference-type="hard">

  <reference-target

  provider-name="sap.com"

  target-type="library">engine.j2ee14.facade</reference-target>

  </reference>

  <reference

  reference-type="hard">

  <reference-target

  provider-name="sap.com"

  target-type="service">com.sap.aii.af.svc.facade</reference-target>

  </reference>

  <reference

  reference-type="hard">

  <reference-target

  provider-name="sap.com"

  target-type="interface">com.sap.aii.af.ifc.facade</reference-target>

  </reference>

  <reference

  reference-type="hard">

  <reference-target

  provider-name="sap.com"

  target-type="library">com.sap.aii.af.lib.facade</reference-target>

  </reference>

  <reference

  reference-type="hard">

  <reference-target

  provider-name="sap.com"

  target-type="library">com.sap.base.technology.facade</reference-target>

  </reference>

  <provider-name>sap.com</provider-name>

  <fail-over-enable

  mode="disable"

  xsi:type="fail-over-enableType_disable"/>

</application-j2ee-engine>

 

This file holds references to the libraries, services and interfaces used when the application is running. A good way to check that the references are correct is to go to the Java Class Loader on the application server and enter the component name.

 

In the screenshot below, we are able to find the component of type "service" with the name "engine.security.facade", as specified in application-j2ee-engine.xml.

 

AM1_app_engine.png

 

 

Adding additional references: If we are using additional libraries, we need to update the references in the following way.

 

- Add the client libraries to allow compilation. Many of the libraries are already available in NWDS plugins (e.g. JCo 2/3 libraries). However, in some cases they will need to be obtained from the SAP PI application server.

 

- Refer to the Javadoc to understand the DC that contains the class. Taking the interface ApplicationPropertiesAccess as an example, the Javadoc at

 

http://help.sap.com/javadocs/NW73/SPS05/CE/en/com.sap.en/com/sap/engine/services/configuration/appconfiguration/ApplicationPropertiesAccess.html

 

gives the DC as:

 

[sap.com] tc/je/appconfiguration/api



AM_2.png

 

Hence, the deployment descriptor will need to be updated as:

 

<reference reference-type="hard">

    <reference-target provider-name="sap.com"

target-type="service">

      tc~je~appconfiguration~api

    </reference-target>

</reference>


 

2. Renaming the adapter module: We often need to rename a module. Some of the reasons could be:


- While testing, we gave it a "test" name and want to change the module name without rewriting the whole project.

- Creating a new version and testing it in some scenarios before changing the configuration everywhere.


The adapter module name that we configure in communication channels is actually the JNDI name in ejb-j2ee-engine.xml, so initially the EJB and JNDI names were the same.


AM_4.png




In the screenshot below, I changed the JNDI name to Convert2Up.


AM_3.png


After changing only the JNDI name and updating the adapter module configuration parameters, I sent a test message and it works fine.



AM_5.png



3. Adding software component information: The adapter module can be deployed directly from NWDS by running the EAR project on the application server. However, it is better to organise custom software components, like SAP-delivered software components, in their own namespace so that it is easier to track the inventory and get all the benefits of application lifecycle management.


Without the software component information, we can verify that the bean is successfully deployed by checking the JNDI browser.


AM_12.png


However, if there are multiple beans, we need to know their names, so it is better to organise them in their own namespace. We need to go through the following process: EAR --> SDA --> SCA


The steps required are :


- Convert the EAR to an SDA file (which is the EAR file along with SAP-specific manifest info in the SAP_MANIFEST.MF file)


- Add the SDA file to an SCA file along with the software component information


The easiest approach is to use nwpacktool. Update the batch file with JAVA_HOME and NWPACKTOOLLIB.


AM_6.png


Then launch the batch (or .sh) file. As an example, to create an SDA file from the EAR:


AM_7.png


Now, add the component information and create an SCA file.


AM_8.png



Now, deploy the SCA file. We can display all custom modules created under the same component. In the screenshot below, two beans are deployed under the SC "AdaptModules", with the first bean having version 3.


AM_9.png


It is easier to deploy to any system as well, since there is only one SCA file to be deployed.


Anonymous SOAP calls in SAP PI


Best practice for SOAP calls is to provide at least a user and password to authenticate the call. Unfortunately, sometimes client systems do not provide the option to send credentials in SOAP calls.

 

This issue has been discussed in previous blogs like A closer look at SOAP Sender authentication, but the solutions provided are either not supported by SAP PI single stack or too dangerous, because they disable SOAP authentication at the adapter level.

 

One option we have found quite interesting in a recent project is to use SAP Web Dispatcher to allow anonymous SOAP calls to specific services.

 

 

Overview.png

 

The idea is the following:

1. We define a new endpoint for the anonymous service,  for instance /xi/project1/service1

2. Rewrite the new endpoint to the XISOAPAdapter URL of the service

3. Add header authentication for the endpoint.

 

 

Let's have a look at the steps in detail.

 

1. The first thing is to find out what the URL for the service call is.

 

This URL has the format

http://<server>:<port>

/XISOAPAdapter/MessageServlet?senderParty=<SENDER_PARTY>&senderService=<SENDER_SERVICE>&receiverParty=<RECEIVER_PARTY>&receiverService=<RECEIVER_SERVICE>&interface=<INTERFACE>&interfaceNamespace=<INTERFACE_NAME_SPACE>


for instance


http://myserver.com:50000/XISOAPAdapter/MessageServlet?senderParty=&senderService=BC_MYBC&receiverParty=&receiverService=&interface=BookingUpdate&interfaceNamespace=http://mycompany.com/booking


There are several places where you can find this, one of them is in Display WSDL for the ICO.
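As a sketch, the URL pattern above can be assembled from the ICO parameters as follows (the class and method names here are illustrative; note that in practice, query parameter values such as the namespace may need URL encoding):

```java
public class SoapAdapterUrl {

    // Builds the XISOAPAdapter endpoint URL from the ICO parameters,
    // following the pattern shown above
    static String build(String host, int port, String senderParty, String senderService,
                        String receiverParty, String receiverService,
                        String interfaceName, String interfaceNamespace) {
        return "http://" + host + ":" + port
                + "/XISOAPAdapter/MessageServlet?senderParty=" + senderParty
                + "&senderService=" + senderService
                + "&receiverParty=" + receiverParty
                + "&receiverService=" + receiverService
                + "&interface=" + interfaceName
                + "&interfaceNamespace=" + interfaceNamespace;
    }

    public static void main(String[] args) {
        // Reproduces the example URL from this section
        System.out.println(build("myserver.com", 50000, "", "BC_MYBC", "", "",
                "BookingUpdate", "http://mycompany.com/booking"));
    }
}
```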


2. HTTP Basic Authentication is constructed as follows:

 

  • Username and password are combined into a string "username:password"
  • The resulting string is then encoded using Base64
  • The authorization method and a space i.e. "Basic " is then put before the encoded string.

 

For the values pouser and mypassword, the resulting header value is "Basic cG91c2VyOm15cGFzc3dvcmQ="
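As a sketch, the same header value can be computed with standard Java (the class and method names here are illustrative; java.util.Base64 is available since Java 8):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Builds the value of the HTTP Authorization header for basic authentication:
    // "Basic " + base64("username:password")
    static String basicAuth(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Same example values as above
        System.out.println(basicAuth("pouser", "mypassword"));
    }
}
```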

 

 

3. Update modification handler rules for SAP Web Dispatcher.

 

The documentation is here  Modification of HTTP Requests - SAP Web Dispatcher - SAP Library

 

In Unix the file is something like /usr/sap/<SID>/SYS/global/security/data/icm_filter_rules.txt

 

You can add 3 rules similar to these ones:

if %{PATH} stricmp "/xi/project1/service1"

SetHeader Authorization "Basic cG91c2VyOm15cGFzc3dvcmQ="

RegRewriteUrl ^/xi/project1/service1 "/XISOAPAdapter/MessageServlet?senderParty=&senderService=BC_MYBC&receiverParty=&receiverService=&interface=BookingUpdate&interfaceNamespace=http://mycompany.com/booking" [qsreplace]

 

4. Logon to Web Dispatcher Administrator

 

http://<server>:<port>/sap/admin    for instance http://myserver:50000/sap/admin

 

Select HTTP Handler -> Modification Handler

 

Press Reload Rule File

 

5. In the client's call to this service, replace the endpoint with the new one, for instance /xi/project1/service1

 

The call should then work without a user and password.

Dynamic File name and Directory pseudo code


Hi All,

 

There are many articles, blogs and posts on dynamic file names in SCN. I was looking for sample code where a dynamic file name and a dynamic directory are implemented at the same time, but I didn't find any. Here, I am posting code for a dynamic file name together with a condition-based dynamic directory. Hope this helps.

 

 

 

Name: setFileName

Title: setFileName

Execution Type: Single Values

Category: User-Defined

Type: Argument

Name: variablePart

Java Type: String

 

//Dynamic File name and Directory: Author-Partha

String filename = "";
String directory = "";

DynamicConfiguration conf1 = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
DynamicConfigurationKey key1 = DynamicConfigurationKey.create("http:/" + "/sap.com/xi/XI/System/File", "FileName");
DynamicConfigurationKey key2 = DynamicConfigurationKey.create("http:/" + "/sap.com/xi/XI/System/File", "Directory");

// Set the dynamic file name
filename = "Customers_Outbound_" + variablePart + "_.csv";
conf1.put(key1, filename);

// Choose the target directory based on the office code
if (variablePart.equals("6301")) {
    directory = "/IN-IFL/01-OFFICE";
} else if (variablePart.equals("6302")) {
    directory = "/IN-IFL/02-OFFICE";
} else if (variablePart.equals("6303")) {
    directory = "/IN-IFL/03-OFFICE";
} else if (variablePart.equals("6304")) {
    directory = "/IN-IFL/04-OFFICE";
} else if (variablePart.equals("6305")) {
    directory = "/IN-IFL/05-OFFICE";
} else if (variablePart.equals("6306")) {
    directory = "/IN-IFL/06-OFFICE";
} else if (variablePart.equals("6308")) {
    directory = "/IN-IFL/08-OFFICE";
} else if (variablePart.equals("6309")) {
    directory = "/IN-IFL/09-OFFICE";
} else if (variablePart.equals("6310")) {
    directory = "/IN-IFL/10-OFFICE";
} else if (variablePart.equals("6312")) {
    directory = "/IN-IFL/12-OFFICE";
} else if (variablePart.equals("6313")) {
    directory = "/IN-IFL/13-OFFICE";
} else if (variablePart.equals("6314")) {
    directory = "/IN-IFL/14-OFFICE";
} else {
    directory = "/IN-IFL/20-OFFICE";
}

// Set the dynamic directory
conf1.put(key2, directory);
return directory;
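The long if/else chain above can also be expressed as a table lookup, which is easier to extend when new office codes are added. The following standalone sketch (the class name and plain-Java setting are illustrative; in the actual UDF you would keep the DynamicConfiguration calls) shows the idea:

```java
import java.util.HashMap;
import java.util.Map;

public class DirectoryLookup {

    // Office code -> target directory, same values as the if/else chain above
    private static final Map<String, String> DIRECTORIES = new HashMap<>();
    static {
        DIRECTORIES.put("6301", "/IN-IFL/01-OFFICE");
        DIRECTORIES.put("6302", "/IN-IFL/02-OFFICE");
        DIRECTORIES.put("6303", "/IN-IFL/03-OFFICE");
        DIRECTORIES.put("6304", "/IN-IFL/04-OFFICE");
        DIRECTORIES.put("6305", "/IN-IFL/05-OFFICE");
        DIRECTORIES.put("6306", "/IN-IFL/06-OFFICE");
        DIRECTORIES.put("6308", "/IN-IFL/08-OFFICE");
        DIRECTORIES.put("6309", "/IN-IFL/09-OFFICE");
        DIRECTORIES.put("6310", "/IN-IFL/10-OFFICE");
        DIRECTORIES.put("6312", "/IN-IFL/12-OFFICE");
        DIRECTORIES.put("6313", "/IN-IFL/13-OFFICE");
        DIRECTORIES.put("6314", "/IN-IFL/14-OFFICE");
    }

    // Unknown office codes fall back to the same default as the else branch
    public static String directoryFor(String variablePart) {
        return DIRECTORIES.getOrDefault(variablePart, "/IN-IFL/20-OFFICE");
    }

    public static void main(String[] args) {
        System.out.println(directoryFor("6301"));
        System.out.println(directoryFor("9999"));
    }
}
```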

How to create SAP PI adapter modules in EJB 3.0


Introduction:

 

We all have worked on creating a custom adapter module in various projects at some point, and we have mostly written our custom modules to the EJB (Enterprise Java Bean) 2.1 standard. In this blog, let's look at a few main differences between the EJB 2.1 and 3.0 standards and how to develop and deploy your custom modules to the EJB 3.0 standard. This document will be more helpful if you have basic knowledge of how to create custom adapter modules to the EJB 2.1 standard.

 

Differences:

 

A few key differences between EJB 2.1 and EJB 3.0 are:

 

S.No | EJB 2.1 | EJB 3.0
1 | XML deployment descriptor (ejb-jar.xml) is mandatory | XML deployment descriptor is optional; annotations can be used instead
2 | Bean implementation class must implement the SessionBean interface | Not required to implement the SessionBean interface
3 | Bean implementation class must override all bean lifecycle methods, whether you use them or not | Add your own bean lifecycle methods, if required, using annotations
4 | Complex; more work for developers | Simple; less work for developers, as the container does most of the work

 

There are many other differences between EJB 2.1 and 3.0, but they are not relevant here.

 

Let's create a simple adapter module in EJB 3.0:


To keep the implementation simple, our custom adapter module just prints a statement to the audit log.


Step 1: Create new EJB project in NWDS

 

  1. Project name: FileValidation_EJB
  2. EJB module version: 3.0
  3. Add EAR membership: FileValidation_EAR
  4. Click Next

new ejb proj.jpg

 

  1. Uncheck option to create EJB client jar
  2. Uncheck Generate ejb-jar.xml deployment descriptor
  3. Click Finish

new ejb proj2.jpg

 

Step 2: Configure Build Path

 

  1. Right click on FileValidation_EJB
  2. Select Build Path > Configure Build Path

new ejb proj3.jpg

 

  1. Select Libraries tab
  2. Click in Add Library

new ejb proj4.jpg

 

  1. Select XPI Library
  2. Click Next

new ejb proj5.jpg

 

  1. Select Library Type: XPI Adapter Libraries
  2. Click Finish

new ejb proj6.jpg

 

Step 3: Create a new Session Bean

 

  1. Right click on FileValidation_EJB
  2. Select New > Session Bean(EJB 3.x)

new ejb proj7.jpg

 

  1. Java package: com.sap.pi
  2. Class name: Validation
  3. State type: Stateless
  4. Uncheck create remote and local business interface
  5. Click Finish

new ejb proj8.jpg

 

Add the below annotations to the Validation class:

  1. @Stateless(name="ValidationBean")
  2. @Local(value={ModuleLocal.class})
  3. @Remote(value={ModuleRemote.class})
  4. @LocalHome(value=ModuleLocalHome.class)
  5. @RemoteHome(value=ModuleHome.class)

new ejb proj9.jpg

Implement the Module interface and add bean lifecycle methods using the @PostConstruct and @PreDestroy annotations. Bean lifecycle methods are optional and can be omitted.

new ejb proj10.jpg
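Put together, the annotated bean looks roughly like the sketch below. This is only a sketch: the Module* interfaces come from the XPI Adapter Libraries added to the build path earlier, and the process() body here simply passes the message through, whereas the actual module writes to the audit log.

```java
package com.sap.pi;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Local;
import javax.ejb.LocalHome;
import javax.ejb.Remote;
import javax.ejb.RemoteHome;
import javax.ejb.Stateless;

import com.sap.aii.af.lib.mp.module.Module;
import com.sap.aii.af.lib.mp.module.ModuleContext;
import com.sap.aii.af.lib.mp.module.ModuleData;
import com.sap.aii.af.lib.mp.module.ModuleException;
import com.sap.aii.af.lib.mp.module.ModuleHome;
import com.sap.aii.af.lib.mp.module.ModuleLocal;
import com.sap.aii.af.lib.mp.module.ModuleLocalHome;
import com.sap.aii.af.lib.mp.module.ModuleRemote;

@Stateless(name = "ValidationBean")
@Local(value = { ModuleLocal.class })
@Remote(value = { ModuleRemote.class })
@LocalHome(value = ModuleLocalHome.class)
@RemoteHome(value = ModuleHome.class)
public class Validation implements Module {

    @PostConstruct
    public void initialize() {
        // optional lifecycle hook, replaces ejbCreate() from EJB 2.1
    }

    @PreDestroy
    public void cleanup() {
        // optional lifecycle hook, replaces ejbRemove() from EJB 2.1
    }

    public ModuleData process(ModuleContext context, ModuleData inputModuleData)
            throws ModuleException {
        // module logic goes here; this sketch simply passes the message through
        return inputModuleData;
    }
}
```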

 

Step 4: Provide JNDI name for module

 

  1. Expand FileValidation_EJB > ejbModule > META-INF
  2. Double click ejb-j2ee-engine.xml
  3. Right click on XML node ejb-j2ee-engine
  4. Select Add Child > enterprise-beans

new ejb proj11.jpg

 

  1. Right click on XML node enterprise-bean
  2. Select Add Child > jndi-name

new ejb proj13.jpg

 

  1. ejb-name: ValidationBean (this should be the same as the value specified in the @Stateless annotation)
  2. jndi-name: SetValidation (This name will be used in the communication channel module tab)
  3. Save the file

new ejb proj14.jpg
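With both values maintained, the relevant part of ejb-j2ee-engine.xml should look roughly like this (a sketch; the surrounding attributes may differ slightly depending on the NWDS version):

```xml
<ejb-j2ee-engine>
  <enterprise-beans>
    <enterprise-bean>
      <ejb-name>ValidationBean</ejb-name>
      <jndi-name>SetValidation</jndi-name>
    </enterprise-bean>
  </enterprise-beans>
</ejb-j2ee-engine>
```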

 

Step 5: Add standard references to EAR


  1. Expand FileValidation_EAR > EarContent > META-INF
  2. Double click application-j2ee-engine.xml
  3. Right click on XML node application-j2ee-engine
  4. Select Add Child > reference

new ejb proj15.jpg

 

  1. Right click on reference-target
  2. Select Add Attribute > provider-name

new ejb proj16.jpg

 

  1. reference-target: engine.security.facade
  2. reference-type: hard
  3. target-type: service
  4. provider-name: sap.com

new ejb proj17.jpg

 

Repeat the steps in step 5 to add the below references:

S.No | Reference-Target | Reference-Type | Target-Type | Provider-Name
1 | engine.j2ee14.facade | hard | library | sap.com
2 | com.sap.aii.af.svc.facade | hard | service | sap.com
3 | com.sap.aii.af.ifc.facade | hard | interface | sap.com
4 | com.sap.aii.af.lib.facade | hard | library | sap.com
5 | com.sap.base.technology.facade | hard | library | sap.com

 

  1. Right click on XML node application-j2ee-engine
  2. Select Add Child > fail-over-enable

new ejb proj18.jpg

  1. Right click on XML node fail-over-enable
  2. Select Add Attribute > type

new ejb proj19.jpg

  1. Right click on the node fail-over-enable
  2. Select Add Attribute > mode

new ejb proj20.jpg

  1. xsi:type: fail-over-enableType_disable
  2. mode: disable
  3. Save the file

new ejb proj21.jpg
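After all references and the fail-over setting are added, application-j2ee-engine.xml should look roughly like this (a sketch based on the values above; the exact schema attributes may vary by release):

```xml
<application-j2ee-engine xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <reference reference-type="hard">
    <reference-target provider-name="sap.com" target-type="service">engine.security.facade</reference-target>
  </reference>
  <reference reference-type="hard">
    <reference-target provider-name="sap.com" target-type="library">engine.j2ee14.facade</reference-target>
  </reference>
  <reference reference-type="hard">
    <reference-target provider-name="sap.com" target-type="service">com.sap.aii.af.svc.facade</reference-target>
  </reference>
  <reference reference-type="hard">
    <reference-target provider-name="sap.com" target-type="interface">com.sap.aii.af.ifc.facade</reference-target>
  </reference>
  <reference reference-type="hard">
    <reference-target provider-name="sap.com" target-type="library">com.sap.aii.af.lib.facade</reference-target>
  </reference>
  <reference reference-type="hard">
    <reference-target provider-name="sap.com" target-type="library">com.sap.base.technology.facade</reference-target>
  </reference>
  <fail-over-enable xsi:type="fail-over-enableType_disable" mode="disable"/>
</application-j2ee-engine>
```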


Step 6: Deploy EAR into PI Web AS Java

 

  1. Right click on FileValidation_EAR
  2. Select Export > SAP EAR file

new ejb proj22.jpg

  1. Select a target folder on the local file system
  2. Check Export source files and click Finish

new ejb proj23.jpg

Add SAP AS Java instance to NWDS

  1. Window > Preferences > SAP AS Java > Click Add button
  2. Enter your PI system hostname and instance number and click OK

new ejb proj24.jpg

 

Open deployment perspective in NWDS

  1. Select Window > Open Perspective > Other > Deployment
  2. Click OK

new ejb proj25.jpg

 

In Deployment Job view

  1. Click on Import button under Deployment List

new ejb proj26.jpg

 

  1. Select File System
  2. Click Finish and select the EAR file to deploy from local filesystem.

new ejb proj27.jpg

 

  1. Click on Start button to start the deployment
  2. Enter j2ee_admin credentials if prompted

new ejb proj28.jpg

 

The module should get deployed successfully without any errors

new ejb proj29.jpg

 

Step 7: Use module in communication channel and test


Use the module in channel

new ejb proj31.jpg


Audit log

new ejb proj30.jpg


Note:

  1. EJB 3.0 solves the same problems as EJB 2.1, but EJB 3.0 is more developer-friendly.
  2. I am using SAP NWDS (NetWeaver Developer Studio) 7.31 SP10. The screenshots in this document may vary depending on the NWDS version that you are using.
  3. The perspective that I am using in NWDS is Java EE.

Webinar SAP HANA Cloud Integration


SAP has recognized the need for customers to leverage and augment their existing on-premise investments with cloud applications for faster business outcomes. To help customers rapidly and flexibly integrate our cloud solutions with other systems, whether they are SAP, non-SAP, or custom applications, SAP provides a cloud integration solution called SAP HANA Cloud Integration (HCI) which is a core service running on top of the SAP HANA Cloud Platform.

 

In this SAP Insider hosted webinar you can understand the core product capabilities of SAP HANA Cloud Integration. You can also see how partners like Itelligence are building a Salesforce adapter using the adapter SDK of HCI. The webinar also covers how customers like Bentley and Owens-Illinois have been solving their cloud integration challenges using HCI, and explains how the pre-packaged content delivered by SAP can significantly reduce the time spent on integration projects.

For more details click here Event Registration (EVENT: 921502 - SESSION: 1)

Using SAP Hana Cloud Integration (HCI) with SAP Cloud Connector


While trying out the SAP HANA Cloud Integration trial version for integration with on-premise SAP systems and cloud SAP applications, one concern is the network connection between SAP HCI and the on-premise SAP PI or ECC system. The reverse-proxy approach is not always the best one, and client network teams are sometimes reluctant to have firewall rules modified with service endpoints. The SAP Cloud Connector, based on the SAP HANA Cloud Platform, can be an easy-to-use alternative in this case, and it also handles propagation of the cloud user identity to the on-premise SAP system in a trusted manner.

For testing purposes only, in this blog we will try to reach the ping service on an on-premise SAP system from SAP HANA Cloud Integration using the SAP Cloud Connector.

SAP HANA Cloud Platform and SAP HANA Cloud Integration trial accounts have been used to set up the connection.

  1. Activate the HTTP service in transaction SICF. Once it is activated successfully, the following link can be checked in a local browser, or you can use the 'Test Service' option shown below:

http://<on premise sap hostname>:<port>/sap/public/ping?sap-client=<client>

       1.png

       2.png

 

     In this blog we will try to reach this on-premise ping service from SAP HCI and capture the response, i.e. 'Server reached successfully', in a file.


     2. Setting up SAP Cloud Connector

 

For setting up the SAP Cloud Connector please refer to the following link :

http://hcp.sap.com/developers/TutorialCatalog/con100_1_setting_up_cloud_connector.html

Refer to the 'Install the SCC' section for installing the Cloud Connector and the 'Establish connection to your cloud account' section for setting up the connection to your SAP HCP trial account.

 

     3. Setting up the Connectivity application in SAP Hana Cloud Platform from Eclipse

For this example, we will use the connectivity sample from the SAP Hana Cloud Platform SDK available from https://tools.hana.ondemand.com/#cloud

3.png

The 'connectivity' project is imported into Eclipse and deployed on the SAP HANA Cloud Platform as described in Rui's openSAP course on SAP HANA Cloud Platform. This is basically a servlet that requests data from the on-premise system and passes it to the calling system, in this case the SAP HCI system. The primary steps are outlined below:

 

 

  a. The SAP HANA Cloud Platform server is set up in Eclipse as below:

     4.png

     5.png

  b. Right click on the imported project and select Run On Server to deploy the application on the SAP HANA Cloud Platform.

     6.png

  c. Double click on the server and adjust the URL destination 'backend-no-auth-destination' to the subpath /sap/public/ping?sap-client=<client>. The hostname and port do not need to be those of the actual on-premise SAP server; we will set up the mapping of system names in the next step. We use the destination 'backend-no-auth-destination', which is provided in the 'connectivity' sample as well.

     7.png

  d. Set the mapping between the external system name and the actual internal hostname in the SAP Cloud Connector:

     8.png

 

4. Set up the SAP HCI iFlow to reach the on-premise SAP system as below. The iFlow fetches the HTTP response from the on-premise system and writes it into a file on an SFTP server.

9.png

Note that in the receiver channel the Query should start with ‘&’.

10.png

The URL Address to be used in the receiver channel can be taken from Eclipse as below :

11.png

5. Set the ‘Timer start event’ in the HCI iFlow to ‘Run Once’ and deploy the project in your HCI Tenant.

6. Monitoring :

   a. Once it is deployed successfully, the message monitoring in HCI can be checked for failures.

12.png

The message processing log is shown below on selecting the message in the ‘Properties’ tab.

 

13.png

b. Logs in the SAP Cloud Connector can be checked in the ‘Logs’ link in the left pane after logging in to SAP Cloud Connector.

 

14.png

 

7. Checking the result :

The file is checked in the SFTP server path that is specified in the HCI receiver channel. The file contains the response from the on-premise ping service, confirming the connectivity setup.

 

15.png

 

In a number of cases, for example when cloud applications like SuccessFactors (say, the Employee Replication interface from SF to SAP ECC) are being integrated with on-premise SAP using SAP HCI, the SAP Cloud Connector can be a helpful tool, and hopefully SAP will at some point provide a recommended approach for using the Cloud Connector with SAP HCI integrations.
