Best Practices: Designing Orchestration Flow

Once you have completed your analysis and have a blueprint in place, you can proceed with the development of the processes and the associated artifacts.

Process development cycle

The development of a process usually involves a number of steps:
  • Import and locate resources related to the services that will be orchestrated. This sometimes involves the creation of artifacts based on sample data.
  • Create the orchestration flow based on process models created during the analysis stage
  • Quality Control
    • Collect sample data for testing
    • Simulate the process whenever new functions are added to it
    • Create automated unit test cases and test suites for integrated regression tests
      • You can use either a recorded simulation session or the BUnit wizard to generate test cases
      • Test cases can be aggregated as test suites that in turn can be incorporated in your QA process
  • Deployment
    • Create deployment artifacts. This is the process of defining endpoint references and other attributes for the partner services and the process itself
      • Define the invoke handler and endpoint reference for your partners. The invoke handler specifies what binding and protocols to use when sending messages to a partner service.
      • Define a receive handler and endpoint reference for the process's my role. This defines how the service the process exposes is to be accessed by clients.
      • Define policies related to the process's my role and partner roles - these include any security, communication and QoS attributes required for the interaction
      • Define process governance-related properties, including versioning policies, persistence policies and runtime process management-related data (e.g. indexed properties)
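
The deployment artifacts listed above are captured in a process deployment descriptor (.pdd). The sketch below is only illustrative: the process, partner and service names are hypothetical, and the element and attribute names approximate the PDD schema, which may vary by ActiveVOS version.

```xml
<!-- Illustrative PDD sketch; names, namespaces and attributes are approximate -->
<process xmlns="http://schemas.active-endpoints.com/pdd/2006/08/pdd.xsd"
         name="bpelns:LoanApproval" location="bpel/LoanApproval.bpel">
  <partnerLinks>
    <!-- partner role: invoke handler + endpoint reference for an invoked service -->
    <partnerLink name="creditService">
      <partnerRole endpointReference="static" invokeHandler="default:Address">
        <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
          <wsa:Address>http://partner-host:8080/services/CreditService</wsa:Address>
        </wsa:EndpointReference>
      </partnerRole>
    </partnerLink>
    <!-- my role: receive handler defining how clients reach the process -->
    <partnerLink name="client">
      <myRole binding="MSG" service="LoanApprovalService"/>
    </partnerLink>
  </partnerLinks>
</process>
```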

Important design principles using BPEL

The following are some of the most important BPEL process design considerations:
  • Robust - design processes to perform as expected in all possible real-world scenarios
    • Use the appropriate interaction pattern (synchronous vs. asynchronous). Synchronous communication is suitable only for low-latency, short-lived service invocations. Asynchronous communication can handle long-lived processes and interaction patterns but requires more programming effort if you do not plan on using ActiveVOS Server managed correlation.
    • Use scopes and isolated scopes to separate the units of work in a process. Fault and compensation handlers can be attached to a scope, so place the activities belonging to one unit of work into a single scope; this preserves the integrity of the process if a fault occurs. If parallel activities share resources, use isolated scopes to control access to those shared resources.
    • Use structured fault and compensation handling to cover all known fault and exception conditions. If a partner can return known faults, decide in advance how each one should be handled.
    • Use message validation to validate inbound and outbound messages whenever it is appropriate. Catching and handling invalid messages early makes the process much more robust. However, be aware that validating messages adds latency to the process's execution.
  • Flexible - create interfaces, data models and deployment artifacts that can be easily modified or extended at a later stage when required
    • Use extension mechanisms in interface design to add flexibility. For example, you can use the XML Schema anyType to indicate that a particular definition is extendable.
    • Use URN mapping to add flexibility to partner role binding. Doing so lets you add a level of indirection when specifying the endpoint address of a partner at process deployment-time. You can use tokens in the endpoint address when creating the process deployment descriptor and substitute the tokens with components (host, port, service names, etc.) of the real address using the engine administration console. This eliminates the need of redeploying a process for the sole reason that a partner’s endpoint address has changed. This technique can also be used to help test processes in various staging environments prior to deployment to production.
  • Efficient - design for runtime efficiency
    • BPEL contains constructs that allow for parallel execution. The flow and parallel forEach activities provide powerful semantics, expressed through attributes, for specifying how activities execute in parallel. ActiveVOS supports true parallel execution, using multiple threads for receiving inbound messages and invoking external services.
  • Compliant - design for interoperability
    • Use WS-I Basic Profile 1.1 as guidance when exposing processes as services
      • Use doc-lit style whenever possible so the message payload can be validated against schema
      • Use single part messages whenever possible
      • Do not use the RPC-encoded binding style unless absolutely required, as encoded messages cannot be easily validated.
    • When should you use ActiveVOS extensions? ActiveVOS provides BPEL extensions to address specific needs. Use BPEL extensions with caution because they make the process less portable. Below is a summary of the extensions:
        • Suspend, Continue, Break - these permit logically driving a process into the suspended state and provide finer-grained control of your looping constructs
        • XQuery and JavaScript as expression languages, in addition to XPath, which is mandated by the BPEL specification
        • ActiveVOS-specific custom functions (e.g. getProcessId, getAttachment, etc.) that enable various capabilities not covered by the BPEL specification
        • Custom invoke, custom function – these provide the means to use Java-based components to support intricate expressions or to invoke partner services
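
As an illustration of the scope guidance above, here is a hedged BPEL 2.0 sketch; the partner links, operations and variable names are hypothetical. It isolates one unit of work in a scope, handles a known partner fault, and attaches a compensation handler to undo completed work if an enclosing scope faults later.

```xml
<!-- Hypothetical names throughout; standard WS-BPEL 2.0 constructs -->
<scope name="DebitAccount">
  <faultHandlers>
    <!-- known fault returned by the partner: reply with a business fault -->
    <catch faultName="bank:insufficientFunds">
      <reply partnerLink="client" operation="transfer"
             faultName="tns:transferFailed" variable="failureInfo"/>
    </catch>
    <!-- anything unexpected: undo whatever inner work already completed -->
    <catchAll>
      <compensate/>
    </catchAll>
  </faultHandlers>
  <compensationHandler>
    <!-- reverses the debit if this scope must be compensated -->
    <invoke partnerLink="bank" operation="creditAccount" inputVariable="undoRequest"/>
  </compensationHandler>
  <invoke partnerLink="bank" operation="debitAccount"
          inputVariable="debitRequest" outputVariable="debitResponse"/>
</scope>
```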

Use of integrated testing for quality control

To ensure the quality of the application and reduce maintenance costs, ActiveVOS includes a set of built-in features that make it much easier to perform quality control. These features are simulation and BUnit tests.

Functional testing through simulation

Simulation allows a developer to test the correctness of the orchestration logic in the development environment. By providing sample data for all inbound messages, you can create a "scenario" that simulates a set of business conditions. You can then run the simulation against the process and check if the process behaves as expected under these conditions.

Simulation is a powerful tool for validating the process without ever having to deploy it. You can simulate as many scenarios as you want for a given process. Simulation is especially useful for testing newly added logic. The key to simulation is to collect sample data early and understand how it applies to the test scenarios.
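
A simulation scenario is driven by a sample document for each inbound message. The fragment below is a purely hypothetical example (schema, namespace and values invented for illustration) of sample data chosen to exercise a "credit declined" path:

```xml
<!-- Hypothetical sample inbound message for a "credit declined" scenario -->
<loan:creditCheckResponse xmlns:loan="urn:example:loan">
  <loan:applicantId>A-1042</loan:applicantId>
  <loan:score>480</loan:score>
  <loan:decision>DECLINED</loan:decision>
</loan:creditCheckResponse>
```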

Use BUnit for automated testing

While simulation is good at validating new functionality, achieving good coverage of a process with multiple execution paths requires running many simulations. Simulation also requires human interaction, which means you cannot automate it.

To solve the problems of test coverage (especially important for regression testing) and test automation, you can use the BUnit test function provided by ActiveVOS.

BUnit is very similar to JUnit in that it is driven by scripts, can be run outside of the IDE as part of a build process, and does not require human interaction.

There are two ways to generate the BUnit test cases:
  • Record a simulation session and save the result as a test case. Assertions are automatically made to check the correctness of outbound messages.
  • Use the BUnit wizard to generate a test case and manually modify the inbound message content and assertions. This is more suitable for advanced use cases.
In most cases you will want to bundle all test cases for a given process into a test suite so they can be incorporated into a single offline test run.
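
Such a suite can then be wired into an automated build. The Ant target below is only a sketch: the runner jar name, suite file name and directory layout are placeholders, not actual ActiveVOS artifact names, so consult your installation's documentation for the real command-line entry point.

```xml
<!-- Hypothetical Ant target; jar and file names are placeholders -->
<target name="bunit-regression">
  <java jar="${activevos.home}/lib/bunit-runner.jar" fork="true" failonerror="true">
    <arg value="test/LoanApprovalSuite.bunitSuite"/>
  </java>
</target>
```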