I am going to write a ‘visual’ example of the article documented here: Performing Composite Operations on Oracle Database by Using BizTalk Server

I have a schema already created from Oracle that defines all of the stored procedures we are planning on calling from within BizTalk.

The issue is that I need to send multiple calls to the same stored procedure, as well as calls to several different stored procedures.

Now we are going to create the composite Schema Definition.

Let’s import the original schema; for whatever reason, I needed to add ns0 as the prefix.

 

So let’s change the root node name to Request (like the instructions ask us to).

And start adding the nodes we want under the Request node (making them repeatable). Unlike the instructions, I am not naming each one, because once I choose the correct node, it is going to be renamed anyway:

Make it Min Occurs 0 and Max Occurs * (unbounded), and we are done with the first one.

And do the same for the rest of the stored proc calls.

Let’s create the Response and do the same thing, this time choosing all of the Response nodes. Here is the final schema:

 

Now you can start mapping…
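For reference, an instance of a composite Request like the one above ends up looking roughly like this. This is only a sketch: the procedure names, parameters, and target namespaces here are made up for illustration, not copied from the actual generated schema.

```xml
<!-- Illustrative composite Request: repeated calls to the same stored
     procedure plus a call to a different one, all in a single message. -->
<ns0:Request xmlns:ns0="http://CompositeSample">
  <ns1:SP_INSERT_ORDER xmlns:ns1="http://Microsoft.LobServices.OracleDB/2007/03/SCOTT/Procedure">
    <ns1:ORDER_ID>1</ns1:ORDER_ID>
  </ns1:SP_INSERT_ORDER>
  <ns1:SP_INSERT_ORDER xmlns:ns1="http://Microsoft.LobServices.OracleDB/2007/03/SCOTT/Procedure">
    <ns1:ORDER_ID>2</ns1:ORDER_ID>
  </ns1:SP_INSERT_ORDER>
  <ns1:SP_UPDATE_STATUS xmlns:ns1="http://Microsoft.LobServices.OracleDB/2007/03/SCOTT/Procedure">
    <ns1:STATUS>SHIPPED</ns1:STATUS>
  </ns1:SP_UPDATE_STATUS>
</ns0:Request>
```

The repeatable (min 0, max *) child records are what let you batch as many calls as you want into one adapter round trip.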

Jan 23, 2015
 

One of the biggest beefs I have with the BizTalk mapper (other than functoids in general) is the ‘feature’ where, when you click on a source node, it highlights all of the links and centers the destination pane to show you those links.

For small maps, this is hardly noticeable.

However, for large maps where the same input node is linked to multiple destination nodes, it becomes hair-pullingly frustrating.

To disable this, in Visual Studio go to Tools –> Options –> BizTalk Mapper and uncheck the two default checks.

You then need to restart Visual Studio (or perhaps just close the currently opened maps) for the scrolling to finally stop.

Options

 

I am sure I am the last to figure this out, so pardon me for my late entry to the game of XSL goodness.

A few years ago, I discovered the power of XSL, and now I never use any of the out-of-the-box functoids; in fact, I am hard pressed to remember the last time I even used inline C# in my maps.

So here is how I have ALWAYS coded inline XSLT:

I need to create an output node

Destination

So, what I have always done is created the XSL like this:

<ns2:P_BATCH_ITEM_CNT>
  <xsl:value-of select="count(/s1:Batch/s0:Form)" />
</ns2:P_BATCH_ITEM_CNT>

Which should work. However, when I either Validate the map or compile it, I get the following error:

MakeMessages.btm: error btm1023: Exception Caught: 'ns2' is an undeclared prefix. Line 1, position 2

So then I open up the map and go find the ns2 definition at the top of the generated XSL.

namespaces

Copy the namespace down into the script like this:

<ns2:P_BATCH_ITEM_CNT xmlns:ns2="http://Microsoft.LobServices.OracleDB/2007/03/BIZTALK/Procedure">
  <xsl:value-of select="count(/s1:Batch/s0:Form)" />
</ns2:P_BATCH_ITEM_CNT>

However, there is an easier way to do this:

Use the xsl:element instruction; this way you don’t have to care about declaring the namespace in each of your custom XSL scripts. (The script snippet is parsed as XML on its own, which is why a literal ns2: element fails; with xsl:element the prefix is just an attribute value, resolved at runtime against the namespaces declared in the mapper-generated stylesheet.)

<xsl:element name="ns2:P_BATCH_ITEM_CNT">
  <xsl:value-of select="count(/s1:Batch/s0:Form)" />
</xsl:element>
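A related trick, in case you ever need an element in a namespace the mapper hasn’t declared at all: xsl:element also accepts an explicit namespace attribute, which sidesteps prefixes entirely. A sketch, reusing the namespace URI from the error above:

```xml
<!-- No prefix needed: the namespace is given directly on xsl:element. -->
<xsl:element name="P_BATCH_ITEM_CNT"
             namespace="http://Microsoft.LobServices.OracleDB/2007/03/BIZTALK/Procedure">
  <xsl:value-of select="count(/s1:Batch/s0:Form)" />
</xsl:element>
```

Either form produces the same output element; the namespace-attribute form just removes the dependency on any prefix declaration.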

Jan 22, 2015
 

So I was just notified that the StackExchange Redis NuGet package was upgraded, so here is my experience:

There isn’t anything on the site’s home page that says anything about upgrading, so it must be easy!

I opened up my solution, and in the NuGet console I pointed to the project that uses Redis and ran the command to update to the latest strong-named package:

image

And here are the results

image

Just as I thought: EASY

Nov 14, 2014
 

So I learned a bit more about the importance of what you name where when setting up database facts in the BRE.

The table name is especially important. Within the BRC, you don’t have much flexibility in what you can edit, but when setting up either a long or short term fact, the names are extremely important; they have to match, or the engine simply states that the TypedDataRow is unresolvable.

Here is the link between what is setup in the BRC and how it correlates to setting up a table fact in code:

image

Oct 29, 2014
 

So we at Stott Creations often get requests to ensure that data flowing through BizTalk is valid.

The out-of-the-box functionality is pretty straightforward: simply turn on XSD validation in the XML pipelines:

image

The sucky part is that it will only surface the first error. If your data is really bad, I have no desire to submit the data multiple times just to flush out all of the errors.

What we have done is create a product that builds a comprehensive list of errors, so you can send the entire list back at once.

So the first thing we have is a list of validation functions that will serve to create the list of rules to validate the data against.

image

The second set of vocabulary items is the validation patterns:

image

To set up the actual policy, we first need the ability to assert the list to the rule engine. I create the rule as “1 Assert” (so it shows up at the top of the list). I drag the Run into the Conditions pane. I right click the Actions, choose Assert, and then drag List Enumerator into the fact. I also want this to run first, so I set the priority to 2.

image

Now I need to advance the enumeration. I create “2 Advance” (to sit below “1 Assert”). I drag IEnumerator MoveNext into the Conditions pane. I then right click the Actions, choose Assert, and drag Current IEnumerator into the fact. I then right click the Actions again, add Update, and drag IEnumerator into the fact. Because I want this to run second, I set the priority to 1.

image

Okay, so now we are ready to start creating validation rules. Each rule runs independently of the others, and I really don’t care what order they run in; I just need them all to run. Each rule I create in this policy is going to have a priority of 0 (the default). Let’s create a rule that checks a format.

I am going to check for two things:

  1. Check the XPath to see if the node to validate is present
  2. Check the format to see if the value is valid

image

Right click and choose Equal, and drag in XPath Statement from the vocabulary.

image

I then go to Visual Studio and open up the schema and choose the element or attribute I am attempting to check and copy the xpath statement.

image

And paste it in the right side of the rule

image

Then I drag the Check Format into the AND

image

I then drag Text Value into the first <empty string> slot

image

Now I go to the other vocabulary and choose the date format I want to validate against, in this case Date with optional century indicator MMDDYYYY, and I drag it into the second <empty string> slot. There are two items: the first one is the regular expression value and the second one is the friendly version. I want to choose the regular expression.

image

The next part: if this condition is true, meaning the XPath matched and the value doesn’t pass the regular expression check, I want to create an error. I drag the Create Format Error (Manual) into the Actions.

image

Now I need to fill in the error information. I need to supply the node type, which is either an attribute or an element, so I drag Node Type into the <enter a value> section and Node Name into the <empty string>.

image

I then drag Text Item into the next <empty string> element, and Line Number and Position into the next two 0 placeholders.

image

I then drag the Date with optional century indicator MMDDYYYY from the Patterns vocabulary into the last <empty string> placeholder; this time I choose the human readable description (because really, who understands regular expressions?).

image

I can continue creating rules this way.

However, there is an easier way:

I create the next rule, building the IF pane the same way; this time I am going to check the SSN. In the THEN, I drag in Create Format Error (Automatic).

image 

I then drag Validation Information into the <null>, which has all of the details I need (attribute/element, node name, text, position, line number, etc.).

image

And then drag the Pattern into the <empty string> and choose the friendly explanation.

image

There are tests for data lengths: min, max, and length ranges.

Now to execute it.

First we will show how to do it in .NET

I add a reference to StottCreations.Validation (in the GAC) and then create the following code:

  1. I load an XML document.
  2. I instantiate a new Record and pass in the XML document.
  3. I set the facts (only one) on the Record.
  4. Because I might want to see the trace, I new up the DebugTrackingInterceptor.
  5. I set up the policy, referencing the Validation policy I just created.
  6. I then execute the policy.
  7. I also need to clear and dispose of the policy (because we have found that the automated garbage collection within .NET is not fast enough).

image

 

When I run the test, I can look at the record, and this is what I see:

image

If I run a valid XML document through, these are the results:

image

I can do the same for an orchestration:

image

In the Initialize Variables shape is the following code; again, we don’t use the Call Rules shape because we need to dispose immediately.

image

and then in the decision shape (Good)

image

and in the Terminate shape:

image

Oct 21, 2014
 

I put together a demonstration of the Business Rule Engine for one of our clients.

The scenario was:

We have a form that has a person with a loan amount, and based on the state they live in, a certain interest rate needs to be charged.

The entire purpose of the Business Rule Engine is to abstract the business logic away from the developer. So in theory, I could put each of the states and their corresponding rates in the BRE. But I wanted to abstract it even further: I wanted the data in a database that the business could modify without ever looking at the rule.

I created a table to store the rates

image

I created this schema so I can create the underlying class:

image

I opened the Visual Studio command prompt and created the .cs file from the .xsd by executing xsd.exe RateFacts.xsd /classes

I added a new method to the given class

image

I also created a Fact Creator (for testing purposes)

image

But the real purpose of this article is to show how to have long term database facts.

I have found the documentation on this rather sparse.

In the BRC, I have created two database facts. Notice that they are Data table entries; this means that I need to pass a typed DataTable for this to work.

image

and

image

I have created a couple of other Vocabulary items:

image

Here is the list of all of the vocabulary items

image

Trick: Hopefully this caught your attention

When you pass a data table and you test a value in the IF pane, the engine keeps that row of the table in memory for use in the THEN pane.

Here is the rule; I highlighted the DB Rate and DB State vocabulary items in the rule:

image

Here is the form:

image

The code to actually call the Apply Rate policy is:

image

Notice that we never set up the database connection. This is where the documentation seems to drop off. :(

I created a FactRetriever method that returns a typed data table to the BRE (in the same namespace as the Record class).

Notice, this is a long term fact: it only populates the typed table if it is null.

image

Now I need to make sure this logic gets called by anything that invokes the policy. Notice in the properties of the Apply Loan policy that the Fact Retriever points to the DBFactRetriever class (yes, I had to GAC it and restart the BRC before I could see it).

image

When I run the application, this is what I get:

image

And as long as I don’t restart the console application, it won’t re-query the database.

Oct 10, 2014
 

I wanted to write about something that is sometimes misunderstood regarding correlation sets.

A lot of samples I have seen correlate on the same data, or on fields named the same, etc.

I am going to show that those are not requirements though.

I created a purchase order schema

PurchaseOrder

I also created an Acknowledgement Schema

Acknowledgment

Now I want to correlate off of the purchase order number, but notice that the fields are not named the same, nor are they even the same kind of node (one is an element, the other is an attribute). The only thing that really needs to match is the data type (string, int, or whatever; in this case they are both strings).

So I create a property schema and call the value PurchaseOrderNumber, which I set to MessageContextPropertyBase (I always do this so that if one of the messages doesn’t have the element, the property can still be assigned to the message), and also to show that the property schema’s property name doesn’t have to match either of the schema-defined elements/attributes.
Property
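Under the covers, that property schema is just a small XSD. A minimal sketch is below; the target namespace and GUID are placeholders I made up, and the b:fieldInfo annotation is my best rendering of what the designer writes when you pick MessageContextPropertyBase:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           targetNamespace="http://BizTalk_Server_Project1.PropertySchema">
  <xs:annotation>
    <xs:appinfo>
      <!-- marks this XSD as a property schema -->
      <b:schemaInfo schema_type="property" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="PurchaseOrderNumber" type="xs:string">
    <xs:annotation>
      <xs:appinfo>
        <!-- MessageContextPropertyBase: the property can live on the message
             context even when the message body has no matching node -->
        <b:fieldInfo propertyGuid="00000000-0000-0000-0000-000000000000"
                     propSchFieldBase="MessageContextPropertyBase" />
      </xs:appinfo>
    </xs:annotation>
  </xs:element>
</xs:schema>
```

The point stands either way: nothing in here names the source elements/attributes, which is why the property name is free to differ from both schemas.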

So now I go back into each of the schemas and add a new promoted property: I add the reference to the property schema, click Add, and PurchaseOrderNumber shows up in the drop down.

AddToPurchaseOrder

AddToAcknowledgement

Now I create the Orchestration and create a new CorrelationType

CreateCorrelationType

And then Add the POMsg and AckMsg and the CorrelationType to get this:

Variables

Now I create my Send shape to send out the purchase order, initializing the correlation set

InitializingCorrelationSet 

And add the following correlation set to the receive shape

FollowingCorrelationSet

 

A few notes:

  • This does not need to happen at the beginning of the orchestration
  • Generally I have an expression that sets the value of the property, e.g. POMsg(BizTalk_Server_Project1.PurchaseOrderNumber)="123";
 

Here is why the map looks like it does; heaven knows I have gone down this thought process:

‘I want to create a repeating LOC_2 record for each ShipmentStatus element, and I also want to create a LOC_2 for every DeliveryCode element. The Looping functoid means I want to create a record for whatever is the source, so simply linking two of these should work.’

TableMapping1

Unfortunately, the Looping functoid can only have one output; you can’t connect two Looping functoids to the same output.

Here is the input and output definitions:

TableMapping2

Here is some sample data:

TableMapping3

I have created a sample that shows how to solve the problem two ways: one the hard way, the other the easy way.

First the hard way:

Let me explain how you need to think with the out-of-the-box functionality:

"I need to load all of the data into a repeating temporary table and then extract data from the table into the output structure"

Using out-of-the-box functoids:

You want to use the Table Looping functoid and Table Extractor functoids

TableMapping4

The arguments to the Table Looping functoid are:

TableMapping5

The ones that are really important are input[0], input[1], input[4], and input[5]; the other ones are hard-coded data.

Input[0] is the scoping, generally the root node.

Input[1] is how big the table is going to be (generally the number of output elements you need to create; we will see the importance in a moment).

Input[4] and Input[5] are the drivers to the creation of the output records.

Now let’s look at the table

TableMapping6

I put Data1 and Data2 as the first column (even though it won’t be the first output) and marked the Gated check box, so that if there is no Data1 or Data2 in the input, no empty records are created. For each Data1 record, I am going to create a record and hard code 11, C571data1, and C571data2; for every Data2 I will hard code 12, C571data3, and C571data4 (they could have been links from the source if I wanted).

Now the output:

The link from the Table Looping functoid is connected to the LOC_2 record (which repeats)

The Table Extractor functoids are as follows:

  • The top Table Extractor functoid’s argument is 2 (extract column 2, the value 11 or 12)

  • The next one under it is 1 (column 1’s data)

  • The one after that is 3, and then 4

This returns a result set:

TableMapping7

TADA!

(not easy to understand, however)

Now the easy way:

The easy way is to think of it as I described:

"I want to create a structure that needs to be sourced from a repeating structure, and I also need to create the same structure based on a different source element."

So we start by dragging the first ‘source’ to the destination, then going into the properties of the other elements and hard coding something

 

And I went into LOC01, C5172, and C5173 and hard coded values:

TableMapping8

Now, I validate the map and look at the output XSL:

TableMapping9

We create an ns0:Output, then loop through each Data1, put data in LOC01, map the Data1/text() into C5171, and put some data in C5173. Pretty easy to understand thus far.

So let me change the data to match the previous map:

TableMapping10

Now let’s copy the <xsl:for-each> node and save it in Notepad.

Now let’s go redo the map for the second record we want to create

TableMapping11

And look at the XSL (which looks eerily similar)

TableMapping11

So I change the xsl

TableMapping12

Now I simply copy both of the <xsl:for-each> blocks into an Inline XSLT scripting functoid and connect the output to the LOC_2 record (no inputs):

TableMapping13

TableMapping15 
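Since screenshots don’t paste well, here is roughly what the combined inline script ends up looking like. Treat it as a sketch reconstructed from the sample data above: the source prefix (ns0) and the exact child layout of LOC_2 are illustrative, and your mapper-generated prefixes will differ.

```xml
<!-- First loop: one LOC_2 per Data1, hard coding 11 / C571data1 / C571data2 -->
<xsl:for-each select="/ns0:Input/Data1">
  <LOC_2>
    <LOC01>11</LOC01>
    <C5171><xsl:value-of select="./text()" /></C5171>
    <C5172>C571data1</C5172>
    <C5173>C571data2</C5173>
  </LOC_2>
</xsl:for-each>
<!-- Second loop: one LOC_2 per Data2, hard coding 12 / C571data3 / C571data4 -->
<xsl:for-each select="/ns0:Input/Data2">
  <LOC_2>
    <LOC01>12</LOC01>
    <C5171><xsl:value-of select="./text()" /></C5171>
    <C5172>C571data3</C5172>
    <C5173>C571data4</C5173>
  </LOC_2>
</xsl:for-each>
```

Two independent for-each loops emitting the same record type is exactly what the Looping functoid can’t express, and in XSL it is two obvious blocks.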

Here is what the map looks like:

TableMapping16

I validate it and the underlying xsl looks like this:

TableMapping17

Which also creates:

TableMapping18

Both ways are doable; I just don’t like thinking the way the BizTalk gods at MS put the functoids together. XSL seems more logical to me.

Jul 02, 2014
 

So in discussions with a lot of executives about integration, often the question is asked: why use BizTalk? It seems like a lot of work to get integration done when I could just use tools that Microsoft provides already.

I have struggled with coming up with a good answer to this question because, yes, Microsoft provides other integration tools, packaged with other server products, that solve the same problems BizTalk Server solves.

Let’s take SQL Server Integration Services (SSIS). It transforms data from one format to another. You don’t even need SQL Server to be the source or destination. From outward appearances it can do all of the things BizTalk can.

SSIS is great, but SSIS is akin to a machete, whereas BizTalk is akin to a Swiss army knife.

Machete vs.swiss-army-knife 

 

They both have their uses. If I need to cut down a swath of weeds or clear a trail of underbrush, a machete is what I would use. If I needed to whittle away at a piece of wood, I would use the Swiss army knife. Could I accomplish the same thing with the other tool? Yes! Clearing underbrush with a Swiss army knife is possible, but not the best; carving a wooden sculpture with a machete, I guess it can be done.

Can a Swiss army knife deal with a screw? Yes! Can I cut paper, open up a can, file my fingernails? All yes! Is it the best tool for each job? Probably not, but it is far more comprehensive than a lot of other tools.

So it is with SSIS compared to BizTalk. If I want to do a mass update without a lot of moving parts, SSIS is great; if I need to bulk move data from one place to another, SSIS is the tool for the job. If I need to design a workflow process with multiple stops (and different types of endpoints), BizTalk is the way to go.

Are there better screwdrivers than the one provided with the Swiss army knife? How about can openers? How about scissors? Yes, yes, and yes.

WCF-exposed C# interfaces are much faster and operate at a much more granular level. However, you lose some of the functionality that comes out of the box with BizTalk, namely tracking, exception handling, etc.

Food for thought.

© 2015 BizTalk Blog Suffusion theme by Sayontan Sinha