Nov 14, 2014

So I learned a bit more about how important naming is when setting up database facts in the BRE.

The table name is especially important. Within the Business Rule Composer (BRC), you don’t have much flexibility in what you can edit, but when setting up either a long-term or short-term fact, the names have to match exactly, or the engine simply states that the TypedDataRow is unresolvable.

Here is the link between what is setup in the BRC and how it correlates to setting up a table fact in code:


Oct 29, 2014

So we at Stott Creations often get requests to ensure that data is valid as it flows through BizTalk.

The out-of-the-box functionality is pretty straightforward: simply turn on XSD validation in the XML pipelines:


The sucky part is that it only surfaces the first error. If your data is really bad, I have no desire to submit the data multiple times just to uncover all of the errors.

What we have done is create a product that builds a comprehensive list of errors, so you can return the entire list at once.

So the first thing we have is a list of validation functions that we will use to build the rules to validate the data against.


The second set of vocabulary items is the validation patterns:


To set up the actual policy, we first need the ability to assert the list to the rule engine. I name the rule 1 Assert (so it shows up at the top of the list). I drag the vocabulary item into the Conditions pane. I right-click the Actions, choose Assert, and then drag List Enumerator into the fact. I also want this to run first, so I set the priority to 2.


Now I need to advance the enumeration. I name the rule 2 Advance (so it sits below 1 Assert). I drag IEnumerator MoveNext into the Conditions pane. I then right-click the Actions, choose Assert, and drag Current IEnumerator into the fact. I then right-click the Actions again, add Update, and drag IEnumerator into the fact. Because I want this to run second, I set the priority to 1.
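These two rules together just reproduce the standard .NET enumerator pattern. Here is a plain C# sketch of what the engine ends up doing with the list (no rule engine involved; the class and method names below are mine, for illustration only):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class EnumeratorPattern
{
    // Collects every item the "2 Advance" rule would assert as Current
    public static List<object> Drain(IEnumerable facts)
    {
        // "1 Assert" (priority 2): assert the list's enumerator into working memory
        IEnumerator enumerator = facts.GetEnumerator();

        var asserted = new List<object>();
        // "2 Advance" (priority 1): each successful MoveNext asserts Current
        // and updates the enumerator fact, so the engine re-evaluates
        while (enumerator.MoveNext())
        {
            asserted.Add(enumerator.Current);
        }
        return asserted;
    }

    public static void Main()
    {
        foreach (var fact in Drain(new[] { "node1", "node2" }))
            Console.WriteLine(fact);
    }
}
```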


Okay, so now we are ready to start creating validation rules. Each rule runs independently of the others, and I really don’t care what order they run in; I just need them all to run. Each rule I create in this policy is going to have a priority of 0 (the default). Let’s create a rule that checks a format.

I am going to check for two things:

  1. Check the XPath to see if the rule is going to be valid
  2. Check the format to see if it is going to be valid


Right-click, choose Equal, and drag in XPath Statement from the vocabulary.


I then go to Visual Studio, open the schema, choose the element or attribute I am attempting to check, and copy the XPath statement.


And paste it in the right side of the rule


Then I drag the Check Format into the AND


I then drag Text Value into the first <empty string> slot


Now I go to the other Vocabulary and choose the date format I care to validate against, in this case Date with optional century indicator MMDDYYYY, and I drag it into the second <empty string> slot. There are two items: the first is the regular expression value and the second is the friendly version. I want to choose the regular expression.


The next part is that if this is true, meaning the XPath matches but the value fails the regular expression check, I want to create an error. I drag the Create Format Error (Manual) into the Actions.
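The condition boils down to a regular-expression check against the node's text. A standalone C# sketch of that check (the actual pattern lives in the Patterns vocabulary and isn't shown above; the regex below is my own approximation of "date with optional century indicator MMDDYYYY"):

```csharp
using System;
using System.Text.RegularExpressions;

public static class FormatCheck
{
    // Approximation of "Date with optional century indicator MMDDYYYY":
    // month 01-12, day 01-31, then a 2- or 4-digit year
    static readonly Regex DatePattern =
        new Regex(@"^(0[1-9]|1[0-2])(0[1-9]|[12][0-9]|3[01])([0-9]{2})?[0-9]{2}$");

    public static bool IsValid(string textValue)
    {
        return DatePattern.IsMatch(textValue);
    }

    public static void Main()
    {
        Console.WriteLine(IsValid("11142014")); // True
        Console.WriteLine(IsValid("13142014")); // False: month 13
    }
}
```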


Now I need to fill in the error information. I need to supply the node type, which is either an attribute or an element: I drag Node Type into the <enter a value> section and Node Name into the <empty string>.


I then drag Text Item into the next <empty string> element, and Line Number and Position into the next two 0 placeholders.


I then drag, from the Patterns vocabulary, the Date with optional century indicator MMDDYYYY into the last <empty string> placeholder; this time I choose the human-readable description (because really, who understands Regular Expressions?).


I can continue creating rules this way.

However, there is an easier way:

I create the next rule, building the IF pane the same way; this time I am going to check the SSN. In the THEN, I drag in Create Format Error (Automatic).


I then drag Validation Information into the <null> (which has all of the details I need: Attribute/Element, node name, text, position, line number, etc.).


And then drag the Pattern in the <empty string> and choose the friendly explanation.


There are also tests for data lengths: min, max, and length ranges.

Now to execute it.

First we will show how to do it in .NET

I add a reference to the StottCreations.Validation assembly (from the GAC) and then create the following code:

  1. I load an XML document
  2. I instantiate a new Record and pass in the xml document
  3. I set the facts (only one) to the Record.
  4. Because I might want to see the trace I new up the DebugTrackingInterceptor
  6. I set up the policy, calling the Validation policy I just created.
  6. I then execute the policy.
  7. I also need to clear and dispose the policy (because we have found that the automated garbage collection within .NET is not fast enough).
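The steps above translate to roughly the following C#. This is a sketch only: `Record` is the product's class and its constructor is assumed from the description; the policy name and file paths are placeholders. `Policy` and `DebugTrackingInterceptor` come from Microsoft.RuleEngine, so this only runs on a machine with the BizTalk rule engine installed.

```csharp
using System;
using System.Xml;
using Microsoft.RuleEngine;
using StottCreations.Validation; // assumed namespace for Record

class RunValidationPolicy
{
    static void Main()
    {
        // 1. Load an XML document
        XmlDocument doc = new XmlDocument();
        doc.Load(@"C:\temp\input.xml");

        // 2-3. Instantiate a Record over the document; it is the only fact
        Record record = new Record(doc);          // assumed constructor
        object[] facts = new object[] { record };

        // 4. New up the DebugTrackingInterceptor to capture the trace
        DebugTrackingInterceptor dti = new DebugTrackingInterceptor("trace.txt");

        // 5-7. Set up the policy, execute it, then dispose immediately
        Policy policy = new Policy("Validation"); // policy name assumed
        try
        {
            policy.Execute(facts, dti);
        }
        finally
        {
            policy.Dispose();
        }
    }
}
```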



When I run the test, I can look at the record, and this is what I see:


If I run a valid xml document through, these are the results:


I can do the same for an orchestration:


The Initialize Variables shape contains the following code; again, we don’t use the Call Rules shape because we need to dispose immediately.


and then in the decision shape (Good)


and in the Terminate shape:


Oct 21, 2014

I put together a demonstration for one of our clients demonstrating the Business Rule Engine.

The scenario was:

We have a form that has a person with a loan amount, and based on the state that they live in, a certain interest rate needs to be charged.

The entire purpose of the Business Rule Engine is to abstract the business logic from a developer. So in theory, I could put each of the states and their subsequent rate in the BRE. I wanted to abstract it even further. I wanted to have the data in a db that the business would be able to modify without even looking at the rule.

I created a table to store the rates:


I created this schema so I can create the underlying class:


I opened the Visual Studio command prompt and created the .cs file from the .xsd by executing xsd.exe RateFacts.xsd /classes

I added a new method to the given class


I also created a Fact Creator (for testing purposes)


But the real purpose of this article is to show how to have long term database facts.

I have found the documentation on this rather sparse.

In the BRC, I have created two database facts. Notice that they are Data table entries; this means that I need to pass a typed DataTable for this to work.




I have created a couple of other Vocabulary items:


Here is the list of all of the Vocabulary Items:


Trick: Hopefully this caught your attention

When you pass a data table, and you determine the value in the IF pane, it keeps that row of the table in memory for use in the THEN pane.

Here is the rule; I highlighted the DB Rate and DB State vocabulary items in the rule.


Here is the form:


The code to actually call the Apply Rate policy is:


Notice that we don’t ever set the database. This is where the documentation seems to drop off. :(

I created a FactRetriever class that returns a typed DataTable to the BRE (in the same namespace as the Record class).

Notice that this is a long-term fact: it only populates the typed table if the facts handle is null.
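A long-term fact retriever implements Microsoft.RuleEngine.IFactRetriever. A minimal sketch of the shape described above (the `LoadRatesFromDb` helper, the class name, and the table contents are assumptions; the real implementation queries the rates table):

```csharp
using System.Data;
using Microsoft.RuleEngine;

public class DBFactRetriever : IFactRetriever
{
    public object UpdateFacts(RuleSetInfo ruleSetInfo, RuleEngine engine,
                              object factsHandle)
    {
        // Long-term fact: only hit the database when the handle is null,
        // i.e., the first time the policy executes in this engine instance
        if (factsHandle == null)
        {
            DataTable rates = LoadRatesFromDb(); // hypothetical helper
            engine.Assert(new TypedDataTable(rates));
            factsHandle = rates;
        }
        // A non-null handle means the facts are already in memory; skip the db
        return factsHandle;
    }

    private DataTable LoadRatesFromDb()
    {
        // ... SELECT State, Rate FROM the rates table into a typed DataTable
        throw new System.NotImplementedException();
    }
}
```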


Now I need to make sure this logic gets called by anything that calls the policy. Notice in the properties of the Apply Loan policy that the Fact Retriever points to the DBFactRetiever class (yes, I had to GAC it and restart the BRC before I could see it).


When I run the application, this is what I get:


And as long as I don’t restart the console application, it won’t re-query the db.

Oct 10, 2014

I wanted to write about something that is sometimes misunderstood regarding correlation sets.

A lot of samples I have seen have the same data being correlated on, or it is named the same, etc.

I am going to show that those are not requirements though.

I created a purchase order schema


I also created an Acknowledgement Schema


Now I want to correlate off of the purchase order number, but notice that the fields are not named the same, nor are they even the same kind of node (one is an element, the other is an attribute). The only thing that really needs to match is the data type (string, int, or whatever; in this case they are both strings).

So I create a property schema and call the value PurchaseOrderNumber, which I set to MessageContextPropertyBase (I always do this so that if one of the messages doesn’t have the element, the property can still be assigned to the message), and also to show that the property schema property name doesn’t have to match either of the schema-defined elements/attributes.

So now I go back into each of the schemas and add a new property to each: I add the reference to the property schema, click Add, and PurchaseOrderNumber shows up in the drop-down.



Now I create the Orchestration and create a new CorrelationType


And then Add the POMsg and AckMsg and the CorrelationType to get this:


Now I create my send shape to send out the Purchase Order initializing the correlation set


And add the following correlation set to the receive shape



A few notes:

  • This does not need to happen at the beginning of the orchestration
  • Generally I have an expression that sets the value of the property, e.g.: POMsg(BizTalk_Server_Project1.PurchaseOrderNumber) = "123";

Here is why the map looks like it does; heaven knows I have gone down this thought process:

‘I want to create a LOC_2 repeating record for each ShipmentStatus element and I also want to create a LOC_2 for every DeliveryCode element. The looping functoid means I want to create a record for whatever is the source, so simply linking two of these should work.’


Unfortunately, the looping functoid can only have one output; you cannot connect two looping functoids to the same output.

Here is the input and output definitions:


Here is some sample data:


I have created a sample that shows how to solve the problem, one the hard way, the other the easy way.

First the hard way:

Let me explain how you need to think about the out-of-the-box functionality:

"I need to load all of the data into a repeating temporary table and then extract data from the table into the output structure"

Using out-of-the-box functoids:

You want to use the table looping functoid and table extractor functoids.


The arguments to the table looping functoid are:


The ones that are really important are input[0], input[1], input[4], and input[5]; the other ones are hard-coded data.

Input[0] is the scoping, generally the root node.

Input[1] is how big the table is going to be (generally the number of output elements you need to create) (we will see the importance in a moment).

Input[4] and Input[5] are the drivers to the creation of the output records.

Now let’s look at the table:


I put Data1 and Data2 as the first column (even though it won’t be the first output) and marked the Gated check box, so that if there is no Data1 or Data2 in the input, no empty records are created. For each Data1 record, I am going to create a record and hard-code 11, C571data1, and C571data2; for every Data2 I will hard-code 12, C571data3, and C571data4 (they could have been links from the source if I wanted).

Now the output:

The link from the table looping functoid is connected to the LOC_2 record (which repeats).

The table extractor functoids are as follows:

  • The top table extractor functoid’s argument is 2 (extract column 2: value 11 or 12)

  • The next one under is 1 (column 1's data)

  • The one after that is 3, and then 4

This returns a result set:



(not easy to understand however)

Now the easy way:

The easy way is to think of it like I described:

"I want to create a structure that needs to be sourced from a repeating structure, and I also need to create the same structure based on a different source element."

So we start by dragging the first ‘source’ to the destination and going into the properties of the other elements and hard coding something


and I went into the LOC01 and C5172, and C5173 and hard coded it:


Now, I validate the map and see the output xsl:


We create a ns0:Output, then loop through each Data1, put data in LOC01, map the Data1/text() into C5171, and put some data in C5173. Pretty easy to understand thus far.

So let me change the data to match the previous map:


Now let’s copy the <xsl:for-each> node and save it in Notepad.

Now let’s go re-do the map for the second record we want to create


And looking at the xsl (which looks eerily similar)


So I change the xsl


Now I simply copy both of the <xsl:for-each> blocks into an Inline XSLT scripting functoid and connect the output to the LOC_2 record (no inputs):
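For reference, the combined scripting-functoid body ends up shaped roughly like this. The element names are taken from the post, but the exact select paths and hard-coded values are assumptions, since the screenshots are not reproduced here:

```xml
<xsl:for-each select="Data1">
  <LOC_2>
    <LOC01>11</LOC01>
    <C5171><xsl:value-of select="./text()"/></C5171>
    <C5172>C571data1</C5172>
    <C5173>C571data2</C5173>
  </LOC_2>
</xsl:for-each>
<xsl:for-each select="Data2">
  <LOC_2>
    <LOC01>12</LOC01>
    <C5171><xsl:value-of select="./text()"/></C5171>
    <C5172>C571data3</C5172>
    <C5173>C571data4</C5173>
  </LOC_2>
</xsl:for-each>
```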



Here is what the map looks like:


I validate it and the underlying xsl looks like this:


Which also creates:


Both ways are doable; I just don’t like thinking the way the BizTalk gods at MS put together the functoids. XSL seems more logical to me.

Jul 02, 2014

So in discussions with a lot of executives about integration, the question is often asked: why use BizTalk? It seems like a lot of work to get integration done when I could just use tools that Microsoft provides already.

I have struggled with coming up with a good answer to this question, because, yes Microsoft provides other integration tools packaged with other server products that solve the same problems that BizTalk Server solves.

Let’s take SQL Server Integration Services (SSIS). It transforms data from one data type to another. You don’t even need SQL Server to be the source or destination. From outward appearances it can do all of the things that BizTalk can.

SSIS is great, but SSIS is akin to a machete, whereas BizTalk is akin to a Swiss army knife



They both have their uses. If I need to cut down a swath of weeds or clear a trail of underbrush, a machete is what I would use. If I needed to whittle a piece of wood, I would use the Swiss army knife. Could I accomplish the same thing with the other tool? Yes! Clearing underbrush with a Swiss army knife is possible, but not the best; carving a wooden sculpture with a machete, I guess it can be done.

Can a Swiss army knife deal with a screw? Yes! Can I cut paper, can I open up a can, can I file my fingernails? All yes! Is it the best tool for the job? Probably not, but it is far more comprehensive than a lot of other tools.

So it is with SSIS compared to BizTalk. If I want to do a mass update without a lot of moving parts, SSIS is great; if I need to bulk-move data from one place to another, SSIS is the tool for the job. If I need to design a workflow process with multiple stops (along with different types of endpoints), BizTalk is the way to go.

Are there better screwdrivers than the one provided with the Swiss army knife? How about can openers? How about scissors? Yes, yes, and yes.

WCF-exposed C# interfaces are much faster and operate at a much more granular level. However, you lose some of the functionality that comes out of the box with BizTalk, namely tracking, exception handling, etc.

Food for thought.


So I am on a project that takes an existing BTS 2004 solution and converts it to BTS 2013.

They have a UNIX FTP server whose processes can’t lock files while they are being written, which causes a little challenge with the FTP adapter. The process writes the file, and then writes a trigger file.


Payload file: eric.samplefile1.txt

Trigger file: eric.samplefile1.txt.trg

What they did back in 2004 is create a standard FTP receive that would look for *.trg; the pipeline would then re-connect and swap the payload of the (nearly) empty trigger file, replacing its data with the actual payload of the file.

This is a ‘tricky’ approach, but not one I would ever champion: it gets the payload via .NET code behind the adapter's back.

Now we are connecting via sFTP, and there is no publicly available sFTP code to ‘backdoor’ connect to the sFTP server. I needed to find a different way.

What I did was:

  • create a trigger receive location and its pipeline
  • the pipeline uses ExplorerOM to create a non-primary receive location based on the trigger file name that was picked up
  • the payload receive location knows it is not a trigger file based on the receive location name in combination with the filename extension
  • when the pipeline (the same one used for the trigger file) gets the payload, it goes and deletes its own receive location

This allows multiple files to be processed, all ports are always on, and it ‘self’-cleanses.

Here is what the receive location looks like:


When the trigger eric.samplefile1.txt.trg is picked up, the pipeline creates a new receive location with the same pipeline component.


When the payload file eric.samplefile1.txt is picked up, the pipeline runs and deletes itself, leaving the receive location looking like this…



Here is the pipeline code that accomplishes this…

using System;
using System.Xml;
using System.ComponentModel;
using System.Collections;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.Win32;
using Microsoft.BizTalk.ExplorerOM;
using System.Xml.Linq;
using System.Web;
using System.Diagnostics;
using System.IO;
using System.Threading;
using System.Linq;
using SSOUtility;

namespace StottCreations.PipelineComponents.Trigger
{
    [ComponentCategory(CategoryTypes.CATID_PipelineComponent)]
    [ComponentCategory(CategoryTypes.CATID_Decoder)]
    [System.Runtime.InteropServices.Guid("7F3DF154-3267-4154-ABC4-E163D0B79E39")]
    public class ReceiveLocationConfiguration :
        Microsoft.BizTalk.Component.Interop.IBaseComponent,
        Microsoft.BizTalk.Component.Interop.IComponent,
        Microsoft.BizTalk.Component.Interop.IPersistPropertyBag,
        Microsoft.BizTalk.Component.Interop.IComponentUI
    {
        #region Variables
        string mgmtDb, port, name, realFileName, transportDefinition, ftpServer, ftpFolderPath, originalFileName;
        XDocument doc = new XDocument();
        #endregion

        #region PipelineProperties
        private string ssoAppName = null;

        public string SSOAppName
        {
            get { return ssoAppName; }
            set { ssoAppName = value; }
        }
        #endregion

        #region IBaseComponent
        [Browsable(false)]
        public string Name
        {
            get { return "Receive Location Configuration Component"; }
        }

        [Browsable(false)]
        public string Version
        {
            get { return "1.0"; }
        }

        [Browsable(false)]
        public string Description
        {
            get { return "Modifies an available receive location for the SFTP adapter based on the trigger file"; }
        }
        #endregion

        #region IComponent
        public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
        {
            getConnectionString();
            getPortName(inmsg);
            getTransportDefinition(inmsg);
            getLocationName(inmsg);
            getServer();
            getPath();

            if (isTrigger(Convert.ToString(inmsg.Context.Read("ReceiveLocationName", ""))))
            {
                // Trigger file: create the payload receive location, swallow the message
                createLocation();
                inmsg = null;
            }
            else
            {
                // Payload file: remove the receive location that picked it up
                destroyLocation();
            }
            return inmsg;
        }
        #endregion

        #region IPersistPropertyBag
        public void GetClassID(out Guid classid)
        {
            classid = new System.Guid("7F3DF154-3267-4154-ABC4-E163D0B79E39");
        }

        public void InitNew() { }

        public void Load(Microsoft.BizTalk.Component.Interop.IPropertyBag pb, Int32 errlog)
        {
            string val = (string)ReadPropertyBag(pb, "SSOAppName");
            if (val != null) ssoAppName = val;
        }

        public void Save(Microsoft.BizTalk.Component.Interop.IPropertyBag pb, Boolean fClearDirty, Boolean fSaveAllProperties)
        {
            object val = (object)ssoAppName;
            WritePropertyBag(pb, "SSOAppName", val);
        }

        private static object ReadPropertyBag(Microsoft.BizTalk.Component.Interop.IPropertyBag pb, string propName)
        {
            object val = null;
            try
            {
                pb.Read(propName, out val, 0);
            }
            catch (System.ArgumentException)
            {
                return val;
            }
            catch (Exception ex)
            {
                throw new ApplicationException(ex.Message);
            }
            return val;
        }

        private static void WritePropertyBag(Microsoft.BizTalk.Component.Interop.IPropertyBag pb, string propName, object val)
        {
            try
            {
                pb.Write(propName, ref val);
            }
            catch (Exception ex)
            {
                throw new ApplicationException(ex.Message);
            }
        }
        #endregion

        #region IComponentUI
        [Browsable(false)]
        public IntPtr Icon
        {
            get { return IntPtr.Zero; }
        }

        public IEnumerator Validate(object projectSystem)
        {
            if (projectSystem == null) throw new System.ArgumentNullException("No project system");
            IEnumerator enumerator = null;
            ArrayList strList = new ArrayList();
            try { }
            catch (Exception e)
            {
                strList.Add(e.Message);
                enumerator = strList.GetEnumerator();
            }
            return enumerator;
        }
        #endregion

        #region Helper
        private bool isTrigger(string locationName)
        {
            bool returnValue = false;
            BtsCatalogExplorer root = new BtsCatalogExplorer();
            try
            {
                root.ConnectionString = mgmtDb;
                ReceivePort receivePort = root.ReceivePorts[port];
                returnValue = (receivePort.PrimaryReceiveLocation.Name == locationName) ? true : false;
            }
            catch (Exception e)
            {
                root.DiscardChanges();
                throw e;
            }
            return returnValue;
        }

        private void createLocation()
        {
            string password = SSOClientHelper.Read(SSOAppName, "FTP_Password");
            int iteration = 0;
            while (true)
            {
                BtsCatalogExplorer root = new BtsCatalogExplorer();
                try
                {
                    root.ConnectionString = mgmtDb;
                    ReceivePort receivePort = root.ReceivePorts[port];
                    int nextPort = receivePort.ReceiveLocations.Count;
                    receivePort.AddNewReceiveLocation();

                    XmlDocument transportData = new XmlDocument();
                    transportData.LoadXml(HttpUtility.HtmlDecode(
                        receivePort.PrimaryReceiveLocation.TransportTypeData.Replace(
                            "<Password vt=\"1\" />", "<Password vt=\"8\"></Password>")));
                    transportData.SelectSingleNode("//FileMask").InnerText = realFileName;
                    transportData.SelectSingleNode("//Password").InnerText = password;

                    receivePort.ReceiveLocations[nextPort].Name = name;
                    receivePort.ReceiveLocations[nextPort].Address = String.Format("sftp://{0}:22{1}/{2}", ftpServer, ftpFolderPath, realFileName);
                    receivePort.ReceiveLocations[nextPort].TransportTypeData = transportData.OuterXml;
                    receivePort.ReceiveLocations[nextPort].TransportType = receivePort.PrimaryReceiveLocation.TransportType;
                    receivePort.ReceiveLocations[nextPort].ReceivePipeline = receivePort.PrimaryReceiveLocation.ReceivePipeline;
                    receivePort.ReceiveLocations[nextPort].ReceivePipelineData = receivePort.PrimaryReceiveLocation.ReceivePipelineData;
                    receivePort.ReceiveLocations[nextPort].ReceiveHandler = receivePort.PrimaryReceiveLocation.ReceiveHandler;
                    receivePort.ReceiveLocations[nextPort].Enable = true;
                    root.SaveChanges();
                    break;
                }
                catch (Exception e)
                {
                    iteration++;
                    root.DiscardChanges();
                    if (iteration < 30)
                    {
                        // Retry with a random back-off; another instance may hold the lock
                        Random rd = new Random();
                        Thread.Sleep(rd.Next(100, 1000));
                    }
                    else
                    {
                        throw e;
                    }
                }
            }
        }

        private void destroyLocation()
        {
            BtsCatalogExplorer root = new BtsCatalogExplorer();
            try
            {
                root.ConnectionString = mgmtDb;
                ReceivePort receivePort = root.ReceivePorts[port];
                ReceiveLocation deleteableLocation = null;
                foreach (ReceiveLocation location in receivePort.ReceiveLocations)
                {
                    if (Path.GetFileName(location.Address).EndsWith(originalFileName))
                    {
                        deleteableLocation = location;
                        break;
                    }
                }
                receivePort.RemoveReceiveLocation(deleteableLocation);
                root.SaveChanges();
            }
            catch (Exception e)
            {
                root.DiscardChanges();
                throw e;
            }
        }

        private void getConnectionString()
        {
            string regEntry = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BizTalk Server\3.0\Administration";
            string server = Registry.GetValue(regEntry, "MgmtDBServer", "BTSServer").ToString();
            string database = Registry.GetValue(regEntry, "MgmtDBName", "BizTalkMgmtDb").ToString();
            mgmtDb = String.Format("Server={0};Initial Catalog={1};Integrated Security=SSPI;", server, database);
        }

        private void getPortName(IBaseMessage inmsg)
        {
            port = (System.String)inmsg.Context.Read("ReceivePortName", "");
        }

        private void getLocationName(IBaseMessage inmsg)
        {
            string initialLocation = (System.String)inmsg.Context.Read("ReceiveLocationName", "");
            string triggerFileName = (System.String)inmsg.Context.Read("ReceivedFileName", "");
            originalFileName = Path.GetFileName(triggerFileName);
            realFileName = Path.GetFileNameWithoutExtension(triggerFileName);
            name = String.Format("{0} - {1}", initialLocation, realFileName);
        }

        private void getTransportDefinition(IBaseMessage inmsg)
        {
            transportDefinition = (System.String)inmsg.Context.Read("InboundTransportLocation", "");
        }

        private void getServer()
        {
            // sftp://mysftpserver:22/dev//eric.*.trg
            ftpServer = transportDefinition.Replace("sftp://", String.Empty);
            string[] splitTransport = ftpServer.Split(':');
            ftpServer = splitTransport[0].ToString();
        }

        private void getPath()
        {
            // sftp://mysftpserver:22/dev//eric.*.trg
            char[] splitting = ":22".ToCharArray();
            string[] segments = transportDefinition.Split(splitting);
            string[] elements = segments[segments.Length - 1].Split('/');
            string fileMask = elements[elements.Length - 1];
            ftpFolderPath = segments[segments.Length - 1].Replace("/" + fileMask, string.Empty);
        }
        #endregion
    }
}

Jun 28, 2014

I have heard of re-tweeting, but re-blogging?


So Brian called today asking where, when you open a new BizTalk project, the BizTalk Management Database and Server values come from.

We spelunked around and finally found it:


Another oddity: the btproj.user file doesn’t get updated until you close the solution.


Okay, so I needed to have a value to set the agreement. The documentation states in step 3:

If step 2 does not succeed, resolves the agreement by matching the party name in the message context properties with the DestinationPartyName property, which is set as additional agreement resolver in the Identifiers tab of agreement properties.

Here is the correct Identifiers screen that needs to be populated.


© 2014 BizTalk Blog Suffusion theme by Sayontan Sinha