Wednesday, January 12, 2011

Programmatically persisting an Orchestration

Some time ago I needed to persist an orchestration just before entering an atomic scope (you’ll see why in the next post…)

Unfortunately, as any good BizTalker knows, persistence points are decided by the engine when one of the following conditions occurs:

  • Start Orchestration shape
  • Suspend shape
  • At the end of a transactional scope (Atomic and Long Running)
  • When the orchestration terminates
  • When the engine determines that the instance should be dehydrated
  • When the orchestration engine is asked to shut down
  • When a debugger breakpoint is reached
  • At the end of a Send shape

So it seems there’s no way to persist the state just before entering an atomic scope (the closest thing is to put another transactional scope just before the atomic one, but you’ll agree that’s a bad and ugly workaround).

Anyway, looking a bit inside the Service Class, I noticed the following method:

    public void Persist(
        bool dehydrate,
        Context ctx,
        bool idleRequired,
        bool finalPersist,
        bool bypassCommit,
        bool terminate
    )

The parameter names seemed self-explanatory to me, so I tried to invoke the method from an Expression shape placed just before the atomic scope:

    Microsoft.XLANGs.Core.Service.RootService.Persist(
        false, // dehydrate
        Microsoft.XLANGs.Core.Service.RootService.RootContext, // ctx: the actual service instance context
        false, // idleRequired
        false, // finalPersist
        false, // bypassCommit
        false  // terminate
    );

The result was that the orchestration service instance persisted as soon as it reached the Expression shape instead of waiting for the end of the atomic scope.

Monday, January 10, 2011

BizTalk Versioning Strategy (3/4)

Continued from part 2.

.NET Versioning.

So every problem seems due to the fact that orchestrations are tightly coupled with the .NET types representing message schemas, and therefore they’ll fail to execute if fed the wrong .NET type.

In fact, we’ve reduced the BizTalk versioning strategy problem to an ordinary .NET versioning one:

  • We’ve deployed an application (our orchestration) referencing types contained in another assembly (our schemas, v1.0.0.0).
  • We’ve deployed a new version of this referenced assembly (v1.0.0.1).
  • Will our application continue to work unaffected when receiving 1.0.0.1 types instead of 1.0.0.0?

The .NET answer is “it depends”.

In fact, as depicted here,

The specific version of an assembly and the versions of dependent assemblies are recorded in the assembly's manifest. The default version policy for the runtime is that applications run only with the versions they were built and tested with

So the answer seems to be a resounding no but, continuing to read:

unless overridden by explicit version policy in configuration files (the application configuration file, the publisher policy file, and the computer's administrator configuration file).

So there’s a way: by default the orchestration will crash (because it receives an unexpected type), but we can override this behavior simply by declaring the version policy explicitly!

The above page suggests three approaches; let’s examine them in detail:

Application Configuration File

This approach consists of putting redirection directives directly inside the application configuration file.

This means that we should place our redirection from 1.0.0.0 to 1.0.0.1 directly inside the BTSNTSvc.exe.config file.
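As a minimal sketch (the assembly name and public key token below are just the ones used as examples in this series), such a redirect section would look like this:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- Identity of the schemas assembly being redirected -->
            <assemblyIdentity name="System1.schemas"
                              publicKeyToken="a76ffbe6c9882f67"
                              culture="neutral" />
            <!-- Requests for 1.0.0.0 are satisfied with 1.0.0.1 -->
            <bindingRedirect oldVersion="1.0.0.0" newVersion="1.0.0.1" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>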

I don’t like to mess with the BizTalk application config file, and it’s wrong even from a logical point of view: this kind of redirection is meant for application developers (which in this case is the BizTalk product group) to explicitly state, in the application config, that the program will work with these assembly version redirections. But I didn’t develop BizTalk Server; I’ve simply developed some components hosted in BizTalk Server.

So I don’t like this approach too much.

Computer Administrator Configuration File

This approach consists of putting redirection directives inside the machine configuration file.

If possible, I like this approach even less than the previous one, for the very same reasons:

If I don’t like to mess with the BizTalk configuration file, you can imagine how much I hate to mess with the whole Machine.config.

And, from a logical point of view, this kind of redirection is meant for the machine’s system administrators, and I’m not one of those either; I’m just a developer whose components are hosted on that machine.

Publisher Policy File

This approach, a bit more complex than the previous two, consists of generating a twin policy assembly which, when deployed together with the original assembly, enables the redirection.

This is the best approach for a BizTalk developer: it is meant to be used by a component developer (and we’re component developers) to state that a component is compatible with another version of itself (and this is exactly what we’re trying to do).

Understanding how to generate a publisher policy file is off topic for this (too long) post, but I placed on MSDN Code Gallery a PowerShell script that should help you produce your publisher policy files.

Recap

My policy for schema versioning is therefore the following:

Each time I make a (backward-compatible) change in a schema (such as adding a field to an existing schema or adding a new schema to the assembly), I simply increase the build or revision number.

My build process will build the newly changed schema artifacts, also producing the publisher policy file (using a slightly modified version of the script I linked above).

Then, on the BizTalk box, I’ll deploy the new version of the schemas (no need to unenlist orchestrations or remove artifacts, because I’m simply adding a new schema side by side).

Afterwards I put in the GAC the publisher policy file corresponding to the new schemas.
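For reference, assuming the policy assembly follows the standard policy.major.minor.assemblyName naming convention applied to this series’ example assembly, that step is a single command: gacutil /i policy.1.0.System1.schemas.dll.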

We had zero downtime (we just deployed new artifacts; we neither stopped nor removed the old ones), and the net effect is that every BizTalk artifact (orchestrations, maps, pipelines, etc.) will now use the newest (and backward-compatible) artifact without problems.

Dynamic Maps

Some time ago, while searching for a method to mock up extension objects during unit tests (the solution I came up with will be in a future post…), I stumbled upon this interesting technique for realizing dynamic maps that I want to share.

(By dynamic map here I mean a map whose content is not statically compiled into the map DLL, and whose behavior can therefore be changed without any redeploy.)

BizTalk maps are simply .NET classes: as depicted in this article by Paolo Salvatori, our .btm is taken by the BizTalk compiler and transformed into a sealed class inheriting from the TransformBase class.

The component responsible for executing maps in BizTalk is the EPMTransform class contained in the Microsoft.BizTalk.EPMTransform assembly.

This class does a lot of things but the central point is in its Initialize method:

  • First of all, it loads the assembly containing our map class.
  • Then, using reflection, it instantiates our map class and casts it to TransformBase.
  • Finally, it reads the StreamingTransform and TransformArgs properties from the just-instantiated map class.

To have a dynamic map we should therefore be able to override these StreamingTransform and TransformArgs properties but, unfortunately, both are declared in TransformBase and we can’t override them because they’re not declared as virtual.

Luckily, what these properties do is no rocket science:

StreamingTransform simply loads the XSLT contained in the XmlContent property into the transform and returns it.

TransformArgs is a bit more complex, but it simply parses the extension object format used by BizTalk (contained in the XsltArgumentListContent property) to return the corresponding XsltArgumentList object.

Both XmlContent and XsltArgumentListContent are string properties containing XML data, and both are abstract in TransformBase; therefore we can redefine them in our map classes.

Beware of private const.

Map classes are very simple classes; the following image shows the result of a BizTalk map (called MyMap) disassembled using Reflector:

[Image: the MyMap class disassembled in Reflector]

SchemaReferenceAttribute attributes are used to point to the schema classes referenced by the map (in my case both the source and destination schemas refer to a schema named MySchema contained in TCPSoftware.Sample.DynamicMap), and the strings _strSrcSchemasList0, _strTrgSchemasList0, SourceSchemas and TargetSchemas repeat the same information.

Aside from XmlContent and XsltArgumentListContent, you’ll find their private const twins, called _strMap and _strArgList.

Notice that set of private const fields: they’re totally redundant and should be removed (private, const, and never used inside the class; if you ask me they’re not necessary). BUT if you remove them, BizTalk will be unable to deploy the map (probably those fields are extracted via reflection by the administration console, which is not a smart thing to do since they’re private and their content is duplicated in public properties, but hey, BizTalk is still using XslTransform, so there’s definitely room for improvement in the TransformBase design…). Anyway, for the rest of the article you may safely ignore them, leaving their default values.

Dynamic Maps.

Imagine redefining our class, inheriting from TransformBase and containing the following definition of XmlContent:

    public override string XmlContent
    {
        get
        {
            string MyXsltFileLocation = "C:\\XSLTStore\\Xslt1.xslt";
            return System.IO.File.ReadAllText(MyXsltFileLocation);
        }
    }

It’s just a demo snippet, but I hope you can see the value in it: we can redirect the true map logic to an XSLT file that lives outside the deployed map (in the example on our filesystem, but nothing prevents us from putting it in a database), allowing us to fix problems or enhance the mapping logic just by using Notepad (or doing a DB update), without any long and error-prone deployment process.

And if you hate having to mess with XSLT directly, what about the following snippet:

    public override string XmlContent
    {
        get
        {
            string assemblyToTestName = "Maps1"; // assembly display name (Assembly.Load does not accept a file name)
            string mapToTestName = "Map";        // full name of the target map type
            Type type = Assembly.Load(assemblyToTestName).GetType(mapToTestName);
            TransformBase b = (TransformBase)Activator.CreateInstance(type);
            return b.XmlContent;
        }
    }

With the above implementation the deployed map will simply redirect BizTalk requests to another map.

Possibilities are endless: what about combining both approaches and having a lookup (in a DB or on the filesystem) to check whether the map should be redirected before returning the default (compiled) map?

In this way the system will continue to work as intended (compiled) but, in case of necessity, you can simply add an entry in a database or in a configuration file to override the deployed map, redirecting processing to another one.
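A minimal sketch of that combination (the override path and the fallback assembly/type names are hypothetical placeholders) could look like this:

    public override string XmlContent
    {
        get
        {
            // Hypothetical override location: if an XSLT file exists there, use it...
            string overridePath = "C:\\XSLTStore\\MyMap.override.xslt";
            if (System.IO.File.Exists(overridePath))
                return System.IO.File.ReadAllText(overridePath);

            // ...otherwise fall back to the XSLT of the originally compiled map.
            System.Type type = System.Reflection.Assembly.Load("Maps1").GetType("Map");
            Microsoft.XLANGs.BaseTypes.TransformBase compiled =
                (Microsoft.XLANGs.BaseTypes.TransformBase)System.Activator.CreateInstance(type);
            return compiled.XmlContent;
        }
    }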

Will it work?

The answer is yes (and you can experiment with this sample), but there’s a catch (which IMHO is actually a great pro of this technique):

Remember when I said that EPMTransform loads our maps inside a method called Initialize?

Well, as the name implies, this method is called only when a map needs to be initialized, not every time the map is applied, because BizTalk caches the XslTransform once it is initialized. This means that our custom code won’t be called at every map invocation, just when the map is first initialized.

So, if you want your custom XmlContent property to be invoked again, you HAVE to restart the host instance hosting the map.

On the downside this means that no, you can’t simply open Notepad, make a change, and watch BizTalk use the modified map; you have to open Notepad, make a change, restart a service, and then watch BizTalk use the modified map (not a big deal…).

On the other hand, this also means that your custom code can be pretty complex (in the previous section we imagined accessing a database, reading a file, or even creating a class instance via reflection) and your mapping performance won’t suffer.

Conclusions

Even if I’m not suggesting the use of this technique in production, I can’t really see a single reason for not using it during development: being able to test a new solution simply by changing a map file with Notepad will increase your efficiency tremendously.

Sunday, January 09, 2011

BizTalk Versioning Strategy (2/4)

This is the second part of the versioning discussion; the first part is here.

Messages and Objects: Two Sides of the Same Coin

BizTalk has a strange dualism when it comes to message processing:

On one side, BizTalk is a .NET application, and therefore a message is identified by its .NET type, the assembly containing it, and its strong name.

On the other side, BizTalk is an XML-oriented processing middleware, and therefore a message (if it’s XML) is also identified by its XSD characteristics (what BizTalk terminology calls its “MessageType”, identified by root node name and namespace).

In the example below I double-clicked a deployed schema in the BizTalk Administration Console, just to discover that the schema is identified by the following pair of properties:

MessageType: http:/MySchema1#Root

Type: BizTalk_Server_Project1.Schema1, BizTalk Server Project1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=a76ffbe6c9882f67

[Image: schema properties in the BizTalk Administration Console]

The same dualism can be seen in a message instance inspected through the admin console, as you can notice from the MessageType and SchemaStrongName context properties.

[Image: MessageType and SchemaStrongName context properties of a message instance]

Why is this distinction important?

This distinction is extremely important because it helps us understand some strange BizTalk behaviors and anticipate versioning problems:

If you try to deploy two different schemas with the same root node name and namespace in two different assemblies, BizTalk will blow up as soon as a message matching that schema enters the MessageBox, with the following exception:

Cannot locate document specification because multiple schemas matched the
message type "http:/MySchema1#Root"

(The nasty thing about this problem is that it obviously affects every message matching the schema, not only messages of the offending deployment. This means that one day a process works correctly; the day after, someone decides to deploy another schema for another project without noticing the MessageType clash and boom, even the process that was working correctly doesn’t work anymore. This experience taught me to put schemas in assemblies separated by system instead of by process: it reduces the possibility of putting the same MessageType in different assemblies, because usually the system name is also contained in the message XML namespace.)

Knowing the dualism depicted above, the problem is now clear:

BizTalk receives an XML message from the outside; the first thing it does is look up in its database the .NET type related to that XML message (using the MessageType as the key).

If it finds two matches, it is unable to decide which type to use for message deserialization, and it throws the exception.

But if you deploy the same MessageType in another version of the same assembly, then no exception will be raised.

This is due to the fact that (as depicted here) the BizTalk engine has an exception to the no-multiple-match rule: if the multiple match comes from different versions of the same assembly, then the schema coming from the latest (highest) version is used.

Equipped with all this new knowledge, let’s see how each BizTalk artifact reacts to a schema update (i.e. the old schema has version 1.0.0.0 while the new schema is deployed side by side with version 1.0.0.1).

Pipelines

The XmlDisassembler component is responsible for identifying and promoting the MessageType and SchemaStrongName of the message.

Therefore, applying what we’ve seen so far, the XmlDisassembler will assign the MessageType without problems (after all, it’s simply concatenating root name and namespace) and will resolve the SchemaStrongName to version 1.0.0.1.

If, for some reason, you want to override the “default to higher version” rule, you can indicate the SchemaStrongName explicitly in the “Document schemas” property of the pipeline component.

Maps

Maps are simply XSLT transforms; therefore they’re usually not interested at all in SchemaStrongNames or .NET types, they only look at the MessageType.

Actually, when placing several maps on a port, a message is directed towards one map or another just by examining its MessageType and nothing else.

But, at compile time, maps are compiled against BizTalk schemas (source and destination), and therefore, at the end of the transformation, maps simply “append” the expected destination MessageType and SchemaStrongName to the outbound message.

In other words, if you compiled your map against version 1.0.0.0 of your schema, the map will continue to work seamlessly if you feed it a 1.0.0.1 version of your message, but it will produce a 1.0.0.0 version of the outcome.

Orchestrations

Orchestrations are different: they’re statically compiled blocks of code, and therefore they’re not aware of XML messages at all; they only understand .NET types.

Therefore, if you send messages with version 1.0.0.1 to an orchestration compiled against version 1.0.0.0, even if they’re identical (from an XML point of view) to 1.0.0.0 messages, the orchestration will raise an exception (the “Received unexpected message type” seen in the previous part).

Doing a quick recap, we can say that side-by-side schema deployment with its “default to higher version” rule is good when you have just maps and pipelines (so it’s perfect in a messaging-only scenario), but it will fail miserably if you have orchestrations in your solution.

In the next part we will try to find a solution to this problem.

Saturday, January 08, 2011

Loop Shape “demystified”

In orchestration design I think the Loop shape is often misunderstood and misused, and it is the cause of a lot of the unnecessarily complex orchestrations I’ve seen.

Even the shape definition is ambiguous:

You can use the Loop shape to repeat actions in a continuous loop, as long as some condition is met.

The key point is that, differently from C#, the XLANG language has neither a for loop nor a do-while; the only XLANG construct available is the equivalent of the C# while: the Loop shape.

A lot of people try to implement for loops using Loop shapes, and this is simply wrong. Think about it: if the for loop were banned from C#, would you try to recreate it using while, or would you search for solutions to your problem which naturally blend with while instead of for?

I’m totally for the second option: model your problem in terms of the language constraints you have; don’t try to fight against the language to better fit your model.

For what?

Let’s make an example of this mismatch and imagine we want to implement our for loop in an orchestration:

  1. We need a variable to keep the counter (the classical i).
  2. We need a variable to keep the maximum (if we don’t want to hard-code it directly in an expression).
  3. We need an expression to do things with the i variable.
  4. We need an expression to increase the counter.
  5. An (optional) expression to check termination (i == max).
  6. An (optional) boolean variable to keep the termination condition.

This brings us to ask ourselves other (totally out-of-context) questions:

  1. Must the counter be increased entering the iteration or exiting it?
  2. Must the exit condition be evaluated entering or exiting the iteration?

In a for loop these questions have obvious answers but remember, we don’t have a for loop in XLANG; we’re just faking it…
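To see why these questions even arise, here is the mismatch in C# terms (DoWork is just a placeholder): every for loop is sugar for a while loop, and the while form forces you to answer them explicitly.

    int max = 10; // placeholder bound

    // A for loop...
    for (int i = 0; i < max; i++)
    {
        DoWork(i);
    }

    // ...is just sugar for this while loop, the only form XLANG gives you:
    int j = 0;          // items 1-2: the counter variable
    while (j < max)     // item 5: exit condition evaluated entering the iteration
    {
        DoWork(j);      // item 3: do things with the counter
        j++;            // item 4: counter increased at the end of the iteration
    }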

So far we have declared two or three variables in our orchestration, we’ve placed two or three Expression shapes in our loop, we’re asking ourselves a couple of confusing questions, and the worst thing is that all this overhead is infrastructural. Have you noticed that we haven’t talked about business requirements at all?

In the meanWHILE…

When we realize that the Loop shape is a while, everything becomes simple and natural:

  1. We need a function to tell us whether the while must continue or not.
  2. A function to iterate to the next element.
  3. A function to return the current element.

An object with these characteristics is well known in computer science, and in C# it is represented by an interface: System.Collections.IEnumerator (a generic version also exists but, since XLANG doesn’t support generics, we’re stuck with the non-generic one).

The interface has two methods and one property:

bool MoveNext()

Covers the first two functions in the list above: it moves to the next element if one exists and returns true, false otherwise.

object Current

Returns the current element (the third function).

void Reset()

Resets the enumerator.
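As a concrete illustration (this helper is hypothetical, not something BizTalk provides), a static method callable from an Expression shape could build the enumerator for the orchestration, for example over the repeating nodes of a message:

    // Hypothetical helper: returns a non-generic IEnumerator that an
    // orchestration variable can hold and drive from the Loop shape.
    public static class EnumeratorFactory
    {
        public static System.Collections.IEnumerator FromNodes(
            System.Xml.XmlDocument doc, string xpath)
        {
            // XmlNodeList implements IEnumerable, so we just hand out its enumerator.
            return doc.SelectNodes(xpath).GetEnumerator();
        }
    }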

With an IEnumerator, our loop becomes trivial to write:

[Image: the equivalent loop in the Orchestration Designer]

and the same (to show equivalence) written in C#:

    while (enumerator.MoveNext())
    {
        // Use enumerator.Current
    }

A single variable implementing a standard .NET interface, and we gain three advantages:

  1. We’re using a single variable in the orchestration.
  2. We’ve moved the infrastructure logic outside of the orchestration.
  3. We’ve made our orchestration simpler and more readable.

Friday, January 07, 2011

BizTalk Versioning Strategy (1/4)

As said in the previous post, BizTalk solution management can be a challenging task due to the complexity of the BizTalk artifact update process.

BizTalk does its best to avoid leaving the system in an inconsistent state, and this is a good thing, but in doing so it throws the burden of an incredibly bureaucratic deployment workflow on the developer, even when it is clearly unnecessary.

Let’s take the following not-so-hypothetical case:

I have an orchestration (contained in an assembly, ProcessA.orchestrations.dll) which uses a map (contained in ProcessA.maps.dll), which in turn uses some schemas (contained in System1.schemas.dll and System2.schemas.dll).

As you can see, I’ve followed the best practice of separating different BizTalk artifacts into different assemblies, and enhanced it a bit (further separating artifacts by process and by system) to easily detect the impact of future modifications on the project.

Let’s say we need to implement another process between the same systems.

What we need to do is add a new schema to System1.schemas.dll to represent the new messages exchanged in the second process.

Well, to update System1.schemas.dll we need to:

  1. Since you’re updating a schema DLL, you must remove it.
    1. Since the schema is used by maps, you need to remove the maps that use it (ProcessA.maps.dll).
      1. Since the maps are used in orchestrations, you also need to remove the orchestrations (ProcessA.orchestrations.dll).
        1. To remove an orchestration assembly you need to unenlist every orchestration contained in it.
          1. To unenlist every orchestration you must terminate every existing service instance referencing them.
          2. If an orchestration is bound to a physical port you need to stop and unenlist every related port.
      2. If the maps are used on a port, you’ll need to manually remove them from every port where they’re used.
  2. Finally you can deploy the schema.
    1. But then you need to redeploy the maps you removed (ProcessA.maps.dll).
      1. And then you need to redeploy the orchestrations you removed (ProcessA.orchestrations.dll).
        1. You must remember to enlist the orchestrations again.
        2. You must remember to enlist/start the receive/send ports.
      2. You need to re-set the maps on the receive/send ports from which they were removed.

A damned long sequence of steps, especially considering that I was forced to completely redeploy ProcessA when I know it wasn’t necessary: the only modification I made to the schema assembly was to add a new schema, leaving the old one untouched. In other words, my changes were backward compatible, but by default BizTalk assumes they’re not, and this causes the problem.

A bit unfair.

I know, above I voluntarily omitted a couple of things that would have made life easier, and I’m going to remedy that here, adding also the reasons why I don’t consider them a solution at all.

Using “Modify”

If you select a resource in the BizTalk Administration Console and right-click it, you’ll notice a “Modify…” entry in the popup menu.

[Image: the “Modify…” entry in the resource context menu]

This option brings us to a dialog window nearly identical to “Add –> New BizTalk Assembly…”

[Image: the Modify dialog window]

But with a difference: if we decide to “Refresh” the assembly with a new one, BizTalk becomes smarter and lets us deploy the new one without removing the old one. Therefore, in the scenario above, the whole list of steps I reported could be avoided simply by using Modify instead of the classical “Add –> New BizTalk Assembly –> Overwrite”.

Why the BizTalk team decided to implement such different behaviors for these two nearly identical options (Add with Overwrite, and Modify) is a mystery to me, but as long as I know the difference is there, it’s OK for me.

So problem solved?

No, definitely not and for two (IMHO good) reasons:

  1. The Modify option has no command-line equivalent: BTSTask.exe has just the AddResource /Overwrite alternative, leaving us unable to use this feature in install/update scripts or automatic deployment MSIs.
  2. Modify works great if there are no inter-application dependencies, but it fails miserably (with the same problem as AddResource with Overwrite) if there is some application reference.

The second reason needs a better explanation. Let’s say that for the above project I defined different applications, such as Common (containing all common components such as schemas, helper functions and so on), ProcessA (containing orchestrations, ports and maps pertaining to ProcessA), and so on.

Obviously the ProcessA application will reference the Common application, and when we try to modify the schema DLL in Common (exactly as we did before, when all processes were in the same application) we obtain a failure, as depicted below.

[Image: the Modify failure caused by the application reference]

Increasing Assembly Version.

Another solution could be to simply increase the assembly version of the modified schemas DLL (let’s say from 1.0.0.0 to 1.0.0.1).

BizTalk allows side-by-side deployment of assemblies (more on this later), so this could be the solution we were searching for:

ProcessA will continue to use the 1.0.0.0 version of the schemas, while the new ProcessB will use the new 1.0.0.1 version (which contains the new schema).

This may seem to work, but sooner or later (if you use orchestrations) you’ll face an exception similar to this:

Inner exception: Received unexpected message type 'MySchema, Schemas, Version=1.0.0.1,
Culture=neutral, PublicKeyToken=e1a7e3de4e628631' does not match expected type 'MySchema, Schemas,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=e1a7e3de4e628631'.

To understand why (and when) this exception is raised, and how we can solve the problem once and for all, we need to examine the BizTalk resolution mechanism in depth.

Thursday, January 06, 2011

Using Unity in BizTalk Solutions

The Unity Framework is the patterns & practices solution for implementing Dependency Injection and Inversion of Control containers.

In a few words, Dependency Injection is the simplest way to compose applications starting from replaceable components.

Let’s say you have to implement a configuration layer inside your application: you still don’t know whether the configuration store will be implemented in SSO (if you’re a smart BizTalk developer ;o) ), in a database, in an XML file, in a text file, or whatever; all you know is that you’ll need to GetConfigurationData in some way.

So, as every good OO developer would, you’ll start by writing an interface declaring the method you’ll use to retrieve configuration data, such as the following:

    public interface IConfigurationStore
    {
        string GetConfigurationData(params string[] keys);
    }

Then you’ll write concrete configuration classes implementing this interface: you’ll have a SQLConfigurationStore, an XmlConfigurationStore, and so on.
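As a minimal sketch of one of them (assuming a simple layout where each key maps to a nested XML element; a real store would be more robust and would cache lookups):

    // Hypothetical file-backed store: loads the XML once and resolves a
    // chain of keys as a path of nested elements.
    public class XmlConfigurationStore : IConfigurationStore
    {
        private readonly System.Xml.XmlDocument doc = new System.Xml.XmlDocument();

        public XmlConfigurationStore(string fileName)
        {
            doc.Load(fileName);
        }

        public string GetConfigurationData(params string[] keys)
        {
            string xpath = "/configuration/" + string.Join("/", keys);
            System.Xml.XmlNode node = doc.SelectSingleNode(xpath);
            return node == null ? null : node.InnerText;
        }
    }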

And when you need to get a configuration value, you’ll use the interface instead of a concrete implementation in your code, thereby removing the dependency on the concrete classes from your application:

    private IConfigurationStore Config;
    [...]

    String ConnectionString =
        Config.GetConfigurationData("MyApplication", "SQLLookupConnData");

But there’s a catch:

In your application code you still have to add a dependency on the concrete class somewhere; in fact, sooner or later you’ll have to assign the variable:

    IConfigurationStore Config = new SQLConfigurationStore(connstring);

And so we’ve reduced the number of dependencies to just one assignment, but the dependency is still there.

Dependency Injection containers come to the rescue here, eliminating this last dependency point and allowing us to implement truly decoupled components.

Instead of directly assigning the variable to the concrete class, we simply ask the DI container to do the work for us (below, the code needed with Unity):

    [...]
    UnityContainer container = new UnityContainer();
    [...]
    IConfigurationStore store = container.Resolve<IConfigurationStore>();
    String connectionString = store.GetConfigurationData("MyApplication", "SQLLookupConnData");

Unity acts as an opaque factory, and its instantiation work is driven by an external XML file; by default Unity searches for this information in the app.config file, but you can redirect it to a separate XML file.

With BizTalk I prefer this second approach, so I don’t have to mess with BTSNTSvc.exe.config.

Anyway, a snippet of this XML file follows:

    <container name="Container">
        <types>
            <type type="IConfigurationStore" mapTo="XmlConfigurationStore" >
                <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                    <constructor>
                        <param name="fileName" parameterType="System.String">
                            <value value="C:\Configurations\XmlFile.xml"/>
                        </param>
                    </constructor>
                </typeConfig>
                <lifetime type="singleton" />
            </type>
            <type name="SQL" type="IConfigurationStore" mapTo="SQLConfigurationStore" >
                <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                    <constructor>
                        <param name="connectionString" parameterType="System.String">
                            <value value="...connection string to sql configuration store..."/>
                        </param>
                    </constructor>
                </typeConfig>
                <lifetime type="singleton" />
            </type>
        </types>
    </container>

Notice the name="SQL" attribute in the snippet: it is used by Unity when a specific resolution is required. In the above example, since we used the parameterless Resolve method, store is resolved with the nameless entry (XmlConfigurationStore); but if we had used Resolve<…>("SQL"), then Unity would have used the entry whose name attribute equals "SQL" (SQLConfigurationStore) to instantiate the required concrete class.
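In code, the named resolution is just one line:

    IConfigurationStore sqlStore = container.Resolve<IConfigurationStore>("SQL");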

Add BizTalk to the equation.

BizTalk Server solution maintenance can truly be a nightmare: even if the BizTalk architecture is definitely component-oriented, each deployment must be done through a long, boring, and unavoidable procedure.

If you want to update an orchestration you must:

  1. Undeploy the old orchestration.
    1. You can’t remove an orchestration if you don’t first unenlist it.
      1. You can’t unenlist an orchestration if you don’t first terminate every service instance related to it.
  2. Deploy the new orchestration.
  3. Bind the new orchestration.
  4. Enlist the new orchestration.

If you want to update a pipeline (to replace a pipeline component with another one), you have to follow a similarly tedious and error-prone procedure.

Even if there are ways to work around this problem (such as using a versioning policy, as I’ll explain in another post), the main lesson is: make your architecture so flexible that redeploys of BizTalk assemblies are rarely needed, and only for important changes (not just because I decided to move my configuration datastore from file to database, or because I want to track information X from messages where until now I was tracking information Y).

Unity and other DI containers are invaluable allies in this battle for maintainability and flexibility.

BAM Writer Component.

IMHO the killer application for DI in a BizTalk environment is BAM instrumentation.

A bit of history

BizTalk presents an interesting feature, inherited from the BizTalk Server 2004 edition, called BAM interceptors.

The goal of interceptors is to let BizTalk developers deploy their application and decide afterwards which data to write to BAM, as explained on MSDN:

In each step of your application where you could have data of interest, you call Interceptor OnStep, provide an identifier for the step, and provide some data or arbitrary object that you are using in your application.
You must implement a callback function so when the callback occurs, your callback procedure gets the current step ID and your data object.
The BAM interceptor decides which data to request at each step, based on the configuration that you can create programmatically.
The BAM Interceptor then uses the obtained data to call either DirectEventStream or BufferedEventStream that you need to keep around and pass each time as an argument to OnStep.
You may keep different pre-created interceptors representing different preferences for the data and milestones for BAM.

Every time I read this explanation I see in it a description of Dependency Injection ante litteram (even if someone could argue that in 2004 DI was already pretty well known in IT…).

In the runtime code there are invocations of a BAMInterceptor object which is defined externally and loaded at runtime (the only difference is that, instead of using a DI container driven by an external configuration file, BizTalk uses a naïve approach based on BinaryFormatter serialization of a previously defined object).

Coming back to 2011

Following the spirit of interceptors, I wanted to monitor (and possibly peek at data of interest in) every message which entered or exited my BizTalk solution.

And I wanted to decide which milestones and KPIs to track and extract from messages after the solution was running in production (even after weeks or months, just when the necessity arises).

One pipeline (component) to rule them all.

To reach this goal I implemented a single pipeline component, called BAMTracking, which takes a single parameter (aside from the enabled/disabled flag): BAMContext.

[Image: the BAMTracking pipeline component]

[Image: the BAMContext per-instance property]

Internally, the Execute method of the pipeline component simply uses Unity to resolve and instantiate the correct BAMWriter, using the BAMContext string as an optional Unity name to differentiate at runtime between different BAMWriter implementations:

    private IBaseMessage doExecute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        try
        {
            if (String.IsNullOrEmpty(this.bamContext))
                dataExtractor = Container.Resolve<IBAMWriter>();
            else
                dataExtractor = Container.Resolve<IBAMWriter>(this.bamContext);
        }
        catch (DependencyMissingException)
        {
            // Fall back to the default (nameless) writer when no named registration exists.
            dataExtractor = Container.Resolve<IBAMWriter>();
        }

        dataExtractor.Extract(pContext, pInMsg);
        return pInMsg; // pass the message through unchanged
    }

IBAMWriter Interface

At this point the BAMWriter interface is very simple and mimics the Execute method:

    public interface IBAMWriter
    {
        void Extract(IPipelineContext pContext, IBaseMessage pInMsg);
    }
 
This way I can deploy my BizTalk infrastructure once, and when the need to track a particular piece of information from a particular message arises, I’ll simply implement the correct logic in a BAMWriter class, register its assembly in the Unity configuration file, and change the BAMContext property on the pipeline per-instance configuration screen.
 
No further deploy is needed.
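As a minimal sketch of what such a writer could look like (the activity name, milestone, and connection string are all hypothetical; a real writer would extract values from the message context or body):

    // Hypothetical IBAMWriter implementation writing one milestone to BAM
    // through a DirectEventStream.
    public class OrderBAMWriter : IBAMWriter
    {
        public void Extract(IPipelineContext pContext, IBaseMessage pInMsg)
        {
            Microsoft.BizTalk.Bam.EventObservation.DirectEventStream es =
                new Microsoft.BizTalk.Bam.EventObservation.DirectEventStream(
                    "Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport", 1);

            string activityId = System.Guid.NewGuid().ToString();
            es.BeginActivity("OrderActivity", activityId);
            // Data is passed as name/value pairs.
            es.UpdateActivity("OrderActivity", activityId, "Received", System.DateTime.UtcNow);
            es.EndActivity("OrderActivity", activityId);
        }
    }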

Tips when using Unity with BizTalk.

Use external XML configuration.

As depicted above, I prefer to externalize all the Unity configuration in a standalone XML file instead of using a section in the BTSNTSvc.exe.config file.

This allows me to safely experiment with the XML file and to quickly bring the Unity configuration from one environment to another, leaving the other settings untouched.

The following code snippet is used to load the Unity configuration from an external XML file:

    public static void Create(string FileName)
    {
        ExeConfigurationFileMap map = new ExeConfigurationFileMap();
        map.ExeConfigFilename = FileName;
        System.Configuration.Configuration config
            = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
        UnityConfigurationSection section
            = (UnityConfigurationSection)config.GetSection("unity");
        container = new UnityContainer();
        section.Containers["Container"].Configure(container);
    }

Where to put the filename.

Looking at the code above, the smartest among you will notice that we’ve reintroduced a dependency: the reason why, by default, Unity uses the app.config to search for its configuration data is that the app.config is well known to the runtime, without having to explicitly point to it.

If we start using an external XML file, we need to tell the application where to find it, and having a concrete code line such as

Create("C:\\Unity\\Config.xml")

will nullify every effort we’ve made so far to remove the dependency.

To make everything work you’ll have to externalize this filename too. One could use the registry to store this information (because it gives you per-user and per-key permissions, a hierarchical structure, and a lot of other interesting things), but I actually prefer plain old environment variables because of their simplicity.

I define an environment variable called “UNITY_CONFIGURATION” containing the full path to the Unity configuration file, and from the application code I can simply refer to it with:

Create(System.Environment.ExpandEnvironmentVariables("%UNITY_CONFIGURATION%"));

and that’s all.

Use Unity with GACed assemblies.

The Unity examples you’ll find around won’t work with GACed assemblies, only with local ones; this is because in the Unity configuration files they refer to assemblies by simple name and not by strong name, so the CLR can’t resolve them against the GAC.

In BizTalk nearly all the DLLs used must be placed in the GAC to work, but luckily the solution is very simple, as hinted above: when writing your Unity configuration file, remember to use the full strong name instead of the partial name for the assemblies you want to use, and the problem will go away.

    <!-- This is NOT going to work for a GACed assembly -->
    <typeAlias alias="ILogger"
        type="UnityExamples.Common.ILogger, UnityExamples.Common" />

    <!-- This IS going to work for a GACed assembly -->
    <typeAlias alias="ILogger"
        type="UnityExamples.Common.ILogger, UnityExamples.Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=45b4585645364e32" />