So I finally did what I should have done years ago: I got my own domain and host my own blog on Azure.

You can find it at the new domain.

2015-01-07: Edit. The part about 32 vs 64 bit was wrong on my part! I have rewritten the post.
Also, since this post was written Microsoft has updated the original article.

Skipping ahead a bit

After some feedback on the last article (thanks everyone), it seems you all know how to follow instructions from Microsoft. You all told me to skip to the part about the instructions being wrong. So I will.

Make sure to read the entire article.

32 vs 64 bit

Ever heard of that issue? Ever run into trouble because of that? Of course you have! Now you might be able to avoid it.

When you reach step 11 in the article "How to restore your databases", you run into the statement that you have to run the 64-bit cmd if you are running in a 64-bit environment (who does not?).

The thing is that the cmd prompt you get when you use Run+cmd is the 64-bit one! If you (like me) navigate to %windir%\SysWow64 and run cmd, thinking you will get the 64-bit one that way, you will get the 32-bit one!


This is where I was wrong. It is very confusing that the 64-bit version is located in the System32 folder and the 32-bit one under SysWow64, but that is the way it is. In short: simply use Run+cmd and you will get the correct version.

So why is this a problem?

Reading registry settings the wrong way

Open the file C:\Program Files (x86)\Microsoft BizTalk Server 2010\Bins32\Schema\Restore\UpdateRegistry.vbs and take a look at line 23. It reads: bKey = WshShell.RegRead("HKLM\SOFTWARE\Microsoft\BizTalk Server\3.0\Administration\MgmtDBServer").

If you run this script in a 32-bit command prompt, the script will actually look for this key: HKLM\SOFTWARE\Wow6432Node\Microsoft\BizTalk Server\3.0\Administration\MgmtDBServer. On this line there is nothing wrong with that, as it will still find a key.

The trouble comes a couple of lines further down: run in a 32-bit prompt, WshShell.RegWrite will only update the 32-bit (Wow6432Node) registry settings(!) and all your 64-bit hosts will fail.
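To make the redirection concrete, here is a small Python sketch. This is purely illustrative — it is not BizTalk code, just a simplified model of how WOW64 redirects registry access for 32-bit processes:

```python
# Simplified model of WOW64 registry redirection: when a 32-bit process
# reads or writes under HKLM\SOFTWARE, Windows transparently redirects
# the access to the HKLM\SOFTWARE\Wow6432Node branch.
PREFIX = "HKLM\\SOFTWARE\\"

def resolve_registry_path(path, process_is_32bit):
    if (process_is_32bit and path.startswith(PREFIX)
            and not path.startswith(PREFIX + "Wow6432Node")):
        return PREFIX + "Wow6432Node\\" + path[len(PREFIX):]
    return path

key = "HKLM\\SOFTWARE\\Microsoft\\BizTalk Server\\3.0\\Administration\\MgmtDBServer"

# 64-bit cmd (the one you get from Run+cmd): the script hits the real key.
print(resolve_registry_path(key, process_is_32bit=False))

# 32-bit cmd (SysWow64\cmd.exe): reads and writes land under Wow6432Node,
# so only the 32-bit view of the registry is ever updated.
print(resolve_registry_path(key, process_is_32bit=True))
```

Run both calls and the difference between the two prompts is obvious: same script, same key string, two different registry locations.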

Not updating the SSO database!

Now this is scary, because it affects the SSO database. The script does not update any registry setting for the SSO database; instead it uses a program called Ssoconfig to change the database name. It does this via the environment variable CommonProgramFiles, which is a nice touch. However, it is dead wrong if you use the wrong cmd, as the variable then points to the 32-bit common program files, and Ssoconfig is not installed there. The effect is that the SSO database is not moved, and the script does not even throw an error.

This is very easy to test:

  1. Open a 64-bit command prompt (plain Run+cmd) and type the following: cd "%CommonProgramFiles%".
  2. This will change the directory to "C:\Program Files\Common Files", which is where Ssoconfig is located.
  3. Now open a 32-bit command prompt (located at C:\Windows\SysWOW64\cmd.exe) and run the same command.
  4. Result: C:\Program Files (x86)\Common Files. That is NOT the right path.

The solution is clear: run UpdateRegistry.vbs under a normal (64-bit) command prompt.
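A defensive check is also cheap. The sketch below is Python rather than VBScript, and the "Enterprise Single Sign-On" subfolder is the usual SSO install location — verify it in your environment. The point is to fail loudly instead of silently when Ssoconfig cannot be found:

```python
import os

def ssoconfig_path():
    # Resolve Ssoconfig via CommonProgramFiles, the same variable the
    # restore script relies on, but raise instead of silently skipping
    # the SSO database when the path is wrong (e.g. in a 32-bit prompt).
    base = os.environ.get("CommonProgramFiles", r"C:\Program Files\Common Files")
    candidate = os.path.join(base, "Enterprise Single Sign-On", "ssoconfig.exe")
    if not os.path.isfile(candidate):
        raise FileNotFoundError(
            "ssoconfig.exe not found at %r - are you in a 32-bit prompt?" % candidate)
    return candidate
```

Something along these lines at the top of a restore script would have turned the silent SSO miss into an immediate, visible error.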

Minor mistake about EDI

Now go to line 131. It says to look for an attribute called EDI; if it finds this attribute, the EDI settings will be updated. However, step 11 in the guide says you should update your SampleUpdateInfo.xml to include an attribute called MsEDIAS2. Update the line in the script to: set node = configObj.selectSingleNode("/UpdateConfiguration/OtherDatabases/Database[@Name='MsEDIAS2']").
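You can sanity-check the corrected lookup outside BizTalk. A Python sketch, using a made-up minimal fragment in the shape of SampleUpdateInfo.xml (only the Name attribute is taken from the article; the rest of the fragment is an assumption for illustration):

```python
import xml.etree.ElementTree as ET

# Made-up minimal fragment mimicking SampleUpdateInfo.xml after step 11.
sample = """
<UpdateConfiguration>
  <OtherDatabases>
    <Database Name="MsEDIAS2" />
  </OtherDatabases>
</UpdateConfiguration>
"""

root = ET.fromstring(sample)

# Equivalent of the corrected selectSingleNode expression.
fixed = root.find("./OtherDatabases/Database[@Name='MsEDIAS2']")

# Equivalent of the original script line, which looks for Name='EDI'.
broken = root.find("./OtherDatabases/Database[@Name='EDI']")

print(fixed is not None)   # the corrected lookup matches
print(broken is not None)  # the original lookup finds nothing
```

Point the same kind of check at your real SampleUpdateInfo.xml before running the restore and you will know whether the script's XPath will hit anything.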

The UpdateDatabase-script

This one contains very few strange things, but once again: run it under a 64-bit cmd, since registry keys are read. I also commented out everything about Hws. If you use that (you strange person), leave it in.

BAM databases and other BAM components

If you are using BAM in your environment, good for you. Be sure to follow this article to the letter after moving the databases and running the scripts. It basically points everything (the BAM portal and the BAM web services) to the new database.

Finishing up

OK, so there are a couple more pointers. Some of the jobs you scripted for BizTalk have to be updated by hand (no biggie).

  1. BizTalk Backup has a database server name in it.
  2. Operations_OperateOnInstances_OnMaster_BizTalkMsgBoxDb also has a server name in it.
  3. Rules_Database_Cleanup_BizTalkRuleEngineDb.sql: run this when/if the Rule Engine DB has been created on the new server.
  4. TrackedMessages_Copy_BizTalkMsgBoxDb.sql also has a server name in it.

Closing words

This article used to contain links to my version of the update scripts, but the standard scripts work as long as you use the correct version of cmd; as far as I know they only contain one error (the EDI one).

Remember: this is not really all that hard, and if you practice you can move an entire production environment in under an hour.

The correct title might be "Moving the BizTalk databases – the right way", because the way outlined in the BizTalk help is not much help. Scared yet? Well, you will not be after reading through my articles.

Why move databases?

This is a very natural question and the answer is very simple: because you have to. At some point you might need to move databases around in a big datacenter or, as in my case, you need to move to a default instance on a new server (remember, kids: BAM needs SSIS to work properly, and it will not on a named instance).

Is it possible?

Yes, very. There is even a script that does a lot of things for you, but there are also a lot of manual steps you need to take.

The scenario

So, in order to make these articles a bit more textbook-like, I will describe a scenario that I went through.

The objective of moving the databases was that the customer wanted one of their environments to be less of a testing environment and more like a staging environment. The BizTalk installation was set up on a SQL Server machine that was used for patch testing and such. Not a good base for a staging environment.

I moved the databases from a working environment to a new server. The BizTalk databases did not exist before the move (useful to know if you plan on doing a log-shipping database move, because this is not one).

The BizTalk machines (this is a clustered environment) stayed the same, only the databases were moved.


The two BizTalk machines are called BizTalk001 and 002. They are a BizTalk cluster, but if you do not have a clustered environment, just ignore everything I write about that.

The old SQL server is called PhyBizTalkDb and the databases to be moved are installed under the BizStage named instance.

The new SQL Server is called VBizTalkDb and the BizTalk databases will be installed at the default instance.

Microsoft support and disclaimer

In no way would I undertake this feat if Microsoft did not support it, and they do. You can find their quite extensive article here. I will refer to this article, so keep it handy.

The next thing to understand is that in no way can you make me liable for anything you do to your BizTalk environment based on what you read here.

I was fortunate enough, as well as prepared enough, to test the whole scenario in another environment before heading into the scenario above. I highly recommend you do the same. Just make sure that the BizTalk and SQL Servers are on different machines, otherwise you will not get any real practice.


Before you begin you have to get a couple of things done.

  1. Get a person who knows SQL server and has access rights to everything you need on both machines. On an enterprise level this is usually not the BizTalk admins, nor the BizTalk developers.
  2. Plan the outage! In our case we were lucky enough to get a full week between two testing stints. Set aside a day in which the platform is completely out.
  3. Plan the backups! Let's say you get what I got: the backups run once a day at 3 am. Therefore nothing may enter or leave the platform after 3 am. You need that backup to be 100% compatible with a fallback (retreat?) scenario.
  4. Script all the BizTalk SQL jobs to files and store them securely.
  5. Script all the BizTalk SQL users and store them securely as well.
  6. Get a text file and paste in the names of the source and destination servers and everything else you might find useful.
  7. Read through the article by Microsoft just to see what you are expected to do, and what you might need to ignore.

More to follow.

This coming Tuesday I will make my debut as a speaker at BUGS (BizTalk User Group Sweden) and some wanted to know what I will try to cover in my Tips and Tricks session.

Here are some:

  • Trace, and tracing using BAM tracking
  • Recoverable interchange
  • Ordered delivery and WS calls
  • "Copy name" in a map
  • Separated list in a map
  • Datatypes and imports in schemas
  • "Copy" in the file mask
  • Debatching using XML
  • Renaming files while reading
  • The Authentication tab in the FILE adapter
  • Negative service windows
  • Debugging pipeline components
  • Empty rows at the end of flat files
  • Using the subscriptions query usefully

There will be more but these are some at least. Hope to see you there.

Slides available on the Swebug website.


The short, short version: a very good book arriving very late. If you have an implementation that uses Windows Server AppFabric (or AppFabric for short), then there is no reason whatsoever not to get this book. A definite 5 stars!


Anyone remember the days when we called it Dublin and were afraid that Microsoft would topple our whole livelihood? Well, I do, and I remember embracing it.

No matter how you look at it, AppFabric is a good product, and it is free, as in no charge. I cannot really understand why it did not take off as it should have. Maybe it just got lost in all the Azure hype.

The book

I would like to state that this book could really have improved AppFabric's chances of being used in enterprise applications; the book is just that good. But it is late. I wish I had had this book in 2010 when I was trying to implement AppFabric and make it work like a "light and free version of BizTalk". Others tried as well, like Jon Flanders and the ever-working Ron Jacobs.

The book follows the now-established pattern of the cookbook series from Packt Publishing: a short introduction, a description of how to do something, and then a "how it works". Sometimes they add a "there's more". I like this pattern a lot; however, as I have pointed out before, you lose some overall continuity and each recipe can be a bit isolated.

The writing is very good and as always I feel that the authors are really knowledgeable about the subject and that they have worked hard to keep it simple. A definite plus in my opinion.

The book covers

  • Installing Windows Server AppFabric
  • Getting Started with AppFabric Caching
  • Windows Server AppFabric Caching – Advanced Use Cases
  • Windows Server AppFabric Hosting Fundamentals
  • More Windows Server AppFabric Hosting Features
  • Utilizing AppFabric Persistence
  • Monitoring Windows Server AppFabric Deployment
  • Scaling AppFabric Hosting, Monitoring, and Persistence
  • Configuring Windows Server AppFabric Security

They pretty much cover everything, even though you should really know something about WCF and WF to get full use of this book. Then again, if you are looking for a way to host your WCF and WF services, I think you already do.

In conclusion

This book is very good if you already have some application(s) running on AppFabric or if you are considering hosting some existing services on AppFabric. If you do not then this book is of no use. To me it is a nostalgic trip on a very good product that I never got to use.

Once again: if you want to use, or already have, AppFabric: buy this book. It is better than the only other existing book.

About the authors

Hammad Rajjoub
Works at Microsoft and can be found here, and on Twitter.

Rick G. Garibay
Works at Neudesic and can be found here, and on Twitter.

Cannot activate your copy of Windows 8? Try running the command prompt in elevated mode and enter the following command:

slmgr.vbs -ipk "ENTER-YOUR-KEY-HERE"

More info on Slmgr can be found here.

On August the 28th I will have my first speaking engagement outside Logica, at the Swedish BizTalk User Group.

As a long-time attendee I have seen a lot of speakers, and I am sure I will do a fair job of it. The topic is "All the small things", which will try to cover a lot of little hints, tips and tricks for developing BizTalk solutions or running BizTalk.

It will not be an architectural talk but rather a Level 300 aimed at developers. “Tickets” available via Eventbrite.

While browsing for the answer to the question "How do I add SOAP headers to a message sent using the WCF-Custom or WCF-BasicHttp adapter?" I never really found a good, short answer. So I thought I'd give it a go.

Setting SOAP headers

I assume you know what SOAP-headers are and why you might use them. If not, then back to basics.

In my case the client needed BizTalk to send requests with the WS-Addressing SOAP header called "To". I needed the easiest way to do this, preferably using configuration and no orchestrations.

To the best of my knowledge, this is the simplest way to do it.

Using a pipeline

Use a pipeline component to promote the WCF.OutboundCustomHeaders property.

My guess is that your local BizTalk code hub already has a pipeline component for promoting arbitrary values. If not, the code for promoting the property is here.

The only thing to remember is that the value of the property is a bit special. You can hard-code the values of your headers, even using XML formatting, no problem, but you have to surround the value with a <headers> tag.

<headers><h:To xmlns:h="">rcvAddr</h:To></headers>

This will result in the WCF adapter serializing a SOAP envelope with the SOAP headers you supplied between the <headers> tags.
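Since a malformed value only fails at runtime, it can be worth validating the fragment up front. A Python sketch — the namespace URI below is a placeholder of my own, not the real one; substitute the WS-Addressing namespace you actually use:

```python
import xml.etree.ElementTree as ET

# Placeholder namespace; substitute the real WS-Addressing namespace.
NS = "urn:example:addressing"

# The value promoted to WCF.OutboundCustomHeaders: one or more header
# elements wrapped in a single <headers> root.
value = '<headers><h:To xmlns:h="%s">rcvAddr</h:To></headers>' % NS

# Parsing catches quoting and nesting mistakes before the send port does.
root = ET.fromstring(value)
assert root.tag == "headers"
assert root[0].tag == "{%s}To" % NS
print("custom headers fragment is well-formed")
```

A check like this in a unit test saves a deploy-and-see round trip every time someone edits the header value.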

Here is the result in my environment:


Using an orchestration

This is a bit more work, but a very useful way to get the same result if you already have an orchestration. A bit more information can be found here.

What you basically do is set the property from an assignment shape, much like you would access FILE.ReceivedFileName.

outboundMessageInstance(WCF.OutboundCustomHeaders) =
"<headers><add:To xmlns:add="">rcvAddr</add:To></headers>"

There are a lot of things I do not know about BizTalk. The list is getting shorter but here is something I found.

I was trying to verify a flow within a known environment. Everything else seemed to work, apart from this one flow. A technician submitted files to a directory and the files were picked up. However, nothing showed up in any tracking; neither the basic BizTalk tracking nor our BAM implementation noticed the file.

The files were picked up, and I verified that it was BizTalk picking them up. I could not submit a file myself, as I did not have access to the path.

After a while I remembered to check the log on the other BizTalk node in the cluster, and then it became clear. A simple warning said: "The FILE receive adapter deleted the empty file "\\Server\Share\testfile.txt" without performing any processing." I have to admit that I did not know that. It is actually a known issue.

What happens is that the file is picked up, but since the technician just created the files using the old Right-click + New, the files are empty. BizTalk does not process empty streams, as it were, and the file is deleted without any trace in the tracking.

Here’s a tip

In some scenarios you might receive an empty file to start a flow within BizTalk. Perhaps some system is telling BizTalk: "That data you're so interested in is done". Make sure that file contains some data, perhaps just a repeat of the file name or the letter "M".
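If you control the system that drops the trigger files, the guard is tiny. A hedged Python sketch of a trigger writer, plus a rough model of the adapter's empty-file behaviour (the function names are mine, for illustration):

```python
import os

def write_trigger(path, payload="done"):
    # Never drop a zero-byte file: the FILE adapter deletes empty files
    # without processing them and without leaving any tracking trace.
    if not payload:
        raise ValueError("trigger payload must not be empty")
    with open(path, "w") as f:
        f.write(payload)

def adapter_would_process(path):
    # Rough model of the adapter's behaviour: zero-byte files are skipped.
    return os.path.getsize(path) > 0
```

Even a single character in the payload is enough to get the file through the adapter.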

This presentation is not present at Channel9 at the moment. That is a shame because this was, in my opinion, the best presentation of the whole conference.

The session was presented by Alex Jauch, currently at NetApp, but he used to work for Microsoft. He was actually behind the precursor that became the MCA. I had never even heard of this guy before, which I would say is a shame. I have now, though.

The heading for the session seems ominous and deterministic, but given my personal experience I would say it is not far from the truth to simply assume that "cloudification" will fail. Incidentally, it is also the title of Alex's book :-)

Alex (or should I say Mr. Jauch?) started the session by clearly stating that he was about to say things that not all of us would agree upon. He would also try to upset some people! Bold and funny in my opinion.


The definition, or even a definition, of what cloud computing really is can be hard to come by, and one definition might differ a lot from the next. Alex presented the definition made by NIST. He pointed to the fact that NIST is a governmental agency, and these are notorious for not agreeing on anything. The fact that they have agreed on a definition of cloud computing gives it some credibility.

According to them there are five essential characteristics that together form a cloud. If any of these are left out, you are not a true cloud provider. They are:

On-demand self-service. A consumer should be able to change provisioning in the cloud by him/herself.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms.

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand.

Measured service. Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

So if your cloud does not have a portal for adding resources in such a way that the consumer can do it themselves, you do not have a cloud service.

The full definition (2 pages) can be found here.

So why do we fail?

I would say that it comes down to this comparative table:

Traditional IT                    | Customer Centric IT (Cloud)
Sets IT standards                 | Supports business requirements
Focus on operations excellence    | Focus on customer satisfaction
Engineering is key skillset       | Consulting is key skillset
Sets policy                       | Seeks input
Focus on large projects           | Focus on smaller projects
Organized by technology           | Organized by customer
Technology focus                  | Business value focus
Delivers most projects in house   | Brokers external vendors as needed

It is not on technology that we fail. It is in how we use it, and in the attitudes of those who implement it. When we try to run a cloud service "as we always have", in a traditional manner, that is when we fail.

In order to run a successful cloud shop, we must change focus and really (and he means really) focus on the customer. A very telling quote from the session was about the focus on operations vs. the focus on the customer.

"'We had 100% uptime last month.' What does that mean if the customer still has not managed to sell anything?"

So if someone tells you "We sell cloud", at least ask them about the five points from the NIST definition.

If you (or your organization) are thinking about delivering cloud capacity: good luck.

By being recognized by others you gain confidence. Also, to some of us it simply feels good :-)

Steef-Jan Wiggers (MVP, author, BizTalker, Dutchman and avid tweeter) runs a "Meet the community" series on his blog. I am the subject of the latest installment.

Full title: “Achieving Enterprise Integration Patterns with Windows Azure Service Bus”. Or another way to put it: ”What will happen to BizTalk when it gets cloudy”.

Well, I would say, after the session, that BizTalk had very little to do with this. If you are a BizTalk developer you probably saw how things you do every day can be done in the Service Bus. Basically, Clemens showed some of the most common patterns and how they are implemented in the Service Bus: for instance splitter/aggregator, content-based routing and recipient list.

He also gave us some news.


This is a very small, binary and lightweight protocol for queues that will be supported in the next version of the Service Bus (coming at the end of the year).

SharePoint and Workflows

The next version of SharePoint will stand on top of WF, which in turn will stand on top of queues in the Service Bus. WF will store its state using sessions in the queues. I also think that "will" can be replaced by "it is possible to".

A real product

Some of us have been around to see MS try to launch integration products before, Windows Server AppFabric (Dublin) for instance. I have to say that this time it feels like MS thinks this is a real product. They have some customers using it today, and Clemens showed a couple of them. So I get the feeling that this is for real. I did not get that from Windows Server AppFabric (a good product though).

Side note: to all who have not yet read the integration bible, Enterprise Integration Patterns: Clemens recommends it too.

Why the Azure Service Bus?

In short, Microsoft sees a future where you have to be able to process large quantities of (small) messages, and where the flow of these messages varies over time; not only over the month, the flow might also start small and then increase heavily during a short period. An example might be smart meter readers for household electricity that collect readings but also receive messages from the supplier.

In scenarios like this it is much smarter to buy capacity on demand; the cloud and the Service Bus make these scenarios possible.

Another example might be something that Clemens built: a smart thermostat for AC units. He has even written an article about it in the current issue of MSDN Magazine.

Ok, so new stuff

Right now the maximum size of a message is 256 KB (including headers). There might be an increase to this size, but it will "almost certainly guaranteed" not go over 1 MB.

At the end of the year (might be December the 62nd) there will be a new release containing part of what is called project Iguazo, which is basically a message distribution system in which you can build trees of subscriptions. Divisions, sub-divisions and further sub-divisions make up the branches, and at the end of each branch is a device. This makes for very easy distribution to individual devices, but also to entire countries of devices, just by smart addressing.

Some tips at the end then

NHTTP is one of Clemens' little side projects; it is basically the use of HTTP headers to send data in a key-value fashion. The N stands for NoHyperText, and to use it you simply prefix your properties with P- and then access them from your code by reading the HTTP headers directly. More info here.
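The convention is simple enough to sketch in a few lines of Python (the property names below are made up for illustration):

```python
# Sketch of the NHTTP idea: payload properties travel as HTTP headers
# prefixed with "P-"; the receiving side strips the prefix back off.
def pack(properties):
    return {"P-" + key: str(value) for key, value in properties.items()}

def unpack(headers):
    return {key[2:]: value
            for key, value in headers.items() if key.startswith("P-")}

headers = pack({"OrderId": 42, "Customer": "Contoso"})
print(headers)           # {'P-OrderId': '42', 'P-Customer': 'Contoso'}
print(unpack(headers))   # {'OrderId': '42', 'Customer': 'Contoso'}
```

The prefix lets the receiver separate payload properties from ordinary HTTP headers like Host or Content-Type with a single string test.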

When you configure queues and topics, use an auditing queue that gets a copy of all messages sent to the bus.

The presenter, Augusto Valdez, started by stating something to get everyone in a good mood, as we all know that WP is a very good product but still lacks in sales. They did their own research by going to the Amazon US website and looking at what phones people like. The top 3 are WP, and out of the top 9, 7 are.

Windows Phone 8 will release at the same time as Windows 8. The different teams are working together, collaborating and trying to get the same experience on the phone as well as desktop or pad.

So here are 8 new features in Windows Phone 8

1. The latest and greatest hardware. It will support dual cores and more. It will support 3 different resolutions, the highest being 1280x720 (16:9). They will continue to support MicroSD cards and even expand on that functionality by allowing you to install apps from a MicroSD!

2. IE 10. This will be the same code that runs on Windows 8, so it will have great JavaScript and HTML5 performance. It will also include anti-phishing, since that is a big problem on mobile devices at the moment.

3. Native code support. The same code that runs on Windows 8 will run on the phone. Think about the time this little gem might save you.

4. Full support of NFC (near field communication). Now the words “full support” might mean different things to different people but that is what he said. NFC is to me pure science fiction, which either makes it cool or me seem really old.

5. The most complete wallet. Well, if you say so. I won't hold my breath, but if we could do away with all these membership cards and cash I would be a very happy guy. Also, the security will sit in the SIM card and not in the hardware. That means the security is portable and you can move your identity between different devices.

6. Nokia map technology. This means a lot of things, but mostly it means offline maps. Download all the maps for, let's say, Amsterdam, and use them all day without roaming charges.

7. Windows Phone 8 for business


If you are using Windows 8 and Windows Phone 8, there should not be any reason not to use the same apps on all your devices. This is where the shared core comes into play. The phone is now encryptable, and you can treat it (nearly) as any other laptop in the business, pushing different apps to different phones and perhaps also enforcing some security and restrictions.

Another important thing is that you can install applications on your phone without using Marketplace. This is of course important to business users. (That little fact won a guy in the audience a Nokia 900, by the way!)

8. The start screen


Once again the shared core comes into play, and the extended functionality of the live tiles on Windows 8 will come to Windows Phone 8. The picture was actually from a prototype phone the presenter used to demo features.

The old version

So what will happen to Windows Phone 7.5? Many already know that you will not be able to upgrade a WP7.5 device to WP8. Mr Valdez told us that there will be a WP7.8 that will come close to what WP8 does, but not all the way.

I wonder if Scott Gu can sing? If so, he has a lovely bass.

There was very little news for me in this session, as I am a frequent attendee of the Swedish Azure User Group; however, a little repetition might improve my knowledge.

There are a couple of things that still amaze me when it comes to Azure. The first one is the 99.95% monthly SLA. This means that Microsoft guarantees that your servers are up all but about 22 minutes during a 30-day month.
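For reference, 99.95% over a 30-day month works out as follows:

```python
# Allowed downtime under a 99.95% monthly SLA, assuming a 30-day month.
hours_in_month = 30 * 24              # 720 hours
allowed_fraction = 1 - 0.9995         # 0.05% allowed downtime
allowed_minutes = hours_in_month * 60 * allowed_fraction
print(round(allowed_minutes, 1))      # roughly 21.6 minutes per month
```

Not much more than one coffee break of downtime per month before the guarantee is broken.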

The next thing that amazes me is still the cost of hosting a server. Two small instances (1.6 GHz processor, 1.75 GB of RAM, 225 GB of storage) with 100 GB of data transfer cost €90 per month from the first month! I can easily name a couple of providers that will charge you €600 for the same service.

Also remember this: MSDN Premium and Ultimate come with Azure! So there is nothing stopping you from giving it a try; perhaps time constraints, but not cost. You only pay for what you use. Start small and scale up, or use it heavily for a few hours and then close the machines down. You don't pay any more.

Virtual private networking is finally here. They talked about it for a while, but now you can have a network within the cloud and then connect it to your local network using VPN tunneling. They even provide a way of scripting the virtual network so that the local network can use VPN to access it (and vice versa).

Since all machines running in Windows Azure are VHDs, you can use VHDs that you already have on premises, or from other providers.

I started thinking about something: what can you do locally that you cannot do in Windows Azure? There was no time for questions at the end, but perhaps someone can give me a suggestion on Twitter.

The next cool thing is Azure Websites. Something I really wish I had had access to back in the day, so I could focus on content and not on building the actual stuff. You get 10 for free with MSDN. They are very, very easy to deploy using VS 2012. You can also connect them to TFS (the online version as well) and make use of continuous build and deploy.

Welkom in Amsterdam!

Windows is now big! And when I say big I mean HUGE. It does not only power smaller and smaller devices, but larger and larger ones as well. Some specs for the new Windows Server 2012 (yes, they are calling it that, so stop calling it Windows Server "8"): you can run 64 nodes in a single cluster, you can use up to 4 TB of memory per server, you can run 4,000 VMs, and a single VM can support 1 TB of memory and 64 TB of virtual disk!

Given those figures, we are very close to seeing the end of the physical server as the go-to solution for information- and transaction-heavy solutions, such as BizTalk tends to be. All this virtualization also makes it easier to maintain and move things around as the specs of the different applications change. Good news for us.

The most impressive part was the way they got more than 1 million I/Os per second from a single virtual machine. Compare this number to your fairly standard (and fast) SSD drive, which does about 8,000. I can safely say that physical servers are no longer the primary way to go. Even SQL Server can deliver very near its maximum performance in a virtual environment; one Microsoft guy said for about 99% of all tasks.

The other thing they focused on was the strong capabilities for utilizing a hybrid cloud. They even provisioned an AMS server using Windows System Center. They also talked a lot about how to integrate different versions of the cloud and how it can all be monitored from the same place, including that AMS server. For those of us familiar with other cloud providers that focus mainly on IaaS, this is a very good thing, because most of the time it simply comes down to maintainability.

In Berlin in 2010 I blogged about the keynote as well, and in that post I "predicted" that we might see a future in which we buy desktops in the cloud for our company, and they look and behave just as they normally would. We are not quite there yet in some aspects, but in others Microsoft has surpassed my predictions and also my expectations! We can now run servers in the cloud just as we would run them on prem.
