December 2008 - Posts

Event log service is unavailable issue
21 December 08 01:50 PM | Johan Hedberg | 16 comment(s)

I thought I'd blog about this issue I had, since it was in the end so easy to solve, but I had a hard time finding a good description of both my specific problem and any resolution. I am a bit ashamed to say that I got quite creative before trying this.

The problem I was having was that when opening the Event Viewer on my Windows Server 2008 I'd get a message saying "The Event Log service is unavailable. Verify the service is running." And if I went to look, it was in fact not running. The thing is, though, that I could easily start it, and it would keep running, until I went to the Event Viewer to look at the logs, which would then bring it down again.

I solved this by simply deleting all the files in the C:\Windows\System32\winevt\Logs folder.

Update 2010-02-22: Feedback in the comments suggests that you might need to restart after performing this step.

I'm not going to patent the solution, or make the claim that it will work in every case, but it did for me, and if you are experiencing this problem it's easy enough to try.

Recipes for working with Microsoft Virtual PC
21 December 08 12:28 AM | Johan Hedberg | 2 comment(s)

Robert Folkesson, a Swedish Microsoft developer evangelist, recently wrote about a recipe for getting good performance out of a VPC (in Swedish). In summary, and translated to English, he suggests that you run the base VPC from a USB memory stick and keep the configuration file, and therefore the undo disks, on the system drive. This helps performance since USB memory does reads really well, but writes less so.

I thought I'd share a link to a document from Microsoft with loads of information about Microsoft Virtual PC best practices, see Working with Microsoft Virtual PC (I've got it tucked away should the link go dead). If you work a lot with VPCs, have a look through the document. There is some worthwhile stuff in there. It describes several procedures for how to speed things up (far more than I use).

The steps that I perform before tucking away the VPC include:

  • Clean up all logs and temporary folders, including running Disk Cleanup.
  • (Temporarily) disable and remove the page file (and restart).
  • Defragment the (virtual) disk.
  • Re-enable the page file.
  • Run the pre-compactor.
  • Do a last cleanup and shut down.
  • Run the virtual disk compactor.

And then tuck the disk away as a base image. This is regardless of whether you plan to use it as a base for a differencing disk strategy or for using state files or undo disks. I've never tried running it from a USB memory stick though, I don't even have a memory stick big enough. Yet... ;)

I'm in .iso installation hell
20 December 08 10:15 PM | Johan Hedberg | 1 comment(s)

I've been installing a new VPC with Windows 2008, SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009. I've never had so much trouble installing from .iso files. It seems to work 1 out of 5 times, or less. Even when you try the same file over and over it just pops up different errors, and then the fifth time it just works. It's incredibly time consuming. I don't even care to think about how many times I've re-downloaded and re-tried the installation. It's all done now, but please someone make this easier!

If I never see "A file that is required cannot be installed because the cabinet file d:\somepath\cabXx.cab has an invalid digital signature." again, it will be too soon.

I'm posting a few links for my future reference. Not that they really helped me much, but maybe they will next time. They at least seem to describe the issues I've been having, and the solution that seems to improve the chance of success is to copy the contents of the .iso onto the hard drive (in my case the VHD).

http://social.msdn.microsoft.com/forums/en-US/vssetup/thread/00cf6d2d-2bf1-49ae-8453-07ff11b0a380/
http://social.msdn.microsoft.com/Forums/en-US/vssetup/thread/ecb1403a-a343-43d9-92c6-a50f5bee3cf6/

But hey, what doesn't kill you makes you stronger, or so they say...

Amazon plays it safe or "How RESTful.NET traveled to Sweden in 8 days"
11 December 08 06:39 PM | Johan Hedberg | 2 comment(s)

We ordered Jon Flanders' new RESTful.NET book from Amazon US. It was a tough choice. The local supplier, Bokus, had it on their site, but stated that it would take 26-40 business days to deliver. Amazon UK was slightly better, but not by much, since at the time they didn't have the book marked as in stock either, and with the shipping options and cost it was more expensive by a more than slim margin. Amazon US was second best priced; if anything, Bokus has really good prices, but in this case that was worth nothing when they couldn't deliver. The shipping options to Europe aren't great at Amazon. Expedited international shipping costs $18 and was said to arrive in 8-19 days. Even though they had the book marked as in stock on the site, the first order confirmation I got informed me that the book would be shipped 8 days after I placed the order, with an estimated arrival of December 24th. The actual time it took before it shipped was 4 days, according to my second confirmation. This time I got an estimated arrival of December 31st.

But here I am, it's December 11th and I've got it in my hand. Amazon plays it safe.
I'm all for that; better to get a good surprise than a bad one. But I feel they are at risk of losing customers with that approach. Then again, the customers they do have will be happy customers.

However, with that said: at this point in time, a couple of weeks after we ordered, I highly recommend getting this book from Bokus if you are a Swedish resident, since they now have it for delivery in 5-8 days. Cheaper than Amazon, and without the cost of shipping it from the US.

Now I'm looking forward to getting some reading done.

BizTalk mapper - Using a message as a lookup and merging data
10 December 08 11:51 PM | Johan Hedberg | 5 comment(s)

Following up on my previous post on how to append records to existing (looping) records, here is a solution, or pattern, as that's a word in high fashion, for how to use a second message as a lookup table to merge or enrich data in the first message. In this sample we are not adding any new rows; we are enriching existing rows and merging the data of the second message into the records of the first document. Again, it's a form of enrichment pattern.

Say for example that you have a list of products and now you need to do a lookup into some other system that owns a specific type of data. For example, the first document only contains information about who the manufacturer of a product is, but you need to go to the logistics system to look up who the distributor is as well, and enrich the document with that information before sending it to the ERP system. How would you do that? And just for the sake of argument, let's assume that you would use a chunky interface to talk to the logistics system instead of a chatty one - meaning you would of course send several products to the system and get several products, and their distributors, in response.

With the BizTalk mapper, using only the basic functoids provided, I don't have a solution for that. I've tried different options, but the mapper just didn't like what it saw. The solution, then, is the advanced functoids, more specifically the scripting functoid and a custom xslt call template (it might just as well be done using inline xslt for that matter; a call template is not crucial for the solution, it's just my choice).

So, to set up the scenario, here are the two incoming schemas:

[Image: the two incoming schemas, Schema1 and Schema2]

In both schemas Product is repeating.

As always when you have two messages as input to a map, the typical place to put that map is inside an orchestration. So we create an orchestration, get our messages created, and create a map and make it take two inputs and one output, and we are all set. Review my previous post for more info on this. In this post I'll jump right to the mapping logic. First off, let me demonstrate using regular BizTalk mapper shapes what I would like to do, but that will not work!

[Image: the map using standard mapper shapes - this does not work]

What I want this to do is: "Loop all products, and then use the ID of the product to look up the Distributor that we received from the logistics system and output that." But again, it doesn't work. However, if this were a single product that I needed to enrich, the solution in the above image does work (the loop in that case is of course ignored). But in this case I have multiple products.

For the sake of clarity I've added a loop. It isn't really needed - the BizTalk mapper would have implied the loop had I not, and produced the same xslt as the above. I could connect the Schema2 Product to the same loop functoid, and that would introduce the same loop as in my previous post, where it tries to loop the second message's Products as well, but that's no good. I could even try to use an additional loop functoid, but that would just get blatantly ignored by the mapper.

Anyway, so what can we do? Enter the scripting functoid, enter xslt.

I can do this:

[Image: the map with a scripting functoid connected between Product.ID and Product.Distributor]

As you can see, I am now connecting a scripting functoid between the Product.ID of the input and the Product.Distributor of the output. The second message has no links at all! Instead the lookup logic is contained in the scripting functoid.

So what does the magic that the scripting functoid does look like?

[Image: the scripting functoid configuration]

And here is the actual xslt script:

<xsl:template name="MyXsltCallTemplate">
  <xsl:param name="param1" />
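  <!-- Step up from the current Product in InputMessagePart_0 to the aggregated root, then down into InputMessagePart_1 -->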
  <xsl:for-each select="../../../../InputMessagePart_1/s0:Root/Products/Product">
    <xsl:if test="string(ID)=$param1">
      <Distributor>
        <xsl:value-of select="Distributor" />
      </Distributor>
    </xsl:if>
  </xsl:for-each>                   
</xsl:template>

To explain the script in words: it loops through the records until it finds the Product with the matching ID, and when it does, it outputs the Distributor information. I realize there is a way to do this without looping, but I'll leave that particular gem for those who comment to point out.

A simple example:

Input:

<ns0:Root xmlns:ns0="http://schemas.microsoft.com/BizTalk/2003/aggschema">
  <InputMessagePart_0>
    <ns1:Root xmlns:ns1="http://MappingLookupPattern.Schema1">
      <Products>
        <Product>
          <ID>123</ID>
          <Manufacturer>Manufacturer_0</Manufacturer>
          <Distributor />
          <LotsOfOtherInfo>LotsOfOtherInfo</LotsOfOtherInfo>
        </Product>
      </Products>
    </ns1:Root>
  </InputMessagePart_0>
  <InputMessagePart_1>
    <ns2:Root xmlns:ns2="http://MappingLookupPattern.Schema2">
      <Products>
        <Product>
          <ID>123</ID>
          <Distributor>Distributor_0</Distributor>
        </Product>
      </Products>
    </ns2:Root>
  </InputMessagePart_1>
</ns0:Root>

Output:

<ns0:Root xmlns:ns0="http://MappingLookupPattern.Schema1">
  <Products>
    <Product>
      <ID>123</ID>
      <Manufacturer>Manufacturer_0</Manufacturer>
      <Distributor>Distributor_0</Distributor>
      <LotsOfOtherInfo>LotsOfOtherInfo</LotsOfOtherInfo>
    </Product>
  </Products>
</ns0:Root>

For those reading my previous post and saying that what we did there was nothing special, I believe some of you will have found this example slightly more interesting. It's a useful thing to be able to do in so many scenarios. And yes, I recognize that there are those of you who seldom use the mapper at all any more. I am still among those who think it has its benefits, especially since more often than not there are at least some junior members of the team who aren't comfortable with pure xslt, and there is really no reason in that scenario to take an otherwise quite simple map to pure xslt when all you need is a small scripting functoid to do the trick.

BizTalk mapper - Appending new records to existing (looping) records
10 December 08 10:37 PM | Johan Hedberg | 4 comment(s)

Do BizTalk Server maps have patterns? What are patterns? Wikipedia states that it's a pattern if

"...it happens 5 or more times..."

and that a design pattern in computer science

"...is a general reusable solution to a commonly occurring problem..."

To summarize, the solution that I'm going to present is certainly something that I've been up against a number of times, and it's something that's really quite useful, yet simple, and something that the BizTalk mapper does really well. It's the task of data concatenation, or appending data if you'd rather call it that. It's an implementation of the content enricher pattern.

Say that you have two schemas; for simplicity, let's call them Schema1 and Schema2. Both of them are based on a kind of name/value looping structure. What you would like to do is append the name/value pairs in Schema2 to the recurring record in Schema1 and produce a new, enriched message.

[Image: the two schemas, Schema1 and Schema2]

In Schema1, Record is repeating, and in Schema2, Row is repeating.

Now how do you combine them? The answer is... easily.

Now, to do this combining you have to use an orchestration, since you can't do a map with multiple inputs outside an orchestration. So we create an orchestration, get our messages created, and create a map that takes two inputs and one output - that is, we drop a Transform shape, double-click it and assign two input messages and one output, like so:

[Image: the Transform shape configuration with two input messages and one output]

After you open the map created, you will have two message parts under the root, like so:

[Image: the map with the two message parts under the root]

Now we can do the mapping:

[Image: the completed map]

As always, the best way to check that the map really does what you are after is to look at the xslt generated. I'm not going to paste the entire generated xslt here, but if we take it down to just the important part, it's this:

<ns0:Root>
  <Header />...
  <Records>
    <xsl:for-each select="InputMessagePart_0/ns0:Root/Records/Record" />...
    <xsl:for-each select="InputMessagePart_1/s0:Root/Rows/Row" />...
  </Records>
</ns0:Root>

The map we created will loop first over the Record records, and then over the Row records, and for each one found it will output (not shown) a Record with a Name and a Value.

Mission accomplished.

Just so there is no misunderstanding, here are sample documents. First, the xml sent in (the combination of two xml messages conforming to the two schemas):

<ns0:Root xmlns:ns0="http://schemas.microsoft.com/BizTalk/2003/aggschema">
  <InputMessagePart_0>
    <ns1:Root xmlns:ns1="http://MappingConcatenation.Schema1">
      <Header>
        <ID>ID_0</ID>
      </Header>
      <Records>
        <Record>
          <Name>Name_1</Name>
          <Value>Value_1</Value>
        </Record>
      </Records>
    </ns1:Root>
  </InputMessagePart_0>
  <InputMessagePart_1>
    <ns2:Root xmlns:ns2="http://MappingConcatenation.Schema2">
      <Rows>
        <Row>
          <Name>Name_2</Name>
          <Value>Value_2</Value>
        </Row>
      </Rows>
    </ns2:Root>
  </InputMessagePart_1>
</ns0:Root>


and then the result:

<ns0:Root xmlns:ns0="http://MappingConcatenation.Schema1">
  <Header>
    <ID>ID_0</ID>
  </Header>
  <Records>
    <Record>
      <Name>Name_1</Name>
      <Value>Value_1</Value>
    </Record>
    <Record>
      <Name>Name_2</Name>
      <Value>Value_2</Value>
    </Record>
  </Records>
</ns0:Root>

For those saying that what I've done is nothing special: you are quite right, it's not, but it is a very useful pattern.

BizTalk Server 2009 CTP goes public
08 December 08 07:12 PM | Johan Hedberg

Finally. Ever since being told a rough estimate of the date it was to be released I've been waiting for it to come. And now it has. Read the official word at Steve Martin's blog. Get it here. ESB Guidance 2.0 is here.

It took a while longer than I originally expected. Still, it's been available as a TAP for a while, and with it the details of the features contained within, so there is no real news and no surprises there, but it will be good to get your hands on it. And lots of folks are, and will of course be, blogging about the event as well, so there is really no point in iterating the features again.

When installing Windows Server 2008 to run BizTalk Server 2009 on your image, remember that you can configure it as a workstation; it's been blogged about here (don't miss the pdf download) and there is a KB article available as well.

How-to: Get status from a Windows service?
07 December 08 11:28 PM | Johan Hedberg

I was asked: "How can I call into a Windows service to get its status?"

A little more background: the status was to be retrieved by clients (a website) on the same machine.

My response was to use WCF with the NetNamedPipeBinding. Since the recipient of my response was, by his own admission, unaccustomed to WCF, I decided to write up a small app that showed the concept.

The key thing here is that the service exposes an endpoint at an address that looks like this: net.pipe://localhost/ServiceMonitoringEndpoints/<name_of_service>. The name of the service is the same as the ServiceName property of the ServiceInstaller component. This enables us to get the status of an arbitrary service just by knowing its name. First we look at the status through ServiceController and then, if the service is running, we call the WCF service to get the actual status, in this case in the form of a string. Pretty simple, wrapped up in a library component ready to be used by both Windows service servers and their clients. The Windows service server is configured through its config file in my sample, but that's something you can alter should that be one of your requirements.
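The contract itself isn't pasted in this post, but to make the moving parts clearer, here is a minimal sketch of what the shared library could look like. GetStatus and the static Status property are from the sample; the rest is standard WCF boilerplate, so treat the exact shape as an assumption:

using System.ServiceModel;

namespace SharedMonitoringUtilityLibrary
{
    // Contract exposed over net.pipe by each monitored Windows service
    [ServiceContract]
    public interface IMonitoringService
    {
        [OperationContract]
        string GetStatus();
    }

    // The Windows service sets Status (e.g. in OnStart/OnStop),
    // and clients read it back through GetStatus()
    public class MonitoringService : IMonitoringService
    {
        public static string Status { get; set; }

        public string GetStatus()
        {
            return Status;
        }
    }
}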

Basically the server configuration is (mex of course not really necessary):

<services>
  <service behaviorConfiguration="includeExBehavior"
      name="SharedMonitoringUtilityLibrary.MonitoringService">
    <endpoint address="" binding="netNamedPipeBinding" 
              contract="SharedMonitoringUtilityLibrary.IMonitoringService" />
    <endpoint address="mex" binding="mexNamedPipeBinding" 
              contract="IMetadataExchange" />
  </service>
</services>

and then it uses one of ServiceHost's constructors, the one that takes a type and a base address (host here is a ServiceHost field on the Windows service class):

protected override void OnStart(string[] args)
{
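    // The last segment of the base address matches this Windows service's ServiceName,
    // following the net.pipe://localhost/ServiceMonitoringEndpoints/<name_of_service> convention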
    host = new ServiceHost(typeof(SharedMonitoringUtilityLibrary.MonitoringService),
        new Uri("net.pipe://localhost/ServiceMonitoringEndpoints/MonitorMeWithWCFService_Service1/"));
    host.Open();
    MonitoringService.Status = "Started at " + DateTime.Now.ToString();
}
protected override void OnStop()
{
    if (host != null)
        host.Close();
}

As you can see from the code above, I'm simply using a static property to supply the status to the running WCF service. This particular point of interaction wasn't the primary point of this post or code. A better option could be to get at things from the other direction: use an overloaded ServiceHost constructor (the one that takes a singleton instance), or get the host from within an operation using OperationContext.Current.InstanceContext.Host. Another option might even be one where the server is a client to its own service, using a set method to set the status. What you don't want to do is make the service host-dependent. Just be aware that my choice above is clearly limiting, and that you have plenty of other options.
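For illustration, a rough sketch of the singleton variant could look like the following. Note that this is not part of the sample code: it assumes MonitoringService is marked with InstanceContextMode.Single and that Status is an instance property rather than the static one used above.

private SharedMonitoringUtilityLibrary.MonitoringService monitoringInstance;

protected override void OnStart(string[] args)
{
    // Hosting a singleton instance lets the Windows service update status directly
    // on the same object that WCF dispatches calls to. This requires
    // [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)] on MonitoringService.
    monitoringInstance = new SharedMonitoringUtilityLibrary.MonitoringService();
    host = new ServiceHost(monitoringInstance,
        new Uri("net.pipe://localhost/ServiceMonitoringEndpoints/MonitorMeWithWCFService_Service1/"));
    host.Open();
    monitoringInstance.Status = "Started at " + DateTime.Now.ToString();
}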

The client is dynamically configured to look for and call the service using the following method:

Binding binding = new NetNamedPipeBinding();
EndpointAddress endpoint = new EndpointAddress(
    string.Format(@"net.pipe://localhost/ServiceMonitoringEndpoints/{0}/", serviceName));
MonitoringServiceClient client = new MonitoringServiceClient(binding, endpoint);
statusInfo = client.GetStatus();

In the actual sample code this is surrounded by other things as well, like the call to ServiceController.
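For reference, a sketch of what such a library method might look like. The method name GetServiceStatus is mine, not from the sample, and the real code's error handling will differ:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceProcess;

// Check the Windows service through ServiceController first, and only
// call the WCF endpoint when the service is actually running.
public static string GetServiceStatus(string serviceName)
{
    ServiceController controller = new ServiceController(serviceName);
    if (controller.Status != ServiceControllerStatus.Running)
        return "Service is " + controller.Status.ToString();

    Binding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(
        string.Format(@"net.pipe://localhost/ServiceMonitoringEndpoints/{0}/", serviceName));
    MonitoringServiceClient client = new MonitoringServiceClient(binding, endpoint);
    return client.GetStatus();
}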

Here is the sample code.

It contains three projects: the Windows service server, the library and a simple console client. Compile the sample code, install the service using installutil and run the client. Once you've validated that it's running, you can begin expanding it.

Additional Resources:
Hosting WCF Service in Windows Service screencast
Hosting and consuming WCF services
NamedPipe activation

Resource availability for standard platforms (like BizTalk Server) is insufficient
07 December 08 06:38 PM | Johan Hedberg

The papers in Sweden reported last week that resources with expert-level knowledge of standard platforms and systems are hard to come by. In summary, they say that it's an emerging trend in Sweden to use standard platforms, but the fact that resources are hard to come by is pushing resource prices up. The majority still considers it to be worth it, because of the wins they foresee.

It's not a moment too soon. And I'm not saying that because I happen to be one of those scarce resources that will (supposedly) be high priced (in fact, if anything, I see prices being pushed down due to the economy). The reason I think it's about time is that too many companies have for too long been all about the "not built here" attitude, in combination with the "that doesn't measure up to our exact standards, therefore it can't be any good" point of view. Standard platforms are better. They incur less overhead and less time spent on plumbing. We see that more and more with different technologies emerging; Microsoft and others are moving in to alleviate the burden of plumbing from developers. WCF is an excellent example of this. Standard platforms are another. BizTalk Server certainly contains a lot of functionality that would otherwise be quite complex and time-consuming to build. Unfortunately not everyone is following the trend; just the other day I heard of a large A-listed company that is going against it instead...

But of course, for standard platforms to be successful, it demands people who know how to utilize them, or the end solution will turn out nearly as complex as without them, and far from standardized. But please, look at standard platforms before looking to develop things yourself. Don't reinvent the wheel!

The article, (in Swedish) is here: http://www.idg.se/2.1085/1.194781/stor-brist-pa-experter-pa-standardsystem

Filed under:
