
How to make a #REST-ish API with #WCF

Where I work we integrate a lot with external partners, and sometimes they dictate the conditions. At the same time we have our own technology, architecture and environment to deal with, and we solve it as well as we can. This put us in a bit of a spot recently when we were tasked with “make a REST API that conforms to this document”. We use WCF. And then there was headache.

But since WCF is carrier agnostic, this shouldn’t be any problem, right?! You can deliver via HTTP, TCP, you name it. It’s just the flick of a switch in some config. Turns out it wasn’t quite that easy. The document stated that the request would be sent with content-type “application/json”, that it would be a POST request and, furthermore, that the actual JSON request would be a payload in the BODY of the message. WCF doesn’t give you access to the body, since it is by design protocol agnostic and not all protocols have a body, or they differ from each other.

This is all very well documented if you enter the correct search terms. Read more on both the problem and solution here, here (this is where most paths lead) and here.

This post is mostly a note to self for the future. Now that I’ve gone through it I know I did it once before, but I had forgotten all about it until it was right there in front of me.

While it all sounds complicated it really isn’t. It’s as easy as 1-2-3, like David points out in his blog post on the subject.

1. Add a new WebContentTypeMapper, like so:

public class RawContentTypeMapper : WebContentTypeMapper
{
  public override WebContentFormat GetMessageFormatForContentType(string contentType)
  {
    //Make if-else statements here if you want to respond differently to different contentTypes. This one will always return Raw regardless of what you attack it with.
    return WebContentFormat.Raw;
  }
}

2. Make your new content type mapper available for use:

<bindings>
  <customBinding>
    <binding name="RawMapper">
      <webMessageEncoding webContentTypeMapperType="Your.Namespace.RawContentTypeMapper, Your.Assembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <!-- A custom binding also needs a transport element; httpTransport goes last -->
      <httpTransport manualAddressing="true" />
    </binding>
  </customBinding>
</bindings>

3. Put the new available content type mapper to use:

<endpoint contract="MyContract" behaviorConfiguration="MyEndpointBehavior" binding="customBinding" bindingConfiguration="RawMapper" />
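
The endpoint above references a behaviorConfiguration that isn’t shown here. Assuming a standard WCF REST setup with WebInvoke/UriTemplate dispatch, it would typically be a webHttp endpoint behavior, something along these lines:

<behaviors>
  <endpointBehaviors>
    <behavior name="MyEndpointBehavior">
      <webHttp />
    </behavior>
  </endpointBehaviors>
</behaviors>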

(4.) When working with a Raw content type, your usual way of working with WCF is put slightly out of order. Instead of using your standard data contracts, your method needs to take a Stream. After that you are free to work as you like. Here is what we did.

public string Check(Stream request)
{
  var json = new StreamReader(request, Encoding.UTF8).ReadToEnd();
  var internalRequest = JsonConvert.DeserializeObject(json);
  var querystring = HttpContext.Current.Request.QueryString; //perhaps some additional data in here?

  //Do stuff

  return string.Empty;
}
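
For completeness, here is a sketch of how the Check operation might be declared on the contract. The contract itself isn’t shown in this post, so the interface name and the WebInvoke values are my assumptions:

using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IMyContract
{
  //The body arrives as a raw Stream because the content type mapper above maps everything to WebContentFormat.Raw.
  [OperationContract]
  [WebInvoke(Method = "POST", UriTemplate = "/api/check")]
  string Check(Stream request);
}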

The above will give you everything you wanted from the start. Almost.

Now you are thinking “NICE! And then I just use UriTemplate(…) to get a decent way of addressing the methods!”. Well, not quite. The URL won’t turn out like http://mydomain.ext/api/check, but rather http://mydomain.ext/path/endpoint.svc/api/check. While I haven’t tried it, I’d go with some URL rewriting in IIS or something, along the lines of the sketch below. Perhaps you have a load balancer that can do it for you, or some other mechanism.
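
A minimal sketch of what such a rewrite rule could look like in web.config, assuming the IIS URL Rewrite module is installed and that the service actually lives at path/endpoint.svc (both assumptions on my part):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="FriendlyApiUrl" stopProcessing="true">
        <!-- Anything under /api/... is rewritten to the .svc endpoint -->
        <match url="^api/(.*)$" />
        <action type="Rewrite" url="path/endpoint.svc/api/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>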


Alert general!

At work we recently started decoupling our services a bit more than previously. It is an older system which has been around for a few years and hasn’t gotten much love. We quite quickly identified that the work was being done on the same thread as the request, synchronously, so the user had to wait for all of it to complete before being released into freedom again.

Grabbing the beast by the horns, my colleague started looking into and learning NServiceBus, which we already use at the office for other systems. This way we could offload the work from IIS and let the user continue working, instead of waiting half an hour for an import to finish or a spritemap to completely regenerate. On the other hand, the user no longer got any feedback on when the job finished, or whether it finished at all.

Now what…? Signal it! RRRR!

This signalling/notification system can be achieved quite easily and nicely with today’s techniques. Since we are more or less bound to the Microsoft platform, we went with SignalR to solve it.

Now, out of the box SignalR doesn’t scale very well. But it does have a nice scale-out mechanism that works via a backplane, which can be built on almost any technology you choose. Support ships for SQL Server, Azure Service Bus and Redis. We already have SQL Server in place, so the choice was made for us. This way any message sent via SignalR is also sent to the backplane, which all the other servers listen to so they can publish the messages to their own connected clients.

With this in place we could handle our environment of load balancing and failover. The last piece is the worker service described at the top, which wants to send an update when it finishes, or if it fails. This is achieved by adding it as just another client to the equation.

To the code!

This example uses the canonical SignalR chat example. There are two web apps which could run on separate machines, and a console app which acts as the worker service and connects to one of the web apps as a client.

There is actually extremely little code involved to achieve this. What you do is this:

  1. Add both the SignalR and SignalR SQL Server backplane packages (Microsoft.AspNet.SignalR and Microsoft.AspNet.SignalR.SqlServer) to your web apps from NuGet.
  2. Set up the backplane like so in the Startup class for each web app:
            public class Startup
            {
                public void Configuration(IAppBuilder app)
                {
                    //Point SignalR at the SQL Server backplane before mapping the hubs.
                    GlobalHost.DependencyResolver.UseSqlServer(ConnectionStringProvider.SignalRPubSub());
                    app.MapSignalR();
                }
            }

    Notice that the backplane itself will set up all the required tables and stuff, so all you need to do is provide a connection string and a database.

  3. Make sure to add the SignalR client package (Microsoft.AspNet.SignalR.Client) to the worker service, the one that will “call in” with additional messages.
  4. Then calling in is as easy as connecting to one of the web apps (it doesn’t matter which one), creating a local hub proxy with the same name as the hub you want to communicate with and, when you want to send a message, invoking the server-side method of the hub (a minimal sketch of a matching hub follows after this list):
            class Program
            {
                static void Main(string[] args)
                {
                    const string name = "Console";
                    var conn = new HubConnection("http://localhost:55566/");
                    var proxy = conn.CreateHubProxy("chatter");

                    //Connect to one of the web apps and announce ourselves.
                    conn.Start().Wait();
                    proxy.Invoke("Notify", name, conn.ConnectionId);

                    //Every line typed in the console is pushed to the hub, which fans it out to all web clients via the backplane.
                    string message;
                    while ((message = Console.ReadLine()) != null)
                    {
                        proxy.Invoke("Send", name, message).Wait();
                    }
                }
            }

  5. Then… done!
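
The hub itself isn’t shown above. In the demo the console app invokes “Notify” and “Send” on a hub named “chatter”; a minimal hub that would match those calls could look like this (the method bodies and the client-side callback names are my assumptions, not taken from the repo):

using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

[HubName("chatter")]
public class ChatterHub : Hub
{
  //Called by the console worker when it connects, so the web clients know it is there.
  public void Notify(string name, string connectionId)
  {
    Clients.All.notified(name, connectionId);
  }

  //Broadcasts a message to every connected client on every web app, travelling over the SQL Server backplane between machines.
  public void Send(string name, string message)
  {
    Clients.All.addMessage(name, message);
  }
}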

The entire working code is available on GitHub for your pleasure. There is however some irrelevant code committed which fetches history. It isn’t finished in this example, but it doesn’t affect the demo, so I just left it there. I might fix it in the future. What is left to do is to read the data back from a byte array in the correct order.

Further reading
