<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[piotrwalat.net]]></title><description><![CDATA[piotrwalat.net]]></description><link>http://piotrwalat.net/</link><generator>Ghost 0.11</generator><lastBuildDate>Fri, 04 Oct 2019 19:23:51 GMT</lastBuildDate><atom:link href="http://piotrwalat.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[CLR Diagnostics with ClrMD and ScriptCS REPL - ScriptCS.ClrDiagnostics]]></title><description><![CDATA[<p>WinDbg+SOS  has been used by .NET developers for years. It is a very powerful profiling/analysis tool that unfortunately is quite hard to use and exposes native-only API. By releasing ClrMD  library Microsoft makes CLR heap memory inspection accessible to regular C# developers and enables them to write customized</p>]]></description><link>http://piotrwalat.net/clr-diagnostics-with-clrmd-and-scriptcs-repl-scriptcs-clrdiagnostics/</link><guid isPermaLink="false">41d2c33b-3d91-4fa0-9ab7-cd5431211c68</guid><category><![CDATA[C#]]></category><category><![CDATA[CLR]]></category><category><![CDATA[CLR heap]]></category><category><![CDATA[debugging]]></category><category><![CDATA[profiling]]></category><category><![CDATA[REPL]]></category><category><![CDATA[scriptcs]]></category><category><![CDATA[SOS]]></category><category><![CDATA[windbg]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Mon, 13 May 2013 20:00:00 GMT</pubDate><content:encoded><![CDATA[<p>WinDbg+SOS  has been used by .NET developers for years. It is a very powerful profiling/analysis tool that unfortunately is quite hard to use and exposes native-only API. 
By releasing the ClrMD library, Microsoft has made CLR heap memory inspection accessible to regular C# developers and enabled them to write customized profiling tools. I have created a simple ScriptCS script pack that allows for interactive debugging in the REPL. Now we can use both C# and some SOS features in the same console :). More after the break.</p>

<!--more-->  

<h3>Microsoft.Diagnostics.Runtime</h3>  

<p>Not that long ago the .NET Runtime team <a href="http://blogs.msdn.com/b/dotnet/archive/2013/05/01/net-crash-dump-and-live-process-inspection.aspx">announced</a> the beta release of the <a href="https://nuget.org/packages/Microsoft.Diagnostics.Runtime">Microsoft.Diagnostics.Runtime component</a> (aka ClrMD). <br>
The package delivers a managed API for .NET process and crash dump inspection, similar to the <a href="http://msdn.microsoft.com/en-us/library/bb190764.aspx">SOS Debugging Extensions</a>. If you are a seasoned Windows programmer you have probably used WinDbg a few times, for example to track down and identify logical memory leaks in your applications. <br>
ClrMD simply exposes some of those capabilities as an API. That is a big deal for a couple of reasons:  </p>

<ul>  
    <li>Makes memory profiling and process analysis much easier for regular C#/.NET developers,</li>
    <li>Allows people and businesses to write custom diagnostic tools tailored for their needs.</li>
</ul>  

<p>Remember that ClrMD is still in beta phase and also that attaching and debugging is an invasive process (don't run it on production servers).</p>

<p>For more in depth introduction to the topic please read the <a href="http://blogs.msdn.com/b/dotnet/archive/2013/05/01/net-crash-dump-and-live-process-inspection.aspx">original blog post</a>  </p>

<h3>ScriptCS and REPL</h3>  

<p><a href="http://scriptcs.net/">ScriptCS</a> is a cool project started by <a href="https://twitter.com/gblock">@gblock</a> and inspired by <a href="https://twitter.com/filip_woj">@filip_woj</a> that uses Roslyn and NuGet to make C# scripting easy (no .csprojs required). It has been getting a lot of traction in the community recently - it is definitely one of my favourite initiatives in the C# world, putting Roslyn to great use.</p>

<p>Just recently Glenn added initial REPL (<a href="http://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop">Read-eval-print loop</a>) support to the project meaning that now you can use C# in an interactive shell, kind of like a JavaScript console.  </p>

<h3>ClrMD + ScriptCS = ScriptCS.ClrDiagnostics</h3>  

<p>Based on the above I've created a script pack that brings ClrMD into ScriptCS and aims to provide an interactive CLR diagnostics environment under the ScriptCS REPL. <br>
It is available on <a href="https://github.com/pwalat/ScriptCs.ClrDiagnostics">GitHub</a> and <a href="http://nuget.org/packages/ScriptCs.ClrDiagnostics/0.1.0-beta1">NuGet</a> as <a href="https://github.com/pwalat/ScriptCs.ClrDiagnostics">ScriptCS.ClrDiagnostics</a></p>

<p>Here is how you can use it:  </p>

<ul>  
    <li>Install ScriptCS from the nightly builds (this is needed due to a bug in the current version that prevents installing prerelease packages): <code>cinst scriptcs -pre -source <a href="https://www.myget.org/F/scriptcsnightly/">https://www.myget.org/F/scriptcsnightly/</a></code>. Alternatively you can build from source.</li>
    <li>Install ScriptCS.ClrDiagnostics: <code>scriptcs -install ScriptCs.ClrDiagnostics -pre</code></li>
    <li>Launch ScriptCS in REPL mode (you may need to get latest version for that): <code>scriptcs.exe</code></li>
    <li>You should be able to load the script pack and attach to a process like this:</li>
</ul>  

<pre>// Load ClrDiag object
&gt; var c = Require&lt;ClrDiag&gt;();

// Attach to process 
&gt; c.Attach(6152)
Attaching to process PID=6152
Using CLR Version=v4.0.30319.18033 DACFileName=mscordacwks_amd64_Amd64_4.0.30319.18033.dll
Successfully attached to process PID=6152 Name=WpfApplication2</pre>

<p>You can also use the process name:  </p>

<pre>c.Attach("MyApplication")</pre>

<p>Check the current state:  </p>

<pre>&gt; c.IsAttached 
True         

&gt; c.Process.WorkingSet64
51265536</pre>

<p>The ClrMD API is also available directly, e.g. <em>ClrRuntime</em>:  </p>

<pre>&gt; c.Clr.Threads.Count
2</pre>

<p>The script pack also provides some helpers that should make analysis easier:  </p>

<pre>&gt; c.PrintTypes("System.");
Total size               Count   Name
167.82KB                 398     System.Object[]
143.29KB                 2404    System.String
126.98KB                 2385    System.Delegate[]</pre>

<p>PrintTypes() simply outputs all the types on the managed heaps sorted by total memory consumed. It lets you optionally filter by type name and limit the returned result set.</p>
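The aggregation behind PrintTypes - group heap objects by type, sum their sizes, sort descending - can be sketched in plain C#. The (typeName, size) pairs below are made-up stand-ins; the real script pack walks the managed heap via ClrMD rather than an in-memory list:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class HeapSummary
{
    // Each input tuple stands in for one object on the managed heap:
    // its type name and its size in bytes (hypothetical data).
    public static IEnumerable<(string Name, int Count, long TotalSize)> Summarize(
        IEnumerable<(string TypeName, long Size)> objects, string filter = null)
    {
        return objects
            .Where(o => filter == null || o.TypeName.StartsWith(filter))  // optional name filter
            .GroupBy(o => o.TypeName)
            .Select(g => (Name: g.Key, Count: g.Count(), TotalSize: g.Sum(o => o.Size)))
            .OrderByDescending(t => t.TotalSize);                          // biggest consumers first
    }
}
```

The same shape of output as the table above: one row per type, ordered by total bytes consumed.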

<p>You can also display each thread's call stack using PrintStackTrace():  </p>

<pre>&gt; c.PrintStackTrace()                                                                                                                                     
Stacktrace for ThreadId=3200                                                                                                                              
           0       A9E9B8 InlinedCallFrame                                                                                                                
           0       A9E9B8 InlinedCallFrame                                                                                                                
 7F92AF92093       A9E990 DomainBoundILStubClass.IL_STUB_PInvoke(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
 7F92AF87640       A9EA60 System.Windows.Threading.Dispatcher.GetMessage(System.Windows.Interop.MSG ByRef, IntPtr, Int32, Int32)                          
 7F92AF85E9E       A9EB20 System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame)                                     
 7F9272172DA       A9EBC0 System.Windows.Application.RunInternal(System.Windows.Window)                                                                   
 7F927216BD7       A9EC60 System.Windows.Application.Run()                                                                                                
 7F8E3700107       A9ECA0 WpfApplication2.App.Main()</pre>

<p>Or for a single thread, using its index:  </p>

<pre>&gt; c.PrintStackTrace(0)                                                                                                                                     
Stacktrace for ThreadId=3200                                                                                                                              
           0       A9E9B8 InlinedCallFrame                                                                                                                
           0       A9E9B8 InlinedCallFrame                                                                                                                
 7F92AF92093       A9E990 DomainBoundILStubClass.IL_STUB_PInvoke(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)</pre>

<p>After you are done, detach from the process like this:  </p>

<pre>&gt; c.Detach()                                                    
Successfully detached from process PID=6152 Name=WpfApplication2</pre>

<h3>Troubleshooting</h3>  

<p>ScriptCs.ClrDiagnostics also lets you specify the DAC file (<em>mscordacwks.dll</em>) location while attaching:  </p>

<pre>&gt; c.Attach(6152, @"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\mscordacwks.dll");</pre>

<p>If you encounter a StackOverflowException being thrown in the REPL console, it is caused by the JSV serialization recently added to ScriptCS. You may need to wait for the fix or manually apply a simple workaround (use a plain .ToString() in place of object serialization to JSV).  </p>

<h3>Summary</h3>  

<p>The project was created as an experiment, but it shows what great capabilities the ScriptCS REPL has.</p>

<p>Oh, and you can also do this:  </p>

<pre>&gt; c.Play().ImperialMarch();</pre>]]></content:encoded></item><item><title><![CDATA[Building reactive XAML apps with ASP.NET SignalR and MVVM]]></title><description><![CDATA[<p>A great portion of mobile applications consumes data from HTTP services. This is usually achieved as a pull scenario in which apps initiate the data flow from the server. In many cases <em>pushing</em> data to the client is a more natural and potentially much better solution. In this blog post</p>]]></description><link>http://piotrwalat.net/building-reactive-xaml-apps-with-asp-net-signalr-and-mvvm/</link><guid isPermaLink="false">09fac154-babe-456c-a4ed-b37e48e48cbf</guid><category><![CDATA[ASP.NET SignalR]]></category><category><![CDATA[ASP.NET]]></category><category><![CDATA[MVVM]]></category><category><![CDATA[mvvm light]]></category><category><![CDATA[reactive applications]]></category><category><![CDATA[reactive programming]]></category><category><![CDATA[signalr]]></category><category><![CDATA[Windows Phone 8]]></category><category><![CDATA[XAML]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Wed, 08 May 2013 09:52:34 GMT</pubDate><content:encoded><![CDATA[<p>A great portion of mobile applications consumes data from HTTP services. This is usually achieved as a pull scenario in which apps initiate the data flow from the server. In many cases <em>pushing</em> data to the client is a more natural and potentially much better solution. In this blog post I will explore how ASP.NET SignalR can help XAML developers simplify the task of creating and consuming push services over HTTP. I will also show how to leverage MVVM pattern to create a user experience that is driven by incoming data. The example will be built for Windows Phone 8 and will use MVVM Light library.</p>

<!--more-->  

<h3>What do I mean by <em>'reactive app'</em>?</h3>  

<p>MVVM and data binding are by their very nature reactive. Any change made to the view model should automatically propagate to the data-bound view. That being said, in many scenarios the true source of such a change is the model residing on a remote server (e.g. because a new entity has been added). Very often we want to <em>push</em> this change across the wire so that it triggers the relevant view updates. This is where SignalR can be leveraged - it helps us extend 'reactivity' beyond the network boundary.</p>

<p>In traditional scenarios apps that consume HTTP services will usually pull data from the server. This may happen on demand - e.g. when opening a new screen that presents a list of customers - or at regular intervals to check for updates. There is nothing inherently bad about this approach and it suits a large number of cases. Sometimes, however, constant polling for changes is undesirable and can be suboptimal from a performance perspective. Consider for example a financial app that should provide real-time updates to the user (e.g. commodity quotes) - in this case it feels natural to have data pushed to us from the service as new updates arrive.</p>

<p>From a technical point of view SignalR may still use polling, but from a logical point of view it creates an abstraction over the actual mechanism (be it long polling, WebSockets or anything else), making our apps independent of it. This is a huge benefit.</p>

<p>Let's see how we can use SignalR to write a simple 'financial' app.  </p>

<h3>Push service with ASP.NET SignalR</h3>  

<p>We will use the SignalR <em>hubs</em> feature to create a push service. Create an <em>Empty ASP.NET Web Application</em> in Visual Studio and install or reference SignalR via NuGet.  </p>

<pre>Install-Package Microsoft.AspNet.SignalR.SystemWeb -Version 1.0.1</pre>

<p>Then map the hub routes at application startup:  </p>

<pre>// Global.asax.cs
protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.MapHubs();
}</pre>

<p>The example will operate on a simple model - a Quote class representing an individual 'quote' (e.g. a currency exchange rate).  </p>

<pre>public class Quote
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public decimal PriceChange { get; set; }
}</pre>

<p>The hub will simply push quote updates to all interested parties.  </p>

<pre>[HubName("Quote")]
public class QuoteHub : Hub
{
    public void UpdateQuote(Quote quote)
    {
        Clients.All.updateQuote(quote);
    }
}</pre>

<p>For the sake of the example we will generate some random data and push quote updates at regular intervals. This code is not that important, but may help if you want an example of how to generate sample data in SignalR projects.  </p>

<pre>internal class DataStreamThread
{
    private const int UpdateIntervalInMilliseconds = 4000;

    private object randomLock = new object();

    private Dictionary&lt;string, decimal&gt; quotes 
                        = new Dictionary&lt;string, decimal&gt;()
                              {
                                {"EUR/USD", 1.320m},
                                {"GBP/USD", 1.550m},
                                {"AUD/USD", 1.032m},
                                {"PLN/USD", 0.3168m},
                                {"USD/JPY", 99.045m},
                                {"Gold", 1467.2m},
                                {"Silver", 24.02m},
                              };

    public void Start()
    {
        Random random = new Random();

        int counter = 0;
        foreach (KeyValuePair&lt;string, decimal&gt; quote in quotes)
        {
            counter++;
            int id = counter;
            ThreadPool.QueueUserWorkItem(_ =&gt;
            {
                var hubContext = GlobalHost
                        .ConnectionManager.GetHubContext&lt;QuoteHub&gt;();

                Quote q = new Quote()
                {
                    Id = id,
                    Name = quote.Key,
                    Price = quote.Value, 
                    PriceChange = 0,
                };

                while (true) //sic
                {
                    double randomChange;
                    lock(randomLock)
                    {
                        randomChange = (random.NextDouble() - 0.42) / 100;
                    }
                    decimal change = Math
                    .Round((decimal)randomChange * q.Price, 6);
                    q.Price += change;
                    q.PriceChange = change;
                    try
                    {
                        hubContext.Clients.All.updateQuote(q);
                    }
                    catch (Exception ex)
                    {
                        System.Diagnostics
                        .Trace.TraceError("Error thrown while updating clients: {0}", ex);
                    }
                    var intervalAdj = random.Next(-700, 700);
                    Thread.Sleep(UpdateIntervalInMilliseconds + intervalAdj);
                }
            });
        }
    }
}</pre>

<p>That's all we need on the server side. The most important line - <em>hubContext.Clients.All.updateQuote(q);</em> will push sample data to all hub clients.  </p>

<h3>Windows Phone 8 XAML application</h3>  

<p>As a next step let's write a XAML client that will present this data. I will use a Windows Phone 8 app as an example, but Windows 8 (or WPF and Silverlight) will be conceptually similar.</p>

<p>Here is how we want it to look. The quotes should update automatically, ideally with some kind of subtle animation.  </p>

<p style="text-align: center;"><a href="http://www.piotrwalat.net/wp-content/uploads/2013/05/Sample1.png"><img class="aligncenter  wp-image-1465" alt="Sample" src="http://www.piotrwalat.net/wp-content/uploads/2013/05/Sample1.png" width="291" height="527"></a></p>  

<p>Start by adding a new Windows Phone 8 XAML application and installing MVVM Light via NuGet. Instead of reusing the <em>Quote</em> class (by adding it as a link to the WP8 project) we will create a separate model on the client side in order to implement <em>INotifyPropertyChanged</em>.  </p>

<pre>public class Quote : INotifyPropertyChanged
{
    private string _name;
    private decimal _price;
    private decimal _priceChange;

    public int Id { get; set; }

    public string Name
    {
        get { return _name; }
        set
        {
            if (value == _name) return;
            _name = value;
            OnPropertyChanged();
        }
    }

    public decimal Price
    {
        get { return _price; }
        set
        {
            if (value == _price) return;
            _price = value;
            OnPropertyChanged();
        }
    }

    public decimal PriceChange
    {
        get { return _priceChange; }
        set
        {
            if (value == _priceChange) return;
            _priceChange = value;
            OnPropertyChanged();
            OnPropertyChanged("ChangePercentage");
        }
    }

    public double ChangePercentage
    {
        get 
          { 
            return (double) (_priceChange / (_price - _priceChange))
                                                             * 100; 
          }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, 
            new PropertyChangedEventArgs(propertyName));
    }
}</pre>
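Note that ChangePercentage reports the change relative to the previous price, which is <em>_price - _priceChange</em>. A standalone check of the same formula, with made-up numbers (a quote moving from 1.300 to 1.320 is a 0.020 / 1.300 ≈ 1.54% move):

```csharp
using System;

static class QuoteMath
{
    // Same formula as Quote.ChangePercentage: the percentage move
    // relative to the previous price (price - change).
    // Assumes the previous price is non-zero (price != priceChange).
    public static double ChangePercentage(decimal price, decimal priceChange)
        => (double)(priceChange / (price - priceChange)) * 100;
}
```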

<p>Now let's think about how our view model(s) should receive update notifications from the SignalR hub. We could open the connection directly in the view model, get a hub proxy instance and subscribe to the updateQuote event. Something like this:  </p>

<pre>public MainViewModel()
{
    _connection = new HubConnection(EndpointAddress);
    _hubProxy = _connection.CreateHubProxy(StockHubName);
    _hubProxy.On&lt;Quote&gt;("updateQuote", q =&gt; UpdateQuoteHandler(q));

    _connection.Start();
}</pre>

<p>This would work and maybe it is not that bad, but... setting all of this up isn't something our view model should be concerned with. What it really cares about are data updates and connection state changes; how they happen and what the underlying technology is should be owned by another class. <br>
Also, if we had more than one view model using the same hub (e.g. a list-&gt;detail scenario) we would need to duplicate the code and connections.</p>

<p>Let's use MVVM Light's messaging capabilities to propagate updates in a decoupled way (we could also have used an interface that exposes events). <br>
This does mean introducing more abstraction and complexity, so bear that in mind, especially when working on small applications.  </p>

<pre>public class ConnectionStateChangedMessage
{
    public ConnectionState OldState { get; set; }
    public ConnectionState NewState { get; set; }
}

public class QuoteUpdatedMessage
{
    public Quote Quote { get; set; }
}</pre>
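The decoupling rests on type-keyed publish/subscribe: senders and receivers only share the message types above, never each other. A toy messenger sketching that idea (the sample actually uses MVVM Light's IMessenger, which adds token scoping, unregistration and more):

```csharp
using System;
using System.Collections.Generic;

class ToyMessenger
{
    // Handlers are keyed by message type; Send<T> fans the message
    // out to every handler registered for T.
    private readonly Dictionary<Type, List<Delegate>> _handlers
        = new Dictionary<Type, List<Delegate>>();

    public void Register<T>(Action<T> handler)
    {
        if (!_handlers.TryGetValue(typeof(T), out var list))
            _handlers[typeof(T)] = list = new List<Delegate>();
        list.Add(handler);
    }

    public void Send<T>(T message)
    {
        if (_handlers.TryGetValue(typeof(T), out var list))
            foreach (Action<T> handler in list)
                handler(message);
    }
}
```

A view model would Register&lt;QuoteUpdatedMessage&gt;, the data provider would Send it; neither holds a reference to the other.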

<p>Now we need a class that will connect to SignalR hub and push the updates.  </p>

<pre>public interface IDispatcher
{
    void Dispatch(Action action);
}

public interface IConnectedDataProvider
{
    Task StartAsync();
    void Stop();
}

public class SignalRDataProvider : IConnectedDataProvider
{
    private HubConnection _connection;
    private IHubProxy _hubProxy;

    private const string EndpointAddress 
            = "http://192.168.1.18/Piotr.XamlSignalR.Service/";
    private const string StockHubName = "Quote";
    private const string QuoteUpdateName = "updateQuote";

    private readonly IMessenger _messenger;
    private readonly IDispatcher _dispatcher;

    public SignalRDataProvider(IMessenger messenger, IDispatcher dispatcher)
    {
        _messenger = messenger;
        _dispatcher = dispatcher;
        _connection = new HubConnection(EndpointAddress);
        _hubProxy = _connection.CreateHubProxy(StockHubName);
        _hubProxy.On&lt;Quote&gt;(QuoteUpdateName, p =&gt; _dispatcher
                .Dispatch(() =&gt; UpdateQuote(p)));

        _connection.StateChanged += _connection_StateChanged;
    }

    void _connection_StateChanged(StateChange stateChange)
    {
        ConnectionState oldState = ConnectionStateConverter
            .ToConnectionState(stateChange.OldState);
        ConnectionState newState = ConnectionStateConverter
            .ToConnectionState(stateChange.NewState);

        var msg = new ConnectionStateChangedMessage()
        {
            NewState = newState,
            OldState = oldState,
        };

        _dispatcher.Dispatch(() =&gt; _messenger
            .Send&lt;ConnectionStateChangedMessage&gt;(msg));
    }

    public Task StartAsync()
    {
        return _connection.Start();
    }

    private void UpdateQuote(Quote quote)
    {
        var msg = new QuoteUpdatedMessage()
        {
            Quote = quote
        };
        _messenger.Send&lt;QuoteUpdatedMessage&gt;(msg);
    }

    public void Stop()
    {
        _connection.Stop();
    }
}</pre>

<p>Please forgive the interface/class names ;)</p>

<p>The rationale behind the <em>IDispatcher</em> interface is to allow platform-specific implementations (WP8, Windows 8, WPF, etc.). The SignalR action handler can fire on a non-UI thread, and we need to be able to marshal execution to the UI thread. <br>
Oh, and remember - the WP8 emulator runs as a virtual machine, so make sure you don't use <em>localhost</em> as the endpoint address.</p>

<p>Now we can instantiate this class (e.g. in Application object) and start the connection.  </p>

<pre>private readonly IConnectedDataProvider _dataProvider 
    = new SignalRDataProvider(Messenger.Default, new PhoneDispatcher());

///...

private async void Application_Launching(object sender, LaunchingEventArgs e)
{
    await _dataProvider.StartAsync();
}

private void Application_Closing(object sender, ClosingEventArgs e)
{
    _dataProvider.Stop();
}

///...</pre>

<p>This ensures that update events are sent and can be consumed by the view model... which leads us to the view model itself.  </p>

<pre>public class MainViewModel : ViewModelBase
{
    private ConnectionState _connectionState;
    public ConnectionState ConnectionState
    {
        get { return _connectionState; }
        set
        {
            if (_connectionState == value) return;
            _connectionState = value;
            RaisePropertyChanged("ConnectionState");
            RaisePropertyChanged("IsConnected");
        }
    }

    public bool IsConnected
    {
        get { return ConnectionState == ConnectionState.Connected; }
    }

    public MainViewModel()
    {
        Items = new ObservableCollection&lt;Quote&gt;();

        MessengerInstance
            .Register&lt;QuoteUpdatedMessage&gt;(this, UpdateQuoteHandler);
        MessengerInstance
            .Register&lt;ConnectionStateChangedMessage&gt;(this,
                            ConnectionStateChangedHandler);
    }

    private void ConnectionStateChangedHandler(ConnectionStateChangedMessage msg)
    {
        ConnectionState = msg.NewState;
    }

    private void UpdateQuoteHandler(QuoteUpdatedMessage msg)
    {
        var quote = msg.Quote;
        var match = Items.FirstOrDefault(q =&gt; q.Name == quote.Name);
        if (match != null)
        {
            match.Price = quote.Price;
            match.PriceChange = quote.PriceChange;
        }
        else
        {
            Items.Add(quote);
        }
    }

    public ObservableCollection&lt;Quote&gt; Items { get; private set; }

    public override void Cleanup()
    {
        MessengerInstance.Unregister(this);
        base.Cleanup();
    }
}</pre>
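The upsert in UpdateQuoteHandler - update the matching quote in place, otherwise append - can be exercised in isolation. The Quote class here is a minimal stand-in for the INotifyPropertyChanged model shown earlier:

```csharp
using System.Collections.ObjectModel;
using System.Linq;

class Quote
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public decimal PriceChange { get; set; }
}

static class QuoteUpserter
{
    // Same merge strategy as MainViewModel.UpdateQuoteHandler:
    // mutate the existing item (so data binding picks up the change)
    // rather than replacing it, and append unknown quotes.
    public static void Upsert(ObservableCollection<Quote> items, Quote quote)
    {
        var match = items.FirstOrDefault(q => q.Name == quote.Name);
        if (match != null)
        {
            match.Price = quote.Price;
            match.PriceChange = quote.PriceChange;
        }
        else
        {
            items.Add(quote);
        }
    }
}
```

Mutating the matched item rather than replacing it is what keeps the per-property change notifications (and the visual state animations) working.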

<p>The view model is quite simple and hopefully readable. <em>MessengerInstance</em> is a property provided by <em>ViewModelBase</em> and can be mocked in unit tests. Now we need to write a XAML view that presents the data to the user. For brevity I will only include the most important parts here; you can have a look at the full markup in the source code provided.  </p>

<pre>        &lt;ListBox x:Name="QuoteList" Opacity="0.5" Margin="0,0,-12,0"
            ItemsSource="{Binding Source={StaticResource ItemsViewSource}}"&gt;

            &lt;ListBox.ItemTemplate&gt;
                &lt;DataTemplate&gt;

                    &lt;Grid x:Name="TemplateContainer"&gt;
                        &lt;!-- (...) --&gt;
                        &lt;i:Interaction.Triggers&gt;
                            &lt;ec:DataTrigger Binding="{Binding PriceChange, Converter={StaticResource DecimalToDoubleConverter}}" Comparison="GreaterThan" Value="0.0"&gt;
                                &lt;ec:GoToStateAction StateName="Up" TargetObject="{Binding ElementName=TemplateContainer}"/&gt;
                            &lt;/ec:DataTrigger&gt;
                            &lt;ec:DataTrigger Binding="{Binding PriceChange, Converter={StaticResource DecimalToDoubleConverter}}" Comparison="LessThan" Value="0.0"&gt;
                                &lt;ec:GoToStateAction StateName="Down" TargetObject="{Binding ElementName=TemplateContainer}"/&gt;
                            &lt;/ec:DataTrigger&gt;
                            &lt;ec:DataTrigger Binding="{Binding PriceChange, Converter={StaticResource DecimalToDoubleConverter}}" Comparison="Equal" Value="0.0"&gt;
                                &lt;ec:GoToStateAction StateName="NoChange" TargetObject="{Binding ElementName=TemplateContainer}"/&gt;
                            &lt;/ec:DataTrigger&gt;
                        &lt;/i:Interaction.Triggers&gt;
                        &lt;Grid&gt;
                            &lt;!-- Item layout goes here --&gt;
                        &lt;/Grid&gt;
                    &lt;/Grid&gt;
                &lt;/DataTemplate&gt;
            &lt;/ListBox.ItemTemplate&gt;
        &lt;/ListBox&gt;</pre>

<p>Because we want a stable ordering of elements in the list, we use a CollectionViewSource to do the sorting.  </p>

<pre>&lt;phone:PhoneApplicationPage.Resources&gt;
    &lt;CollectionViewSource x:Key="ItemsViewSource" Source="{Binding Items}"&gt;
        &lt;CollectionViewSource.SortDescriptions&gt;
            &lt;scm:SortDescription PropertyName="Id"/&gt;
        &lt;/CollectionViewSource.SortDescriptions&gt;
    &lt;/CollectionViewSource&gt;
&lt;/phone:PhoneApplicationPage.Resources&gt;</pre>

<p>As you can see, we didn't have to write a single line of C# in the view's code-behind. Sorting, updates and visual state transitions all happen thanks to XAML.</p>

<p>Unfortunately I ran out of time to include a Windows 8 sample, but the code has been built in such a way that creating a Win8 application should be a simple task, as we can reuse most parts (including the view model and the SignalR data provider).</p>

<p>ASP.NET SignalR bridges the gap between reactive world of MVVM and HTTP services that reside on a remote server.</p>

<p>The complete project is available on <a href="https://bitbucket.org/pwalat/piotr.xamlsignalr">bitbucket</a> and below you can see the end result. Of course the data is completely unrealistic :)</p>

<p><a href="http://vimeo.com/65718738">http://vimeo.com/65718738</a></p>]]></content:encoded></item><item><title><![CDATA[Using Redis with ASP.NET Web API]]></title><description><![CDATA[<p>In this article I am going to show how to use Redis as a data store in a ASP.NET Web API application. I will implement a basic scenario that leverages <em>ServiceStack.Redis</em> library and its <em>strongly typed</em> Redis client, show how to model and store one-to-many relationships and how</p>]]></description><link>http://piotrwalat.net/using-redis-with-asp-net-web-api/</link><guid isPermaLink="false">0c52b6fb-13fd-4a3c-9ef7-c72f063c8e4b</guid><category><![CDATA[Autofac]]></category><category><![CDATA[ASP.NET Web API]]></category><category><![CDATA[ASP.NET]]></category><category><![CDATA[NoSQL]]></category><category><![CDATA[redis]]></category><category><![CDATA[sorted sets]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Tue, 26 Mar 2013 10:27:03 GMT</pubDate><content:encoded><![CDATA[<p>In this article I am going to show how to use Redis as a data store in a ASP.NET Web API application. I will implement a basic scenario that leverages <em>ServiceStack.Redis</em> library and its <em>strongly typed</em> Redis client, show how to model and store one-to-many relationships and how to use Web API dependency injection capabilities along with Autofac to inject repositories into controllers.</p>

<!--more-->  

<h3>Client libraries</h3>  

<p>At the time of writing there are two popular and actively developed C# client libraries for Redis available:  </p>

<ul>  
    <li><a href="https://github.com/ServiceStack/ServiceStack.Redis">ServiceStack.Redis</a> - created by <a href="https://twitter.com/demisbellot">Demis Bellot</a> of ServiceStack fame and based on <a href="http://twitter.com/migueldeicaza">Miguel de Icaza's</a> <a href="http://github.com/migueldeicaza/redis-sharp">redis-sharp</a> project,</li>
    <li><a href="https://code.google.com/p/booksleeve/">BookSleeve</a> - maintained by Marc Gravell and, as I understand, <a href="http://marcgravell.blogspot.ie/2011/04/async-redis-await-booksleeve.html">used by Stack Exchange</a>.</li>
</ul>  

<p>Before making a choice I would suggest trying both of them and deciding which API and capabilities better suit your project. <br>
BookSleeve has a non-blocking (asynchronous) API and provides a thread-safe connection object, while the ServiceStack implementation provides JSON serialization, a connection-pool-like client factory, and uses conventions to simplify POCO object persistence.</p>

<p>In this article I will use <em>ServiceStack.Redis</em>, but remember that <em>BookSleeve</em> has been proven in a big real-world web application and is also very capable.  </p>

<h3>Redis in a nutshell</h3>  

<p>If you are reading this article then very likely you already know what Redis is. If you are an experienced Redis user interested in ASP.NET Web API integration you can safely jump to the next part.</p>

<p>In order to use Redis efficiently and avoid potential pitfalls one needs to understand a little bit about how it works and how different it is from relational databases. I strongly recommend reading one of the books or online materials available on the topic.</p>

<p>Simply put, Redis is an in-memory key-value data store that supports durability. <br>
<em>In-memory</em> and <em>key-value</em> sounds much like a memory cache - and indeed you can think of Redis as a specialized and more advanced memory cache. Unlike other caches (such as <a href="http://memcached.org/">memcached</a>) Redis delivers a richer feature set including things like sorted sets and even <a href="http://redis.io/commands/eval">Lua scripting</a> capabilities.</p>

<p>Its main advantage over 'traditional' databases comes from the fact that it stores and retrieves data directly to/from operating memory - which means it is really fast.</p>

<p>Redis is simple and specialized - unlike relational databases it does not provide any table-like abstractions nor relational capabilities. Instead, it provides five fundamental data types along with specialized operations that can manipulate those types (stored values). This is why it is sometimes referred to as a <em>data structure server</em>:  </p>

<ul>  
    <li><em>strings</em> - the most basic and atomic type that can be used to store any data (integers, serialized POCO objects, etc.),</li>
    <li><em>lists</em> - lists of strings ordered by insertion,</li>
    <li><em>sets</em> - logical sets of strings,</li>
    <li><em>hashes</em> - maps between string-only keys and string values,</li>
    <li><em>sorted sets</em> - similar to <em>sets</em>, but each element is associated with a <em>score</em> that is being used to sort.</li>
</ul>  

<p>Examples of specialized <a href="http://redis.io/commands">commands</a>:  </p>

<ul>  
    <li>strings - SET, INCR, APPEND, INCRBY, STRLEN, SETBIT,</li>
    <li>lists - LPUSH, LPOP, LTRIM, LINSERT,</li>
    <li>sets - SADD, SDIFF, SINTER, SUNION, etc.</li>
</ul>  
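<p>Most of these commands have direct counterparts on ServiceStack's <em>IRedisClient</em>. Here is a quick sketch - the key names are made up for illustration, and <em>clientsManager</em> is assumed to be an <em>IRedisClientsManager</em> instance like the one configured later in this article; if a method name differs in your version of the library, check its API reference:</p>

<pre>using (IRedisClient client = clientsManager.GetClient())
{
    // strings: SET / INCR
    client.SetValue("page:views", "0");
    client.IncrementValue("page:views");

    // lists: push an item onto a list (LPUSH/RPUSH)
    client.AddItemToList("recent:logins", "piotr");

    // sets: SADD
    client.AddItemToSet("active:users", "piotr");

    // sorted sets: ZINCRBY
    client.IncrementItemInSortedSet("items:bestselling", "Blue Moon", 1);
}</pre>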

<p>Hopefully this should give you a basic feel for what Redis is about :)  </p>

<h3>Why would I use it?</h3>  

<p>Whether and how easily your application could benefit from Redis depends on its architecture, data volume, data complexity and experienced loads. When used correctly Redis can bring major performance improvements and may help scale an application out.</p>

<p>Here are some use cases I can think of:  </p>

<ul>  
    <li>as a main data store,</li>
    <li>as one of multiple data stores, for example storing small, but frequently accessed information,</li>
    <li>as a highly performant read-only view over your domain model,</li>
    <li>as a cache.</li>
</ul>  

<p>Bearing in mind that Redis operates in memory, the first option is quite extreme and viable only if your data sets are small (or you can afford lots of RAM). <br>
Because in this article I want to focus on ASP.NET Web API integration rather than architectural aspects, I will choose this option.  </p>

<h3>Using Redis in a ASP.NET Web API application</h3>  

<p>I will use an empty ASP.NET Web API application as my starting point along with two third party libraries:  </p>

<ul>  
    <li>ServiceStack.Redis - C# Redis client,</li>
    <li>Autofac - dependency injection container with Web API integration.</li>
</ul>  

<p>Obviously we will also need a working Redis server instance. If you don't have one running already you can <a href="https://github.com/MSOpenTech/Redis">download</a> the Windows port provided by MS Open Tech. Please note that the port is not considered production ready yet (you need to use one of the official packages for that), but it is good enough for development scenarios.  </p>

<h3>Model</h3>  

<p>For the sake of this example let's consider the following requirements:  </p>

<ul>  
    <li>the API should provide the capability to store Clients, retrieve Client details and retrieve a list of all Clients in the system,</li>
    <li>Clients may place orders that consist of multiple items,</li>
    <li>the API should expose a list of the N best selling items.</li>
</ul>  

<p>Here is how we could design the model:  </p>

<pre>public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public IList&lt;Guid&gt; Orders { get; set; }
    public Address Address { get; set; }
}</pre>

<p>Properly defining your data model will help you use Redis in an efficient way. Redis stores values as byte blobs internally and <em>ServiceStack.Redis</em> will serialize the whole object graph for us. Thus it is important that we define aggregate boundaries. As you can see Address is a <em>value object</em> and will be persisted and retrieved as a part of the Customer <em>aggregate</em>, while the <em>Orders</em> property is a list of ids.</p>

<pre>public class Order
{
    public Guid Id { get; set; }
    public Guid CustomerId { get; set; }
    public IList&lt;OrderLine&gt; Lines { get; set; }
}

public class OrderLine
{
    public string Item { get; set; }
    public int Quantity { get; set; }
    public decimal TotalAmount { get; set; }
}

public class Address
{
    public string Line1 { get; set; }
    public string Line2 { get; set; }
    public string City { get; set; }
}</pre>
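<p>It helps to know what the typed client actually writes to Redis: each object is serialized to JSON and stored as a string under a key derived from the type name and the <em>Id</em> property, and the ids for each type are tracked in a separate set so that <em>GetAll()</em> can work. The exact key layout is an implementation detail of the client version you use, so treat the sketch below (including the <em>clientsManager</em> variable and the "urn:customer:&lt;guid&gt;" convention) as illustrative only:</p>

<pre>using (var client = clientsManager.GetClient())
using (var typedClient = client.GetTypedClient&lt;Customer&gt;())
{
    var customer = new Customer { Id = Guid.NewGuid(), Name = "John" };
    typedClient.Store(customer);

    // Reading the raw string back shows the serialized aggregate -
    // the Address value object travels inside the Customer JSON blob.
    // The key convention shown here may vary between client versions.
    string json = client.GetValue("urn:customer:" + customer.Id);
}</pre>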

<p>Now let's define repository contracts:  </p>

<pre>public interface ICustomerRepository
{
    IList&lt;Customer&gt; GetAll();
    Customer Get(Guid id);
    Customer Store(Customer customer);
}

public interface IOrderRepository
{
    IList&lt;Order&gt; GetCustomerOrders(Guid customerId);
    IList&lt;Order&gt; StoreAll(Customer customer, IList&lt;Order&gt; orders);
    Order Store(Customer customer, Order order);
    IDictionary&lt;string, double&gt; GetBestSellingItems(int count);
}</pre>

<p>The implementation can look like this:  </p>

<pre>public class CustomerRepository : ICustomerRepository
{
    private readonly IRedisClient _redisClient;

    public CustomerRepository(IRedisClient redisClient)
    {
        _redisClient = redisClient;
    }

    public IList&lt;Customer&gt; GetAll()
    {
        using (var typedClient = _redisClient.GetTypedClient&lt;Customer&gt;())
        {
            return typedClient.GetAll();
        }
    }

    public Customer Get(Guid id)
    {
        using (var typedClient = _redisClient.GetTypedClient&lt;Customer&gt;())
        {
            return typedClient.GetById(id);
        }
    }

    public Customer Store(Customer customer)
    {
        using (var typedClient = _redisClient.GetTypedClient&lt;Customer&gt;())
        {
            if (customer.Id == default(Guid))
            {
                customer.Id = Guid.NewGuid();
            }
            return typedClient.Store(customer);
        }
    }
}

public class OrderRepository : IOrderRepository
{
    private readonly IRedisClient _redisClient;

    public OrderRepository(IRedisClient redisClient)
    {
        _redisClient = redisClient;
    }

    public IList&lt;Order&gt; GetCustomerOrders(Guid customerId)
    {
        using (var orderClient = _redisClient.GetTypedClient&lt;Order&gt;())
        {
            var orderIds = _redisClient.GetAllItemsFromSet(RedisKeys
                        .GetCustomerOrdersReferenceKey(customerId));
            IList&lt;Order&gt; orders = orderClient.GetByIds(orderIds);
            return orders;
        }
    }

    public IList&lt;Order&gt; StoreAll(Customer customer, IList&lt;Order&gt; orders)
    {
        foreach (var order in orders)
        {
            if (order.Id == default(Guid))
            {
                order.Id = Guid.NewGuid();
            }
            order.CustomerId = customer.Id;
            if (!customer.Orders.Contains(order.Id))
            {
                customer.Orders.Add(order.Id);
            }

        order.Lines.ForEach(l =&gt; _redisClient
            .IncrementItemInSortedSet(RedisKeys.BestSellingItems,
                                      l.Item, l.Quantity));
        }
        var orderIds = orders.Select(o =&gt; o.Id.ToString()).ToList();
        using (var transaction = _redisClient.CreateTransaction())
        {
            transaction.QueueCommand(c =&gt; c.Store(customer));
            transaction.QueueCommand(c =&gt; c.StoreAll(orders));
            transaction.QueueCommand(c =&gt; c.AddRangeToSet(RedisKeys
                .GetCustomerOrdersReferenceKey(customer.Id),
                orderIds));
            transaction.Commit();
        }

        return orders;
    }

    public Order Store(Customer customer, Order order)
    {
        IList&lt;Order&gt; result = StoreAll(customer, new List&lt;Order&gt;() { order });
        return result.FirstOrDefault();
    }

    public IDictionary&lt;string, double&gt; GetBestSellingItems(int count)
    {
        return _redisClient
            .GetRangeWithScoresFromSortedSetDesc(RedisKeys.BestSellingItems, 
            0, count - 1);
    }
}</pre>

<p>As you can see, the repositories expose specialized operations. We make use of the Redis sorted set type to efficiently store and retrieve the best selling products list.</p>

<p>It is worth noting how we implemented the one-to-many Customer-Orders relation. We store the ids of a customer's orders in a dedicated set so that they can be retrieved quickly without the need to pull out the entire <em>Customer</em> entity.  </p>
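<p>The <em>RedisKeys</em> class referenced by the repositories is not shown above - it merely centralizes key names. A minimal version could look like the following (the actual key strings are my own choice; any consistent scheme will do):</p>

<pre>public static class RedisKeys
{
    // sorted set mapping item name -&gt; total quantity sold
    public const string BestSellingItems = "urn:items:bestselling";

    // set of order ids belonging to a given customer
    public static string GetCustomerOrdersReferenceKey(Guid customerId)
    {
        return "urn:customer:" + customerId + ":orders";
    }
}</pre>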

<h3>Client and connection lifecycle management</h3>  

<p>One of the challenges we will face is connection/client lifecycle management. As you may already know, Web API ships with an extensible dependency injection mechanism that can be leveraged to inject and dispose of dependencies on a per-request basis. Instead of writing a custom <em>IDependencyResolver</em> implementation from scratch (which is also an option) we can use one of the .NET DI libraries such as Ninject, StructureMap, Unity, Windsor or Autofac. The last one is my personal favorite and has good Web API integration, which is why I am going to use it in this example.</p>

<p>ServiceStack.Redis ships with <em>IRedisClient</em> factories called <em>client managers</em>:  </p>

<ul>  
    <li>BasicRedisClientManager - client factory with load-balancing support,</li>
    <li>PooledRedisClientManager - client factory with load-balancing and connection pooling - useful when handling many concurrent requests (e.g. in web applications),</li>
    <li>ShardedRedisClientManager - provides sharding of client connections using consistent hashing.</li>
</ul>  

<p>Because these classes are thread-safe we can use one factory instance across all requests.  </p>

<pre>public class ApiApplication : System.Web.HttpApplication
{
    public IRedisClientsManager ClientsManager;
    private const string RedisUri = "localhost";

    protected void Application_Start()
    {
        ClientsManager = new PooledRedisClientManager(RedisUri);

        AreaRegistration.RegisterAllAreas();

        WebApiConfig.Register(GlobalConfiguration.Configuration);
        ConfigureDependencyResolver(GlobalConfiguration.Configuration);

        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }

    private void ConfigureDependencyResolver(HttpConfiguration configuration)
    {
        var builder = new ContainerBuilder();
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly())
            .PropertiesAutowired();

        builder.RegisterType&lt;CustomerRepository&gt;()
            .As&lt;ICustomerRepository&gt;()
            .PropertiesAutowired()
            .InstancePerApiRequest();

        builder.RegisterType&lt;OrderRepository&gt;()
            .As&lt;IOrderRepository&gt;()
            .PropertiesAutowired()
            .InstancePerApiRequest();

        builder.Register&lt;IRedisClient&gt;(c =&gt; ClientsManager.GetClient())
            .InstancePerApiRequest();

        configuration.DependencyResolver
            = new AutofacWebApiDependencyResolver(builder.Build());
    }

    protected void Application_End()
    {
        ClientsManager.Dispose();
    }
}</pre>

<p>We are using the pooled connection manager as the <em>IRedisClientsManager</em> implementation. Every time a request is made a new client instance will be retrieved, injected into the repositories and disposed of at the end of the request.</p>

<h3>Controllers</h3>  

<p>Now that we have repositories let's implement the controllers - one for adding and retrieving the customers and one for managing orders.  </p>

<pre>public class CustomersController : ApiController
{
    public ICustomerRepository CustomerRepository { get; set; }

    public IOrderRepository OrderRepository { get; set; }

    public IQueryable&lt;Customer&gt; GetAll()
    {
        return CustomerRepository.GetAll().AsQueryable();
    }

    public Customer Get(Guid id)
    {
        var customer = CustomerRepository.Get(id);
        if (customer == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        return customer;
    }

    public HttpResponseMessage Post([FromBody] Customer customer)
    {
        var result = CustomerRepository.Store(customer);
        return Request.CreateResponse(HttpStatusCode.Created, result);
    }

    public HttpResponseMessage Put(Guid id, [FromBody] Customer customer)
    {
        var existingEntity = CustomerRepository.Get(id);
        if (existingEntity == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        customer.Id = id;
        CustomerRepository.Store(customer);
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }
}

public class OrdersController : ApiController
{
    public IOrderRepository OrderRepository { get; set; }
    public ICustomerRepository CustomerRepository { get; set; }

    public HttpResponseMessage Post([FromBody] Order order)
    {
        var customer = CustomerRepository.Get(order.CustomerId);
        var result = OrderRepository.Store(customer, order);
        return Request.CreateResponse(HttpStatusCode.Created, result);
    }

    [ActionName("top")]
    [HttpGet]
    public IDictionary&lt;string, double&gt; GetBestSellingItems(int count)
    {
        return OrderRepository.GetBestSellingItems(count);
    }

    [ActionName("customer")]
    [HttpGet]
    public IList&lt;Order&gt; GetCustomerOrders(Guid id)
    {
        return OrderRepository.GetCustomerOrders(id);
    }
}</pre>

<p>That's about it. We are now using Redis as our data store and dependencies should be auto-wired.</p>

<p>Source is available on <a href="https://bitbucket.org/pwalat/piotr.rediswebapi">Bitbucket</a>.</p>]]></content:encoded></item><item><title><![CDATA[Running ASP.NET Web API services under Linux and OS X]]></title><description><![CDATA[<p>In this blog post I am going to show how you can host ASP.NET Web API services under Gentoo Linux and OS X on top of Mono's ASP.NET implementation. I will use Nginx and FastCGI to communicate between HTTP server and Mono.</p>

<p>A couple of months ago I've</p>]]></description><link>http://piotrwalat.net/running-asp-net-web-api-services-under-linux-and-os-x/</link><guid isPermaLink="false">0334ebb4-a662-4712-8968-6809b7a7900b</guid><category><![CDATA[ASP.NET Web API]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Mono]]></category><category><![CDATA[Nginx]]></category><category><![CDATA[Open source]]></category><category><![CDATA[Gentoo]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Mon, 04 Mar 2013 02:00:03 GMT</pubDate><content:encoded><![CDATA[<p>In this blog post I am going to show how you can host ASP.NET Web API services under Gentoo Linux and OS X on top of Mono's ASP.NET implementation. I will use Nginx and FastCGI to communicate between HTTP server and Mono.</p>

<p>A couple of months ago I've experimented with running ASP.NET Web API on a Linux box, but ran into blocking issues caused by some functionality missing from Mono. I've decided to give it another go now when more recent versions of the runtime are available.</p>

<!--more-->

<h3>Getting started</h3>  

<p>Yes, that is correct you should be able to run Web API services under Linux using recent versions of Mono :). The approach I am taking is to use Visual Studio to write the application and then run it on Linux.</p>

<p>Just a general remark - be advised that copying and running non-open-source assemblies (like System.Core) is probably not OK from a licensing point of view (I am not a legal expert, though). This shouldn't happen under normal circumstances (unless you willingly overwrite Mono assemblies) and ASP.NET Web API is 100% open source, so it is not a problem in this scenario.</p>

<p>I will create a very basic Web API service, having in mind that any unnecessary dependency can create problems. Remember that not all libraries that play nicely with Web API will work happily on Mono.</p>

<p>Start off by adding an empty MVC4 application, remember that you can choose either to use .NET 4.0 or 4.5 when doing that. The latter supports<em> async/await</em> and makes writing message handlers a little bit easier.  Even though Mono 2.11 <a href="http://tirania.org/blog/archive/2012/Mar-22.html">introduced</a> support for some of 4.5 APIs I ran into issues when trying to run 4.5 Web API app against XSP 2.11. This is why I am going to use .NET 4.0 (which means you should be able to use VS2010 as well).</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2013/03/NewProject.png"><img class="alignnone size-full wp-image-1293" alt="New Project" src="http://www.piotrwalat.net/wp-content/uploads/2013/03/NewProject.png" width="683" height="619"></a></p>

<p>I am going to create a very simple model and a CRUD controller for testing purposes. Also I will not bother with any kind of IoC container and just go with a static in memory repository.  </p>

<pre class="lang:c# decode:true" title="Beer">public class Beer : Entity  
{
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
}

public class Entity  
{
    public Guid Id { get; set; }
}</pre>

<pre class="lang:c# decode:true" title="BeersController ">public class BeersController : ApiController  
{
    private IRepository&lt;Beer&gt; _beerRepository;

    public BeersController()
    {
        _beerRepository = WebApiApplication.BeerRepository;
    }

    public IEnumerable&lt;Beer&gt; Get()
    {
        return _beerRepository.Items.ToArray();
    }

    public Beer Get(Guid id)
    {
        Beer entity = _beerRepository.Get(id);
        if (entity == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return entity;
    }

    public HttpResponseMessage Post([FromBody] Beer value)
    {
        var result = _beerRepository.Add(value);
        if (result == null)
        {
            // the entity with this key already exists
            throw new HttpResponseException(HttpStatusCode.Conflict);
        }
        var response = Request.CreateResponse&lt;Beer&gt;(HttpStatusCode.Created, value);
        string uri = Url.Link("DefaultApi", new { id = value.Id });
        response.Headers.Location = new Uri(uri);
        return response;
    }

    public HttpResponseMessage Put(Guid id, Beer value)
    {
        value.Id = id;
        var result = _beerRepository.Update(value);
        if (result == null)
        {
            // entity does not exist
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }

    public HttpResponseMessage Delete(Guid id)
    {
        var result = _beerRepository.Delete(id);
        if (result == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }
}</pre>

<p>We can seed the repository with sample data.  </p>

<pre class="lang:c# decode:true" title="Global.asax.cs">public class WebApiApplication : System.Web.HttpApplication  
{
    public static IRepository&lt;Beer&gt; BeerRepository = new InMemoryRepository&lt;Beer&gt;();

    protected void Application_Start()
    {
        BeerRepository.Add(new Beer()
        {
            Name = "Blue Moon",
            Description = "Belgian-style witbier. Orange-amber in color with a cloudy appearance.",
            Id = Guid.NewGuid(),
            Price = 1.99m
        });
        var config = GlobalConfiguration.Configuration;
        WebApiConfig.Register(config);
    }
}</pre>

<p>Before continuing make sure that the project runs correctly under ASP.NET and IIS.</p>


<h3>Setting up the environment</h3>  

<p>There are two main things you will need in order to run the service on Linux:  </p>

<ul>  
    <li><span style="line-height: 13px;">Working Mono installation including XSP - I will be using version 3.0.5, so this or any version above should work,</span></li>
    <li>An HTTP server (unless you want to use Mono's own XSP, which should be enough for testing purposes) that supports FastCGI. I will be using Nginx. Apache should also work (through mod_mono).</li>
</ul>  

<p>Now, there are two main ways of getting Mono - either compile and install it directly from the source code available at <a href="https://github.com/mono/mono">github</a> (or the official mono packages available at the download <a href="http://www.go-mono.com/mono-downloads/download.html">site</a>) or install a package provided by your distribution. Mono support varies among distributions and to be honest is usually pretty weak when it comes to the latest versions. If you just want to experiment and don't have an existing Linux installation I would suggest choosing OpenSUSE or Gentoo (evil grin).</p>

<p>To compile from official tarball package use:  </p>

<pre class="lang:sh decode:true">./configure --prefix=/usr/local  
make  
make install</pre>  

<p>And to clone and compile latest code from master branch:  </p>

<pre class="lang:sh decode:true">git clone git://github.com/mono/mono.git  
cd mono  
./autogen.sh --prefix=/usr/local
make  
make install</pre>  

<p>In this example I will be using Gentoo Linux (as it is my favorite distro :)). Official Gentoo repository (at the time of writing this post) does not contain Mono <a href="http://gentoo-portage.com/dev-lang/mono">versions </a>above 2.X so we will need to use layman and dotnet overlay.  </p>

<pre class="lang:sh decode:true">emerge -av layman  
layman -a dotnet</pre>  

<p>Depending on your current configuration you may also need to add USE keywords to the xsp package and unmask mono packages as newest versions are usually considered 'unstable'.  </p>

<pre class="lang:sh decode:true">echo "dev-lang/mono ~amd64" &gt;&gt; /etc/portage/package.keywords  
echo "dev-dotnet/xsp ~amd64" &gt;&gt; /etc/portage/package.keywords

echo "dev-dotnet/xsp net40 net45" &gt;&gt; /etc/portage/package.use</pre>  

<p>Now cross your fingers and emerge mono:  </p>

<pre class="lang:sh decode:true">emerge -av =mono-3.0.5</pre>  

<p>Use a higher version if available as it is likely that it contains bugfixes. This may take a while so go grab a coffee or something ;)</p>

<p>Portage will try to resolve dependencies and if everything goes well you should be able to run mono on your system.  </p>

<pre class="lang:sh decode:true">ester ~ # mono --version  
Mono JIT compiler version 3.0.5 (tarball Sat Mar  2 13:04:59 Local time zone must be set--see zic manual page 2013)  
Copyright (C) 2002-2012 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com</pre>  

<p>If it does not work (e.g. compilation fails) - don't give up, try to search for the solution or use a different package (for example using official source packages instead of cutting edge latest master).</p>

<p>The next step is to install XSP (Mono application server). Again you can use the <a href="https://github.com/mono/xsp">sources</a> or get the package provided by your distribution. Under gentoo this means:  </p>

<pre class="lang:sh decode:true">emerge -av xsp</pre>  

<p>If both installations were successful go ahead and copy the exported (right click on the project and Publish...) Web API application to some directory on your Linux box (eg. using WinSCP). You can use XSP to test if everything works (remember to use xsp4, which is a .NET 4 version):  </p>

<pre class="lang:sh decode:true">xsp4 --root /home/pwalat/Piotr.WebApiMono/ --port 8082</pre>  

<p>Voila! now use your favorite browser or HTTP debugger to test the service.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2013/03/Post.png"><img class="alignnone size-full wp-image-1319" alt="Post" src="http://www.piotrwalat.net/wp-content/uploads/2013/03/Post.png" width="821" height="184"></a></p>

<p>As you can see, Visual Studio-compiled Web API code just works under the Mono runtime. You don't even need to fiddle with web.config.</p>

<p>Alternatively you can choose to compile the sources under Mono.  </p>

<pre class="lang:sh decode:true">xbuild Piotr.WebApiMono.sln</pre>  

<p>Now let's set up Nginx to serve our Web API project, as using XSP is good for ad-hoc testing only. First of all install the server and make sure you enable FastCGI support:  </p>

<pre class="lang:sh decode:true">echo "www-servers/nginx fastcgi ssl nginx_modules_http_gzip_static" &gt;&gt; /etc/portage/package.use  
emerge -av nginx</pre>  

<p>Once you have nginx installed you will need to modify virtual host configuration to run off fastcgi (/etc/nginx/nginx.conf):  </p>

<pre class="lang:sh decode:true">server {  
 listen   80;
 server_name  domain.com;
 access_log   /var/log/nginx/domain.com.access.log;
 location ~ / { 
   root /var/www/domain.com/;
   index index.html index.htm default.aspx default.htm;
   fastcgi_index /default.htm;
   fastcgi_pass 127.0.0.1:9002;     
   fastcgi_param  PATH_INFO          "";
   fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
   include /etc/nginx/fastcgi_params;
 } 
}</pre>

<p>This configures Nginx to pass incoming requests to 127.0.0.1:9002 where the Mono FastCGI server will be listening. Of course you can have multiple applications hosted by one Nginx instance (e.g. you can mix ASP.NET with PHP or static pages).</p>

<p>To run fastcgi-mono-server4 we need to provide it with a list of applications. We can do that either as a command line parameter or use .webapp config files. Let's do the latter as it is the more manageable approach.  </p>

<pre class="lang:sh decode:true">mkdir /etc/webapps  
nano /etc/webapps/MonoWebApi.webapp</pre>  

<pre class="lang:xhtml decode:true">&lt;apps&gt;  
&lt;web-application&gt;
        &lt;name&gt;MonoWebApi&lt;/name&gt;
        &lt;vhost&gt;domain.com&lt;/vhost&gt;
        &lt;vport&gt;80&lt;/vport&gt;
        &lt;vpath&gt;/&lt;/vpath&gt;
        &lt;path&gt;/var/www/domain.com&lt;/path&gt;
&lt;/web-application&gt;
&lt;/apps&gt;</pre>

<p>Now we can run both servers:  </p>

<pre class="lang:sh decode:true">/etc/init.d/nginx start  
fastcgi-mono-server4 --appconfigdir /etc/webapps /socket=tcp:127.0.0.1:9002</pre>  

<p>If everything worked correctly your service should run off Nginx and be available at <a href="http://domain.com/beers">http://domain.com/beers</a></p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2013/03/nginx.png"><img class="alignnone size-full wp-image-1327" alt="nginx" src="http://www.piotrwalat.net/wp-content/uploads/2013/03/nginx.png" width="568" height="161"></a></p>

<p>Instead of having to run your Mono server each time from command line you probably will want to have it started during the boot time. Here is a simple /etc/init.d/mono-fastcgi script to facilitate this:  </p>

<pre class="lang:sh decode:true">#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/mono
NAME=mono-fastcgi
DESC=mono-fastcgi
PORT=9002
MONOSERVER=$(which fastcgi-mono-server4)
MONOSERVER_PID=$(ps auxf | grep fastcgi-mono-server4.exe | grep -v grep | awk '{print $2}')
WEBAPPS="/etc/webapps"

case "$1" in
    start)
        if [ -z "${MONOSERVER_PID}" ]; then
            echo "starting mono server"
            ${MONOSERVER} --appconfigdir ${WEBAPPS} /socket=tcp:127.0.0.1:${PORT} &amp;
            echo "mono server started"
        else
            echo "mono server is running"
        fi
        ;;
    stop)
        if [ -n "${MONOSERVER_PID}" ]; then
            kill ${MONOSERVER_PID}
            echo "mono server stopped"
        else
            echo "mono server is not running"
        fi
        ;;
esac
exit 0</pre>  

<pre class="lang:sh decode:true">chmod +x /etc/init.d/mono-fastcgi  
rc-update add mono-fastcgi default  
/etc/init.d/mono-fastcgi start</pre>

<p>You should have your ASP.NET Web API service running under Nginx now :)</p>

<p>Just to test that basic Web API functionality and the message handling pipeline work correctly I've added a delegating handler to calculate the content's MD5 checksum and a CSV media formatter (have a look at the source code for the implementation). Both work as expected.  </p>
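<p>For the curious, such a checksum handler boils down to a few lines. Here is a sketch in the .NET 4.0 style used in this post (the class name and details are mine, not necessarily the exact code from the repository):</p>

<pre class="lang:c# decode:true">public class Md5ContentHandler : DelegatingHandler
{
    protected override Task&lt;HttpResponseMessage&gt; SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // .NET 4.0 - no async/await, so we chain with ContinueWith
        return base.SendAsync(request, cancellationToken).ContinueWith(task =&gt;
        {
            var response = task.Result;
            if (response.Content != null)
            {
                byte[] content = response.Content.ReadAsByteArrayAsync().Result;
                using (var md5 = MD5.Create())
                {
                    response.Content.Headers.ContentMD5 = md5.ComputeHash(content);
                }
            }
            return response;
        }, cancellationToken);
    }
}</pre>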

<h3>OS X</h3>  

<p>Generally we need to follow the same procedure as with Linux - i.e. install Mono and then install and configure Nginx.</p>

<p>Installing Mono on a Mac should be much easier than on Linux. Just grab the OS X package (I use MDK) from the mono site <a href="http://www.go-mono.com/mono-downloads/download.html">here</a> and run the installer.</p>

<p>Optionally you can also install Xamarin Studio (aka MonoDevelop 4.0) to open the solution, develop, and then build and deploy the ASP.NET Web API application to a folder.  </p>

<p style="text-align: center;"><a href="http://www.piotrwalat.net/wp-content/uploads/2013/03/Screen-Shot-2013-03-03-at-17.21.50.png"><img class="aligncenter  wp-image-1356" alt="Working on a Web API project under OS X" src="http://www.piotrwalat.net/wp-content/uploads/2013/03/Screen-Shot-2013-03-03-at-17.21.50-1024x722.png" width="819" height="578"></a></p>  

<p> Now test the exported application using XSP:</p>

<pre class="lang:sh decode:true">xsp4 --root /Volumes/mc/ExportedApps/Piotr.WebApiMono/ --port 8080</pre>  

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2013/03/Screen-Shot-2013-03-03-at-17.35.28.png"><img class="aligncenter size-full wp-image-1361" alt="Service running under OS X" src="http://www.piotrwalat.net/wp-content/uploads/2013/03/Screen-Shot-2013-03-03-at-17.35.28.png" width="662" height="287"></a></p>

<p>We will install Nginx using MacPorts (if you don't have MacPorts, install it from a package available <a href="http://www.macports.org/install.php">here</a>):  </p>

<pre class="lang:sh decode:true">sudo port install nginx  
cp /opt/local/etc/nginx/nginx.conf.example /opt/local/etc/nginx/nginx.conf</pre>  

<p>If you want to run Nginx during startup execute  </p>

<pre class="lang:sh decode:true">sudo port load nginx</pre>  

<p>Once this is done you can follow Nginx configuration instructions for Linux and configure the server to use fastcgi. The configuration file will be located at /opt/local/etc/nginx/nginx.conf. Then run fastcgi-mono-server4 and you will have ASP.NET Web API service running on OS X.</p>

<p>Please be advised that this example was an experiment and you may still run into issues when digging deeper :) That being said, I am sure that running under Mono will make ASP.NET Web API an appealing choice to an even wider group of developers.</p>

<!--more-->  

<h3></h3>  

<h3>HMAC based authentication</h3>  

<p>HMAC (hash-based message authentication code)</p>]]></description><link>http://piotrwalat.net/hmac-authentication-in-asp-net-web-api/</link><guid isPermaLink="false">771baf08-6b80-41aa-b056-531e6faa5e80</guid><category><![CDATA[delegating handlers]]></category><category><![CDATA[ASP.NET Web API]]></category><category><![CDATA[HTTP]]></category><category><![CDATA[HMAC authentication]]></category><category><![CDATA[http authentication]]></category><category><![CDATA[md5]]></category><category><![CDATA[Security]]></category><category><![CDATA[HMAC]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Thu, 28 Feb 2013 10:45:51 GMT</pubDate><content:encoded><![CDATA[<p>In this article I will explain the concepts behind HMAC authentication and will show how to write an example implementation for ASP.NET Web API using message handlers. The project will include both server and client side (using Web API's HttpClient) bits.</p>

<!--more-->  


<h3>HMAC based authentication</h3>  

<p>HMAC (hash-based message authentication code) provides a relatively simple way to authenticate HTTP messages using a secret that is known to both client and server. Unlike <a href="http://www.piotrwalat.net/basic-http-authentication-in-asp-net-web-api-using-message-handlers/">basic authentication</a> it does not require transport-level encryption (HTTPS), which makes it an appealing choice in certain scenarios. Moreover, it guarantees message integrity (it prevents malicious third parties from modifying the contents of the message).</p>

<p>On the other hand, a proper HMAC authentication implementation requires slightly more work than basic HTTP authentication, and not all client platforms support it out of the box (although most of them do support the cryptographic algorithms required to implement it). My suggestion would be to use it only if HTTPS + basic authentication does not suit your requirements.</p>

<p>One prominent example of HMAC usage is <a href="http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html">Amazon S3 service</a>.</p>

<p>The basic idea behind HMAC authentication in HTTP can be described as follows:  </p>

<ul>  
    <li>both client and server have access to a secret that will be used to generate the HMAC - it can be a password (or preferably a password hash) created by the user at the time of registration,</li>
    <li>using the secret, the client generates a message signature using an HMAC algorithm (the algorithm is provided by .NET 'for free'),</li>
    <li>the signature is attached to the message (eg. as a header) and the message is sent,</li>
    <li>the server receives the message and calculates its own version of the signature using the secret (both client and server use the same HMAC algorithm),</li>
    <li>if the signature computed by the server matches the one attached to the message, the message is authorized.</li>
</ul>  

<p>As you can see, the secret key (eg. a password hash) is only shared between client and server once (eg. during user registration). No one is able to produce a valid signature without access to the secret, and any modification of the message (eg. appending content) will result in the server calculating a different signature and refusing authorization.</p>

<p>Broadly speaking, to create an HMAC-authenticated client/server pair using ASP.NET Web API we need:  </p>

<ul>  
    <li>a method that returns a string representation of a given HTTP request,</li>
    <li>a method that calculates the HMAC signature from the secret string and the message representation,</li>
    <li>client side - a message handler that uses these methods to calculate the signature and attaches it to the request (as an HTTP header),</li>
    <li>server side - a message handler that calculates the signature of the incoming request and compares it with the one contained in the header.</li>
</ul>  


<h3>Web API client</h3>  

<p>Ok, so let's start by writing the first piece.  </p>

<pre title="IBuildMessageRepresentation">public interface IBuildMessageRepresentation  
{
    string BuildRequestRepresentation(HttpRequestMessage requestMessage);
}</pre>

<pre class="lang:c# decode:true" title="CanonicalRepresentationBuilder ">public class CanonicalRepresentationBuilder : IBuildMessageRepresentation  
{
    /// &lt;summary&gt;
    /// Builds message representation as follows:
    /// HTTP METHOD\n +
    /// Content-MD5\n +  
    /// Timestamp\n +
    /// Username\n +
    /// Request URI
    /// &lt;/summary&gt;
    /// &lt;returns&gt;&lt;/returns&gt;
    public string BuildRequestRepresentation(HttpRequestMessage requestMessage)
    {
        bool valid = IsRequestValid(requestMessage);
        if (!valid)
        {
            return null;
        }

        if (!requestMessage.Headers.Date.HasValue)
        {
            return null;
        }
        DateTime date = requestMessage.Headers.Date.Value.UtcDateTime;

        string md5 = requestMessage.Content == null ||
            requestMessage.Content.Headers.ContentMD5 == null ? ""
            : Convert.ToBase64String(requestMessage.Content.Headers.ContentMD5);

        string httpMethod = requestMessage.Method.Method;
        //string contentType = requestMessage.Content.Headers.ContentType.MediaType;
        if (!requestMessage.Headers.Contains(Configuration.UsernameHeader))
        {
            return null;
        }
        string username = requestMessage.Headers
            .GetValues(Configuration.UsernameHeader).First();
        string uri = requestMessage.RequestUri.AbsolutePath.ToLower();
        // you may need to add more headers if that's required for security reasons
        string representation = String.Join("\n", httpMethod,
            md5, date.ToString(CultureInfo.InvariantCulture),
            username, uri);

        return representation;
    }

    private bool IsRequestValid(HttpRequestMessage requestMessage)
    {
        //for simplicity I am omitting headers check (all required headers should be present)

        return true;
    }
}</pre>

<p>A couple of points worth mentioning:  </p>

<ul>  
    <li>we construct the message representation by concatenating 'important' headers, the HTTP method and the URI,</li>
    <li>instead of incorporating the content itself, we use its MD5 hash (base64 encoded),</li>
    <li>all parts of the message (eg. headers) that can affect its meaning and have side effects on the server side should be included in the representation (otherwise an attacker would be able to modify them without changing the signature).</li>
</ul>  
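<p>The same construction can be restated as a compact sketch (plain JavaScript; the field order follows the builder above, the sample values are made up):</p>

```javascript
// Join the security-relevant parts of the request with '\n' in a fixed,
// agreed-upon order - the server must rebuild the exact same string.
function buildRepresentation({ method, contentMd5 = '', date, username, uri }) {
  return [method, contentMd5, date, username, uri.toLowerCase()].join('\n');
}

const representation = buildRepresentation({
  method: 'POST',
  date: '02/28/2013 10:45:51',
  username: 'username',
  uri: '/Api/Values',
});
// 'POST\n\n02/28/2013 10:45:51\nusername\n/api/values'
console.log(representation);
```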

<p>Now let's look at the component that will calculate the authentication code (signature).  </p>

<pre class="lang:c# decode:true" title="ICalculateSignature">public interface ICalculteSignature  
{
    string Signature(string secret, string value);
}</pre>

<pre class="lang:c# decode:true" title="HmacSignatureCalculator ">public class HmacSignatureCalculator : ICalculteSignature  
{
    public string Signature(string secret, string value)
    {
        var secretBytes = Encoding.UTF8.GetBytes(secret);
        var valueBytes = Encoding.UTF8.GetBytes(value);
        string signature;

        using (var hmac = new HMACSHA256(secretBytes))
        {
            var hash = hmac.ComputeHash(valueBytes);
            signature = Convert.ToBase64String(hash);
        }
        return signature;
    }
}</pre>

<p>The signature will be encoded using base64 so that we can pass it easily in a header. Which header, you may ask? Well, unfortunately there is no standard way of including message authentication codes in a message (just as there is no standard way of constructing the message representation). We will use the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html">Authorization HTTP header</a> for that purpose, providing a custom scheme (<em>ApiAuth</em>).  </p>

<pre class="lang:sh decode:true">Authorization: ApiAuth HMAC_SIGNATURE</pre>  

<p>The HMAC will be calculated and attached to the request in a custom message handler.  </p>

<pre class="lang:c# decode:true" title="HmacSigningHandler ">public class HmacSigningHandler : HttpClientHandler  
{
    private readonly ISecretRepository _secretRepository;
    private readonly IBuildMessageRepresentation _representationBuilder;
    private readonly ICalculteSignature _signatureCalculator;

    public string Username { get; set; }

    public HmacSigningHandler(ISecretRepository secretRepository,
                          IBuildMessageRepresentation representationBuilder,
                          ICalculteSignature signatureCalculator)
    {
        _secretRepository = secretRepository;
        _representationBuilder = representationBuilder;
        _signatureCalculator = signatureCalculator;
    }

    protected override Task&lt;HttpResponseMessage&gt; SendAsync(HttpRequestMessage request,
                                 System.Threading.CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains(Configuration.UsernameHeader))
        {
            request.Headers.Add(Configuration.UsernameHeader, Username);
        }
        request.Headers.Date = new DateTimeOffset(DateTime.Now,DateTime.Now-DateTime.UtcNow);
        var representation = _representationBuilder.BuildRequestRepresentation(request);
        var secret = _secretRepository.GetSecretForUser(Username);
        string signature = _signatureCalculator.Signature(secret,
            representation);

        var header = new AuthenticationHeaderValue(Configuration.AuthenticationScheme, signature);

        request.Headers.Authorization = header;
        return base.SendAsync(request, cancellationToken);
    }
}</pre>

<pre class="lang:c# decode:true" title="Configuration">public class Configuration  
{
    public const string UsernameHeader = "X-ApiAuth-Username";
    public const string AuthenticationScheme = "ApiAuth";
}</pre>

<pre class="lang:c# decode:true" title="DummySecretRepository ">public class DummySecretRepository : ISecretRepository  
{
    private readonly IDictionary&lt;string, string&gt; _userPasswords
        = new Dictionary&lt;string, string&gt;()
              {
                  {"username","password"}
              };

    public string GetSecretForUser(string username)
    {
        if (!_userPasswords.ContainsKey(username))
        {
            return null;
        }

        var userPassword = _userPasswords[username];
        var hashed = ComputeHash(userPassword, new SHA1CryptoServiceProvider());
        return hashed;
    }

    private string ComputeHash(string inputData, HashAlgorithm algorithm)
    {
        byte[] inputBytes = Encoding.UTF8.GetBytes(inputData);
        byte[] hashed = algorithm.ComputeHash(inputBytes);
        return Convert.ToBase64String(hashed);
    }
}

public interface ISecretRepository  
{
    string GetSecretForUser(string username);
}</pre>

<p>In a real-life scenario you would retrieve the hashed password from a persistent store (a database). If you remember how we constructed our message representation, you will notice that we also need to set the Content-MD5 header. We could do it in HmacSigningHandler, but to maintain separation of concerns, and because Web API allows us to combine handlers in a neat way, I moved it to a separate (dedicated) handler.  </p>

<pre class="lang:c# decode:true" title="RequestContentMd5Handler ">public class RequestContentMd5Handler : DelegatingHandler  
{
    protected async override Task&lt;HttpResponseMessage&gt; SendAsync(HttpRequestMessage request,
                                       System.Threading.CancellationToken cancellationToken)
    {
        if (request.Content == null)
        {
            return await base.SendAsync(request, cancellationToken);
        }

        byte[] content = await request.Content.ReadAsByteArrayAsync();
        MD5 md5 = MD5.Create();
        byte[] hash = md5.ComputeHash(content);
        request.Content.Headers.ContentMD5 = hash;
        var response = await base.SendAsync(request, cancellationToken);
        return response;
    }
}</pre>

<p>For simplicity the HMAC handler derives directly from HttpClientHandler. Here is how we would make a request:  </p>

<pre class="lang:c# decode:true" title="HttpClient request">static void Main(string[] args)  
{
    var signingHandler = new HmacSigningHandler(new DummySecretRepository(),
                                            new CanonicalRepresentationBuilder(),
                                            new HmacSignatureCalculator());
    signingHandler.Username = "username";

    var client = new HttpClient(new RequestContentMd5Handler()
    {
        InnerHandler = signingHandler
    });
    client.PostAsJsonAsync("http://localhost:48564/api/values","some content").Wait();
}</pre>

<p>And that's basically it as far as the HTTP client is concerned. Let's have a look at the server part.  </p>


<h3>Web API service</h3>  

<p>The general logic is that we want to authenticate every incoming request (we can use per-route handlers to secure only one route, for example). Each request's authentication code will be calculated using the very same IBuildMessageRepresentation and ICalculateSignature implementations. If the signature does not match (or the content MD5 hash is different from the value in the header) we will immediately return a 401 response.  </p>

<pre class="lang:c# decode:true" title="HmacAuthenticationHandler ">public class HmacAuthenticationHandler : DelegatingHandler  
{
    private const string UnauthorizedMessage = "Unauthorized request";

    private readonly ISecretRepository _secretRepository;
    private readonly IBuildMessageRepresentation _representationBuilder;
    private readonly ICalculteSignature _signatureCalculator;

    public HmacAuthenticationHandler(ISecretRepository secretRepository,
        IBuildMessageRepresentation representationBuilder,
        ICalculteSignature signatureCalculator)
    {
        _secretRepository = secretRepository;
        _representationBuilder = representationBuilder;
        _signatureCalculator = signatureCalculator;
    }

    protected async Task&lt;bool&gt; IsAuthenticated(HttpRequestMessage requestMessage)
    {
        if (!requestMessage.Headers.Contains(Configuration.UsernameHeader))
        {
            return false;
        }

        if (requestMessage.Headers.Authorization == null 
            || requestMessage.Headers.Authorization.Scheme 
                    != Configuration.AuthenticationScheme)
        {
            return false;
        }

        string username = requestMessage.Headers.GetValues(Configuration.UsernameHeader)
                                .First();
        var secret = _secretRepository.GetSecretForUser(username);
        if (secret == null)
        {
            return false;
        }

        var representation = _representationBuilder.BuildRequestRepresentation(requestMessage);
        if (representation == null)
        {
            return false;
        }

        if (requestMessage.Content.Headers.ContentMD5 != null 
            &amp;&amp; !await IsMd5Valid(requestMessage))
        {
            return false;
        }

        var signature = _signatureCalculator.Signature(secret, representation);        

        var result = requestMessage.Headers.Authorization.Parameter == signature;

        return result;
    }

    protected async override Task&lt;HttpResponseMessage&gt; SendAsync(HttpRequestMessage request,
           System.Threading.CancellationToken cancellationToken)
    {
        var isAuthenticated = await IsAuthenticated(request);

        if (!isAuthenticated)
        {
            var response = request
                .CreateErrorResponse(HttpStatusCode.Unauthorized, UnauthorizedMessage);
            response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue(
                Configuration.AuthenticationScheme));
            return response;
        }
        return await base.SendAsync(request, cancellationToken);
    }
}</pre>

<p>The bulk of the work is done by the IsAuthenticated() method. Also, please note that we do not sign the response, meaning the client will not be able to verify the authenticity of the response (although response signing would be easy to add given the components that we already have). I have omitted the <em>IsMd5Valid()</em> method for brevity; it basically compares the content hash with the MD5 header value (just remember not to compare byte[] arrays using the == operator).</p>

<p>The configuration part is simple and can look like this (a per-route handler):  </p>

<pre class="lang:c# decode:true" title="Server configuration">config.Routes.MapHttpRoute(  
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                constraints: null,
                handler: new HmacAuthenticationHandler(new DummySecretRepository(),
                    new CanonicalRepresentationBuilder(), new HmacSignatureCalculator())
                    {
                        InnerHandler = new HttpControllerDispatcher(config)
                    },
                defaults: new { id = RouteParameter.Optional }
            );</pre>


<h3>Replay attack prevention</h3>  

<p>There is one very important flaw in the current approach. Imagine a malicious third party intercepts a valid (properly authenticated) HTTP request coming from a legitimate client (eg. using a sniffer). Such a message can be stored and resent to our server at any time, enabling the attacker to repeat operations performed previously by authenticated users. Please note that new messages still cannot be created, as the attacker neither knows the secret nor has a way of retrieving it from the intercepted data.</p>

<p>To help us fix this issue, let's make the following three observations/assumptions about the dates of requests in our system:  </p>

<ul>  
    <li>requests with different Date header values will have different signatures, thus an attacker will not be able to modify the timestamp,</li>
    <li>we assume identical, consecutive messages coming from a user will always have different timestamps - in other words, no client will want to send two or more identical messages at a given point in time,</li>
    <li>we introduce a requirement that no HTTP request can be older than X (eg. 5) minutes - if for any reason the message is delayed for more than that, it will have to be resent with a refreshed timestamp.</li>
</ul>  
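<p>The last requirement translates into a simple clock-window check; a sketch (JavaScript, using the 5-minute window from the example):</p>

```javascript
const VALIDITY_PERIOD_MINUTES = 5;

// Accept a request only if its Date header lies within the validity
// window around the server's current time (clock skew included).
function isDateValid(requestDate, now = new Date()) {
  const skewMs = Math.abs(now.getTime() - requestDate.getTime());
  return skewMs < VALIDITY_PERIOD_MINUTES * 60 * 1000;
}

const now = new Date();
console.log(isDateValid(new Date(now.getTime() - 60 * 1000), now));      // true  (1 minute old)
console.log(isDateValid(new Date(now.getTime() - 10 * 60 * 1000), now)); // false (10 minutes old)
```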

<p>Once we know the above, we can introduce the following changes into the IsAuthenticated() method:  </p>

<pre class="lang:c# decode:true" title="Replay attack prevention">protected async Task&lt;bool&gt; IsAuthenticated(HttpRequestMessage requestMessage)  
{
    //(...)
    var isDateValid = IsDateValid(requestMessage);
    if (!isDateValid)
    {
        return false;
    }
    //(...)

    //disallow duplicate messages being sent within validity window (5 mins)
    if(MemoryCache.Default.Contains(signature))
    {
        return false;
    }

    var result = requestMessage.Headers.Authorization.Parameter == signature;
    if (result == true)
    {
        MemoryCache.Default.Add(signature, username,
                DateTimeOffset.UtcNow.AddMinutes(Configuration.ValidityPeriodInMinutes));
    }
    return result;
}

private bool IsDateValid(HttpRequestMessage requestMessage)  
{
    var utcNow = DateTime.UtcNow;
    var date = requestMessage.Headers.Date.Value.UtcDateTime;
    if (date &gt;= utcNow.AddMinutes(Configuration.ValidityPeriodInMinutes)
        || date &lt;= utcNow.AddMinutes(-Configuration.ValidityPeriodInMinutes))
    {
        return false;
    }
    return true;
}</pre>

<p>For simplicity I didn't test the example with the server and client residing in different time zones (although as long as we normalize the dates to UTC we should be safe here).</p>

<p>The code is available as usual on <a href="https://bitbucket.org/pwalat/piotr.webapihmacauth">bitbucket</a>.</p>

<p>Hope this article helps some of you!</p>]]></content:encoded></item><item><title><![CDATA[Arrow function expressions in TypeScript]]></title><description><![CDATA[<p>Along with support for standard function expressions that use the <em>function</em> keyword, TypeScript also introduces a concept of arrow functions. Interestingly this feature is most likely to be included in the <a href="http://wiki.ecmascript.org/doku.php?id=harmony:harmony">next version</a> of JavaScript - ECMAScript 6. Arrow functions introduce a more compact way of defining functions, but also</p>]]></description><link>http://piotrwalat.net/arrow-function-expressions-in-typescript/</link><guid isPermaLink="false">3e934e80-b3b9-4c24-b56f-aefebf196bdf</guid><category><![CDATA[ECMAScript 6]]></category><category><![CDATA[HTML5]]></category><category><![CDATA[JavaScript context]]></category><category><![CDATA[this]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[arrow functions]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Mon, 28 Jan 2013 08:15:18 GMT</pubDate><content:encoded><![CDATA[<p>Along with support for standard function expressions that use the <em>function</em> keyword, TypeScript also introduces a concept of arrow functions. Interestingly this feature is most likely to be included in the <a href="http://wiki.ecmascript.org/doku.php?id=harmony:harmony">next version</a> of JavaScript - ECMAScript 6. Arrow functions introduce a more compact way of defining functions, but also have been designed to work particularly well with callbacks.</p>

<p>This is an introductory post that shows examples of arrow functions usage in TypeScript and explains how they differ from standard function expressions in terms of <em>this</em> keyword binding.</p>

<!--more-->

<p>If you are familiar with C#, arrow functions have a similar look and feel to <em>lambda expressions</em>. Let's start by creating a simple TypeScript function that calculates the interest earned by deposited funds.  </p>

<pre title="calculateInterest - TypeScript" class="lang:js decode:true">var calculateInterest = function (amount, interestRate, duration) {  
    return amount * interestRate * duration / 12;
}</pre>

<p>Using arrow function expression we can define this function alternatively as follows:  </p>

<pre title="calculateInterest2 - TypeScript" class="lang:js decode:true">var calculateInterest2 = (amount, interestRate, duration) =&gt; {  
    return amount * interestRate * duration / 12;
}</pre>

<p>We can simplify this further by using an expression instead of a function body block:  </p>

<pre title="calculateInterest3 - TypeScript" class="lang:js decode:true">var calculateInterest3 = (amount, interestRate, duration) =&gt; amount * interestRate * duration / 12;</pre>  

<p>This is a compact form that saves us some typing. It is worth mentioning that all three TypeScript functions will compile to identical JavaScript code.  </p>

<pre title="calculateInterest - compiled to JavaScript" class="lang:js decode:true">var calculateInterest = function (amount, interestRate, duration) {  
    return amount * interestRate * duration / 12;
};
var calculateInterest2 = function (amount, interestRate, duration) {  
    return amount * interestRate * duration / 12;
};
var calculateInterest3 = function (amount, interestRate, duration) {  
    return amount * interestRate * duration / 12;
};</pre>

<p>As you may know, TypeScript provides a way of describing object contracts in the form of <em>interfaces</em>. Arrow expressions can be used to define the function properties of an object.  </p>

<pre title="BankAccount interface" class="lang:js decode:true">interface BankAccount {  
    balance: number;

    withdraw: (amount: number) =&gt; void;
    deposit: (amount: number) =&gt; void;
    calculateInterest(interestRate: number, duration: number) : number;
}</pre>

<p>Just like standard function expressions, arrow functions support optional and default parameters.  </p>

<pre title="executeStandingOrder" class="lang:js decode:true">var executeStandingOrder  
    = function (amount: number = 100, description?: string) {
        var message = 'Standing order amount = $' + amount;
        console.log(message);
    }

var executeStandingOrder2  
     = (amount: number = 100, description?: string) =&gt; {
        var message = 'Standing order amount = $' + amount;
        console.log(message);
    }</pre>

<p>This will result in identical JavaScript being generated for both functions.  </p>

<pre title="executeStandingOrder - JavaScript" class="lang:js decode:true">var executeStandingOrder = function (amount, description) {  
    if (typeof amount === "undefined") { amount = 100; }
    var message = 'Standing order amount = $' + amount;
    console.log(message);
};</pre>

<p>At this point you may think that standard and arrow functions are pretty much the same and can always be used interchangeably. This is not true, as there is one subtle but very important difference between the two.</p>

<p>Consider the following implementation of BankAccount interface.  </p>

<pre title="account object" class="lang:js decode:true">var account: BankAccount = {  
    balance: 2020,
    withdraw: (amount: number) =&gt; {
        this.balance -= amount;
    },
    deposit: function (amount: number) {
        this.balance += amount;
    },
    calculateInterest: function (interestRate: number, duration: number) {
        return this.balance * interestRate * duration / 12;
    }
}</pre>

<p>Now let's deposit and withdraw some funds.  </p>

<pre>account.deposit(4000);
account.withdraw(2000);
console.log('Balance = $' + account.balance);</pre>

<p>If I got my math right, the resulting balance after these operations should be $4020, but when we execute this code (I am using jQuery to run it after the DOM has loaded) we will see the following output.  </p>

<pre>Balance = $6020</pre>

<p>Well, clearly something is wrong with our logic and, even worse, there is no console error that would point us in the right direction. If you have some JavaScript experience, you may immediately suspect 'this' keyword usage as the potential culprit. And you are right.</p>

<p><strong>Standard functions</strong> (ie. written using the <em>function</em> keyword) will <strong>dynamically bind <em>this</em> </strong>depending on the execution context (just like in JavaScript); <strong>arrow functions</strong>, on the other hand, will <strong>preserve the <em>this</em> of the enclosing context.</strong> This is a conscious design decision, as arrow functions in ECMAScript 6 are meant to address some problems associated with dynamically bound <em>this</em> (eg. when using the <em>function invocation pattern</em>).</p>
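<p>The rule is easy to demonstrate with plain JavaScript and a small, made-up account object: a standard function picks up its receiver at call time, while an arrow callback inside a method keeps that method's <em>this</em>.</p>

```javascript
const account = {
  balance: 2020,
  // Standard function: `this` is the receiver at call time (account).
  deposit: function (amount) {
    this.balance += amount;
  },
  // Arrow callback inside a standard method: the arrow has no `this`
  // of its own, so it sees the method's `this` - still `account`.
  withdrawViaCallback: function (amount) {
    const callback = () => { this.balance -= amount; };
    callback();
  },
};

account.deposit(4000);
account.withdrawViaCallback(2000);
console.log(account.balance); // 4020
```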

<p>To clarify what it means in practice, let's look at generated JavaScript:  </p>

<pre class="lang:js decode:true">var _this = this;  
var account = {  
    balance: 2020,
    withdraw: function (amount) {
        _this.balance -= amount;
    },
    deposit: function (amount) {
        this.balance += amount;
    }
};</pre>

<p>As you can see, for the <em>withdraw</em> arrow function TypeScript uses the <em>var _this = this</em> pattern/trick to preserve the <em>this</em> of the enclosing context (which is the global object, ie. window, in this scenario). In other words, we are trying to decrease the balance property of the global object (<em>window.balance = window.balance - amount</em>). Because <em>window.balance</em> is undefined, we end up assigning <em>NaN</em> to the property and no error is thrown (another nice 'feature' of the language ;)).</p>
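<p>The silent failure is reproducible in two lines of plain JavaScript:</p>

```javascript
// Decrementing a property that is undefined yields NaN - no error thrown.
const obj = {};
obj.balance -= 100; // undefined - 100
console.log(obj.balance); // NaN
```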

<p><em>Deposit</em>, on the other hand, is defined using a standard function expression; this means that if we invoke it as a method (ie. <em>account.deposit(4000)</em>), <em>this</em> will point to the <em>account</em> object and everything will work as expected.</p>

<p>The aforementioned behavior of arrow functions is particularly useful when dealing with callbacks. Let's add the following method to the interface, along with its implementation:  </p>

<pre class="lang:js decode:true">interface BankAccount {  
    balance: number;
    //(...)
    standingOrder: (amount:number) =&gt; void;
}

var account: BankAccount = {  
    balance: 2020,
    //...
    standingOrder: function (amount: number) {
        setTimeout(() =&gt; {
            this.balance -= amount;
            console.log('Standing order executed. Current balance is $' 
                                                    + this.balance);
        }, 700);
    }
}</pre>

<p>In the example above we are invoking <a href="https://developer.mozilla.org/en-US/docs/DOM/window.setTimeout">setTimeout </a>function that accepts a callback to be executed after a given delay. If we used standard function statement as the callback, then <em>this</em> would be bound to a global object and the code wouldn't work as expected. However, because we are using arrow function <em>this</em> will be preserved from enclosing context and will point to <em>account</em> object as intended.</p>]]></content:encoded></item><item><title><![CDATA[Getting started with OData services in ASP.NET Web API]]></title><description><![CDATA[<p>OData is an application-level protocol that has been designed to provide data interaction operations via HTTP. Besides basic data manipulation capabilities (such as adding, deleting and updating) it also provides more advanced mechanisms such as filtering and navigation between related entities.</p>

<p>In this post I am going to show how</p>]]></description><link>http://piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/</link><guid isPermaLink="false">ced8b4bd-84b0-4ec4-836b-2f6e74b41a85</guid><category><![CDATA[Entity Framework]]></category><category><![CDATA[ASP.NET Web API]]></category><category><![CDATA[HTTP]]></category><category><![CDATA[OData]]></category><category><![CDATA[REST]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Tue, 22 Jan 2013 07:35:46 GMT</pubDate><content:encoded><![CDATA[<p>OData is an application-level protocol that has been designed to provide data interaction operations via HTTP. Besides basic data manipulation capabilities (such as adding, deleting and updating) it also provides more advanced mechanisms such as filtering and navigation between related entities.</p>

<p>In this post I am going to show how to leverage some of OData features introduced to ASP.NET Web API to build example service.</p>

<!--more-->  

<h3>OData</h3>  

<p>You may be wondering why you would need another HTTP-based protocol for your web apps. Aren't simple JSON or XML services good enough? Well, in fact OData extends these and is not meant to be a replacement. It can use either XML (Atom) or JSON to represent resources and, importantly, adheres to REST principles. In some sense it builds on top of 'simple' REST HTTP services with a very clear aim - to simplify and standardize the way we manipulate and query resources and data sets. If your application is data centric, chances are you could benefit from OData. Also, if you've struggled to create a search/filter or paging API for your REST services, OData provides this as well.</p>

<p>Some examples of OData query syntax:  </p>

<ul>  
    <li><em>Entity set</em> - /Artists</li>
    <li><em>Entity by id - </em>/Artists(1)</li>
    <li><em>Sorting - </em>/Artists?$orderby=Name</li>
    <li><em>Filtering - </em>/Artists?$filter=Name eq 'Gridlock'</li>
</ul>  

<p>But that's just the tip of the iceberg.</p>
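<p>The query options above compose as ordinary URL parameters. As a quick illustration (the host and helper function are made up; option values need URL encoding):</p>

```javascript
// Compose an OData query URL: option names get a '$' prefix and
// option values are URL-encoded.
function odataQuery(baseUrl, entitySet, options = {}) {
  const query = Object.entries(options)
    .map(([name, value]) => '$' + name + '=' + encodeURIComponent(value))
    .join('&');
  return baseUrl + '/' + entitySet + (query ? '?' + query : '');
}

console.log(odataQuery('http://localhost:1234', 'Artists', { filter: "Name eq 'Gridlock'" }));
// http://localhost:1234/Artists?$filter=Name%20eq%20'Gridlock'
```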

<p>Instead of talking more, let's write some code. Fortunately, ASP.NET Web API lets us create OData endpoints quite easily.  </p>

<h3>Creating the project</h3>  

<p>Let's start by creating a new ASP.NET Web API project. We won't need any MVC related things (views, js libraries, etc.). <br>
OData functionality is provided by a separate assembly that has to be installed separately. Please note that at the time of writing the package is still pre-release and the latest version available on official nuget repository is 0.3 RC (see <a href="http://blogs.msdn.com/b/alexj/archive/2012/12/07/odata-in-webapi-rc-release.aspx">this blogpost</a> for a detailed overview of the release). <br>
Unfortunately there is an issue when using this package with the latest ODataLib, preventing some functionality from working (eg. filtering). Moreover, newer commits have introduced some breaking changes in the configuration API. Because there is no point learning a deprecated API, we will use nightly builds available at the <a href="http://www.myget.org/F/aspnetwebstacknightly/">http://www.myget.org/F/aspnetwebstacknightly/</a> nuget source. If you are unsure how to configure nuget to get these, have a look <a href="http://blogs.msdn.com/b/henrikn/archive/2012/06/01/using-nightly-asp-net-web-stack-nuget-packages-with-vs-2012-rc.aspx">here</a>.</p>

<p>Once you set up the nightly build nuget source, you can install the latest Web API OData package using Manage NuGet Packages; just make sure you select 'Include Prerelease' in the dropdown at the top.  </p>

<h3><a href="http://www.piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/nightly-3/" rel="attachment wp-att-1110"><img class="alignnone  wp-image-1110" alt="Microsoft.AspNet.WebApi.OData nightly" src="http://www.piotrwalat.net/wp-content/uploads/2013/01/nightly-1024x682.png" width="614" height="409"></a></h3>  


<p>Please bear in mind that Web API OData support is still a work in progress and may lack support for certain OData features. Having said that, it already provides an impressive set of functionality.  </p>

<h3>Data model</h3>  

<p>We need a simple model to operate on. I will use Entity Framework and SQL CE 4, but Web API's OData implementation does not constrain you to any particular data persistence technology.  </p>

<pre title="Database" class="lang:tsql decode:true">CREATE TABLE [Album]  
(
    [AlbumId] INT NOT NULL IDENTITY,
    [Title] NVARCHAR(160) NOT NULL,
    [ArtistId] INT NOT NULL,
    [GenreId] INT NOT NULL,
    [ReleaseDate] DATETIME,
    CONSTRAINT [PK_Album] PRIMARY KEY  ([AlbumId])
);

CREATE TABLE [Artist]  
(
    [ArtistId] INT NOT NULL IDENTITY,
    [Name] NVARCHAR(120),
    CONSTRAINT [PK_Artist] PRIMARY KEY  ([ArtistId])
);

CREATE TABLE [Genre]  
(
    [GenreId] INT NOT NULL IDENTITY,
    [Name] NVARCHAR(120),
    [Description] NVARCHAR(1020),
    CONSTRAINT [PK_Genre] PRIMARY KEY  ([GenreId])
);

ALTER TABLE [Album] ADD CONSTRAINT [FK_AlbumArtistId]  
    FOREIGN KEY ([ArtistId]) REFERENCES [Artist] ([ArtistId]) 
      ON DELETE NO ACTION ON UPDATE NO ACTION;

CREATE INDEX [IFK_AlbumArtistId] ON [Album] ([ArtistId]);

ALTER TABLE [Album] ADD CONSTRAINT [FK_AlbumGenreId]  
    FOREIGN KEY ([GenreId]) REFERENCES [Genre] ([GenreId]) 
      ON DELETE NO ACTION ON UPDATE NO ACTION;

CREATE INDEX [IFK_AlbumGenreId] ON [Album] ([GenreId]);</pre>  

<p>You can create a new SQL CE database in the App_Data folder and use the built-in explorer to execute the SQL code. Please note that it does not support execution of multiple statements, so you will need to execute them one by one. Once the database schema is in place we can generate the Entity Data Model using the wizard provided (it should automatically detect the database created).</p>

<p><a href="http://www.piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/entitydatamodel/" rel="attachment wp-att-1067"><img class="alignnone  wp-image-1067" title="Creating Entity Data Model" alt="EntityDataModel" src="http://www.piotrwalat.net/wp-content/uploads/2013/01/EntityDataModel.png" width="669" height="462"></a></p>

<p><a href="http://www.piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/model/" rel="attachment wp-att-1076"><img class="alignnone size-full wp-image-1076" alt="Entities" src="http://www.piotrwalat.net/wp-content/uploads/2013/01/model.png" width="509" height="513"></a></p>

<p>In the end we should get a DbContext class that will be used to perform data operations.  </p>
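<p>For reference, the generated context boils down to something like this (a simplified sketch - the wizard emits additional plumbing, but the context and entity set names below match the ones used later in the article):</p>

<pre title="AlbumsContext (simplified)" class="lang:c# decode:true">public class AlbumsContext : DbContext
{
    public DbSet&lt;Album&gt; Albums { get; set; }
    public DbSet&lt;Artist&gt; Artists { get; set; }
    public DbSet&lt;Genre&gt; Genres { get; set; }
}</pre>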

<h3>$metadata endpoint and IEdmModel</h3>  

<p>As I mentioned previously, the OData standard defines a special metadata endpoint that serves a document describing the entity sets, relationships, entity types and operations. This makes an OData service self-describing and enables client libraries to generate client-side code to represent server types and simplify service access (for example by generating proxies). The metadata endpoint should be available under /$metadata. If you are familiar with SOAP services you can think of it roughly as a WSDL analogue.  </p>

<pre title="$metadata endpoint" class="lang:sh decode:true">GET <a href="http://services.odata.org/Northwind/Northwind.svc/$metadata">http://services.odata.org/Northwind/Northwind.svc/$metadata</a></pre>  

<p>The metadata document uses <a href="http://www.odata.org/media/30002/OData%20CSDL%20Definition.html" target="_blank">OData Common Schema Definition Language (CSDL)</a>. Fortunately ASP.NET Web API can expose the $metadata endpoint for us, as long as we supply a representation of our model in the form of an <a href="http://msdn.microsoft.com/en-us/library/microsoft.data.edm.iedmmodel(v=vs.103).aspx" target="_blank">IEdmModel</a> object.  </p>

<pre title="ModelBuilder" class="lang:c# decode:true">public class ModelBuilder  
{
    public IEdmModel Build()
    {
        ODataModelBuilder modelBuilder = new ODataConventionModelBuilder();
        modelBuilder.EntitySet&lt;Album&gt;("Albums");
        modelBuilder.EntitySet&lt;Artist&gt;("Artists");
        modelBuilder.EntitySet&lt;Genre&gt;("Genres");

        return modelBuilder.GetEdmModel();
    }
}</pre>

<p>You can also build model representation explicitly using ODataModelBuilder to have more fine grained control over generated representation.  </p>

<pre title="Using ODataModelBuilder" class="lang:c# decode:true">public IEdmModel BuildExplicitly()  
{
    ODataModelBuilder modelBuilder = new ODataModelBuilder();
    EntitySetConfiguration&lt;Genre&gt; genres = modelBuilder.EntitySet&lt;Genre&gt;("Genres");
    EntityTypeConfiguration&lt;Genre&gt; genre = genres.EntityType;
    genre.HasKey(g =&gt; g.GenreId);
    genre.Property(g =&gt; g.Name);
    genre.Property(g =&gt; g.Description);

    //(...)

    return modelBuilder.GetEdmModel();
}</pre>

<h3>Enabling OData</h3>  

<p>Microsoft.AspNet.WebApi.OData package provides a set of classes that are supposed to plug into Web API extensibility points in order to provide OData support (formatters, path handling, etc.).  </p>

<blockquote>The RC version had a single HttpConfiguration.EnableOData(IEdmModel) helper method that did all this in one go. However, this approach wasn't well suited to scenarios where we want to support different models in one application. Because of this, the latest versions use per-route configuration (which is more flexible).</blockquote>  

<pre title="App_Start/WebApiConfig.cs" class="lang:c# decode:true">public static class WebApiConfig  
{
    public static void Register(HttpConfiguration config)
    {
        var modelBuilder = new ModelBuilder();
        IEdmModel model = modelBuilder.Build();
        config.Routes.MapODataRoute("OData", null, model);
        config.EnableQuerySupport();
    }
}</pre>

<p>This code (executed from Global.asax.cs) does two things:  </p>

<ul>  
    <li>registers our model representation (IEdmModel) with a route - we pass null as the route prefix, so it becomes the root route; we could instead have specified something like 'albums', making the OData Albums service available at /albums rather than / (which would also let us serve multiple data models from one app)</li>
    <li>enables query support (EnableQuerySupport()) for actions returning IQueryable&lt;T&gt; (we will show what that's about when creating the controllers)</li>
</ul>  
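<p>For example, mapping the same model under an 'albums' prefix instead of the root route could look like this (the route name here is arbitrary):</p>

<pre title="Prefixed OData route" class="lang:c# decode:true">// /albums/Artists, /albums/$metadata, etc.
config.Routes.MapODataRoute("AlbumsOData", "albums", model);
// a second IEdmModel could be mapped under another prefix
// in the same application</pre>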

<p>Now our service should automagically know how to handle OData ~/$metadata request. Cool, isn't it :) ?  </p>

<pre title="$metadata" class="lang:xhtml decode:true">&lt;edmx:Edmx xmlns:edmx="<a href="http://schemas.microsoft.com/ado/2007/06/edmx">http://schemas.microsoft.com/ado/2007/06/edmx</a>" Version="1.0"&gt;  
  &lt;edmx:DataServices xmlns:m="<a href="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">http://schemas.microsoft.com/ado/2007/08/dataservices/metadata</a>" m:DataServiceVersion="1.0"&gt;
    &lt;Schema xmlns="<a href="http://schemas.microsoft.com/ado/2009/11/edm">http://schemas.microsoft.com/ado/2009/11/edm</a>" Namespace="Piotr.ODataWebApiService.Service.Models"&gt;
      &lt;EntityType Name="Album"&gt;...&lt;/EntityType&gt;
      &lt;EntityType Name="Artist"&gt;...&lt;/EntityType&gt;
      &lt;EntityType Name="Genre"&gt;...&lt;/EntityType&gt;
      &lt;Association 
        Name="Piotr_ODataWebApiService_Service_Models_Album_Artist_Piotr_ODataWebApiService_Service_Models_Artist_ArtistPartner"&gt;...&lt;/Association&gt;
      &lt;Association Name="Piotr_ODataWebApiService_Service_Models_Album_Genre_Piotr_ODataWebApiService_Service_Models_Genre_GenrePartner"&gt;...&lt;/Association&gt;
      &lt;Association Name="Piotr_ODataWebApiService_Service_Models_Artist_Albums_Piotr_ODataWebApiService_Service_Models_Album_AlbumsPartner"&gt;...&lt;/Association&gt;
      &lt;Association Name="Piotr_ODataWebApiService_Service_Models_Genre_Albums_Piotr_ODataWebApiService_Service_Models_Album_AlbumsPartner"&gt;...&lt;/Association&gt;
    &lt;/Schema&gt;
    &lt;Schema xmlns="<a href="http://schemas.microsoft.com/ado/2009/11/edm">http://schemas.microsoft.com/ado/2009/11/edm</a>" Namespace="Default"&gt;...&lt;/Schema&gt;
  &lt;/edmx:DataServices&gt;
&lt;/edmx:Edmx&gt;</pre>

<h3>Controllers</h3>  

<p>Now it's time to reap the reward. We need to add controllers that will actually expose our entities as OData resources. As you will see, this is not very different from writing 'regular' CRUD controllers, and it is very easy to expose an OData entity set.  </p>

<pre title="ArtistsController exposing entity set" class="lang:c# decode:true">[ODataRouting]  
[ODataFormatting]
public class ArtistsController : ApiController  
{
    private AlbumsContext db = new AlbumsContext();

    // GET /Artists
    // GET /Artists?$filter=startswith(Name,'Grid')
    [Queryable]
    public IQueryable&lt;Artist&gt; Get()
    {
        return db.Artists;
    }        

    protected override void Dispose(bool disposing)
    {
        db.Dispose();
        base.Dispose(disposing);
    }
}</pre>

<p><em>/Artists</em> resource should now become available along with all the fancy filtering functionality for entity sets.</p>

<p>OData-specific routing and formatting is provided by the controller attributes. Alternatively, instead of deriving ArtistsController from ApiController, we could have derived from ODataController - a helper abstract class already decorated with all the necessary attributes (I would recommend this approach, as additional functionality may be introduced in the future).</p>
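<p>With ODataController, the read-only example from above shrinks to the following (a sketch; ODataController ships in the same Web API OData package):</p>

<pre title="Deriving from ODataController" class="lang:c# decode:true">public class ArtistsController : ODataController
{
    private AlbumsContext db = new AlbumsContext();

    // routing and formatting attributes are inherited from the base class
    [Queryable]
    public IQueryable&lt;Artist&gt; Get()
    {
        return db.Artists;
    }

    protected override void Dispose(bool disposing)
    {
        db.Dispose();
        base.Dispose(disposing);
    }
}</pre>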

<p>Now we can continue by providing other actions such as add (POST), update (PUT), partial update (PATCH) and delete.  </p>

<pre title="Artists controller" class="lang:c# decode:true">[ODataRouting]  
[ODataFormatting]
public class ArtistsController : ApiController  
{
    private AlbumsContext _db = new AlbumsContext();

    // GET /Artists
    // GET /Artists?$filter=startswith(Name,'Grid')
    [Queryable]
    public IQueryable&lt;Artist&gt; Get()
    {
        return _db.Artists;
    }

    // GET /Artists(2)
    public HttpResponseMessage Get([FromODataUri]int id)
    {
        Artist artist = _db.Artists.SingleOrDefault(b =&gt; b.ArtistId == id);
        if (artist == null)
        {
            return Request.CreateResponse(HttpStatusCode.NotFound);
        }

        return Request.CreateResponse(HttpStatusCode.OK, artist);
    }

    public HttpResponseMessage Put([FromODataUri] int id, Artist artist)
    {
        if (!_db.Artists.Any(a =&gt; a.ArtistId == id))
        {
            return Request.CreateResponse(HttpStatusCode.NotFound);
        }
        //overwrite any existing id, as url is more explicit
        artist.ArtistId = id;
        _db.Entry(artist).State = EntityState.Modified;

        try
        {
            _db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            return Request.CreateResponse(HttpStatusCode.NotFound);
        }

        return Request.CreateResponse(HttpStatusCode.NoContent);
    }

    public HttpResponseMessage Post(Artist artist)
    {
        var odataPath = Request.GetODataPath();
        if (odataPath == null)
        {
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
                "ODataPath not present in the request.");
        }

        var entitySetPathSegment
            = odataPath.Segments.FirstOrDefault() as EntitySetPathSegment;

        if (entitySetPathSegment == null)
        {
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest,
                "ODataPath does not start with entity set path segment");
        }

        Artist addedArtist = _db.Artists.Add(artist);
        _db.SaveChanges();
        var response = Request
            .CreateResponse(HttpStatusCode.Created, addedArtist);

        response.Headers.Location = new Uri(Url.ODataLink(
                              entitySetPathSegment,
                              new KeyValuePathSegment(ODataUriUtils
                            .ConvertToUriLiteral(addedArtist.ArtistId
                            , ODataVersion.V3))));
        return response;
    }

    public HttpResponseMessage Patch([FromODataUri] int id,
        Delta&lt;Artist&gt; artistPatch)
    {
        Artist artist = _db.Artists
            .SingleOrDefault(p =&gt; p.ArtistId == id);
        if (artist == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        artistPatch.Patch(artist);
        _db.SaveChanges();

        return Request.CreateResponse(HttpStatusCode.NoContent);
    }

    public HttpResponseMessage Delete([FromODataUri] int id)
    {
        Artist artist = _db.Artists.Find(id);
        if (artist == null)
        {
            return Request.CreateResponse(HttpStatusCode.NotFound);
        }
        {
            return Request.CreateResponse(HttpStatusCode.NotFound);
        }

        _db.Artists.Remove(artist);

        _db.SaveChanges();
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }

    protected override void Dispose(bool disposing)
    {
        _db.Dispose();
        base.Dispose(disposing);
    }
}</pre>

<p>Please note that current builds use JSON as the default format. You can switch to XML by sending an appropriate Accept header with the request.  </p>
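<p>For example, a raw request asking for the ATOM/XML representation could look like this (illustrative - the exact media types accepted may vary between builds):</p>

<pre title="Requesting XML via Accept header" class="lang:sh decode:true">GET /Artists HTTP/1.1
Host: localhost:2537
Accept: application/atom+xml</pre>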

<h3>Security and mass assignment</h3>  

<p>In the example above we expose our model directly to the user. We also assume that users can modify data entities, including all their properties, without restrictions (which in this particular case is not a security problem). <br>
In real-life scenarios that involve authorization (and authentication), such an approach is often unacceptable, as we may inadvertently expose properties that were never supposed to be modified by a given user. This is especially true when our model contains properties that directly affect the authorization or authentication process (eg. the infamous isAdmin flag or a property denoting the owner of an object). Usually in such cases we need an extra layer of security and validation that ensures a user does not execute an action he is not permitted to (eg. modify an object he does not own). If possible, we can also introduce a flattened-out DTO model that is mapped to the data model (eg. using AutoMapper). Such DTOs are meant to 'hide' the persistable data model.  </p>
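<p>A minimal sketch of the DTO idea (ArtistDto and the mapping code below are hypothetical and not part of the sample project):</p>

<pre title="DTO mapping sketch" class="lang:c# decode:true">public class ArtistDto
{
    // only the properties the user is allowed to see and modify
    public int ArtistId { get; set; }
    public string Name { get; set; }
}

// at application startup
Mapper.CreateMap&lt;Artist, ArtistDto&gt;();

// in the controller
public ArtistDto Get([FromODataUri] int id)
{
    Artist artist = _db.Artists.Find(id);
    return Mapper.Map&lt;Artist, ArtistDto&gt;(artist);
}</pre>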

<h3>Testing the service</h3>  

<p>I will use Fiddler composer to test the service.</p>

<p><a href="http://www.piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/genres_post/" rel="attachment wp-att-1136"><img class="alignnone size-full wp-image-1136" alt="POST Genre" src="http://www.piotrwalat.net/wp-content/uploads/2013/01/Genres_Post.png" width="486" height="399"></a></p>

<p>Note the <em>Content-Type: application/json</em> header. This should add a new genre. If we wanted to make a partial update to the entity, we would use the PATCH verb as follows.</p>

<p><a href="http://www.piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/patch/" rel="attachment wp-att-1140"><img class="alignnone size-full wp-image-1140" alt="Patch" src="http://www.piotrwalat.net/wp-content/uploads/2013/01/Patch.png" width="605" height="375"></a></p>

<p>Now the genre with id=3 will have an updated description.</p>

<p>Finally, let's issue a query against Artists entity set that will sort the results and return their count: <br>
<em><a href="http://localhost:2537/Artists?$orderby=Name&amp;$inlinecount=allpages">http://localhost:2537/Artists?$orderby=Name&amp;$inlinecount=allpages</a></em></p>

<p><a href="http://www.piotrwalat.net/getting-started-with-odata-services-in-asp-net-web-api/entitysetquery/" rel="attachment wp-att-1141"><img class="alignnone size-full wp-image-1141" alt="EntitySetQuery" src="http://www.piotrwalat.net/wp-content/uploads/2013/01/EntitySetQuery.png" width="665" height="508"></a></p>

<p>As you can see, we didn't have to write any special logic to support this feature - it was all provided by the framework. If we wanted, we could also have provided custom actions in the controllers, as we are not limited to OData-specific CRUD operations.</p>

<p>OData is without a doubt an interesting protocol. It feels a little bit like REST services on steroids :)</p>

<p>The source code is available as usually on <a href="https://bitbucket.org/pwalat/piotr.odatawebapiservice/">bitbucket</a>.</p>]]></content:encoded></item><item><title><![CDATA[Simple framerate counter for MonoGame games]]></title><description><![CDATA[<p>Fluid and smooth user experience is a key element of any good Windows Store app. If you are writing a game, you most likely will want to measure and display framerate related data (current framerate, min. framerate, etc.) . In this example I am going to show how to add a</p>]]></description><link>http://piotrwalat.net/framerate-counter-in-monogame-windows-8-apps/</link><guid isPermaLink="false">b4337dd4-1b0e-4443-a6f5-5f66f3ebea24</guid><category><![CDATA[EnableFrameRateCounter]]></category><category><![CDATA[MonoGame]]></category><category><![CDATA[Performance]]></category><category><![CDATA[Windows 8 games]]></category><category><![CDATA[XAML]]></category><category><![CDATA[FPS counter]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Tue, 06 Nov 2012 10:00:50 GMT</pubDate><content:encoded><![CDATA[<p>Fluid and smooth user experience is a key element of any good Windows Store app. If you are writing a game, you most likely will want to measure and display framerate related data (current framerate, min. framerate, etc.) . In this example I am going to show how to add a simple FPS counter to a MonoGame powered game.</p>

<!--more-->  

<h3>XAML's DebugSettings.EnableFrameRateCounter</h3>  

<p>If you are familiar with XAML development, you might have used DebugSettings.EnableFrameRateCounter flag to display performance and framerate related statistics. This can be enabled in <em>App.xaml.cs</em> by adding one line to the constructor.  </p>

<pre class="lang:c# decode:true" title="DebugSettings.EnableFrameRateCounter = true;">public App()  
{
    InitializeComponent();
    Suspending += OnSuspending;
    DebugSettings.EnableFrameRateCounter = true;
}</pre>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/11/FramerateXaml.png"><img class="alignnone size-full wp-image-1030" title="FramerateXaml" src="http://www.piotrwalat.net/wp-content/uploads/2012/11/FramerateXaml.png" alt="" width="319" height="19"></a></p>

<p>It is a very useful feature; this however won't work when using a full-screen SwapChainBackgroundPanel (which is the default technique for MonoGame + XAML Windows Store apps).  </p>

<h3>Simple MonoGame framerate counter</h3>  

<p>The frame rate calculation algorithm we use is simple and based on the following principles:  </p>

<ul>  
    <li>every time a scene has been drawn using <em>Draw() </em>increase <em>FrameCounter</em> by 1,</li>
    <li>in <em>Update()</em> method measure <em>ElapsedTime </em>which is the time (in milliseconds) since last frame rate update,</li>
    <li>if <em>ElapsedTime</em> is more than 1 second (1000 ms), use <em>FrameCounter</em> as the new FPS value.</li>
</ul>  

<div>Here is a class that will store frame-rate related data.</div>  

<pre class="lang:c# decode:true" title="FrameRate">public class FrameRate  
{
    /// &lt;summary&gt;
    /// Current FPS
    /// &lt;/summary&gt;
    public int Rate;

    /// &lt;summary&gt;
    /// Frame counter
    /// &lt;/summary&gt;
    public int Counter;

    /// &lt;summary&gt;
    /// Time elapsed from the last update
    /// &lt;/summary&gt;
    public float ElapsedTime;

    /// &lt;summary&gt;
    /// Min FPS
    /// &lt;/summary&gt;
    public int MinRate = 999;

    /// &lt;summary&gt;
    /// Seconds elapsed
    /// &lt;/summary&gt;
    public int SecondsElapsed;
}</pre>

<div>Instead of using a monolithic <em>DrawableGameComponent</em> (which would update, draw and store the state), I usually prefer to separate update and draw logic into separate classes.</div>  

<pre class="lang:c# decode:true" title="IUpdate">public interface IUpdate&lt;T&gt;  
        where T : class
{
    void Update(GameTime gameTime, T objectToUpdate);
}</pre>

<pre class="lang:c# decode:true" title="IDraw">public interface IDraw&lt;T&gt;  
        where T:class
{
    void LoadContent(ContentManager contentManager);
    void Draw(DrawingContext context, T objectToDraw);
}</pre>

<p>DrawingContext is just a placeholder for SpriteBatch (and GraphicsDevice if you need it). Feel free to pass in SpriteBatch directly - this will simplify the IDraw signature a little bit.  </p>

<pre title="DrawingContext">public class DrawingContext  
{
    public SpriteBatch SpriteBatch { get; set; }
    public GraphicsDevice GraphicsDevice { get; set; }
}</pre>

<p>Here is the logic that will be executed in<em> Game.Update()</em> method.  </p>

<pre class="lang:c# decode:true" title="FrameRateUpdater">public class FrameRateUpdater : IUpdate&lt;FrameRate&gt;  
{
    public void Update(GameTime gameTime, FrameRate frameRate)
    {
        frameRate.ElapsedTime 
            += (float)gameTime.ElapsedGameTime.TotalMilliseconds;

        if (frameRate.ElapsedTime &gt;= 1000f)
        {
            frameRate.ElapsedTime -= 1000f;
            frameRate.Rate = frameRate.Counter;
            if(frameRate.SecondsElapsed &gt; 0 
                &amp;&amp; frameRate.MinRate &gt; frameRate.Rate)
            {
                frameRate.MinRate = frameRate.Rate;
            }
            frameRate.Counter = 0;
            frameRate.SecondsElapsed++;
        }
    }
}</pre>

<pre class="lang:c# decode:true" title="FrameRateDrawer ">public class FrameRateDrawer : IDraw&lt;FrameRate&gt;  
{
    SpriteFont spriteFont;
    private const string FontName = "MapFont";
    private readonly Vector2 _fpsPositionBlack = new Vector2(20,20);
    private readonly Vector2 _fpsPositionWhite = new Vector2(20,20);
    private readonly Vector2 _minPositionBlack = new Vector2(54, 20);
    private readonly Vector2 _minPositionWhite = new Vector2(55, 20);

    public void LoadContent(ContentManager contentManager)
    {
        spriteFont = contentManager.Load&lt;SpriteFont&gt;(FontName);
    }

    public void Draw(DrawingContext context, FrameRate objectToDraw)
    {
        context.SpriteBatch.DrawString(spriteFont,
            objectToDraw.Rate.ToString(), _fpsPositionBlack, Color.Black);
        context.SpriteBatch.DrawString(spriteFont,
            objectToDraw.Rate.ToString(), _fpsPositionWhite, Color.White);

        context.SpriteBatch.DrawString(spriteFont,
            objectToDraw.MinRate.ToString(), _minPositionBlack, Color.Black);
        context.SpriteBatch.DrawString(spriteFont,
            objectToDraw.MinRate.ToString(), _minPositionWhite, Color.Orange);

        objectToDraw.Counter++;
    }
}</pre>

<p>Please note that instead of creating new <em>Vector2</em> objects to position the strings every time we call <em>SpriteBatch.DrawString()</em>, we reuse instances created earlier. This aims to reduce unnecessary garbage collection; even though this may seem obvious to many, such optimizations are easily overlooked.</p>

<p>The Draw method has a twofold purpose - obviously it needs to draw the frame-rate strings, but apart from that we need it to increase the frame counter. To be more flexible, you may want to measure the strings instead of using hard-coded position values.</p>
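<p>For instance, a sketch using SpriteFont.MeasureString to position the min-FPS string right after the current-FPS string; to stay allocation-friendly, recompute and cache the result only when the rate actually changes:</p>

<pre title="Measuring strings (sketch)" class="lang:c# decode:true">// note: _minPositionWhite would have to lose its readonly modifier
Vector2 fpsSize = spriteFont.MeasureString(objectToDraw.Rate.ToString());
// start the min-FPS string a few pixels after the FPS string
_minPositionWhite = new Vector2(_fpsPositionWhite.X + fpsSize.X + 10,
                                _fpsPositionWhite.Y);</pre>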

<p>Now, in your Game class you can add and instantiate the following fields:  </p>

<pre class="lang:c# decode:true" title="Game fields">private FrameRate _frameRate = new FrameRate();  
private FrameRateDrawer _frameRateDrawer = new FrameRateDrawer();  
private FrameRateUpdater _frameRateUpdater = new FrameRateUpdater();</pre>  

<pre class="lang:c# decode:true">protected override void Update(GameTime gameTime)  
{
    //...
    _frameRateUpdater.Update(gameTime, _frameRate);
    //...
}

protected override void Draw(GameTime gameTime)  
{
    GraphicsDevice.Clear(Color.Black);
    _spriteBatch.Begin();
    //...     
    _frameRateDrawer.Draw(_drawingContext, _frameRate);
    //...
    _spriteBatch.End();
    //...
}</pre>

<p>Also don't forget to load the sprite font and initialize content:  </p>

<pre class="lang:c# decode:true" title="Loading content">protected override void LoadContent()  
{
    _spriteBatch = new SpriteBatch(GraphicsDevice);
    _background = Content.Load&lt;Texture2D&gt;("background");
    _frameRateDrawer.LoadContent(Content);
    //..
    _drawingContext.SpriteBatch = _spriteBatch;
    _drawingContext.GraphicsDevice = _graphics.GraphicsDevice;
}</pre>

<p>&nbsp;</p>

<p>The end result will look somewhat like this. It is definitely a no-frills solution, but it's not something that is normally visible to the end user anyway.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/11/MonoFps.png"><img class="alignnone size-full wp-image-1032" title="MonoFps" src="http://www.piotrwalat.net/wp-content/uploads/2012/11/MonoFps.png" alt="" width="235" height="70"></a></p>

<p>I imagine that frame-rate data (especially min/avg. FPS) could also be used to gather performance data directly from the users running the game, especially if you cannot test it on all device types that the game is intended for (this shouldn't be that hard with Windows 8/Windows RT). Although this is material for another post :)</p>

<p>The code is not Windows 8 specific in any way and should work on all MonoGame supported platforms, but I have only tested it in a Windows 8 XAML scenario.</p>]]></content:encoded></item><item><title><![CDATA[Client certificate authentication in ASP.NET Web API and Windows Store apps]]></title><description><![CDATA[<p>SSL over HTTPS provides a mechanism for mutual server-client authentication. This can be used as an alternative to more commonly used username/password based approach. In this post I am going to show how to set up client certificate authentication in ASP.NET Web API application and how to use</p>]]></description><link>http://piotrwalat.net/client-certificate-authentication-in-asp-net-web-api-and-windows-store-apps/</link><guid isPermaLink="false">1336bb82-b71a-48f4-93cb-10822dabb764</guid><category><![CDATA[certificates]]></category><category><![CDATA[client certificate authentication]]></category><category><![CDATA[delegating handlers]]></category><category><![CDATA[ImportPfxDataAsync]]></category><category><![CDATA[self-signed certificate]]></category><category><![CDATA[ssl]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Fri, 26 Oct 2012 09:00:47 GMT</pubDate><content:encoded><![CDATA[<p>SSL over HTTPS provides a mechanism for mutual server-client authentication. This can be used as an alternative to more commonly used username/password based approach. In this post I am going to show how to set up client certificate authentication in ASP.NET Web API application and how to use delegating handlers to provide custom logic that handles certificates and allows to introduce arbitrary authentication mechanism (eg. role based authentication). I will also show how to import client certificates into XAML Windows Store app and how to use it to authenticate to a HTTP service.</p>

<!--more-->

<p>You can skip next step if you already have certificates and do not need to create self-signed surrogates.  </p>

<h3>Generating certificates</h3>  

<p>In order to issue client certificate we will need to create a Certificate Authority (CA) using a self-signed certificate. This will be also used to create server certificate that is imported into IIS (makecert should be available in VS command prompt).  </p>

<pre class="lang:sh decode:true">makecert -r -pe -n "CN=Awesome CA" -ss CA -a sha1 -sky signature -cy authority -sv AwesomeCA.pvk AwesomeCA.cer  
makecert -pe -n "CN=127.0.0.1" -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic AwesomeCA.cer -iv AwesomeCA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider"  -sy 12 -sv LocalServer.pvk LocalServer.cer  
pvk2pfx -pvk LocalServer.pvk -spc LocalServer.cer -pfx LocalServer.pfx</pre>  

<p>After running these commands and entering the password several times, you should end up with a couple of certificate files. Now you will need to tell your computer to trust the newly created CA (it is generally a good idea to remove that trust once you are finished testing). To do that, start mmc.exe as an Administrator, open Add/Remove Snap-In (Ctrl+M) and add the Certificates snap-in; when prompted with the certificate store option choose <em>Computer account</em>. Under Certificates (Local Computer) choose All Tasks and import AwesomeCA.cer. It should become visible in the list along with the other Trusted Root CAs.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/ImportCA.png"><img class="alignnone size-full wp-image-985" title="ImportCA" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/ImportCA.png" alt="" width="628" height="422"></a>  </p>

<h3>Setting up IIS</h3>  

<p>The next step consists of setting up SSL in IIS. I am using Windows 8 that runs IIS 8, but instructions for 7/7.5 should be very similar/the same.</p>

<p>Open IIS Manager and go to the Server Certificates panel. Then click Import... and import your LocalServer.pfx.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/ssliis.png"><img class="alignnone size-full wp-image-986" title="ssliis" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/ssliis.png" alt="" width="508" height="187"></a></p>

<p>Then under Default Web Site go to Bindings and make sure that the https binding is properly set up (if it doesn't exist, create it) and that newly created certificate is mapped to that binding.</p>

<p>Now create a basic ASP.NET Web API application - the template provides ValuesController by default. Make sure that the user that you are running Visual Studio as has sufficient permissions to create new virtual directories in IIS.</p>

<p>Go to the project properties and make sure you use IIS as the server (not IIS Express); also use https:// instead of http:// for the Project Url option, and create the virtual directory if necessary. After you do this, go back to IIS Manager and, under SSL Settings for the newly created virtual directory, check 'Require SSL' and 'Require' under client certificates. I am using 127.0.0.1 as the host address here, but in real life you would probably use a domain name and not an IP address. Remember that the certificate has been issued for that particular name (for example localhost and not 127.0.0.1).</p>

<p>Now when you go to the newly created ASP.NET Web API service using your web browser you will see a 403 error; this is because the application requires the client to present an SSL certificate and the browser does not have one.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/sslrequired1.png"><img class="alignnone size-full wp-image-997" title="sslrequired" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/sslrequired1.png" alt="" width="794" height="222"></a></p>

<p>&nbsp;</p>

<p>Let's create a client cert signed by our CA:  </p>

<pre class="lang:sh decode:true">makecert -pe -n "CN=piotr@piotrwalat.net" -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.2 -ic AwesomeCA.cer -iv AwesomeCA.pvk -sv Client.pvk Client.cer  
pvk2pfx -pvk Client.pvk -spc Client.cer -pfx Client.pfx -po PASSWORD</pre>  

<p>You should end up with a Client.pfx certificate that can be imported into your user store (and later on into the Windows 8 app certificate store). Import the file (make sure to mark it as exportable - we will need this later) and navigate to the /api/values endpoint - this time you should see the data.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/browserauth.png"><img class="alignnone size-full wp-image-993" title="browserauth" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/browserauth.png" alt="" width="551" height="197"></a>  </p>

<h3>Client certificates in Windows 8</h3>  

<p>Now, let's move on and create a sample Windows 8 XAML app that will consume the service. Windows 8 apps run in a sandboxed environment - this also means that they get their own certificate stores. By enabling the Shared User Certificates capability in Package.appxmanifest you allow the app to reach for certificates outside of its store, which is useful for example when dealing with smartcards.</p>

<p>You can include certificates that should ship with your app in the Certificates declaration inside of Package.appxmanifest. This is an XML file so you can either edit the source directly or use the designer that ships with VS 2012. Copy the CA .cer file over to the project folder and add it to the Certificates declaration, using "Root" as the store name.  </p>

<pre class="lang:xhtml decode:true">  &lt;Capabilities&gt;  
    &lt;Capability Name="sharedUserCertificates" /&gt;
    &lt;Capability Name="enterpriseAuthentication" /&gt;
    &lt;Capability Name="privateNetworkClientServer" /&gt;
    &lt;Capability Name="internetClient" /&gt;
  &lt;/Capabilities&gt;
  &lt;Extensions&gt;
    &lt;Extension Category="windows.certificates"&gt;
      &lt;Certificates&gt;
        &lt;Certificate StoreName="Root" Content="AwesomeCA.cer" /&gt;
        &lt;SelectionCriteria AutoSelect="true" /&gt;
      &lt;/Certificates&gt;
    &lt;/Extension&gt;
  &lt;/Extensions&gt;</pre>

<p>&nbsp;</p>

<p>There is a subtle bug in the way that Windows 8 XAML apps handle certificates that we need to apply a workaround for (jpsanders mentions it <a href="http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/0d005703-0ec3-4466-b389-663608fff053">here</a>). Make sure to add the following certificate policy OID to the client cert (I am using mmc.exe + the certificates snap-in to do that).</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/oid.png"><img class="alignnone size-full wp-image-1002" title="oid" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/oid.png" alt="" width="416" height="394"></a></p>

<p>Also make sure that <em>Client authentication </em>is selected as one of certificate purposes. Export the certificate as .pfx and copy it to the project directory.</p>

<p>Here is the code that can be used to import the file to app certificate storage.  </p>

<pre class="lang:c# decode:true">StorageFolder packageLocation = Windows.ApplicationModel.Package.Current.InstalledLocation;  
StorageFolder certificateFolder = await packageLocation.GetFolderAsync("Certificates");  
StorageFile certificate = await certificateFolder.GetFileAsync("Client.pfx");

IBuffer buffer = await Windows.Storage.FileIO.ReadBufferAsync(certificate);  
string encodedString = Windows.Security.Cryptography.CryptographicBuffer.EncodeToBase64String(buffer);

await CertificateEnrollmentManager.ImportPfxDataAsync(  
    encodedString,
    "PASSWORD",
    ExportOption.NotExportable,
    KeyProtectionLevel.NoConsent,
    InstallOptions.None,
    "Client certificate");</pre>

<p>In order for HttpClient to be able to use the certificate, we need to create an instance of HttpClientHandler and tell it to pick the certificate automatically.  </p>

<pre class="lang:c# decode:true">HttpClientHandler messageHandler = new HttpClientHandler();  
messageHandler.ClientCertificateOptions = ClientCertificateOption.Automatic;  
HttpClient httpClient = new HttpClient(messageHandler);  
HttpResponseMessage result = await httpClient.GetAsync("https://127.0.0.1/Piotr.Win8CertAuth.Api/api/values");</pre>  

<p>Once you run this code you should be able to successfully connect to ASP.NET Web API service and retrieve the data.</p>

<p>&nbsp;</p>

<h3>Adding delegating handler</h3>  

<p>So far the mechanism hasn't really been ASP.NET Web API specific and would work in any ASP.NET application. It is also pretty basic, without any logic to extend certificate validation or provide any kind of certificate-to-user mapping.</p>

<p>So how do we actually retrieve the certificate in ASP.NET Web API? It turns out that's really easy, as there is a <em>HttpRequestMessage.GetClientCertificate()</em> extension method that returns the certificate object. Let's create a delegating handler that will intercept the request and inject certificate related logic into the pipeline.  </p>

<pre class="lang:c# decode:true">public interface IValidateCertificates  
{
    bool IsValid(X509Certificate2 certificate);
    IPrincipal GetPrincipal(X509Certificate2 certificate2);
}

public class BasicCertificateValidator : IValidateCertificates  
{
    public bool IsValid(X509Certificate2 certificate)
    {
        return certificate.Issuer == "CN=Awesome CA"
               &amp;&amp; certificate.GetCertHashString() == "B04AED3DA6CB4BD2F817EE2C726183C00035F4C6";
        //make a better check here (eg. against mapping, verify the chain etc)
    }

    public IPrincipal GetPrincipal(X509Certificate2 certificate2)
    {
        return new GenericPrincipal(
            new GenericIdentity(certificate2.Subject), new[] { "User" });
    }
}

public class CertificateAuthHandler : DelegatingHandler  
{
    public IValidateCertificates CertificateValidator { get; set; }

    public CertificateAuthHandler()
    {
        CertificateValidator = new BasicCertificateValidator();
    }

    protected override System.Threading.Tasks.Task&lt;HttpResponseMessage&gt;
        SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        X509Certificate2 certificate = request.GetClientCertificate();
        if (certificate == null || !CertificateValidator.IsValid(certificate))
        {
            return Task&lt;HttpResponseMessage&gt;.Factory.StartNew(
                () =&gt; request.CreateResponse(HttpStatusCode.Unauthorized));

        }
        Thread.CurrentPrincipal = CertificateValidator.GetPrincipal(certificate);
        return base.SendAsync(request, cancellationToken);
    }
}</pre>

<p>Now, we can add the handler instance to global configuration object.  </p>

<pre class="lang:c# decode:true">GlobalConfiguration.Configuration.MessageHandlers.Add( new CertificateAuthHandler());</pre>  

<p>From now on, for every user that has a valid client certificate, we will create an IPrincipal object and assign it to the current thread. This means that you can use the Authorize attribute to provide more granular authorization for your services.  </p>

<pre class="lang:c# decode:true">[Authorize(Roles = "User")]  
public IEnumerable&lt;string&gt; Get()  
{
    return new string[] { "value1", "value2" };
}</pre>

<p>Hope you find this post helpful (source code coming soon).</p>

<p>&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[ASP.NET Web API file download service with resume support]]></title><description><![CDATA[<p>ASP.NET Web API provides out of the box support for streaming binary files to the client. However for more advanced scenarios you need to add custom logic to handle pause/resume functionality by handling appropriate HTTP Headers. In this post I will try to address this problem and build</p>]]></description><link>http://piotrwalat.net/file-download-service-with-resume-support-using-asp-net-web-api/</link><guid isPermaLink="false">c1109cf3-8e4f-47da-8f16-e82eb4620d01</guid><category><![CDATA[ASP.NET Web API]]></category><category><![CDATA[ASP.NET]]></category><category><![CDATA[file download]]></category><category><![CDATA[HEAD verb]]></category><category><![CDATA[memory mapped files]]></category><category><![CDATA[Range]]></category><category><![CDATA[resume]]></category><category><![CDATA[Accept-Ranges]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Thu, 18 Oct 2012 08:00:42 GMT</pubDate><content:encoded><![CDATA[<p>ASP.NET Web API provides out of the box support for streaming binary files to the client. However for more advanced scenarios you need to add custom logic to handle pause/resume functionality by handling appropriate HTTP Headers. In this post I will try to address this problem and build a resume-supporting file download service using two different approaches:  </p>

<ul>  
    <li>stream wrapper for FileStream that can return partial data,</li>
    <li>memory mapped files.</li>
</ul>  

<div>Memory mapped files seem to be an interesting candidate as they may offer performance benefits such as memory caching and optimized file access managed by virtual memory manager.</div>  

<div><!--more--></div>  

<p>&nbsp;</p>

<h3>Simple file download service</h3>  

<p>Thanks to the StreamContent class, creating a basic file download service in ASP.NET Web API is a relatively straightforward task. Let's start by implementing a basic scenario, where files are served from a directory. <br>
Instead of dealing with file system access directly in controllers I usually like to encapsulate this functionality in a dedicated object, which makes unit testing/mocking easier and keeps the code tidier. For our examples we will create an <em>IFileProvider</em> interface that exposes three operations:  </p>

<pre title="IFileProvider" class="lang:c# decode:true">public interface IFileProvider  
{
    bool Exists(string name);
    FileStream Open(string name);
    long GetLength(string name);
}</pre>

<p>The actual implementation will use app settings in web.config file to configure storage folder location.  </p>

<pre title="IFileProvider implementation using app settings" class="lang:c# decode:true">public class FileProvider : IFileProvider  
{
    private readonly string _filesDirectory;
    const string DefaultFileLocation = "Files";
    private const string AppSettingsKey = "FileProvider.FilesLocation";

    public FileProvider()
    {
        _filesDirectory = DefaultFileLocation;
        var fileLocation = ConfigurationManager.AppSettings[AppSettingsKey];
        if(!String.IsNullOrWhiteSpace(fileLocation))
        {
            _filesDirectory = fileLocation;
        }
    }

    public bool Exists(string name)
    {
        //make sure we don't access directories outside of our store for security reasons
        string file = Directory.GetFiles(_filesDirectory, name, SearchOption.TopDirectoryOnly)
                .FirstOrDefault();
        return file != null;
    }

    public FileStream Open(string name)
    {
        return File.Open(GetFilePath(name), 
            FileMode.Open, FileAccess.Read);
    }

    public long GetLength(string name)
    {
        return new FileInfo(GetFilePath(name)).Length;
    }

    private string GetFilePath(string name)
    {
        return Path.Combine(_filesDirectory, name);
    }
}</pre>

<p>&nbsp;</p>

<pre title="FileProvider.FilesLocation appSettings entry" class="lang:xhtml decode:true">&lt;appSettings&gt;  
    &lt;!-- (...) --&gt;
    &lt;add key="FileProvider.FilesLocation" value="H:\Storage" /&gt;
&lt;/appSettings&gt;</pre>

<p>With file access logic ready we can write code that actually serves the data. A simple Web API controller that streams files will look like this:  </p>

<pre title="Basic file streaming" class="lang:c# decode:true">public class SimpleFilesController : ApiController  
{
    public IFileProvider FileProvider { get; set; }

    public SimpleFilesController()
    {
        FileProvider = new FileProvider();
    }

    public HttpResponseMessage Get(string fileName)
    {
        if (!FileProvider.Exists(fileName))
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        FileStream fileStream = FileProvider.Open(fileName);
        var response = new HttpResponseMessage();
        response.Content = new StreamContent(fileStream);
        response.Content.Headers.ContentDisposition
            = new ContentDispositionHeaderValue("attachment");
        response.Content.Headers.ContentDisposition.FileName = fileName;
        response.Content.Headers.ContentType
            = new MediaTypeHeaderValue("application/octet-stream");
        response.Content.Headers.ContentLength 
                = FileProvider.GetLength(fileName);
        return response;
    }
}</pre>

<p>It is a basic version, yet it seems to work fine. If you wanted to use it in more advanced scenarios however, there are a couple of potential problems to face. <br>
First of all, when the transfer is interrupted for whatever reason, the client has to start downloading from the beginning. This is unacceptable when serving large files and would be a major annoyance for people using mobile connections that drop often. Another problem is that the implementation above is not very client friendly in terms of HTTP support (eg. the HEAD verb).  </p>


<h3>Adding resume support</h3>  

<p>There are two main areas that we need to add more logic to in order to introduce pause/resume functionality:  </p>

<ul>  
    <li>extend HTTP protocol support - most importantly by handling <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html">Range</a> header properly,</li>
    <li>use a Stream that is capable of returning a file portion (from byte A to byte B).</li>
</ul>  

<div>Why should we implement the <a href="http://tools.ietf.org/html/rfc2616#section-9.4">HEAD</a> verb in the controller? Let's imagine we were to write software that downloads large files over HTTP using our service. Ideally we would like to have a mechanism that could tell us how big the file is (by returning the Content-Length header) and whether or not the service can serve us partial data (by returning the <a href="http://tools.ietf.org/html/rfc2616#section-14.5">Accept-Ranges</a> header) without actually getting the data. This is exactly what HEAD does, as it is designed to be identical to GET except that the server must not return the body (headers only).</div>

<div>Accept-Ranges is returned by the server to indicate that it can return byte ranges of a requested resource. Moreover, if partial content has been returned, the server should return a <a href="http://tools.ietf.org/html/rfc2616#section-10.2.7">206 Partial Content</a> status code along with a <a href="http://tools.ietf.org/html/rfc2616#section-14.16">Content-Range</a> header that describes the range returned. If the client requests a range that is out of bounds for a given resource, a <a href="http://tools.ietf.org/html/rfc2616#section-10.4.17">416 Requested Range Not Satisfiable</a> status should be returned.</div>

<div></div>  

<div>Here is an example added for clarity.</div>  

<pre class="lang:c# decode:true">HEAD http://localhost/Piotr.AspNetFileServer/api/files/data.zip HTTP/1.1  
User-Agent: Fiddler  
Host: localhost

HTTP/1.1 200 OK  
Content-Length: 1182367743  
Content-Type: application/octet-stream  
Accept-Ranges: bytes  
Server: Microsoft-IIS/8.0  
Content-Disposition: attachment; filename=data.zip</pre>  

<pre class="lang:c# decode:true">HEAD http://localhost/Piotr.AspNetFileServer/api/files/data.zip HTTP/1.1  
User-Agent: Fiddler  
Host: localhost  
Range: bytes=0-999

HTTP/1.1 206 Partial Content  
Content-Length: 1000  
Content-Type: application/octet-stream  
Content-Range: bytes 0-999/1182367743  
Accept-Ranges: bytes  
Server: Microsoft-IIS/8.0  
Content-Disposition: attachment; filename=data.zip</pre>  
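<p>The Content-Length of 1000 in the 206 response above follows directly from the requested range. A tiny helper (just a sketch of the same arithmetic, not part of the service code) makes the off-by-one pitfalls explicit - an inclusive range <em>bytes=from-to</em> covers <em>to - from + 1</em> bytes:  </p>

<pre class="lang:c# decode:true">//sketch: length of an inclusive byte range, mirroring the defaults used below
public static long RangeLength(long? from, long? to, long entityLength)
{
    long first = from ?? 0;                 //no From -&gt; start of the file
    long last = to ?? entityLength - 1;     //no To -&gt; end of the file
    return last - first + 1;
}

//RangeLength(0, 999, 1182367743) is 1000, matching the trace above</pre>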

<p>This is a helper class used to store some information passed in HTTP headers.  </p>

<pre class="lang:c# decode:true" title="ContentInfo">public class ContentInfo  
{
    public long From;
    public long To;
    public bool IsPartial;
    public long Length;
}</pre>

<p>The controller itself can look like this:  </p>

<pre class="lang:c# decode:true" title="FilesController ">public class FilesController : ApiController  
{
    public IFileProvider FileProvider { get; set; }

    public FilesController()
    {
        FileProvider = new FileProvider();
    }

    public HttpResponseMessage Head(string fileName)
    {
        if (!FileProvider.Exists(fileName))
        {
            //if file does not exist return 404
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        long fileLength = FileProvider.GetLength(fileName);
        ContentInfo contentInfo = GetContentInfoFromRequest(this.Request, fileLength);

        var response = new HttpResponseMessage();
        response.Content = new ByteArrayContent(new byte[0]);
        SetResponseHeaders(response, contentInfo, fileLength, fileName);
        return response;
    }

    public HttpResponseMessage Get(string fileName)
    {
        if (!FileProvider.Exists(fileName))
        {
            //if file does not exist return 404
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        long fileLength = FileProvider.GetLength(fileName);
        ContentInfo contentInfo 
           = GetContentInfoFromRequest(this.Request, fileLength);
        var stream = new PartialReadFileStream(FileProvider.Open(fileName), 
                                               contentInfo.From, contentInfo.To);
        var response = new HttpResponseMessage();
        response.Content = new StreamContent(stream);
        SetResponseHeaders(response, contentInfo, fileLength, fileName);
        return response;
    }

    private ContentInfo GetContentInfoFromRequest(HttpRequestMessage request, long entityLength)
    {
        var result = new ContentInfo 
                    {
                        From = 0, To = entityLength - 1, 
                        IsPartial = false, Length = entityLength
                    };
        RangeHeaderValue rangeHeader = request.Headers.Range;
        if (rangeHeader != null &amp;&amp; rangeHeader.Ranges.Count != 0)
        {
            //we support only one range
            if (rangeHeader.Ranges.Count &gt; 1)
            {
                //we should probably return another status code here
                throw new HttpResponseException(HttpStatusCode.RequestedRangeNotSatisfiable);
            }
            RangeItemHeaderValue range = rangeHeader.Ranges.First();
            if (range.From.HasValue &amp;&amp; range.From &lt; 0 
                || range.To.HasValue &amp;&amp; range.To &gt; entityLength - 1)
            {
                throw new HttpResponseException(HttpStatusCode.RequestedRangeNotSatisfiable);
            }

            result.From = range.From ?? 0;
            result.To = range.To ?? entityLength - 1;
            result.IsPartial = true;
            result.Length = entityLength;
            if (range.From.HasValue &amp;&amp; range.To.HasValue)
            {
                result.Length = range.To.Value - range.From.Value + 1;
            }
            else if (range.From.HasValue)
            {
                result.Length = entityLength - range.From.Value;
            }
            else if (range.To.HasValue)
            {
                result.Length = range.To.Value + 1;
            }
        }

        return result;
    }

    private void SetResponseHeaders(HttpResponseMessage response, ContentInfo contentInfo,
                                    long fileLength, string fileName)
    {
        response.Headers.AcceptRanges.Add("bytes");
        response.StatusCode = contentInfo.IsPartial ? HttpStatusCode.PartialContent
                                  : HttpStatusCode.OK;
        response.Content.Headers.ContentDisposition 
          = new ContentDispositionHeaderValue("attachment");
        response.Content.Headers.ContentDisposition.FileName 
          = fileName;
        response.Content.Headers.ContentType 
          = new MediaTypeHeaderValue("application/octet-stream");
        response.Content.Headers.ContentLength = contentInfo.Length;
        if (contentInfo.IsPartial)
        {
            response.Content.Headers.ContentRange
                = new ContentRangeHeaderValue(contentInfo.From, contentInfo.To, fileLength);
        }
    }
}</pre>

<p>Another important part of this solution is a stream implementation that can return a byte range from a file. This is actually a wrapper for FileStream. Please note that this code is largely untested, although it gives an idea about the approach - you have been warned ;)  </p>

<pre class="lang:c# decode:true" title="PartialReadFileStream ">internal class PartialReadFileStream : Stream  
{
    private readonly long _start;
    private readonly long _end;
    private long _position;
    private FileStream _fileStream;
    public PartialReadFileStream(FileStream fileStream, long start, long end)
    {
        _start = start;
        _position = start;
        _end = end;
        _fileStream = fileStream;

        if (start &gt; 0)
        {
            _fileStream.Seek(start, SeekOrigin.Begin);
        }
    }

    public override void Flush()
    {
        _fileStream.Flush();
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        if (origin == SeekOrigin.Begin)
        {
            _position = _start + offset;
            return _fileStream.Seek(_start + offset, origin);
        }
        else if (origin == SeekOrigin.Current)
        {
            _position += offset;
            return _fileStream.Seek(offset, origin);
        }
        else
        {
            throw new NotImplementedException("Seeking from SeekOrigin.End is not implemented");
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int byteCountToRead = count;
        if (_position + count &gt; _end)
        {
            byteCountToRead = (int)(_end - _position) + 1;
        }
        var result = _fileStream.Read(buffer, offset, byteCountToRead);
        _position += result;
        return result;
    }

    public override IAsyncResult BeginRead(byte[] buffer, int offset, int count,
       AsyncCallback callback, object state)
    {
        int byteCountToRead = count;
        if (_position + count &gt; _end)
        {
            byteCountToRead = (int)(_end - _position) + 1;
        }
        var result = _fileStream.BeginRead(buffer, offset,
                           byteCountToRead, (s) =&gt;
                                      {
                                          _position += byteCountToRead;
                                          callback(s);
                                      }, state);
        return result;
    }

    public override int EndRead(IAsyncResult asyncResult)
    {
        return _fileStream.EndRead(asyncResult);
    }

    public override int ReadByte()
    {
        int result = _fileStream.ReadByte();
        _position++;
        return result;
    }

    // ...

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _fileStream.Dispose();
        }
        base.Dispose(disposing);
    }
}</pre>

<p>If you think about this performance-wise, it's not the most efficient approach: every time a file is requested we need to read it from the disk, disks are very slow (compared to RAM), and disk access may become a bottleneck very fast. It becomes evident that for more advanced scenarios some kind of caching mechanism would be a good optimization.  </p>

<h3>Using memory-mapped files</h3>  

<p>A memory-mapped file is a portion of virtual memory that has been mapped to a file. This is not a new concept and has been around in Windows (and other OSes) for many years, but only recently (from .NET 4, that is) has it been made available to C# programmers as a managed API. Memory-mapped files allow processes to modify and read files as if they were reading and writing to memory. If my memory serves me well, IPC in Windows is actually implemented using this feature.</p>

<p><img class="alignnone" title="Memory mapped files" src="http://i.msdn.microsoft.com/dynimg/IC378559.png" alt="Memory mapped files" width="317" height="330"></p>

<p>Please note that the files are <em>mapped </em>and not <em>copied</em> into virtual memory, but from the program's perspective it's transparent, as Windows loads parts of physical files as they are accessed by the application. Another advantage of MMFs is that the system performs transfers in 4K chunks of data (pages) and the virtual-memory manager (VMM) decides when it should free those pages up. Windows is highly optimized for page-related IO operations, and it tries to minimize the number of times the hard disk head has to move. In other words, by using MMFs you have a guarantee that the OS will optimize disk access and additionally you get a form of memory cache.</p>
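<p>As a minimal illustration before we wire this into the controller (the file path and map name below are made up for the example), mapping a file and reading a window from it looks like this:  </p>

<pre class="lang:c# decode:true">using System.IO;
using System.IO.MemoryMappedFiles;

//map an existing file read-only; capacity 0 means "use the file's size"
using (var mmf = MemoryMappedFile.CreateFromFile(@"H:\Storage\data.zip",
    FileMode.Open, "DemoMap", 0, MemoryMappedFileAccess.Read))
//expose bytes 0-999 of the file as a regular Stream
using (Stream view = mmf.CreateViewStream(0, 1000, MemoryMappedFileAccess.Read))
{
    var buffer = new byte[1000];
    int bytesRead = view.Read(buffer, 0, buffer.Length);
}</pre>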

<p>Because files are mapped to virtual memory, to serve big files we need to run our application in 64 bit mode, otherwise it wouldn't be able to address all the space needed. For this example, make sure to change the <strong>target platform to x64</strong> in project properties.  </p>

<pre class="lang:c# decode:true" title="MemMappedFilesController">public class MemMappedFilesController : ApiController  
{
    private const string MapNamePrefix = "FileServerMap";

    public IFileProvider FileProvider { get; set; }

    public MemMappedFilesController()
    {
        FileProvider = new FileProvider();
    }

    private ContentInfo GetContentInfoFromRequest(HttpRequestMessage request, long entityLength)
    {
        //...
    }

    private void SetResponseHeaders(HttpResponseMessage response, ContentInfo contentInfo,
        long fileLength, string fileName)
    {
        //...
    }

    public HttpResponseMessage Head(string fileName)
    {
        //string fileName = GetFileName(name);
        if (!FileProvider.Exists(fileName))
        {
            //if file does not exist return 404
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        long fileLength = FileProvider.GetLength(fileName);
        ContentInfo contentInfo = GetContentInfoFromRequest(this.Request, fileLength);

        var response = new HttpResponseMessage();
        response.Content = new ByteArrayContent(new byte[0]);
        SetResponseHeaders(response, contentInfo, fileLength, fileName);
        return response;
    }

    public HttpResponseMessage Get(string fileName)
    {
        if (!FileProvider.Exists(fileName))
        {
            //if file does not exist return 404
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        long fileLength = FileProvider.GetLength(fileName);
        ContentInfo contentInfo = GetContentInfoFromRequest(this.Request, fileLength);
        string mapName = GenerateMapNameFromName(fileName);

        MemoryMappedFile mmf = null;
        try
        {
            mmf = MemoryMappedFile.OpenExisting(mapName, MemoryMappedFileRights.Read);
        }
        catch (FileNotFoundException)
        {
            //every time we use an exception to control flow a kitten dies

            mmf = MemoryMappedFile
                .CreateFromFile(FileProvider.Open(fileName), mapName, fileLength,
                                MemoryMappedFileAccess.Read, null, HandleInheritability.None,
                                false);
        }
        using (mmf)
        {
            Stream stream
                = contentInfo.IsPartial
                ? mmf.CreateViewStream(contentInfo.From, 
                contentInfo.Length, MemoryMappedFileAccess.Read)
                : mmf.CreateViewStream(0, fileLength, 
                MemoryMappedFileAccess.Read);

            var response = new HttpResponseMessage();
            response.Content = new StreamContent(stream);
            SetResponseHeaders(response, contentInfo, fileLength, fileName);
            return response;
        }
    }

    private string GenerateMapNameFromName(string fileName)
    {
        return String.Format("{0}_{1}", MapNamePrefix, fileName);
    }
}</pre>

<p>I've removed code that is identical to FilesController. Please note that we have a 1-1 relationship between a file (or its name, to be more precise) and a map name. This means we use the same map for all requests asking for the same file name.</p>

<p>Both controllers should provide pause/resume functionality.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/download.jpg"><img class="alignnone size-full wp-image-967" title="download" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/download.jpg" alt="" width="707" height="163"></a></p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/download2.png"><img class="alignnone size-full wp-image-968" title="download2" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/download2.png" alt="" width="707" height="163"></a></p>
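<p>To exercise the resume path from code rather than a download manager, a client can ask only for the bytes it is missing. Here is a rough sketch - the local path and URL are just examples:  </p>

<pre class="lang:c# decode:true">//resume an interrupted download by requesting only the remaining bytes
var localFile = new FileInfo(@"C:\Temp\data.zip");
long alreadyDownloaded = localFile.Exists ? localFile.Length : 0;

var request = (HttpWebRequest)WebRequest.Create(
    "http://localhost/Piotr.AspNetFileServer/api/files/data.zip");
if (alreadyDownloaded &gt; 0)
{
    //sends a "Range: bytes=&lt;alreadyDownloaded&gt;-" header
    request.AddRange(alreadyDownloaded);
}

using (var response = (HttpWebResponse)request.GetResponse())
using (Stream body = response.GetResponseStream())
using (var file = new FileStream(localFile.FullName, FileMode.Append))
{
    body.CopyTo(file);
}</pre>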

<p>Hope you find this post useful, the complete source code is available as usual on <a href="https://bitbucket.org/pwalat/piotr.aspnetfileserver">bitbucket</a>. Enjoy!</p>

<p>In particular I will look into following topics:  </p>

<ul>  
    <li>using  Bing Maps SDK for Windows Store apps in Visual Studio 2012 projects,</li>
    <li>zooming and centering map on user's current location using geolocation service,</li>
    <li>adding pushpins,</li>
    <li>drawing polygons,</li>
    <li>adding other UIElements to the map.</li>
</ul>  

<div><!--more--></div>  

<h3>Prerequisites</h3>  

<p>First of all we will need to get Bing Maps SDK for Windows Store apps which is located <a href="http://visualstudiogallery.msdn.microsoft.com/bb764f67-6b2c-4e14-b2d3-17477ae1eaca">here</a>. After installing the extension Bing Maps SDK will be available in Windows Store apps in Reference Manager.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/Reference.png"><img class="alignnone size-full wp-image-750" title="Reference" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/Reference.png" alt="" width="318" height="235"></a></p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/Reference21.png"><img class="alignnone size-full wp-image-759" title="Reference Manager" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/Reference21.png" alt="" width="800" height="550"></a></p>

<p>Make sure to select <em>Microsoft Visual C++ Runtime Package</em> as it is required by the maps components. Because Bing Maps has a native implementation, we need to change the <em>Active solution configuration</em> to either ARM, x86 or x64. Otherwise the project will not compile.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/ActiveConfig.png"><img class="alignnone size-full wp-image-761" title="Active solution platform" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/ActiveConfig.png" alt="" width="782" height="492"></a></p>

<p>There is one last step required to start using the components - you will need a <em><a href="http://msdn.microsoft.com/en-us/library/ff428642.aspx">Bing Maps Key for Windows Store apps</a></em>. There are three types of keys available - Trial, Basic and Enterprise; you can learn about the differences between them <a href="http://www.bing.com/community/site_blogs/b/maps/archive/2012/07/25/changes-to-bing-maps-keys.aspx">here</a>. I am using a Trial key, but when writing real-world Windows 8 apps you are most likely to use Basic.</p>

<p>In order to generate the key, go to <a href="https://www.bingmapsportal.com/">https://www.bingmapsportal.com/</a>, log in with your Live ID (if you don't have one you will need to create it) and click on <em>Create or view keys</em> under the <em>My Account</em> section.</p>

<p>Now we are good to go - let's create a new Windows Store app (I am using the Blank App template).  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;Page  
    (...)
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:Maps="using:Bing.Maps"
    mc:Ignorable="d"&gt;
    &lt;Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}"&gt;
        &lt;Maps:Map Credentials="INSERT_YOUR_KEY" x:Name="BingMap"&gt;
        &lt;/Maps:Map&gt;
    &lt;/Grid&gt;
&lt;/Page&gt;</pre>

<p>Instead of hardcoding the key in pages, it is usually better to have it defined in a resource dictionary (e.g. in App.xaml).  </p>
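<p>For example, the key can be declared once in App.xaml and referenced with StaticResource - a sketch; the <em>BingMapsKey</em> resource name is arbitrary:</p>

```xml
<!-- App.xaml: define the key once (the resource name is illustrative) -->
<Application.Resources>
    <x:String x:Key="BingMapsKey">INSERT_YOUR_KEY</x:String>
</Application.Resources>

<!-- Any page: reference the shared key instead of hardcoding it -->
<Maps:Map Credentials="{StaticResource BingMapsKey}" x:Name="BingMap"/>
```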


<h3>Centering and zooming in on current location</h3>  

<p>Let's start with the basics and show how to center and zoom the map based on the current location. Before running this example, make sure that the <em>Location</em> capability is enabled in <em>Package.appxmanifest</em>.  </p>

<pre class="font-size:15 lang:c# decode:true" title="Center and zoom on current location">protected async override void OnNavigatedTo(NavigationEventArgs e)  
{
    Location location = await GetCurrentLocationAsync();
    CenterOnLocation(location);
}

private async Task&lt;Location&gt; GetCurrentLocationAsync()  
{
    var geolocator = new Geolocator();
    Geoposition currentGeoposition = await geolocator.GetGeopositionAsync();
    var location = new Location()
    {
        Latitude = currentGeoposition.Coordinate.Latitude,
        Longitude = currentGeoposition.Coordinate.Longitude,
    };
    return location;
}

private void CenterOnLocation(Location location)  
{
    BingMap.Center = location;
    BingMap.ZoomLevel = 12.0; // zoom level chosen for this example
}</pre>

<p>Using the geolocation features of Windows 8 is really straightforward and is further simplified by the async/await pattern. After the page gets navigated to, the map control should be automatically centered on the current location.  </p>


<h3>Adding pushpins</h3>  

<p>Pushpins can be added to the map either in XAML or imperatively in code-behind.  </p>

<pre class="font-size:15 lang:xhtml decode:true" title="Pushpin in XAML">&lt;Maps:Map Credentials="YOUR_KEY_HERE"  
          x:Name="BingMap"&gt;
    &lt;Maps:Map.Children&gt;
        &lt;Maps:Pushpin Text="1"&gt;
            &lt;Maps:MapLayer.Position&gt;
                &lt;Maps:Location Latitude="50.0104955"
                                          Longitude="21.9888709"/&gt;
            &lt;/Maps:MapLayer.Position&gt;
        &lt;/Maps:Pushpin&gt;
    &lt;/Maps:Map.Children&gt;            
&lt;/Maps:Map&gt;</pre>

<p>Please note that in order to define the pushpin's position on the map, we are setting the MapLayer.Position attached property.  </p>

<pre class="font-size:15 lang:c# decode:true" title="Adding a pushpin in code">private void AddPushpin(Location location, string text)  
{
    var pushpin = new Pushpin()
                          {
                              Text = text,
                          };
    MapLayer.SetPosition(pushpin, location);
    BingMap.Children.Add(pushpin);
}</pre>

<p>The default pushpin template should look like this:</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/Pushpin1.png"><img class="alignnone size-full wp-image-819" style="border: 1px solid #999;" title="Pushpin" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/Pushpin1.png" alt="" width="799" height="629"></a></p>

<p>In order to add some interactivity to the pushpin, we can handle <em>Tapped</em> event like this:  </p>

<pre class="font-size:15 lang:xhtml decode:true" title="Tapped event">&lt;Maps:Pushpin Text="1" Tapped="PushpinTapped"&gt;  
    &lt;Maps:MapLayer.Position&gt;
        &lt;Maps:Location Latitude="50.0104955" 
                       Longitude="21.9888709"/&gt;
    &lt;/Maps:MapLayer.Position&gt;
&lt;/Maps:Pushpin&gt;</pre>

<pre class="font-size:15 lang:c# decode:true">private async void PushpinTapped(object sender, TappedRoutedEventArgs e)  
{            
    var dialog 
        = new MessageDialog("Congratulations. You tapped a pushpin.");
    await dialog.ShowAsync();
}</pre>


<h3>Drawing polygons</h3>  

<p>Bing Maps groups shapes (such as polygons and polylines) into layers represented by <em>MapShapeLayer</em> objects. Polygons are drawn using <em>MapPolygon</em> objects, with vertices specified in the Locations property. The following example creates a semi-transparent overlay for the state of New Mexico.  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;Maps:Map Credentials="YOUR_KEY_HERE"  
          x:Name="BingMap"&gt;    
    &lt;Maps:Map.ShapeLayers&gt;
        &lt;Maps:MapShapeLayer&gt;
            &lt;Maps:MapShapeLayer.Shapes&gt;
                &lt;Maps:MapPolygon FillColor="#5000ff00"&gt;
                    &lt;Maps:MapPolygon.Locations&gt;
                        &lt;Maps:Location Latitude="36.9971" Longitude="-109.0448"/&gt;
                        &lt;Maps:Location Latitude="31.3337" Longitude="-109.0489"/&gt;
                        &lt;Maps:Location Latitude="31.3349" Longitude="-108.2140"/&gt;
                        &lt;Maps:Location Latitude="31.7795" Longitude="-108.2071"/&gt;
                        &lt;Maps:Location Latitude="31.7830" Longitude="-106.5317"/&gt;
                        &lt;Maps:Location Latitude="32.0034" Longitude="-106.6223"/&gt;
                        &lt;Maps:Location Latitude="31.9999" Longitude="-103.0696"/&gt;
                        &lt;Maps:Location Latitude="36.9982" Longitude="-103.0023"/&gt;
                        &lt;Maps:Location Latitude="36.9982" Longitude="-109.0475"/&gt;
                    &lt;/Maps:MapPolygon.Locations&gt;
                &lt;/Maps:MapPolygon&gt;
            &lt;/Maps:MapShapeLayer.Shapes&gt;
        &lt;/Maps:MapShapeLayer&gt;
    &lt;/Maps:Map.ShapeLayers&gt;
&lt;/Maps:Map&gt;</pre>

<p>And here is the result: <br>
<a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/New_Mexico.jpg"><img class="alignnone  wp-image-839" style="border: 1px solid #999;" title="New_Mexico" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/New_Mexico.jpg" alt="" width="790" height="503"></a>  </p>


<h3>Adding other UIElements to the map</h3>  

<p>The Map.Children property is actually a <em>MapUIElementCollection</em> object, which means we can add any UIElement to it. This opens a whole new set of possibilities, especially if you think of all the neat features XAML has to offer. To position elements we need to set the <em>MapLayer.Position</em> attached property, just like in the pushpin example (Pushpin is a UIElement as well). Here is an example that uses the Image control.  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;Maps:Map.Children&gt;  
    &lt;Maps:Pushpin Text="1" Tapped="PushpinTapped"&gt;
        &lt;Maps:MapLayer.Position&gt;
            &lt;Maps:Location Latitude="50.0104955" 
                           Longitude="21.9888709"/&gt;
        &lt;/Maps:MapLayer.Position&gt;
    &lt;/Maps:Pushpin&gt;
    &lt;Image Source="Assets/Rain.png" Stretch="None"&gt;
        &lt;Maps:MapLayer.Position&gt;
            &lt;Maps:Location Latitude="47.9097" 
                           Longitude="-122.6331"/&gt;
        &lt;/Maps:MapLayer.Position&gt;
    &lt;/Image&gt;
&lt;/Maps:Map.Children&gt;</pre>

<p>This will add a rain icon over Seattle. No pun intended.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/RainImage2.jpg"><img class="alignnone size-full wp-image-867" style="border: 1px solid #999;" title="RainImage" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/RainImage2.jpg" alt="" width="791" height="578"></a></p>

<p>Hope you will find this post helpful!</p>]]></content:encoded></item><item><title><![CDATA[Using TypeScript with AngularJS and Web API]]></title><description><![CDATA[<p>In this post I will show how to use TypeScript together with AngularJS and ASP.NET Web API to write a simple web application that implements a CRUD scenario. TypeScript provides a set of features that enable developers to structure the code and write maintainable JavaScript applications more easily. It</p>]]></description><link>http://piotrwalat.net/using-typescript-with-angularjs-and-web-api/</link><guid isPermaLink="false">f8228946-72f2-45f6-b06c-089880ed8875</guid><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Thu, 04 Oct 2012 08:12:33 GMT</pubDate><content:encoded><![CDATA[<p>In this post I will show how to use TypeScript together with AngularJS and ASP.NET Web API to write a simple web application that implements a CRUD scenario. TypeScript provides a set of features that enable developers to structure the code and write maintainable JavaScript applications more easily. It can also integrate with existing third party libraries as shown in the following demo. The simple HTML application will allow users to add, delete and retrieve products from an HTTP service backed by ASP.NET Web API. <br>
The main highlights are as follows:  </p>

<ul>  
    <li>use of TypeScript to create AngularJS controllers,</li>
    <li>communication with Web API services from TypeScript using AngularJS ajax functionality,</li>
    <li>use of strongly typed TypeScript declarations for AngularJS objects.</li>
</ul>  

<!--more-->

<p>I am using Visual Studio 2012/Sublime Text 2 as my IDE in this example. You are fine using any text editor, but if you want syntax highlighting / auto-completion features you will need to download the packages separately.</p>

<p>For the Visual Studio plugin and the Windows compiler binary (tsc.exe, part of the installer package) visit <a href="http://www.typescriptlang.org/#Download">http://www.typescriptlang.org/#Download</a>. If you prefer to use a different editor (e.g. Sublime Text or Vim), get the goodies from <a href="http://aka.ms/qwe1qu">http://aka.ms/qwe1qu</a>. The only thing that you really need is the TypeScript compiler binary.</p>

<p>The services are implemented using ASP.NET Web API (a .NET framework for building HTTP services, which I personally like very much), but you can use any other technology as long as it produces valid JSON.  </p>

<h3>Http services using ASP.NET Web API</h3>  

<p>For this example I've chosen to deal with a simple model consisting of one entity - Product.  </p>

<pre class="font-size:15 lang:c# decode:true" title="Model">public abstract class Entity  
{
    public Guid Id { get; set; }
}

public class Product : Entity  
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}</pre>

<p>We will need a persistence mechanism to store the entities. I am using in-memory storage (a thread-safe collection) and the repository pattern. Feel free to change it to anything that suits you.  </p>

<pre class="font-size:15 lang:c# decode:true" title="IRepository">public interface IRepository&lt;TEntity&gt;  
    where TEntity : Entity
{
    TEntity Add(TEntity entity);
    TEntity Delete(Guid id);
    TEntity Get(Guid id);
    TEntity Update(TEntity entity);
    IQueryable&lt;TEntity&gt; Items { get; }
}</pre>


<pre class="font-size:15 lang:c# decode:true" title="InMemoryRepository">public class InMemoryRepository&lt;TEntity&gt; : IRepository&lt;TEntity&gt; where TEntity : Entity  
{
    private readonly ConcurrentDictionary&lt;Guid, TEntity&gt; _concurrentDictionary 
        = new ConcurrentDictionary&lt;Guid, TEntity&gt;();

    public TEntity Add(TEntity entity)
    {
        if (entity == null)
        {
            // we don't want to store nulls in our collection
            throw new ArgumentNullException("entity");
        }

        if (entity.Id == Guid.Empty)
        {
            //we assume no Guid collisions will occur
            entity.Id = Guid.NewGuid();
        }

        if (_concurrentDictionary.ContainsKey(entity.Id))
        {
            return null;
        }

        bool result = _concurrentDictionary.TryAdd(entity.Id, entity);

        if (result == false)
        {
            return null;
        }
        return entity;
    }

    public TEntity Delete(Guid id)
    {
        TEntity removed;
        if (!_concurrentDictionary.ContainsKey(id))
        {
            return null;
        }
        bool result = _concurrentDictionary.TryRemove(id, out removed);
        if (!result)
        {
            return null;
        }
        return removed;
    }

    public TEntity Get(Guid id)
    {
        if (!_concurrentDictionary.ContainsKey(id))
        {
            return null;
        }
        TEntity entity;
        bool result = _concurrentDictionary.TryGetValue(id, out entity);
        if (!result)
        {
            return null;
        }
        return entity;
    }

    public TEntity Update(TEntity entity)
    {
        if (entity == null)
        {
            throw new ArgumentNullException("entity");
        }
        if (!_concurrentDictionary.ContainsKey(entity.Id))
        {
            return null;
        }
        _concurrentDictionary[entity.Id] = entity;
        return entity;
    }

    public IQueryable&lt;TEntity&gt; Items
    {
        get { return _concurrentDictionary.Values.AsQueryable(); }
    }
}</pre>

<p>Once we have a persistence mechanism in place, we can create an HTTP service that will expose a basic set of operations. Because I am using ASP.NET Web API, this means creating a new controller.  </p>

<pre class="font-size:15 lang:c# decode:true" title="Product HTTP service">public class ProductsController : ApiController  
{
    public static IRepository&lt;Product&gt; ProductRepository
        = new InMemoryRepository&lt;Product&gt;();

    public IEnumerable&lt;Product&gt; Get()
    {
        return ProductRepository.Items.ToArray();
    }

    public Product Get(Guid id)
    {
        Product entity = ProductRepository.Get(id);
        if (entity == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return entity;
    }

    public HttpResponseMessage Post(Product value)
    {
        var result = ProductRepository.Add(value);
        if (result == null)
        {
            // the entity with this key already exists
            throw new HttpResponseException(HttpStatusCode.Conflict);
        }
        var response = Request.CreateResponse&lt;Product&gt;(HttpStatusCode.Created, value);
        string uri = Url.Link("DefaultApi", new { id = value.Id });
        response.Headers.Location = new Uri(uri);
        return response;
    }

    public HttpResponseMessage Put(Guid id, Product value)
    {
        value.Id = id;
        var result = ProductRepository.Update(value);
        if (result == null)
        {
            // entity does not exist
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }

    public HttpResponseMessage Delete(Guid id)
    {
        var result = ProductRepository.Delete(id);
        if (result == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }
}</pre>

<p>We are trying to adhere to the HTTP standard, hence the additional logic to handle responses. After this step we should have a fully functional CRUD HTTP service.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/fiddler.png"><img class="alignnone  wp-image-674" title="fiddler" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/fiddler.png" alt="" width="682" height="334"></a></p>


<h3>Starting with AngularJS and TypeScript</h3>  

<p>With services ready for action, we can continue with creating the actual website. This will be plain HTML/CSS/JavaScript (TypeScript is compiled to JavaScript). The template I've started with looks like this:  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;!DOCTYPE html&gt;  
&lt;html ng-app&gt;
&lt;head&gt;
    &lt;title&gt;Product list&lt;/title&gt;
    &lt;link rel="stylesheet" href="Content/bootstrap.css"/&gt;
    &lt;script type="text/javascript" src="Scripts/bootstrap.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" src="Scripts/angular.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" src="Scripts/Controllers/ProductsController.js"&gt;&lt;/script&gt;
&lt;/head&gt;
    &lt;body&gt;
        &lt;div ng-controller="Products.Controller"&gt;
        &lt;/div&gt;
    &lt;/body&gt;
&lt;/html&gt;</pre>

<p>Please note that the ProductsController.js file will be generated from the ProductsController.ts TypeScript source using the <em>tsc.exe</em> compiler (I am using the command line for the compilation step). Let's create the file that will contain the AngularJS controller for our page.  </p>

<pre class="font-size:15 lang:c# decode:true" title="ProductsController.ts">module Products {  
    export interface Scope {
        greetingText: string;
    }

    export class Controller {
        constructor ($scope: Scope) {
            $scope.greetingText = "Hello from TypeScript + AngularJS";
        }
    }
}</pre>

<p>The Products module will be compiled into a JavaScript namespace and our controller will be available as Products.Controller. Now we can bind greetingText in our page. Please note that in order to leverage TypeScript's features we define a contract for the $scope object in the form of the Scope interface. TypeScript would let us use the <em>any</em> type instead, but personally I prefer the stricter approach, as it can help you catch errors at compile time (VS2012 will even underline errors in red as you edit your code, which is very nice). Now we have to compile ProductsController.ts, reference ProductsController.js in the html and modify the view to display the message.  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;div ng-controller="Products.Controller"&gt;  
    &lt;p&gt;{{greetingText}}&lt;/p&gt;
&lt;/div&gt;</pre>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/fiddler1.png"><img class="alignnone size-full wp-image-685" title="fiddler" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/fiddler1.png" alt="" width="330" height="103"></a></p>

<p>With the AngularJS controller stub in place, let's move on and add some additional contract declarations.  </p>

<h3>Creating model module</h3>  

<p>Let's create a module called Model that will contain the Product class, used as a DTO (serialized to JSON) by our HTTP services.  </p>

<pre class="font-size:15 lang:js decode:true">module Model {  
    export class Product {
        Id: string;
        Name: string;
        Price: number;
    }
}</pre>

<p>This simple module contains a definition of one type that is exported and ready to use in the page controller class.  </p>

<h3>Ambient declarations</h3>  

<p>In order to leverage the ajax functionality provided by AngularJS, we will use the $http service passed into the controller constructor:  </p>

<pre class="font-size:15 lang:js decode:true">class Controller {  
    private httpService: any;

    constructor ($scope: Scope, $http: any) {
        this.httpService = $http;
        //...
    }
    //...
}</pre>

<p>Because we declared <em>httpService</em> as the <em>any</em> type, the compiler will not be able to help us catch potential errors at compile time. To address this we can use <em>ambient declarations</em>. Ambient declarations are used to tell the compiler about elements that will be introduced to the program by external means (in our case through AngularJS); no JavaScript code will be emitted for them. In other words, think of them as a contract for 3rd party libraries. <em>Declaration source files</em> (.d.ts extension) are restricted to contain ambient declarations only. Here is the angular.d.ts file that defines the two interfaces we use to call HTTP services.  </p>

<pre class="font-size:15 lang:c# decode:true" title="angular.d.ts">declare module Angular {  
    export interface HttpPromise {
        success(callback: Function) : HttpPromise;
        error(callback: Function) : HttpPromise;
    }
    export interface Http {
        get(url: string): HttpPromise;
        post(url: string, data: any): HttpPromise;
        delete(url: string): HttpPromise;
    }
}</pre>

<p>The declare keyword is optional, as it is implicitly inferred in all .d.ts files.  </p>
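<p>To illustrate what the contract buys us, here is a minimal, self-contained sketch: the same two interfaces (outside the Angular module for brevity) plus a hypothetical stub implementation - <em>StubHttp</em> and <em>StubPromise</em> are illustrative names, not part of AngularJS. The compiler checks every call against the interface, so a typo such as <em>http.gett(...)</em> fails at compile time:</p>

```typescript
// Sketch: the ambient contract from angular.d.ts, plus a stub
// implementation used only to demonstrate compile-time checking.
interface HttpPromise {
    success(callback: Function): HttpPromise;
    error(callback: Function): HttpPromise;
}

interface Http {
    get(url: string): HttpPromise;
    post(url: string, data: any): HttpPromise;
    delete(url: string): HttpPromise;
}

class StubPromise implements HttpPromise {
    // Immediately invokes the callback with an empty result set.
    success(callback: Function): HttpPromise { callback([]); return this; }
    error(callback: Function): HttpPromise { return this; }
}

class StubHttp implements Http {
    lastUrl = "";
    get(url: string): HttpPromise { this.lastUrl = url; return new StubPromise(); }
    post(url: string, data: any): HttpPromise { this.lastUrl = url; return new StubPromise(); }
    delete(url: string): HttpPromise { this.lastUrl = url; return new StubPromise(); }
}

// Every call is checked against the Http interface:
const http: Http = new StubHttp();
http.get("/api/products").success(function (data: any) {
    console.log("received", data);
});
// http.gett("/api/products") would be a compile-time error
```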

<h3>TypeScript and AngularJS</h3>  

<p>In order for the compiler to know about the newly introduced modules (Model and Angular), we need to add two reference statements to ProductsController.ts. Moreover, we want the Scope interface to include all properties and functions used by the view.</p>

<p>The page will consist of an input form for adding a new Product (name and price textboxes plus a button) and will also display a list of all Products with the ability to delete individual entities. For brevity I am skipping the update scenario, but once we have the other operations in place, implementing update is really easy.</p>

<p>A Scope interface that provides this can look like the following.  </p>

<pre class="font-size:15 lang:js decode:true">/// &lt;reference path='angular.d.ts' /&gt;  
/// &lt;reference path='model.ts' /&gt;

module Products {

    export interface Scope {
        newProductName: string;
        newProductPrice: number;
        products: Model.Product[];
        addNewProduct: Function;
        deleteProduct: Function;
    }
    // ...
}</pre>

<p>Any time a product is added or deleted we want to refresh the list by getting all products from the server. Thanks to the declarations introduced earlier, we can use the strongly typed Angular.Http and Angular.HttpPromise interfaces.</p>

<p>The controller will contain private methods to communicate with our web service (getAllProducts, addProduct and deleteProduct).  </p>

<pre class="font-size:15 lang:js decode:true">export class Controller {  
    private httpService: Angular.Http;

    constructor ($scope: Scope, $http: Angular.Http) {
        this.httpService = $http;

        this.refreshProducts($scope);

        var controller = this;

        $scope.addNewProduct = function () {
            var newProduct = new Model.Product();
            newProduct.Name = $scope.newProductName;
            newProduct.Price = $scope.newProductPrice;

            controller.addProduct(newProduct, function () {
                controller.getAllProducts(function (data) {
                    $scope.products = data;
                });
            });
        };

        $scope.deleteProduct = function (productId) {
            controller.deleteProduct(productId, function () {
                controller.getAllProducts(function (data) {
                    $scope.products = data;
                });
            });
        }
    }

    getAllProducts(successCallback: Function): void{
        this.httpService.get('/api/products').success(function (data, status) {
            successCallback(data);
        });
    }

    addProduct(product: Model.Product, successCallback: Function): void {
        this.httpService.post('/api/products', product).success(function () {
            successCallback();
        });
    }

    deleteProduct(productId: string, successCallback: Function): void {
        this.httpService.delete('/api/products/'+productId).success(function () {
            successCallback();
        });
    }

    refreshProducts(scope: Scope): void {
        this.getAllProducts(function (data) {
            scope.products = data;
        });
    }

}</pre>

<p>What's really nice is that we don't have to introduce any custom serialization logic. When retrieving products, we treat the data returned by the $http service as a collection of strongly typed Products. The same applies to the add operation - we simply pass a Product; it gets serialized and consumed by the service in the end.  </p>

<h3>Creating the view</h3>  

<p>As a last step we need to create the view that will leverage new controller features. I am using <a href="http://twitter.github.com/bootstrap/">bootstrap</a> to make it a little bit easier.  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;!DOCTYPE html&gt;  
&lt;html ng-app&gt;
&lt;head&gt;
    &lt;title&gt;Product list&lt;/title&gt;
    &lt;link rel="stylesheet" href="Content/bootstrap.css" /&gt;
    &lt;script type="text/javascript" src="Scripts/angular.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" src="Scripts/Controllers/model.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" src="Scripts/Controllers/productsController.js"&gt;&lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;div ng-controller="Products.Controller"&gt;
        &lt;form class="form-horizontal" ng-submit="addNewProduct()"&gt;
            &lt;input type="text" ng-model="newProductName" size="30"
                placeholder="product name"&gt;
            &lt;input type="text" ng-model="newProductPrice" size="5"
                placeholder="product price"&gt;
            &lt;button class="btn" type="submit" value="add"&gt;
                &lt;i class="icon-plus"&gt;&lt;/i&gt;
            &lt;/button&gt;
        &lt;/form&gt;
        &lt;table class="table table-striped table-hover" style="width: 500px;"&gt;
            &lt;thead&gt;
                &lt;tr&gt;
                    &lt;th&gt;Name&lt;/th&gt;
                    &lt;th&gt;Price&lt;/th&gt;
                    &lt;th&gt;&lt;/th&gt;
                &lt;/tr&gt;
            &lt;/thead&gt;
            &lt;tbody&gt;
                &lt;tr ng-repeat="product in products"&gt;
                    &lt;td&gt;{{product.Name}}&lt;/td&gt;
                    &lt;td&gt;${{product.Price}}&lt;/td&gt;
                    &lt;td&gt;
                        &lt;button class="btn-small" ng-click="deleteProduct(product.Id)"&gt;
                            &lt;i class="icon-trash"&gt;&lt;/i&gt;
                        &lt;/button&gt;
                    &lt;/td&gt;
                &lt;/tr&gt;
            &lt;/tbody&gt;
        &lt;/table&gt;
    &lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;</pre>

<p>The end result should look like this:</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/end.png"><img title="end" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/end.png" alt="" width="542" height="364"></a></p>

<p>Now our page should be functional and communication with HTTP service should work as expected.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/10/created.png"><img class="alignnone size-full wp-image-699" title="created" src="http://www.piotrwalat.net/wp-content/uploads/2012/10/created.png" alt="" width="617" height="482"></a></p>

<p>As usual, the code is available to browse and download on <a href="https://bitbucket.org/pwalat/piotr.productlist">bitbucket</a>.</p>

<p><em>Edit: It seems that Wordpress Android app managed to somehow overwrite this post with an old, incomplete version. Sorry for that as well as for reposting.</em></p>]]></content:encoded></item><item><title><![CDATA[Downloading files in Windows 8 apps using Background Transfer feature]]></title><description><![CDATA[<p>In this blog post I am going to show how to use Background Transfer feature to download files over HTTP in a Windows Store C#/XAML app. Background Transfer has several advantages over using HttpClient and is much better for long running transfers. I am going to create a simple</p>]]></description><link>http://piotrwalat.net/downloading-files-in-windows-8-apps-using-background-transfer-feature/</link><guid isPermaLink="false">8857afc4-dbb4-4ac6-a4a8-6a605d41042c</guid><category><![CDATA[Background Transfer]]></category><category><![CDATA[BackgroundDownloader]]></category><category><![CDATA[C#]]></category><category><![CDATA[HTTP]]></category><category><![CDATA[Metro]]></category><category><![CDATA[Windows 8]]></category><category><![CDATA[Windows Store]]></category><category><![CDATA[WinRT]]></category><category><![CDATA[XAML]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Mon, 01 Oct 2012 09:02:45 GMT</pubDate><content:encoded><![CDATA[<p>In this blog post I am going to show how to use Background Transfer feature to download files over HTTP in a Windows Store C#/XAML app. Background Transfer has several advantages over using HttpClient and is much better for long running transfers. I am going to create a simple app, that initiates download over the Internet, tracks progress of the download and supports re-attaching transfers after the app is closed.</p>

<!--more-->  


<h3>HttpClient vs Background Transfer</h3>  

<p>C# developers writing Windows Store apps have two main options to download files from network locations over HTTP:  </p>

<ul>  
    <li>HttpClient class (and its <em>GetAsync</em> method)</li>
    <li>Background Transfer functionality</li>
</ul>  

<p>Using HttpClient is only viable when dealing with relatively small files (MSDN suggests <em>'a couple of KB'</em>). By default, the HttpClient response content buffer size is set to 64KB, and if a response is bigger we will get an error. The buffer size can be increased if necessary (via the MaxResponseContentBufferSize property), but if you need to do that it is likely that you should use the Background Transfer feature instead.</p>
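<p>For completeness, raising the limit is a one-liner on the client; a sketch (the 10 MB figure is an arbitrary example, and the surrounding async method is omitted):</p>

```csharp
// Sketch: allow HttpClient to buffer responses up to ~10 MB.
var client = new HttpClient();
client.MaxResponseContentBufferSize = 10 * 1024 * 1024; // size in bytes
HttpResponseMessage response = await client.GetAsync(uri);
```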

<p>Background Transfer has several other advantages. First of all, it runs outside of the calling application, which means that if the app is suspended, the download can still continue in the background. It also provides inherent support for pause and resume operations and can cope with sudden network status changes automatically. When the app terminates, existing downloads will be paused and persisted. Moreover, it also plays nicely with power management (the OS can disable downloads when it deems it necessary) and metered networks (via the BackgroundDownloader.CostPolicy property) - after all, a user may not want to download a 1GB+ file over a roaming data connection. These two features are very important in mobile scenarios and make Background Transfer even more appealing.</p>
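<p>Setting the cost policy is a single assignment; a sketch (the policy value is chosen for illustration):</p>

```csharp
// Sketch: keep this downloader's transfers off metered networks.
var downloader = new BackgroundDownloader();
downloader.CostPolicy = BackgroundTransferCostPolicy.UnrestrictedOnly;
```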


<h3>Starting a download</h3>  

<p>Let's create a simple C#/XAML app that will download a big file from a remote location and store it in the local app data store. Start off with a blank Windows Store app template and modify MainPage.xaml to look like this.  </p>

<pre class="font-size:15 lang:xhtml decode:true">&lt;Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}"&gt;  
    &lt;Button x:Name="DownloadButton" Content="Download" Margin="0,150,0,0"
            HorizontalAlignment="Center" VerticalAlignment="Center" 
            Height="58" Width="145" FontSize="17" Grid.Row="2" 
            Click="DownloadClick"/&gt;
    &lt;ProgressBar HorizontalAlignment="Center"
                 Height="30" Margin="0,-30,0,0" 
                 VerticalAlignment="Center" 
                 Width="400" x:Name="DownloadProgress"/&gt;
&lt;/Grid&gt;</pre>

<p>The interface is very simple and consists of a button to initiate the download as well as a progress bar to indicate progress.</p>

<p>Here is DownloadClick event handler implementation.  </p>

<pre class="font-size:15 lang:c# decode:true">private DownloadOperation _activeDownload;

private async void DownloadClick(object sender, RoutedEventArgs e)  
{
    const string fileLocation
     = "http://download.thinkbroadband.com/100MB.zip";
    var uri = new Uri(fileLocation);
    var downloader = new BackgroundDownloader();
    StorageFile file = await ApplicationData.Current.LocalFolder.CreateFileAsync("100MB.zip",
        CreationCollisionOption.ReplaceExisting);
    DownloadOperation download = downloader.CreateDownload(uri, file);
    await StartDownloadAsync(download);
}

private void ProgressCallback(DownloadOperation obj)  
{
    double progress 
        = ((double) obj.Progress.BytesReceived / obj.Progress.TotalBytesToReceive);
    DownloadProgress.Value = progress * 100;
    if(progress &gt;= 1.0)
    {
        _activeDownload = null;
        DownloadButton.IsEnabled = true;
    }
}

private async Task StartDownloadAsync(DownloadOperation downloadOperation)  
{
    DownloadButton.IsEnabled = false;
    _activeDownload = downloadOperation;
    var progress = new Progress&lt;DownloadOperation&gt;(ProgressCallback);
    await downloadOperation.StartAsync().AsTask(progress);
}</pre>

<p>First of all we need to obtain a <em>StorageFile</em> (or more precisely an <em>IStorageFile</em> implementation) - in our scenario we use <em>ApplicationData.Current.LocalFolder.CreateFileAsync()</em> to create a file in the local app data store. Please note that we use the async/await pattern, hence the event handling method is marked as <em>async</em>. We also need a <em>BackgroundDownloader</em> instance, which is used to create the new download via its <em>CreateDownload()</em> method.</p>

<p>Every download created using the Background Transfer feature is encapsulated in a <em><strong>DownloadOperation</strong></em> object. These objects provide the basic operations used to manipulate the download. In the example above we start the download using <em>StartAsync()</em>, which returns an <em>IAsyncOperationWithProgress</em>. We use the <em>AsTask()</em> extension method to convert the returned value to a Task and provide a progress callback used to update the ProgressBar control.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/09/1.png"><img class="alignnone  wp-image-630" title="BackgroundTransfer_1" src="http://www.piotrwalat.net/wp-content/uploads/2012/09/1.png" alt="" width="790" height="463"></a></p>

<p>&nbsp;</p>

<h3>Handling existing downloads</h3>  

<p>The previous example is a very simple scenario. Once we terminate the app, we lose track of our download, even though it's being paused and persisted by Background Transfer automatically. To address this problem we can use the <em>BackgroundDownloader.GetCurrentDownloadsAsync()</em> method to retrieve all active <em>DownloadOperation</em> objects for our application. Once we do this we can easily re-attach a progress handler and any logic handling completed downloads.  </p>

<pre class="font-size:15 lang:c# decode:true">async void MainPageLoaded(object sender, RoutedEventArgs e)  
{
    await LoadActiveDownloadsAsync();
}

private async Task LoadActiveDownloadsAsync()  
{
    IReadOnlyList&lt;DownloadOperation&gt; downloads = null;
    downloads = await BackgroundDownloader.GetCurrentDownloadsAsync();
    if(downloads.Count &gt; 0)
    {
        //for simplicity we support only one download
        await ResumeDownloadAsync(downloads.First());
    }
}</pre>

<pre class="font-size:15 lang:c# decode:true">private async Task ResumeDownloadAsync(DownloadOperation downloadOperation)  
{
    DownloadButton.IsEnabled = false;
    _activeDownload = downloadOperation;
    var progress = new Progress&lt;DownloadOperation&gt;(ProgressCallback);
    await downloadOperation.AttachAsync().AsTask(progress);
}</pre>

<p>Once we have this code in place our download will reattach every time we load the page. For simplicity we support only one active download.</p>

<p>You can find the source on <a href="https://bitbucket.org/pwalat/piotr.backroundtransfer/src">bitbucket</a>.</p>]]></content:encoded></item><item><title><![CDATA[Preventing modification of JavaScript objects]]></title><description><![CDATA[<p>Due to its dynamic nature, JavaScript makes it extremely easy to modify objects that you do not own. It also means that anyone can easily modify objects that you have written. This is seemingly a very powerful feature and many developers may be tempted to use it in order to</p>]]></description><link>http://piotrwalat.net/preventing-javascript-object-modification/</link><guid isPermaLink="false">23f4a6e6-858c-4ed4-abc8-4a2091a6d17b</guid><category><![CDATA[HTML5]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Object.Freeze]]></category><category><![CDATA[patterns]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Thu, 13 Sep 2012 02:00:50 GMT</pubDate><content:encoded><![CDATA[<p>Due to its dynamic nature, JavaScript makes it extremely easy to modify objects that you do not own. It also means that anyone can easily modify objects that you have written. This is seemingly a very powerful feature and many developers may be tempted to use it in order to extend or modify the behavior of objects (even DOM methods such as document.getElementById() can be overwritten). Such practice should generally be avoided as it can lead to maintenance problems and can produce hard-to-find bugs. ECMAScript 5 introduced a bunch of methods that allow programmers to restrict modification of objects. This new language feature can be very helpful when writing libraries or when writing code in bigger teams.</p>

<!--more-->  

<h3>If you don't own it, don't modify it</h3>  

<p>A good JavaScript rule says that you shouldn't modify objects you don't own. For example, if you decide to override a method, chances are that you will break libraries that depend on it and generate a lot of confusion among other developers.  </p>

<pre class="lang:js decode:true">window.originalAlert = window.alert;  
window.alert = function(msg) {  
    if (typeof msg === "string") {
        return console.log(msg);
    }
    return window.originalAlert(msg);
};

alert('ooh so awesome'); // console  
alert(3.14); //alert</pre>  

<p>Here we modify window.alert to log all string values to the console instead of displaying a message box. For other types we invoke the original function. Regardless of our motivation, the result will be a lot of confusion among developers using the alert function. Of course, playing with DOM objects and methods like getElementById() can lead to far more severe consequences (nasty, nasty bugs).</p>

<p>Modifying objects by adding new methods can also be harmful.  </p>

<pre class="lang:js decode:true">Math.cube = function(n) {  
    return Math.pow(n, 3);
};
console.log(Math.cube(2)); // 8</pre>  

<p>The biggest problem with this approach is naming collisions that may happen in the future. Even if the Math object does not contain a cube method now, the next iteration of the JavaScript standard may introduce it (even though this is unlikely). That would mean we replace a native implementation (which is possibly faster, better or simply behaves differently) without even knowing it. This may cause painful and costly maintenance problems in your applications. A real world example of this problem is document.getElementsByClassName(), introduced by the Prototype library just to be later included as a part of the standard.</p>
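<p>If you absolutely must extend an object you don't own, a defensive middle ground is to feature-detect first, so a future native implementation is never silently replaced. A minimal sketch, reusing the cube example from above:</p>

```javascript
// Only install our fallback when no implementation exists yet;
// if a future standard ships a native Math.cube, it wins.
if (typeof Math.cube !== "function") {
    Math.cube = function (n) {
        return Math.pow(n, 3);
    };
}
console.log(Math.cube(2)); // 8
```

<p>Note that this only avoids clobbering an existing implementation - the naming collision (and a possible behavioral difference) is still there, so the safest option remains putting such helpers in a namespace of your own.</p>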

<p>Unfortunately there is no guarantee that other developers will leave your objects alone. If you provide something that in your opinion should be closed to modifications, you can use the new JavaScript features described below.  </p>

<h3>Object.preventExtensions()</h3>  

<p>You can use Object.preventExtensions() to prevent new methods or properties from being added to an object. Please note that existing members can still be modified or even deleted. To determine whether an object is extensible or not, use the Object.isExtensible() method.  </p>

<pre class="lang:js decode:true">var song = {  
    title: 'Hope Leaves',
    artist: 'Opeth'
};

console.log(Object.isExtensible(song)); //true  
Object.preventExtensions(song);  
console.log(Object.isExtensible(song)); //false  
//(...)

song.play = function() {  
    console.log('ahh soo awesome');
}; //silently fails
song.album = 'Damnation'; //silently fails

console.log(song.title);  // Hope Leaves  
console.log(song.artist); //Opeth  
delete song.artist;       // we can delete and modify  
console.log(song.artist); // undefined  
console.log(song.album);  // undefined  
song.play(); //error no play() method defined</pre>  

<p>In this example, if you try to add new members to a locked-down object the operation will silently fail. This behavior changes if we use strict mode.  </p>

<pre class="lang:js decode:true">"use strict";

var song = {  
    title: 'Hope Leaves',
    artist: 'Opeth'
};
Object.preventExtensions(song);  
// (...)
song.album = 'Damnation'; //TypeError</pre>  

<p>In strict mode an error will be thrown when we try to add a new member.  </p>
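<p>When your code cannot know whether an object has already been locked down, it can check before attempting to extend it. A small sketch (the config object here is just an invented example):</p>

```javascript
var config = { debug: false };
Object.preventExtensions(config);

// Guard the extension attempt instead of relying on a silent failure
// (non-strict mode) or a TypeError (strict mode):
if (Object.isExtensible(config)) {
    config.verbose = true;
}
console.log(config.verbose); // undefined - config was locked down
```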

<h3>Object.seal()</h3>  

<p>Use Object.seal() to seal an object. Every sealed object is non-extensible (so it acts as if we had called Object.preventExtensions() on it), but additionally none of its existing properties or methods can be removed.  </p>

<pre class="lang:js decode:true">var song = {  
    title: 'Hope Leaves',
    artist: 'Opeth'
};

Object.seal(song);  
console.log(Object.isExtensible(song)); //false  
console.log(Object.isSealed(song)); //true  
//(...)

song.album = 'Damnation'; //silently fails in non-strict mode  
delete song.artist;       //silently fails in non-strict mode  
console.log(song.artist); // 'Opeth'  
console.log(song.album);  // undefined</pre>  

<p>Again, in strict mode silent failures will be replaced by errors.  </p>
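<p>One thing the example above doesn't show: existing properties of a sealed object remain writable, which is exactly what distinguishes seal from freeze. A quick sketch:</p>

```javascript
var song = {
    title: 'Hope Leaves',
    artist: 'Opeth'
};
Object.seal(song);

song.title = 'The Drapery Falls'; // modifying an existing property always works
try {
    song.album = 'Damnation'; // fails: sealed objects accept no new properties
    delete song.artist;       // fails: properties cannot be removed
} catch (e) {
    // in strict mode these failed operations throw a TypeError instead
}

console.log(song.title);  // 'The Drapery Falls'
console.log(song.artist); // 'Opeth'
console.log(song.album);  // undefined
```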

<h3>Object.freeze()</h3>  

<p>Frozen objects are considered to be sealed (and non-extensible as well). The additional constraint is that no modifications to existing properties or methods can occur.  </p>

<pre class="lang:js decode:true">var song = {  
    title: 'Hope Leaves',
    artist: 'Opeth',
    getLongTitle: function() {
        return this.artist + " - " + this.title;
    }
};

Object.freeze(song);  
console.log(Object.isExtensible(song)); //false  
console.log(Object.isSealed(song)); //true  
console.log(Object.isFrozen(song)); //true  
//(...)
song.album = 'Damnation'; //silently fails in non-strict mode  
delete song.artist; //silently fails in non-strict mode  
song.getLongTitle = function() {  
    return "foobar";
}; //silently fails in non-strict mode
console.log(song.getLongTitle()); // Opeth - Hope Leaves  
console.log(song.artist); // 'Opeth'  
console.log(song.album); // undefined</pre>  

<p>In strict mode instead of silent failures we would see errors.</p>
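<p>One caveat worth remembering: Object.freeze() is shallow - objects stored in properties of a frozen object stay mutable. If you need the whole object graph locked down you have to freeze recursively. A minimal sketch (deepFreeze is our own helper, not part of the standard, and it ignores functions and cyclic references):</p>

```javascript
function deepFreeze(obj) {
    // Freeze nested objects first, then the object itself.
    Object.getOwnPropertyNames(obj).forEach(function (name) {
        var value = obj[name];
        if (typeof value === 'object' && value !== null) {
            deepFreeze(value);
        }
    });
    return Object.freeze(obj);
}

var album = {
    title: 'Damnation',
    band: { name: 'Opeth' }
};
deepFreeze(album);

try {
    album.band.name = 'Foobar'; // fails: the nested object is frozen too
} catch (e) {
    // in strict mode the failed assignment throws a TypeError instead
}
console.log(album.band.name); // 'Opeth'
```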

<p>These methods should be supported by any recent version of all major browsers:  </p>

<ul>  
    <li> IE 9+ (this means it should work in WinJS, but I haven't tested it),</li>
    <li> Firefox 4+</li>
    <li> Safari 5.1+</li>
    <li> Chrome 7+</li>
    <li> Opera 12+</li>
</ul>  

<p>&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[Consuming ASP.NET Web API services in Windows 8 C# XAML apps]]></title><description><![CDATA[<p>If you are writing Windows 8 app, chances are that you will need to communicate with some sort of service to retrieve or send data. <br>
In this blog post I will show how to set up a basic CRUD ASP.NET Web API REST like service and how to consume</p>]]></description><link>http://piotrwalat.net/consuming-asp-net-web-api-services-in-a-windows-8-c-xaml-app/</link><guid isPermaLink="false">cd600304-a414-45ab-8452-afaca23aded4</guid><category><![CDATA[ASP.NET]]></category><category><![CDATA[Web API]]></category><category><![CDATA[Windows 8]]></category><category><![CDATA[WinRT]]></category><category><![CDATA[XAML]]></category><dc:creator><![CDATA[Piotr Walat]]></dc:creator><pubDate>Mon, 10 Sep 2012 02:02:06 GMT</pubDate><content:encoded><![CDATA[<p>If you are writing Windows 8 app, chances are that you will need to communicate with some sort of service to retrieve or send data. <br>
In this blog post I will show how to set up a basic CRUD ASP.NET Web API REST-like service and how to consume that service from a C# Windows 8 XAML app ('modern style' aka 'M<em>*</em>* style' that is :)). I will also show how to build a simple user interface in XAML powered by data retrieved from our service and how to leverage the MVVM pattern to make the code a little bit more maintainable.</p>

<!--more-->

<p>Let's start off by creating a simple model for our example.  </p>

<pre class="lang:c# decode:true">public abstract class Entity  
{
    public Guid Id { get; set; }
}</pre>

<pre class="lang:c# decode:true">public class Expense : Entity  
{
    public string Name { get; set; }
    public DateTime Date { get; set; }
    public decimal Amount { get; set; }
    public string Type { get; set; }
    public string Notes { get; set; }
    public string Account { get; set; }
}</pre>

<p>We also want a way to persist our objects; instead of using a database I will just store them in memory (using the thread-safe collections introduced in .NET 4) and use the repository pattern to abstract the actual persistence mechanism.  </p>

<pre class="lang:c# decode:true">public interface IRepository&lt;TEntity&gt;  
        where TEntity : Entity
{
    TEntity Add(TEntity entity);
    TEntity Delete(Guid id);
    TEntity Get(Guid id);
    TEntity Update(TEntity entity);
    IQueryable&lt;TEntity&gt; Items { get; }
}</pre>

<h3>CRUD service in Web API</h3>  

<p>The next step is to create a simple Web API service that will provide basic CRUD operations. <br>
The HTTP/1.1 protocol defines a set of common methods that map to these operations in the following way:  </p>

<ul>  
    <li>GET  /api/expenses - get a list of all expenses</li>
    <li>GET  /api/expenses/id - get expense by id</li>
    <li>POST  /api/expenses - create a new expense</li>
    <li>PUT  /api/expenses/id - update an expense</li>
    <li>DELETE  /api/expenses/id - delete an expense</li>
</ul>  

<p>If you have been implementing REST services before, this may seem quite obvious. <br>
The tricky part is (at least in my opinion) to make our service as HTTP compliant as possible by returning appropriate responses and status codes.  </p>

<pre class="lang:c# decode:true">public class ExpensesController : ApiController  
{
    public static IRepository&lt;Expense&gt; ExpenseRepository
        = new InMemoryRepository&lt;Expense&gt;();

    public IEnumerable&lt;Expense&gt; Get()
    {
        var result = ExpenseRepository.Items.ToArray();
        return result;
    }

    public Expense Get(Guid id)
    {
        Expense entity = ExpenseRepository.Get(id);
        if(entity == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return entity;
    }

    public HttpResponseMessage Post(Expense value)
    {
        var result = ExpenseRepository.Add(value);
        if(result == null)
        {
            // the entity with this key already exists
            throw new HttpResponseException(HttpStatusCode.Conflict);
        }
        var response = Request.CreateResponse&lt;Expense&gt;(HttpStatusCode.Created, value);
        string uri = Url.Link("DefaultApi", new { id = value.Id });
        response.Headers.Location = new Uri(uri);  
        return response;
    }

    public HttpResponseMessage Put(Guid id, Expense value)
    {
        value.Id = id;
        var result = ExpenseRepository.Update(value);
        if(result == null)
        {
            // entity does not exist
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }

    public HttpResponseMessage Delete(Guid id)
    {
        var result = ExpenseRepository.Delete(id);
        if(result == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }
}</pre>

<p>You can find the HTTP/1.1 spec document <a href="http://www.ietf.org/rfc/rfc2616.txt">here</a> (<em>cough</em> <a href="http://twitter.com/frystyk">@frystyk</a> is a co-author of this document <em>cough</em>). For example, the section on the POST method states that for resources that can be identified by a URI "<em>the response SHOULD be 201 (Created) and contain an entity which describes the status of the request (...) and </em>[contain]<em> a Location header</em>". Because expenses can be identified by a URL /api/expenses/id (each expense is a resource), ideally we should follow this specification. Fortunately HTTP is a first class citizen in Web API and tasks like setting the Location header in the response are simple. In a real-world scenario we would also get rid of the unbounded Get() method and use paging or another result limiting mechanism.  </p>
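<p>To make this concrete, a create request and its HTTP-compliant response would look roughly like this on the wire (headers abbreviated; host, port and body values are made-up illustration values, and the id in the Location header is deliberately elided):</p>

```
POST /api/expenses HTTP/1.1
Host: localhost:12898
Content-Type: application/json

{"Name":"Lunch","Amount":9.99,"Type":"Food","Account":"42"}

HTTP/1.1 201 Created
Location: http://localhost:12898/api/expenses/...
Content-Type: application/json; charset=utf-8
```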

<h3>Windows 8 app</h3>  

<p>For the sake of demonstration I am keeping the application very simple (and ugly unfortunately). It will consist of one screen and will be capable of:  </p>

<ul>  
    <li>getting and displaying a list of all expenses,</li>
    <li>adding a new expense (with randomly generated properties),</li>
    <li>deleting selected expense,</li>
    <li>modifying selected expense.</li>
</ul>  

<p>All these operations will call our new shiny Web API HTTP service to retrieve and manipulate data.</p>

<p><a href="http://www.piotrwalat.net/wp-content/uploads/2012/09/Tacos.png"><img class="alignnone  wp-image-482" title="Expenses app" src="http://www.piotrwalat.net/wp-content/uploads/2012/09/Tacos.png" alt="Win 8 Expenses app" width="872" height="490"></a></p>

<p>&nbsp;</p>

<h3>Communicating with Web API</h3>  

<p>To consume an HTTP service from a Windows 8 app we can use an instance of the HttpClient or HttpWebRequest class (the latter provides a few more features). It's worth mentioning that out of the box ASP.NET Web API supports both XML and JSON as media types for messages being exchanged with clients (you can use the Accept header to specify which one you want). Windows 8 also supports serialization/deserialization for both formats (through the XmlSerializer and DataContractJsonSerializer classes respectively), without a need for external libraries. In this app I am going to use HttpClient and JSON as the message format.</p>

<p>But where should we put the logic to actually communicate with our server and serialize/deserialize entities? <br>
The simplest solution would be to place it directly in code-behind in event handlers associated with particular user actions. Even though suitable for such a simple demo app, this approach would backfire on us in more complex scenarios (imagine, for example, that we wanted to add authorization to our service in the future - we would need to update code in many places). Because of this, let's abstract the actual data manipulation (and communication) logic.  </p>

<pre class="lang:c# decode:true">public interface IExpenseService  
{
    Task&lt;IEnumerable&lt;Expense&gt;&gt; GetAll();
    Task Add(Expense expense);
    Task Delete(Guid id);
    Task Update(Expense expense);
}</pre>

<p>This interface provides basic data manipulation operations. An implementation that uses JSON (via DataContractJsonSerializer) and HTTP to communicate with the Web API service will look like this:  </p>

<pre class="lang:c# decode:true">public class ExpenseService : IExpenseService  
{
    private const string ServiceUrl = "http://localhost:12898/api/expenses";
    private readonly HttpClient _client = new HttpClient();

    public async Task&lt;IEnumerable&lt;Expense&gt;&gt; GetAll()
    {
        HttpResponseMessage response = await _client.GetAsync(ServiceUrl);
        var jsonSerializer = CreateDataContractJsonSerializer(typeof(Expense[]));
        var stream = await response.Content.ReadAsStreamAsync();
        return (Expense[])jsonSerializer.ReadObject(stream);
    }

    public async Task Add(Expense expense)
    {
        var jsonString = Serialize(expense);
        var content = new StringContent(jsonString, Encoding.UTF8, "application/json");
        var result = await _client.PostAsync(ServiceUrl, content);
    }

    public async Task Delete(Guid id)
    {
        var result = await _client.DeleteAsync(String.Format("{0}/{1}"
                , ServiceUrl, id.ToString()));
    }

    public async Task Update(Expense expense)
    {
        var jsonString = Serialize(expense);
        var content = new StringContent(jsonString, Encoding.UTF8, "application/json");
        var result = await _client.PutAsync(String
            .Format("{0}/{1}", ServiceUrl, expense.Id), content);
    }

    private static DataContractJsonSerializer CreateDataContractJsonSerializer(Type type)
    {
        const string dateFormat = "yyyy-MM-ddTHH:mm:ss.fffffffZ";
        var settings = new DataContractJsonSerializerSettings
                            {
                                DateTimeFormat = new DateTimeFormat(dateFormat)
                            };
        var serializer = new DataContractJsonSerializer(type, settings);
        return serializer;
    }

    private string Serialize(Expense expense)
    {
        var jsonSerializer = CreateDataContractJsonSerializer(typeof(Expense));
        byte[] streamArray = null;
        using (var memoryStream = new MemoryStream())
        {
            jsonSerializer.WriteObject(memoryStream, expense);
            streamArray = memoryStream.ToArray();
        }
        string json = Encoding.UTF8.GetString(streamArray, 0, streamArray.Length);
        return json;
    }
}</pre>

<p>The main benefit of DataContractJsonSerializer is that it is provided by the platform (no concerns about using a 3rd party library). On the other hand it requires a little bit of code to set up and its API is somewhat clunky.</p>

<p>Here is an IExpenseService implementation that uses <a href="http://json.codeplex.com/">Json.NET</a>, which is a very popular JSON serialization library for .NET. If you want to use it, the easiest way is to get the NuGet package.  </p>

<pre class="lang:c# decode:true crayon-selected">public class JsonNetExpenseService : IExpenseService  
{
    private const string ServiceUrl = "http://localhost:12898/api/expenses";
    private readonly HttpClient _client = new HttpClient();

    public async Task&lt;IEnumerable&lt;Expense&gt;&gt; GetAll()
    {
        HttpResponseMessage response = await _client.GetAsync(ServiceUrl);
        var jsonString = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject&lt;Expense[]&gt;(jsonString);
    }

    public async Task Add(Expense expense)
    {
        var jsonString = Serialize(expense);
        var content = new StringContent(jsonString, Encoding.UTF8, "application/json");
        var result = await _client.PostAsync(ServiceUrl, content);
    }

    public async Task Delete(Guid id)
    {
        var result = await _client.DeleteAsync(String.Format("{0}/{1}"
                                                            , ServiceUrl, id.ToString()));
    }

    public async Task Update(Expense expense)
    {
        var jsonString = Serialize(expense);
        var content = new StringContent(jsonString,
                                        Encoding.UTF8, "application/json");
        var result = await _client.PutAsync(String.Format("{0}/{1}",
                                                            ServiceUrl, expense.Id), content);
    }

    private string Serialize(Expense expense)
    {
        return JsonConvert.SerializeObject(expense);
    }
}</pre>

<p>I like it better :).</p>

<p>Please note that we are leveraging async/await support provided by the platform. It will help us reduce code complexity later on.  </p>

<h3>XAML</h3>  

<p>IExpenseService enables us to communicate with our Web API service, but we need to call this logic somewhere and we obviously need a UI as well. As I said before, to keep things simple let's use one page only.  </p>

<pre class="lang:xhtml decode:true">&lt;Page&gt;  
    &lt;Page.BottomAppBar&gt;
        &lt;AppBar IsSticky="True" IsOpen="True"&gt;
            &lt;Grid&gt;
                &lt;Grid.ColumnDefinitions&gt;
                    &lt;ColumnDefinition/&gt;
                    &lt;ColumnDefinition/&gt;
                &lt;/Grid.ColumnDefinitions&gt;
                &lt;StackPanel Orientation="Horizontal"&gt;
                    &lt;Button Style="{StaticResource RefreshAppBarButtonStyle}" 
                            AutomationProperties.Name="Refresh Data" Command="{Binding RemoveCommand}"/&gt;
                    &lt;Button Style="{StaticResource AddAppBarButtonStyle}" 
                            AutomationProperties.Name="Add Item" Command="{Binding AddCommand}"/&gt;
                &lt;/StackPanel&gt;
                &lt;StackPanel Grid.Column="1" HorizontalAlignment="Right" Orientation="Horizontal"&gt;
                    &lt;Button Style="{StaticResource EditAppBarButtonStyle}" 
                            AutomationProperties.Name="Update Selected Item" Command="{Binding UpdateCommand}"/&gt;
                    &lt;Button Style="{StaticResource DeleteAppBarButtonStyle}"
                            AutomationProperties.Name="Delete Selected Item" Command="{Binding DeleteCommand}"/&gt;
                &lt;/StackPanel&gt;
            &lt;/Grid&gt;
        &lt;/AppBar&gt;
    &lt;/Page.BottomAppBar&gt;

    &lt;Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}"&gt;
        &lt;ListView SelectedItem="{Binding SelectedItem, Mode=TwoWay}" 
                  ItemsSource="{Binding Items}"&gt;
            &lt;ListView.ItemTemplate&gt;
                &lt;DataTemplate&gt;
                    ...                    
                &lt;/DataTemplate&gt;
            &lt;/ListView.ItemTemplate&gt;
        &lt;/ListView&gt;
        &lt;ProgressRing IsActive="{Binding IsBusy}" Width="70" Height="70"/&gt;
    &lt;/Grid&gt;    
&lt;/Page&gt;</pre>

<p>Please note that I am using the MVVM pattern, but if you don't feel comfortable with it you can just stick the IExpenseService calls in code-behind. The expense list will be displayed by a ListView. We use the bottom AppBar to provide the user with buttons, and there is a simple activity indicator (ProgressRing) which will be displayed while we are retrieving data from the server.  </p>

<h3>View model</h3>  

<p>We also need to create a view model that the page will bind to and that will encapsulate view related logic.  </p>

<pre class="lang:c# decode:true">public class MainViewModel : INotifyPropertyChanged  
{
    // ...
    private const string ServiceUrl = "http://localhost:12898/api/expenses";

    public IExpenseService ExpenseService { get; set; }

    public Expense SelectedItem
    {
        get { return _selectedItem; }
        set
        {
            if (_selectedItem != value)
            {
                _selectedItem = value;
                DeleteCommand.RaiseCanExecuteChanged();
                UpdateCommand.RaiseCanExecuteChanged();
                OnPropertyChanged("SelectedItem");
            }
        }
    }

    public bool IsBusy
    {
        // ...
    }

    public IEnumerable&lt;Expense&gt; Items
    {
        // ...
    }

    #region commands

    public RelayCommand AddCommand { get; set; }
    public RelayCommand RefreshCommand { get; set; }
    public RelayCommand DeleteCommand { get; set; }
    public RelayCommand UpdateCommand { get; set; }

    #endregion commands

    public MainViewModel()
    {
        CreateCommands();
        ExpenseService = new ExpenseService();
    }

    private void CreateCommands()
    {
        AddCommand = new RelayCommand(o =&gt; AddHandler());
        RefreshCommand = new RelayCommand(o =&gt; RefreshHandler());
        DeleteCommand = new RelayCommand(o =&gt; DeleteHandler(),
                                         () =&gt; SelectedItem != null);
        UpdateCommand = new RelayCommand(o =&gt; UpdateHandler(),
                                         () =&gt; SelectedItem != null);
    }

    private async void UpdateHandler()
    {
        if(SelectedItem != null)
        {
            var random = new Random();
            var amount = GenerateAmount(random);
            SelectedItem.Amount = amount;
            IsBusy = true;
            await ExpenseService.Update(SelectedItem);
            IsBusy = false;
            RefreshData();
        }
    }

    private async void DeleteHandler()
    {
        if(SelectedItem != null)
        {
            IsBusy = true;
            await ExpenseService.Delete(SelectedItem.Id);
            IsBusy = false;
            RefreshData();
        }
    }

    private void RefreshHandler()
    {
        RefreshData();
    }

    private async void AddHandler()
    {
        Random random = new Random();
        int account = random.Next(1, 999);
        var amount = GenerateAmount(random);
        Expense newExpense = new Expense()
                                 {
                                     Account = account.ToString(),
                                     Date = DateTime.UtcNow,
                                     Amount = amount,
                                     Name = GenerateName(random),
                                     Notes = "Some notes",
                                     Type = GenerateType(random),
                                 };
        IsBusy = true;
        await ExpenseService.Add(newExpense);
        IsBusy = false;
        RefreshData();
    }

    // ...

    public void Load()
    {
        RefreshData();
    }

    private async void RefreshData()
    {
        IsBusy = true;
        Items = await ExpenseService.GetAll();
        IsBusy = false;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}</pre>

<p>I have removed less important code to make it smaller and easier to read. I am also generating random data instead of providing user editable forms.</p>

<p>There are two patterns worth mentioning here. First of all, we are using commands (RelayCommand is an ICommand implementation; you can find it in the source code) to react to user actions. For the delete and update operations we provide a canExecute predicate that determines whether a command can be executed or not. This is used to disable the buttons when no item is selected.</p>

<p>The other pattern is the use of an IsBusy property to indicate that the view model is 'busy' - in our scenario, that we are in the process of sending or retrieving data from the server. Thanks to async/await support in .NET 4.5 and C# Windows 8 apps, we don't have to worry about thread marshaling and the code reads as if it were synchronous.</p>

<p>You can download or view full source code (for both services and client app) on <a href="https://bitbucket.org/pwalat/piotr.expensetracker/">bitbucket</a> (alternatively zip archive <a href="https://bitbucket.org/pwalat/piotr.expensetracker/get/master.zip">here</a>).</p>]]></content:encoded></item></channel></rss>