CRM 2015 and Cortana

December 11th, 2014

As a Windows Phone user, I love Cortana. In my truck – at stoplights of course – I will use Cortana integrated with Bluetooth to start playing music from my Xbox Music collection (both on-device and cloud) by saying something like “Play album ‘Train of Thought'” or maybe send a quick text to say “I’m on my way.” Cortana isn’t just for fun, though. You can now use Cortana with Microsoft CRM 2015.

With Cortana, you can do things such as schedule a task in CRM, set up meetings and create records. Below is a video that, while a little cheesy, gives you an idea of some of the things you can do with Cortana and Microsoft CRM 2015.

Rumors say that Cortana is going to be a part of Windows 10 as well, so you will eventually be able to take advantage of these features on a tablet, too!

Microsoft is really gaining some steam in terms of making all their products work together. Looking forward to seeing what’s next!

Enabling CORS from Dynamic List of Origins (Web API 2)

December 1st, 2014

I was building an API recently using .NET Web API 2. When the project was sent to the client, they revealed that they would be making client-side AJAX calls from a different server than the one the API was deployed on. Out of the box, such calls fail with an error that looks something like: “XMLHttpRequest cannot load. No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin http://mysite.com is therefore not allowed access.” The reason is that browsers enforce strict security around cross-origin resource sharing (CORS). Allowing any client access to your server can obviously be very dangerous, but in some cases we want to allow outside access – while still restricting where that access comes from. There are a few different ways to enable CORS, but it is made particularly easy by the CORS NuGet package:

Install-Package Microsoft.AspNet.WebApi.Cors

After the install, it’s really quite easy to implement on a class or method level:

WebApiConfig.cs:

using System.Web.Http;

public static class WebApiConfig
{
	public static void Register(HttpConfiguration config)
	{
		//Enable CORS - Add this line
		config.EnableCors();

		// Web API routes
		config.MapHttpAttributeRoutes();

		config.Routes.MapHttpRoute(
			name: "DefaultApi",
			routeTemplate: "api/{controller}/{action}/{id}",
			defaults: new { id = RouteParameter.Optional }
		);
	}
}

TestController.cs:

using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Cors;

[EnableCors(origins: "http://mysite.com", headers: "*", methods: "*")]
public class TestController : ApiController
{
	// ...
}

With these changes, calls to any of the methods in TestController from http://mysite.com will now be allowed. The only issue is that in our case the client would be deploying to multiple environments, and the allowed origins would need to change per environment. We decided to store them as a comma-delimited list in the web.config, but the EnableCors attribute can’t read a value from web.config – so what now? The next step was to create a custom CORS policy that could replace that attribute. Below is the policy, which adds an origin for each server listed in the web.config:

using System;
using System.Configuration;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Cors;
using System.Web.Http.Cors;

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
public class MyCorsPolicy : Attribute, ICorsPolicyProvider
{
	private CorsPolicy _policy;

	public MyCorsPolicy()
	{
		// Create a CORS policy.
		_policy = new CorsPolicy
		{
			AllowAnyMethod = true,
			AllowAnyHeader = true
		};

		// Add allowed origins.
		var origins = ConfigurationManager.AppSettings["CORSOrigin"].Split(',');

		foreach (var origin in origins)
		{
			_policy.Origins.Add(origin);
		}
	}

	public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request, CancellationToken cancellationToken)
	{
		return Task.FromResult(_policy);
	}
}

Now we can simply reference our policy on the controller, and access will be given to each server in our web.config file.

[MyCorsPolicy]
public class TestController : ApiController
{
	// ...
}
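For reference, the policy above reads its origins from an appSettings key named CORSOrigin. A corresponding web.config entry might look like this (the key name matches the code above; the URLs are placeholders):

```xml
<configuration>
  <appSettings>
    <!-- Comma-delimited list of origins allowed to call the API -->
    <add key="CORSOrigin" value="http://mysite.com,http://staging.mysite.com" />
  </appSettings>
</configuration>
```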

This post was adapted and condensed from this great article by Mike Wasson.

Testing Websites with Selenium

November 20th, 2014

Selenium is a tool that supports developing automated tests for web applications. Since its introduction in 2004, it has gained a lot of popularity among developers and QA teams and is an essential part of the QA process for many companies today. It integrates seamlessly into common unit test projects and allows direct control of common browser functionality, including access and manipulation of the displayed HTML document. It supports mimicking user interactions like entering a value into a text field or clicking on buttons, which makes it a very useful tool for writing tests that focus on the end-user experience. Selenium is also compatible with all major browsers including Chrome, Internet Explorer and Firefox, and provides libraries for most common programming languages like C# or Java. Selenium can be downloaded from http://www.seleniumhq.org/

Setting up the test project

Let’s start by creating a new Unit Test project in Visual Studio.

In order to start coding with Selenium, we need to download the Selenium WebDriver library and include it in our test project. The easiest way to do this is via NuGet:

We also need additional driver packages depending on what browsers we want to use for testing. Fortunately, corresponding packages exist on NuGet as well, so let’s go ahead and install the drivers for Chrome and Internet Explorer.
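From the Package Manager Console, the installs might look like this (package names as they appeared on NuGet at the time; they may have changed since):

```
Install-Package Selenium.WebDriver
Install-Package Selenium.WebDriver.ChromeDriver
Install-Package Selenium.WebDriver.IEDriver
```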


You will notice two new exe files in the root of the test project: chromedriver.exe and IEDriverServer.exe. These executables are essential for Selenium to communicate with the corresponding browsers and are executed behind the scenes when our Selenium tests run.

Testing with Chrome

Let’s start writing our first Selenium test. Since Chrome requires no additional setup on your machine to work nicely with Selenium, we are going to choose Chrome as our first target browser. We create a new test class with one test method and instantiate a ChromeDriver object. All driver objects implement the IWebDriver interface, which allows us to design a flexible framework for a Selenium-based test suite that can easily support multiple browser vendors.

[TestClass]
public class SeleniumTests
{
  [TestMethod]
  public void MyFirstTest()
  {
    IWebDriver driver = new ChromeDriver();
  }
}

When we run the test we notice that a new Chrome instance starts up and the test passes. Let’s proceed by doing something interesting with our browser instance by navigating to a specific URL (e.g. www.wikipedia.com). The Navigate() method of IWebDriver returns an INavigation interface that exposes common ways to browse between various pages (e.g. back, forward, refresh). In our case we want to go to the Wikipedia homepage, so we are going to use the GoToUrl() method.

public void MyFirstTest()
{
  IWebDriver driver = new ChromeDriver();
  driver.Navigate().GoToUrl("http://www.wikipedia.com");
}

The next step requires us to interact with the DOM by performing a simple search operation. In order to do that, we need to interact with two elements on the page: the search bar and the search button. We therefore need a way to identify those two DOM elements. By using the Inspect Element feature in Google Chrome, we notice that the search bar has an id attribute of “searchInput”. We aren’t that lucky with the search button, though. All we get from this element is a class “formBtn”, which unfortunately is shared with another input element on the page (the search button for the language search form). In real life, we would alter the HTML markup to ensure every DOM element involved in a Selenium test has an id attribute. The alternative would be to locate the element with an XPath expression (which Selenium also supports), but that approach is less flexible and harder to maintain. For this tutorial, we will take advantage of the fact that our target search button appears first in the DOM hierarchy, but in a real-life scenario I would recommend staying away from ambiguous element identification in Selenium.

With our newly gained knowledge about the Wikipedia start page, we can wire up the elements we need and automate our search:

public void MyFirstTest()
{
  IWebDriver driver = new ChromeDriver();
  driver.Navigate().GoToUrl("http://www.wikipedia.com");
  // Locate elements by using their ids or classes
  var searchBar = driver.FindElement(By.Id("searchInput"));
  // Searching by class name can be ambiguous, so we should use
  // the FindElements method
  var searchButtons = driver.FindElements(By.ClassName("formBtn"));

  // Target button appears first in the DOM
  var searchButton = searchButtons.First();

  searchBar.SendKeys("Selenium");
  searchButton.Click();
}

We notice that the IWebDriver interface provides us with the FindElement and FindElements methods, which return a single IWebElement and a collection of IWebElements respectively. Those methods require a search criterion, which is represented by a By object. By objects are retrieved by calling the static factory methods provided by that class. In a future article, I will explain how we can utilize that concept in our test suite design. IWebElement has a lot of methods that allow us to either gather more information about that element (e.g. the inner text, visibility etc.) or manipulate the element directly. In our case, we want to simulate the user typing a query into the search bar. With SendKeys, we can accomplish this task easily by passing our input as a string to that method. In our case, we are attempting to perform a search on the term “Selenium”. Clicking on an element is simply calling the Click method on that particular element.
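A few of the By factory methods in action (the first two use the selectors from the Wikipedia example; the CssSelector and XPath locators are hypothetical and shown only for illustration):

```csharp
// By.Id and By.ClassName, as used above
var searchBar = driver.FindElement(By.Id("searchInput"));
var buttons = driver.FindElements(By.ClassName("formBtn"));

// Other commonly used locator strategies
var heading = driver.FindElement(By.TagName("h1"));
var submit = driver.FindElement(By.CssSelector("input[type='submit']"));
var byXPath = driver.FindElements(By.XPath("//input[@class='formBtn']"));
```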

By running the test, we can observe how the browser starts up, navigates to the Wikipedia page, enters “Selenium” into search box and clicks on the search button. When the test finishes we see the browser showing the Wikipedia page for Selenium.

SeleniumWiki

In order to make our test a useful unit test, we need to add some validation logic. For this tutorial, let’s verify that Wikipedia brought us to the correct result page by validating that the title of the document is indeed “Selenium”. By analyzing the DOM, we notice that the title is embedded within a generic span tag, which itself is contained by an h1 element with the id “firstHeading”.

<h1 id="firstHeading" class="firstHeading" lang="en">
  <span dir="auto">Selenium</span>
</h1>

If we let Selenium return the h1 element using the techniques pointed out above and look at its Text property, we notice that it automatically resolved its DOM sub-tree to the visible text and therefore has a value of “Selenium”. We could, in theory, just use that property, verify that it’s equal to “Selenium” and call it a day. However, we want to be more precise and make sure that the “Selenium” text is actually embedded inside a span element. In order to do that, we take advantage of the fact that IWebElement implements the FindElement(s) methods as well, which allow us to search the element’s sub-tree. So, we are going to search for a span element within the h1 element. We do an assert and verify that it contains the correct text. Finally, we close the browser by calling Quit() on our IWebDriver object.

Here is the final version of our first Selenium test:

public void MyFirstTest()
{
  IWebDriver driver = new ChromeDriver();
  driver.Navigate().GoToUrl("http://www.wikipedia.com");
  // Locate elements by using their ids or classes
  var searchBar = driver.FindElement(By.Id("searchInput"));
  // Searching by class name can be ambiguous, so we should use
  // the FindElements method
  var searchButtons = driver.FindElements(By.ClassName("formBtn"));

  // Target button appears in the DOM first
  var searchButton = searchButtons.First();

  searchBar.SendKeys("Selenium");
  searchButton.Click();

  // Find the span element that contains the page title
  var titleH1 = driver.FindElement(By.Id("firstHeading"));
  var titleSpan = titleH1.FindElement(By.TagName("span"));

  Assert.AreEqual("Selenium", titleSpan.Text);

  driver.Quit();
}

Testing with Internet Explorer

Getting IE to play nicely with Selenium requires some additional configuration on the browser side. We have to either enable or disable Protected mode on all zones in the Internet Options Security tab.

Internet Options

It doesn’t matter if Protected Mode is enabled or disabled as long as that setting is consistent across all zones.

Last, but not least, we have to instantiate InternetExplorerDriver instead of ChromeDriver in the first line of our test method and Selenium will run the test inside IE.

IWebDriver driver = new InternetExplorerDriver();

Conclusion

We are now able to write simple browser tests with Selenium in different browsers. We can interact and analyze elements of the DOM by using various search conditions that are provided by the framework. By using the techniques we’ve discussed in this article, we can already automate a large number of test scenarios. In my next article, we are going to build a reusable test architecture on top of Selenium and discuss some problems that might occur when dealing with dynamic webpages that perform a lot of JavaScript based DOM manipulations.

Entity Framework Set Common Entity Fields in SaveChanges

November 14th, 2014

On a recent project leveraging Entity Framework, all the entities had four properties used to track the user and the date a record was created and/or modified: CreateDate, ModifyDate, CreateUser and ModifyUser. Since this information was the same for each entity, I didn’t want to manually set those properties on each entity before calling SaveChanges on the database context. I wanted a way to have it done automatically. Julie Lerman had a nice tip in her Visual Studio Toolbox session Entity Framework Tips and Tricks a while back that addressed this very issue with Modified dates. I changed it slightly to handle Create dates as well as Users.

In order to do this, you need to override the SaveChanges method (Listing 3) in the data context so that any time a save is being performed, the User and Date properties are set accordingly. If you look at one of the entities I am using (Listing 1), you’ll see the Createxxx and Modifyxxx properties. Given that these properties are on all the entities, I extracted them out to an interface (Listing 2) and had the entities implement that interface. You could have used a base class for the properties as well, but I already had the properties on the entities, so extracting the interface was easier.

public class CensusHeader : ITrackingInfo
{
    public int CensusHeaderId { get; set; }
    public string CreateUser { get; set; }
    public DateTime CreateDate { get; set; }
    public string ModifyUser { get; set; }
    public DateTime ModifyDate { get; set; }
    public string Name { get; set; }
    public DateTime EffectiveDate { get; set; }
    public string Notes { get; set; }

    public virtual ICollection<CensusDetail> CensusDetail { get; set; }

    public CensusHeader()
    {
        CensusDetail = new List<CensusDetail>();
    }
}

Listing 1. CensusHeader Entity

public interface ITrackingInfo
{
    DateTime CreateDate { get; set; }
    DateTime ModifyDate { get; set; }
    string CreateUser { get; set; }
    string ModifyUser { get; set; }
}

Listing 2. ITrackingInfo Interface

Now, when the overridden SaveChanges method is called, it can look for any entity that implements ITrackingInfo, check its State and set the user and date values as needed. In my case, the CensusContext is being generated from a T4 template, so I created a partial class with the override logic so that it’s not impacted by subsequent regeneration of the context. Also, it’s an Intranet application, so I am leveraging the System.Environment class to access the currently logged-in user and domain information.

public partial class CensusContext
{
    public override int SaveChanges()
    {
        ApplyTrackingRules();
        return base.SaveChanges();
    }

    /// <summary>
    /// Adds the Create and Modify, User and Date values to any entity that
    /// implements ITrackingInfo.
    /// </summary>
    private void ApplyTrackingRules()
    {
        var currentDate = DateTime.Now;
        var currentUser = Environment.UserDomainName + "\\" + Environment.UserName;

        foreach(var entry in ChangeTracker.Entries().Where(e =>
            e.Entity is ITrackingInfo &&
            (e.State == EntityState.Added || e.State == EntityState.Modified)))
        {
            var e = entry.Entity as ITrackingInfo;

            if (entry.State == EntityState.Added)
            {
                e.CreateDate = currentDate;
                e.CreateUser = currentUser;
            }

            e.ModifyDate = currentDate;
            e.ModifyUser = currentUser;
        }
    }
}

Listing 3. CensusContext Partial Class
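With the override in place, saving works like any other SaveChanges call. A usage sketch (the CensusHeaders DbSet name here is an assumption; use whatever your context exposes):

```csharp
using (var db = new CensusContext())
{
    var header = new CensusHeader { Name = "2014 Census", EffectiveDate = DateTime.Today };
    db.CensusHeaders.Add(header);   // State = Added: Create and Modify values get set
    db.SaveChanges();

    header.Notes = "Updated";       // State = Modified: only Modify values get refreshed
    db.SaveChanges();
}
```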

Here is a sample of a census header record that was saved to the database.
CensusHeader

With this simple change in place, all the entities that implement ITrackingInfo will now be saved with the user and date values automatically. Be sure to check out Julie’s session for more information.

Can’t Register CRM 2013 Workflow Assembly

November 7th, 2014

I was getting an error attempting to register a workflow assembly using the 6.1.1 version of the CRM 2013 SDK. Every time I attempted to register the assembly I would get the following error message:

No plugins have been selected from the list. Please select at least one and try again.

I found a thread on community.dynamics.com that had the solution. If you look at the screen shot in the thread, you’ll notice that something appears to be missing when compared to what it normally looks like:

WorkflowRegistration
Click the image for a larger version

If you look at the “verified answer” on the thread by user “CY C”, the solution is to use an older version of the SDK to register your workflow. User Guido Preite has a response with a link to previous versions of the SDK shared on his OneDrive. (The links were working at the time this post was written.) I grabbed version “CRM2013SDK-6.1.0-v2″ and used the plugin registration tool to successfully register my custom workflow.

Error Installing CRM 2013 Outlook Client Service Pack 1

October 30th, 2014

Please note that the solution I suggest involves editing the registry which can be dangerous if you don’t know what you’re doing. If you have to ask yourself “Should I try this?”, the answer is no, you shouldn’t. Proceed at your own risk!

Now that the disclaimer is out of the way, here’s the problem I encountered. I was attempting to install the CRM 2013 Outlook client Service Pack 1 update to my local machine and kept encountering an error. The error was:

Action Microsoft.Crm.UpdateWrapper.SetPatchToUninstallable failed.
Couldn’t find Registry Key: SOFTWARE\Microsoft\MSCRM\
Couldn’t find Registry Key: SOFTWARE\Microsoft\MSCRM\

Given the error message, I looked in HKLM\SOFTWARE\Microsoft\MSCRM as well as HKCU\SOFTWARE\Microsoft\MSCRM\ and either the key was there or I created it. I rebooted, tried again and still encountered the same error message. What to do, what to do?

Aha! Process Monitor! I ran Process Monitor and had it track registry key access while the install was running. When it failed, I searched for the text “MSCRM” in the path and found the key below with the result “NAME NOT FOUND”. (Click the image for a larger view.)

Click for larger image

The problem was that the registry key was in a different location than I had expected – it was in the Wow6432Node. Once I realized that’s where it was looking, the fix was easy. All I needed to do was to create a new key at HKLM\SOFTWARE\Wow6432Node\Microsoft\ with the name “MSCRM”. Once I did that, everything ran fine and the update was installed successfully.
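If you prefer to script the fix rather than create the key by hand, a .reg file equivalent of that manual step would look like this (again, edit the registry at your own risk, per the disclaimer above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSCRM]
```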

Hope this helps!

CRM 2013 & Chrome: Browser Upgrade Woes

October 23rd, 2014

Beginning with CRM 2011 and continuing with CRM 2013, Microsoft has made some major strides in supporting Dynamics with non-IE browsers, but a couple recent Chrome updates have thrown a wrench in the works.

With Google Chrome 37, an error displays when changing Status Reasons. As mentioned in the KB article An error occurs in Microsoft Dynamics CRM when adding or editing Status Reasons using Google Chrome, the cause is that showModalDialog() has been turned off. Fortunately, the article lists the steps to re-enable the JavaScript function, providing a workaround for the issue.

However, with Google Chrome 38, Lookups began to have problems. As described in the KB article Microsoft Dynamics CRM lookup fields fail to save or provide results when using Chrome 38, you receive an “An error occurred” message in CRM 2013 when clicking on a Lookup, and a different error in CRM 2011 when trying to save a form where a Lookup has been changed. While there is currently no fix or workaround for this problem, Aaron Richards has mentioned on his MS Dynamics CRM Community blog, in the post Dynamics CRM lookup fields fail with Chrome 38, that CRM Online will be fixed in Service Update 6, while On-Premise should be addressed in the next few weeks.

CRM 2013 OData Queries With Related Entities and LINQPad

October 16th, 2014

I’ve been a huge user of LINQPad for several years now. It’s a wonderful product and if you haven’t used it I recommend that you do and also recommend that you get the pro or premium version.

And now, on to today’s topic. I needed to write some OData for a CRM 2013 project that I’ve been working on. In the past I’ve used a tool designed specifically for CRM OData queries, but it has not been officially updated for CRM 2013, so I started looking for other options. As I searched, I found something that said I could use LINQPad. What!? The app I love and use supports OData in MS CRM? Yes, please!

Connecting LINQPad to the CRM web service

Here’s how to get it working:

1. In a new LINQPad query, click “Add connection”.
2. Select “WCF Data Services 5.5 (OData 3)” and click “Next”.
3. Enter the URI for the CRM organization data service, something like: http://server/orgName/XRMServices/2011/OrganizationData.svc.
4. Enter the user name as “domain\userName” and your password.
5. Test the connection or just click “OK”.

Setting up the Necessary CRM Entities

To set up the example I’m going to work with in this post, create a new entity named new_widget that has an N:1 relationship to contact. (Or, if it’s clearer, a 1:N from contact to new_widget.) I created the relationship name as “new_contact_widget”. For a contact, create a few associated new_widget records.

Basic OData Query – Single Entity

Let’s say that you want the OData for a basic query returning all widgets whose name contains “Drum Widget 1”. The query looks like this:

var widgets = (from w in new_widgetSet
               where w.new_name.Contains("Drum Widget 1")
               select w).Dump();

Here’s the output from LinqPad (click the image for a larger version):
Basic Query

What’s really great is that LINQPad outputs the actual OData query, making it easy to put in your CRM JavaScript. In the LINQPad results pane look at the “Request Log” tab. You’ll see output like this:

http://server/orgName/XRMServices/2011/OrganizationData.svc/new_widgetSet()?$filter=substringof('Drum Widget 1',new_name)

Query and Return a Related Entity

Now, let’s say that in addition to the widget record, you want to get the related contact info. You can do that, but you can’t use a join as you normally would in LINQ, because OData doesn’t support joins. This StackOverflow post was helpful to me in figuring that out. And since you can’t do a join, you obviously can’t select from the joined entity either. Instead, you have to use OData’s expand functionality along with projection in LINQ – and LINQPad makes that easy. So, the query would look like this:

var widgets = (from w in new_widgetSet
               where w.new_name.Contains("Drum Widget 1")
               select new {w.new_contact_widget}).Dump();

and the result looks like this:

QueryRelatedEntityProjection1

One thing to keep in mind: you cannot filter based on attributes in the related entity, so something like this will not work:

var widgets = (from w in new_widgetSet
               where w.new_name.Contains("Drum Widget 1")
               && w.new_contact_widget.FirstName.Contains("mike")
               select new {w.new_contact_widget.FullName}).Dump();

If you try it, you’ll get an error “filter conditions of different entity types, in the same expression, are not supported. See exception below for more details.”

Returning Attributes from the Primary and Related Entity

If you want to bring back attributes from both the primary entity and the related entity, you again have to use projection. If I want the widget name and its associated contact, it looks like this:

var widgets = (from w in new_widgetSet
               where w.new_name.Contains("Drum Widget 1")
               select new {w.new_name,  w.new_contact_widget.FullName}).Dump();

and the result looks like this:
QueryMainAndRelatedEntity
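For this projected query, the Request Log should show a URL that uses $expand and $select rather than a join. I’d expect something along these lines (the exact shape LINQPad generates may differ slightly):

```
http://server/orgName/XRMServices/2011/OrganizationData.svc/new_widgetSet()?$filter=substringof('Drum Widget 1',new_name)&$expand=new_contact_widget&$select=new_name,new_contact_widget/FullName
```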

Hopefully this helps you understand how you can use LINQPad and the CRM web service to generate OData queries. It’s going to make my life a lot easier!

ASP.NET MVC Convert ViewModel to Client-Side ViewModel

October 9th, 2014

ASP.NET MVC allows a controller to return a model/viewmodel to the View for data-binding. That works just fine, but it’s one-way, and I wanted to use Knockout’s two-way data binding to simplify making changes and posting them back to the server. Since I already had the model available in the View, I just needed a way to serialize it to JSON so that it could be used to create a client-side viewmodel and leverage Knockout to handle the bindings.

The application I am using allows a Header record to be created with one or more Detail records and uses the two viewmodels below. The HeaderViewModel is what’s passed to the View.

public class HeaderViewModel
{
    public HeaderViewModel()
    {
        Detail = new List<DetailViewModel>();
        DetailItemsToDelete = new List<int>();
    }

    public int HeaderId { get; set; }
    public string Name { get; set; }
    public string Comments { get; set; }
    public List<DetailViewModel> Detail { get; set; }
    public List<int> DetailItemsToDelete { get; set; }
}

public class DetailViewModel
{
    public int DetailId { get; set; }
    public int HeaderId { get; set; }
    public string EmpId { get; set; }
    public string Name{ get; set; }
    public string Email { get; set; }
    public string Division { get; set; }
    public DateTime? HireDate { get; set; }
    public string Active { get; set; }
}

There are a few ways to handle the serialization, such as Json.Encode(model), new JavaScriptSerializer().Serialize(model) or JsonConvert.SerializeObject(Model). I found JsonConvert simpler and more flexible, as I also needed to format the serialized dates in the mm/dd/yyyy format. The process is done in two steps: first serialize the server-side viewmodel to JSON, then use the resulting JSON to create the client-side viewmodel. The serialization logic can be added to the View like this.

@using Newtonsoft.Json
@using Newtonsoft.Json.Converters
@model HeaderViewModel

@{
    ViewBag.Title = "Create Header";
    // Convert the model to JSON
    var data = JsonConvert.SerializeObject(Model, 
        new IsoDateTimeConverter { DateTimeFormat = "MM/dd/yyyy" });
}

@section scripts
{
    <script src="~/Scripts/knockout-3.1.0.js"></script>
    <script src="~/Scripts/knockout.mapping-latest.js"></script>
    <script src="~/Scripts/jquery.validate.js"></script>
    <script src="~/Scripts/app/membermodule.js"></script>
    <script>
        // Convert the server-side viewmodel to a client-side viewmodel...
        var headerViewModel = new MemberModule.HeaderViewModel(@Html.Raw(data));
        // and bind it to the view
        ko.applyBindings(headerViewModel);
    </script>
}

@Html.Partial("_EditPartial")

The call to MemberModule.HeaderViewModel is where the client-side viewmodel is created, leveraging the Knockout Mapping plugin to simplify the creation of observable properties, and is shown below. The Detail mapping, viewmodel and some of the other methods were collapsed for brevity.

var MemberModule = (function() {
    // Mapping definition for the child records
    var detailMapping = {};

    // Child View Model
    var detailViewModel = function(data) {};

    // Parent View Model
    var headerViewModel = function(data) {
        var self = this;
        ko.mapping.fromJS(data, detailMapping, self);

        self.save = function() {
            $.ajax({
                url: "/Member/Save",
                type: "POST",
                data: ko.toJSON(self),
                contentType: "application/json",
                success: function(result) {
                    if (result.viewModel != null) {
                        ko.mapping.fromJS(result.viewModel, {}, self);
                    }
                }
            });
        },
        self.flagHeaderAsEdited = function() {},
        self.addDetail = function() {},
        self.deleteDetail = function(detail) {};
    };

    return {
        HeaderViewModel: headerViewModel
    }
})();
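On the server side, the /Member/Save endpoint referenced in the ajax call above would be an MVC action that model-binds the posted JSON back onto HeaderViewModel. A hypothetical sketch (the controller body and persistence details are assumptions, not from the original application):

```csharp
public class MemberController : Controller
{
    [HttpPost]
    public JsonResult Save(HeaderViewModel viewModel)
    {
        // Persist the header/detail changes here, then return the
        // (possibly updated) viewmodel so the client-side success
        // handler can re-map it with ko.mapping.fromJS.
        return Json(new { viewModel });
    }
}
```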

The View can then use the client-side viewmodel for two-way binding as shown below.

<h2>@ViewBag.Title</h2>

<form>
    <div class="form-group">
        <label for="Name" class="control-label">Name:</label>
        <input name="Name" id="Name" class="form-control" data-bind="value: Name, event: {change: flagHeaderAsEdited}, hasfocus: true" />
    </div>
    <div class="form-group">
        <label for="Comments" class="control-label">Comments:</label>
        <input name="Comments" id="Comments" class="form-control" data-bind="value: Comments, event: {change: flagHeaderAsEdited}" />
    </div>
    <table class="table table-striped">
        <tr>
            <th>Id</th>
            <th>Name</th>
            <th>Email</th>
            <th>Division</th>
            <th>Hire Date</th>
            <th>Active</th>
            <th><button data-bind="click: addDetail" class="btn btn-info btn-xs">Add</button></th>
        </tr>
        <tbody data-bind="foreach: Detail">
            <tr>
                <td class="form-group"><input name="EmpId" class="form-control" data-bind="attr: {'id': 'EmpId_' + $index()}, value: EmpId, event: {change: flagDetailAsEdited}" /></td>
                <td class="form-group"><input name="Name" class="form-control" data-bind="attr: {'id': 'Name_' + $index()}, value: Name, event: {change: flagDetailAsEdited}" /></td>
                <td class="form-group"><input name="Email" class="form-control" data-bind="attr: {'id': 'Email_' + $index()}, value: Email, event: {change: flagDetailAsEdited}" /></td>
                <td class="form-group"><input name="Division" class="form-control" data-bind="attr: {'id': 'Division_' + $index()}, value: Division, event: {change: flagDetailAsEdited}" /></td>
                <td class="form-group"><input name="HireDate" class="form-control" data-bind="attr: {'id': 'HireDate_' + $index()}, value: HireDate, event: {change: flagDetailAsEdited}" /></td>
                <td class="form-group"><input name="Active" class="form-control" data-bind="attr: {'id': 'Active_' + $index()}, value: Active, event: {change: flagDetailAsEdited}" /></td>
                <td class="form-group"><button data-bind="click: $parent.deleteDetail" class="btn btn-danger btn-xs">Delete</button></td>
            </tr>
        </tbody>
    </table>
    <p><a href="/" class="btn btn-default btn-sm">&laquo;Back to List</a></p>

    <p><button type="submit" class="btn btn-primary">Save</button></p>
</form>

Here are few shots of the UI.

Header listing with options to Create, Edit, view Details and Delete.

Create a new Header.

Edit a Header showing the client-side viewmodel in Chrome’s developer tools.

The data for the View could certainly be retrieved and returned as JSON via Ajax, but in this case the viewmodel was already being passed, so it was easy enough to convert it to a client-side viewmodel. In addition, leveraging Knockout’s two-way data-binding synchronizes changes made via the UI with the underlying viewmodel and makes posting changes back to the server a snap.

Generate and Export a String-only CSV File from WebAPI

September 29th, 2014

Recently one of my tasks was to allow part of my model to be exported as a downloadable CSV attachment. On top of that, the CSV files were likely to be opened in Excel for modification, and depending on the data types, Excel would sometimes apply formats to the data that we did not want. So we also needed to be able to force the values to be treated as pure strings when opened in Excel.

In Web API, we are going to create a method that returns an HttpResponseMessage. I am also passing an id parameter that I use to get my data.

public HttpResponseMessage GetCsv(string id)
{
	// ...
}

The model I will be using looks like this (various data types):

public class Record
{
	public string Name { get; set; }
	public string Phone { get; set; }
	public string Email { get; set; }
	public DateTime BirthDate { get; set; }
	public int Tickets { get; set; }
}

First I’ll retrieve my data. Then, to generate my CSV contents, I use the StringBuilder class. To start, we’ll need to append our header row (includes new line):

var records = GetRecords(id); // retrieve the data (implementation not shown)
var sb = new StringBuilder();

sb.Append("Name,Phone,Email,Birth Date,Tickets\r\n");

In order to force string values in Excel, we need to wrap the contents. For example, for a value of John Smith we will actually be saving as =”John Smith”. The AppendFormat method is helpful for this part:

foreach (var record in records)
{
	sb.AppendFormat("=\"{0}\",", record.Name);
	sb.AppendFormat("=\"{0}\",", record.Phone);
	sb.AppendFormat("=\"{0}\",", record.Email);
	sb.AppendFormat("=\"{0}\",", record.BirthDate.ToShortDateString());
	sb.AppendFormat("=\"{0}\"\r\n", record.Tickets); // no comma for the last item, but a new line
}
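One caveat with this approach: a value containing a comma or a double quote will break the unquoted ="..." fields. A hypothetical helper (an assumption on my part, not part of the original code) that quotes the whole field and doubles embedded quotes would guard against that:

```csharp
// Hypothetical helper: wraps a value as "=""value""" so commas and embedded
// quotes survive CSV parsing while Excel still treats the value as text.
static string CsvCell(string value)
{
	var escaped = (value ?? string.Empty).Replace("\"", "\"\"");
	return "\"=\"\"" + escaped + "\"\"\"";
}
```

It could then replace the AppendFormat calls, e.g. sb.Append(CsvCell(record.Name) + ",").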

That’s it for building our CSV file; all that’s left is to return the response for download:

HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);

result.Content = new StringContent(sb.ToString());
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment"); //attachment will force download
result.Content.Headers.ContentDisposition.FileName = "RecordExport.csv";

return result;

To start the download, just add a link with the path to your method as the URL:

<a href='../api/Tools/GetCsv/12345'>Export</a>

Now when our file is opened in Excel, you can see that even integer values are wrapped and no additional formatting was automatically applied.

Excel Screenshot