Archive for August, 2011

Manipulate file names with the Path class

During a recent .NET project I was tasked with building a UI that would allow users to upload a file to the server which would later be parsed and imported into a database table. The process was simple enough:

  1. Select a file to import:
    D:\Renewals\August2011.xml
  2. Copy selected file to a shared folder on the server:
    \\DevServer\Attachments
  3. Pass the file name to an import routine for processing:
    \\DevServer\Attachments\August2011.xml

However, the file selected in step 1 includes the local root path, which has to be replaced before step 3 so that the import routine reads the source file from the server and not from the user’s system. It’s basic string manipulation, and it’s easy to start parsing manually: search for the last backslash, extract the file name or the path, and combine/rebuild as needed. I started doing just that when I remembered that the System.IO.Path class has a large assortment of static methods for dealing with strings containing file and path information.

For example, let’s take the file name and folder mentioned above and apply a few of the common methods from the Path class.

var selectedFile = @"D:\Renewals\August2011.xml";
var targetFolder = @"\\DevServer\Attachments";

// Gets the file name: August2011.xml
var fileName = Path.GetFileName(selectedFile);

// Gets the directory name: D:\Renewals
var directoryName = Path.GetDirectoryName(selectedFile);

// Gets the root path: D:\
var root = Path.GetPathRoot(selectedFile);

// Changes the extension of the file: D:\Renewals\August2011.txt
var newFileName = Path.ChangeExtension(selectedFile, "txt");

// Combines a series of strings into a path:
// \\DevServer\Attachments\August2011.xml
var sourceFile = Path.Combine(targetFolder, fileName);

A nice feature of the Combine method is that it’s overloaded to take up to four strings (or an array of strings), and it will add a backslash between the segments if needed. For example:

Path.Combine(@"D:\Renewals", "August2011.xml") 
    returns "D:\Renewals\August2011.xml"

Path.Combine(@"D:\Renewals", "Attachments", "August2011.xml") 
    returns "D:\Renewals\Attachments\August2011.xml"

Working with strings that include file or directory path information is much easier with the Path class, and you no longer have to resort to manual parsing.

P.S. Rick Strahl posted a great article titled Making Sense of ASP.NET Paths that summarizes the path options along with descriptions for the current request, control and application.



New SQL “Denali” is here

Technology use is growing around the world, as is our dependency on data and devices, whether it be a PC, laptop, tablet or smartphone. Even when the economy doesn’t look very promising, generally speaking there are some sectors that are not being hit as hard as others, and technology is one of them. In fact, many websites such as CNBC.com, USNEWS.com and others rank technology careers among the fastest-growing careers in the market right now. Demand for these careers is growing exponentially, as are the tools to supply that demand.

One of the new tools just released for testing is “Denali”, the new version of Microsoft SQL Server. It is a cloud-ready information platform and looks pretty promising. It has the ability to interact with OData and sync information between multiple platforms.

Microsoft is also promising that OS patching and downtime can be reduced significantly, by 50-60%, by running SQL Server on Windows Server Core.

Regarding performance, Microsoft is introducing the In-Memory Column Store, better known as “Apollo”, in this version. Basically, Apollo brings together the VertiPaq technology that was developed in Analysis Services and a new query execution paradigm called batch processing to speed up common data warehouse queries.

Some other features that Denali brings are:

  • “Crescent” is a Reporting Services project. According to Microsoft, it is easy to use and provides a highly interactive, web-based data exploration, visualization, and presentation experience to users of all levels.
  • BI Semantic Model, a single model that gives users multiple ways of building business intelligence solutions.
  • “Juneau” is the project for SQL Server Developer Tools. “Juneau” unifies development across database, BI and web projects, and supports both SQL Server and SQL Azure.

In order to install Denali there are some requirements you have to take into consideration:

  • First, Denali does not install or enable Windows PowerShell; however, Windows PowerShell 2.0 is an installation prerequisite.
  • .NET Framework 3.5 SP1 is a requirement, and you have to enable it before you install Denali. If you are installing on Windows Server 2008 R2, .NET 4.0 is a requirement too.

Regarding hardware and OS, Denali can be installed on Windows Server 2008 and 2008 R2 SP1, Windows 7 SP1 and Vista SP2. It requires a minimum of 1 GB of memory; the 64-bit version requires a 1.4 GHz processor, and the 32-bit version a 1.0 GHz processor.

Installing Denali was not a big problem; however, it took more than 30 minutes to install.

When I started using it, I didn’t notice a big difference between this version and the 2008 R2 version. Here is an image of the new Management Studio UI.

The new UI is powered by Visual Studio, so you will feel that you are working in Visual Studio instead of the regular SQL Management Studio.
In future blogs I will discuss the new T-SQL features and reporting tools, as well as the BI tools that Denali provides.

If you are interested in downloading and installing Denali, you can get it from: Microsoft
Also, if you want to learn more about it, this video may help you. It covers the new Denali features. The speaker is Dandy Weyn, and I can tell that he does a great job presenting the product.
Take a look at it: Microsoft Technet Videos



Random Error Message: The request for [this] procedure failed because [it] is a table valued function object

While working on a project recently, I ended up needing to call a “Table-valued Function” in SQL Server. Had I been writing my process from scratch, I would probably not have chosen to use a Table-valued Function, but I was asked to tap into previously existing code that used one, so I didn’t have much choice. Initially, I didn’t think that this would be a problem; however, when I started trying to actually call a Table-valued Function the way you would a stored procedure or a Scalar-valued Function, I got the following error:

The request for procedure ‘Procedure X’ failed because ‘Procedure X’ is a table valued function object

This error is actually pretty self-explanatory: you can’t call a Table-valued Function from ASP.NET the way you would call a stored procedure. The problem was … I needed to call one. I tried all kinds of things. My last resort was going to be to create a new stored procedure (which I CAN call from ASP.NET) whose only function was to turn around and call the Table-valued Function. This did not seem like a very effective solution to me, if for no other reason than that it would require editing the new stored procedure every time the Table-valued Function changed. Those kinds of things are easy to get out of sync.

I searched for this error online and didn’t find much in the way of solutions. I finally found something that worked, though, and I thought I would write a quick blog post about it. In essence, Table-valued Functions are called in the same way that one would run a query against a table. In other words, you use the function as if it were a table. So, if we want to select all the episodes from the Burn Notice Episodes table where the season is 2 and the rating is 5 stars, our query would look like this:

SELECT * FROM episodes WHERE rating = 5 AND season = 2

NOTE: This would return a list of all episodes from season 2 as they are all awesome

However, if the rating of a given episode were an aggregate of a variety of factors and happened to include several values from different places, we could write a Table-valued Function called “EpisodeRatings” that accepts two parameters (@Rating and @Season) and returns all the values that we need. Rather than the EXEC statement we would use for a Scalar-valued Function, our call would look like this:

SELECT * FROM episoderatings(5, 2)

Knowing this difference is actually the key to solving the error that is the subject of this post. Rather than trying to call the Table-valued Function as if it were a stored procedure, we will build a parameterized SQL string in our C# code. Using a parameterized string allows you to dynamically build the SQL statement in your C# code, while the strongly typed input parameters thwart attempts at SQL injection. I don’t think that this is the ideal solution, but for one of those times when the project’s requirements or prior design constrain your options, it certainly works. So, below I have an example of some code that builds the SQL string, adds in the parameters, and then executes the Table-valued Function and returns a DataTable.

public static DataTable ExecuteTableValueFunction(string database, List<SprocParameter> parameterList)
{
	const int timeout = 30; // command timeout, in seconds

	// Build the parameterized sql string
	StringBuilder sb = new StringBuilder();
	sb.Append("SELECT * ");
	sb.Append("FROM episoderatings(");
	sb.Append("@Rating");
	sb.Append(",@Season");
	sb.Append(")");

	DataTable dt = new DataTable();

	// Use the database name that was passed in rather than a hard-coded one
	Database db = DatabaseFactory.CreateDatabase(database);
	DbCommand cmd = db.GetSqlStringCommand(sb.ToString());

	cmd.CommandType = CommandType.Text;
	cmd.CommandTimeout = timeout;

	// Add each input parameter, substituting DBNull for missing values
	foreach (SprocParameter param in parameterList) {
		if (param.Value == null) {
			db.AddInParameter(cmd, param.Name, param.DataType, DBNull.Value);
		} else {
			db.AddInParameter(cmd, param.Name, param.DataType, param.Value);
		}
	}

	// ExecuteReader opens and closes its own connection, so there is
	// no need to create and manage one manually here
	using (IDataReader reader = db.ExecuteReader(cmd)) {
		dt.Load(reader);
	}

	return dt;
}
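Calling the method is then straightforward. Here is a hypothetical usage example, assuming the SprocParameter class exposes the Name, DataType and Value properties that the loop above relies on:

var parameters = new List<SprocParameter>
{
	new SprocParameter { Name = "@Rating", DataType = DbType.Int32, Value = 5 },
	new SprocParameter { Name = "@Season", DataType = DbType.Int32, Value = 2 }
};

DataTable fiveStarEpisodes = ExecuteTableValueFunction("BurnNotice", parameters);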

So, to conclude … I must say that I did not love solving this issue in this manner. I don’t like building SQL in my code using strings, but since the query is parameterized it will still guard against SQL injection attacks. In the end, I thought that it was a better option than creating extra procedures simply to call the Table-valued Function. If you are getting this error and have no choice but to barrel forward with a Table-valued Function, I hope that this post helps you solve the problem.


Adding Client-side Values to CheckBoxLists

ASP.NET CheckBoxLists have been broken for years, but, up until recently, it never really mattered to me that they don’t render item values in the HTML. As long as I could get the value during a postback, I was happy. Of course, tons of postbacks make for a lousy user experience, so I was bound to run into this problem, eventually.

One of our projects has a client-side business rule engine that relies on the “value” attribute of the various controls it interacts with, so some newly-added CheckBoxLists were not behaving as expected. I toyed with the idea of simply using the text of each item as a substitute for the value, but this would have caused problems elsewhere since the server-side version of the business rule engine could get the values just fine and we didn’t want to create a situation where we would need two versions of each rule that touched a CheckBoxList.

I found a few articles online that recommended that I loop through the control’s items when the page loads and manually add each value as a new client-side attribute. While this approach definitely works as long as you’re looking for the correct attribute on the client-side, I wanted something a bit more modular. Not that setting up this loop wherever I needed it would be much of a pain, but I was looking for a fix that I could implement once, in one place, then never have to worry about again.
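For reference, here is a minimal sketch of that loop, assuming a CheckBoxList with the ID MyCheckBoxList (the "data-value" attribute name is an arbitrary choice for this example):

protected void Page_Load(object sender, EventArgs e)
{
    // Copy each item's server-side Value into a custom client-side attribute
    foreach (ListItem item in MyCheckBoxList.Items)
    {
        item.Attributes["data-value"] = item.Value;
    }
}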

Fortunately, before I broke down and tried to do it from scratch, I came across an awesome solution on Evan Freeman’s blog, Client Side Values for the CheckBoxList. In it, he extends the CheckBoxList class to render the client-side values and doesn’t really touch anything else, so the control behaves just like you would expect of an unbroken CheckBoxList. Microsoft may not be interested in ever fixing this bug, but I’m awfully glad someone went out of their way to do it for them.
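His post has the real code, but the general shape of the fix looks something like this (a rough sketch of the idea, not Evan’s actual implementation):

public class ValueAwareCheckBoxList : CheckBoxList
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Push each item's Value into its attribute collection just before
        // rendering so that the value makes it into the client-side markup
        foreach (ListItem item in Items)
        {
            item.Attributes["value"] = item.Value;
        }
        base.Render(writer);
    }
}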



WCF, Entity Framework and N-Tier Solutions – Part 2


I wrote a post a few weeks ago about an issue that I ran into while programming my first WCF application. If you would like to read that post first (it might be a good idea), you can click here. Since it was posted, I have noticed that a few people visit it every day, and they are probably annoyed, since I identified the problem, identified the solution, and then gave no explanation of how the solution should actually be implemented. So, as promised, I am going to go over the path that we followed to solve the problem, step by step. This will be a long post, but if you are trying to solve this particular problem, I hope that it is helpful.

Create the Solution

I want to create a new Burn Notice Silverlight project to keep track of all the characters and their spy skills. In case you have not read any of my previous posts and have not seen my “Burn Notice” examples (and if you don’t know what Burn Notice is) … you are missing out. Click here to learn more. To begin with, create an empty solution in Visual Studio, then add projects to it that will house the different tiers of our application (in parentheses I have noted what I will call each project in my Burn Notice solution):

  • Model Layer – This layer will contain the POCO objects (BurnNotice.Model)
  • Data Layer – This layer will interact with the database and will contain our Entity Framework file (BurnNotice.Data)
  • Business Layer – This layer calls the data layer and implements business rules if applicable (BurnNotice.Business)
  • Service Layer – This layer contains the actual WCF service that can be accessed by external websites and our Silverlight project (BurnNotice.Service)
  • Silverlight Layer – This layer contains the Silverlight project that references the service layer (BurnNotice.UI)
  • Web Layer – This is a website that will host the Silverlight project (BurnNotice.UI.Web)

This is what it looks like in my solution explorer:

You, of course, can name your projects whatever you want as long as you know what each layer is for. Model, Data, Business, etc. are just naming conventions that we have used. In my case, I have also created a database with a few tables in it. I have included a small screenshot of the table names.

Entity Framework Model and POCO’s

Now that your database is created and all the table relationships are set up properly, right-click on the Data project in your solution and select Add > New Item … When the dialog box appears, click on “Data” in the “Installed Templates” section. Then click on the “ADO.NET Entity Data Model”, name the file whatever you would like, and add it by clicking “Add”.

At this point, the “Entity Data Model Wizard” appears. Do the following:

  1. Select “Generate From Database” on the “Choose Model Contents” page.
  2. Click Next
  3. Click on “New Connection” and then use the “Connection Properties” popup to build the connection string to your database. I recommend that you use “SQL Server Authentication” with an account that is set up with only the necessary permissions for your application
  4. Click on “Test Connection”, if the popup says “Test connection succeeded” then click on the “OK” button
  5. Click on the radio button labeled “Yes, include the sensitive data in the connection string” (this can be encrypted later if you wish)
  6. Click the checkbox labeled “Save entity connection settings in App.Config as” and then give the connection a name (whatever you want to call it, but remember what it is)
  7. Click Next
  8. Select all the database objects that you want to include in the Entity Framework (Most likely tables)
  9. Enter a name for the Model Namespace (whatever you want to call it)
  10. Click Finish

Now that the Entity Framework Model is created, we are ready to get to the meat of this post, which is creating POCOs (Plain Old CLR Objects) that can be exposed over a WCF service without exposing the Entity Framework itself and all that goes with it. Once you are done, your Entity Framework Model should look something like this:

So now that you are looking at the Entity Framework file that you just created, right click on the canvas of the Entity Framework. You will see several options in the menu that pops up. We want to select “Add Code Generation Item …”.

This will bring up another window showing the “Installed Templates”, as shown in the screenshot below. Once the “Add New Item” window pops up, we want to select the template called “ADO.NET POCO Entity Generator”. This also can be named whatever you like, although I tend to keep the name the same as what I chose for the Entity Framework itself.

Incidentally, if you cannot find this template installed in your copy of Visual Studio, you can find instructions on searching for and installing templates here. Once you save the “ADO.NET POCO Entity Generator”, two new files will be created in your data project. In my project, they are: “BurnNotice.Context.tt” and “BurnNotice.tt”.

At this point, if you try to build the project it should compile without any problems, but there are a couple more things to notice. First of all, underneath the new template files (those with a .tt extension), you will see that several new classes have been added to your data project, as shown in the graphic below.

Each of these new classes maps to an object in your Entity Framework model, which, in turn, maps to a table in your database. These files should not be edited directly (they are autogenerated each time the template file is saved), but if you open one up you will see that it is a fairly standard class with properties defining each column in the table. You will also notice that they are partial classes. That is not directly relevant to this post, but I have found it useful in other situations.
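As a quick illustration of what that buys you, you can add your own members to a generated class in a separate, hand-written file (the IsVillain property here is hypothetical):

// Lives in its own file, so it survives template regeneration
public partial class Character
{
    public bool IsVillain
    {
        get { return CharacterDescription != null && CharacterDescription.ToLower().Contains("evil"); }
    }
}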

N-Tier Structure

Now everything is working; however, any data access has to go directly through our data layer. Problems start to crop up when you try to turn this into a fully functional n-tier solution, mostly once you start adding references between the various projects in the solution. Calls to the database in an n-tier solution typically follow this pattern:

The UI Layer calls the Business Layer, which then calls the Data Layer. Once the data has been retrieved from the database, the Data Layer passes it back to the Business Layer, where any rules are applied. The Business Layer then sends the data back to the UI Layer, where it is presented to the user. In our project we also have the Service Layer sitting between the UI Layer and the Business Layer, but the concept is the same. However, if you want to pass actual objects or collections of objects back and forth between these layers (as opposed to, say, an array or a data table), all of your layers need access to the place where the objects are defined. If they are defined in the Data Layer (as our test project is currently set up), then all other layers need a reference to it. You will find that this makes it very easy to create circular reference errors in your project, and it also makes it harder to use those objects with a service. At TopLine, we typically have the Model project sitting off to the side, referenced by all other projects in the solution, as demonstrated in the screenshot below:

By doing this, all of the projects can pass around the commonly defined objects found in the Model project. This is where the rubber hits the road, so to speak. For an n-tier structure to work with WCF and the Entity Framework, we have created the POCO’s, and we will now move them over to our Model project so that they can be the common objects that we pass back and forth. We then get the benefits of using the Entity Framework for data access while using the POCO’s to pass around and to expose externally via the service.

Moving our POCO’s from the Data project to the Model project is fairly easy. Right-click on the template in the Data project that does NOT have the word “context” in the name (BurnNotice.tt in my example). Select “Cut” from the context menu, then right-click on the Model project and select “Paste”. Both the template and all of its associated classes should now have moved to the Model project. Our only problem now is that by moving the template we have broken the link between it and the Entity Framework model it depends on to create the POCO classes. Luckily, this is easy to fix. To do so, open the template file that is now in the Model project. Incidentally, there is currently no IntelliSense support for template files in Visual Studio; it will just be plain old black and white. Don’t let that fool you, though; these templates are powerful tools. Once it is open, look for the line of code that reads something like this:

    string inputFile = @"BurnNotice.edmx";

And change it to a path where it can find the Entity Framework model over in the data project, like so:

    string inputFile = @"..\BurnNotice.Data\BurnNotice.edmx";

When you click save, you will know immediately if this worked. If it did, the class files for your database objects will be regenerated as they should be. If the path that you entered was incorrect, however, you will receive a huge ugly error that looks like this:

OK … a few more things to do and we are home free. First of all, there is another item that we need to move to the Model project: the interface that defines our BurnNoticeService. This needs to be in the Model project for the same reason the POCO’s do. The interface is going to be implemented by classes in the Business and Service layers of the solution, and in order for that to happen, all projects need to have access to it. Just as we moved the BurnNotice template into the Model project, right-click on the interface in the Service project (“IBurnNoticeService” in my project), select “Cut”, then right-click on the Model project and select “Paste”. Once it has been moved, you will need to make a couple more changes so that the solution will compile. First, open the interface and change its namespace from the Service project namespace to the Model namespace. Second, you will need to add a reference to the “System.ServiceModel” assembly in your Model project (this was automatically added to the Service project when you created it, but it has to be added manually to the Model project).

Project References

Now we are ready to start adding in our project references. Just going down the list, your references should look like this:

Data Project – References To
Model Project

Business Project – References To
Data Project
Model Project

Service Project – References To
Business Project
Model Project

Model Project – References To
No Project References

At this point, we need to make one more change to a template file. This is necessary because the Context template file should be pointing at our POCO objects, which are now in a different project. We already have the reference to the Model Project added to our Data Project, so now we just need a reference to the namespace in the Context class itself. Go back to the Data project where you have the Entity Framework Model and a template file that includes the word “Context” in the title. Open it and find the following code:

//--------------------------------------------------------------------
//
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
//
//--------------------------------------------------------------------

using System;
using System.Data.Objects;
using System.Data.EntityClient;

Now add a using statement here that points to the Model project’s namespace, like so:

//--------------------------------------------------------------------
//
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
//
//--------------------------------------------------------------------

using System;
using System.Data.Objects;
using System.Data.EntityClient;
using BurnNotice.Model;

In the same template, find the method named “WriteLazyLoadingEnabled”. It looks like this:

private void WriteLazyLoadingEnabled(EntityContainer container)
{
   string lazyLoadingAttributeValue = null;
   string lazyLoadingAttributeName = MetadataConstants.EDM_ANNOTATION_09_02 + ":LazyLoadingEnabled";
   if(MetadataTools.TryGetStringMetadataPropertySetting(container, lazyLoadingAttributeName, out lazyLoadingAttributeValue))
   {
       bool isLazyLoading = false;
       if(bool.TryParse(lazyLoadingAttributeValue, out isLazyLoading))
       {
#>
        this.ContextOptions.LazyLoadingEnabled = <#=isLazyLoading.ToString().ToLowerInvariant()#>;
<#+
       }
   }
}
#>

and add one line to it (the ProxyCreationEnabled line below). This will prevent your project from creating duplicate proxy classes that interfere with execution.

private void WriteLazyLoadingEnabled(EntityContainer container)
{
   string lazyLoadingAttributeValue = null;
   string lazyLoadingAttributeName = MetadataConstants.EDM_ANNOTATION_09_02 + ":LazyLoadingEnabled";
   if(MetadataTools.TryGetStringMetadataPropertySetting(container, lazyLoadingAttributeName, out lazyLoadingAttributeValue))
   {
       bool isLazyLoading = false;
       if(bool.TryParse(lazyLoadingAttributeValue, out isLazyLoading))
       {
#>
        this.ContextOptions.LazyLoadingEnabled = <#=isLazyLoading.ToString().ToLowerInvariant()#>;
        this.ContextOptions.ProxyCreationEnabled = false;
<#+
       }
   }
}
#>

Once you save it, the Context class will be recreated with the Model Project reference included in it.

WCF Service

We are now ready to wire up the WCF service and have it function in an n-tier manner. Obviously this can be a lot more complex than I will demonstrate here, but for the sake of simplicity, I will add a new class to each of the Data, Business and Service projects. In my case I will simply call each one “BurnNotice.cs”; you may call it whatever you like. Next, I will open up the IBurnNoticeService interface and put in the methods that I want implemented every time this interface is used. Even though this interface is in the Model project, we need to add the Service Contract attributes to it just as we would if it were still in the Service project. Mine looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace BurnNotice.Model
{
    [ServiceContract]
    public interface IBurnNoticeService
    {
        [OperationContract]
        Character Character_GetById(int characterId);

        [OperationContract]
        IEnumerable<Character> Characters_GetAll();

        [OperationContract]
        IEnumerable<Character> Characters_GetByEpisode(int episodeId);

        [OperationContract]
        IEnumerable<Character> Characters_GetBySpySkill(int spySkillId);

        [OperationContract]
        IEnumerable<SpySkill> SpySkills_GetByCharacterId(int characterId);

        [OperationContract]
        Episode Episode_GetById(int episodeId);

        [OperationContract]
        IEnumerable<Episode> Episode_GetAll();
    }
}

Now that we have the interface ready, I will go into each instance of the “BurnNotice.cs” class that I created in the Data, Business and Service projects and implement the interface. In case you are unfamiliar with interface implementation, in C# it is done by putting a colon and then the name of the interface you want to implement after the class name (for a technical, but very good, explanation of interfaces, how they work and why they are desirable, click here; you have to have an interface for WCF to work properly anyway, but using one in the rest of the application is an excellent idea too). It is implemented like this:

public class BurnNotice : IBurnNoticeService

If you were to try to compile the solution at this point, however, it would fail, because even though you have referenced the interface, it has not actually been implemented. A quick shortcut to remedy this is to right-click on the name of the interface after the colon in the class and then select “Implement Interface” > “Implement Interface”. This will add all of the required method stubs to the implementing class, which you can then alter to meet your project’s needs. Do this in each of the classes that you just created. By the way, technically speaking you do not need the interface in the Business and Data projects to create methods that return data from the database, but using interfaces is very helpful in making sure that all required functionality is present in your application. The Service project, on the other hand, is a different matter: in order to expose it as a WCF service you must have the interface defined, which is what allows any application that consumes your service to be aware of the methods and the formats of the data it will be dealing with. I prefer, then, to implement the interface in all classes that will use the data.
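For reference, each generated stub looks something like this until you replace it with real logic:

public Character Character_GetById(int characterId)
{
    // Auto-generated placeholder; replace with a real implementation
    throw new NotImplementedException();
}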

I mentioned in my last post on this subject that I found many great tutorials, but none showed exactly what I needed to get an n-tier solution up and running. In fairness to them, I don’t think that was really their focus. Just to make sure that you can see the entire project structure, the screenshot below shows an almost complete view of my Solution Explorer. My hope is that this will keep anyone from being frustrated at not knowing which project references go where and/or which namespaces need to be included in which project.

Implementation

After all of this, we can now use the service and have it call our different layers as we would in a standard n-tier solution. In the App.Config file, you will need to add the connection string to your database in an Entity Framework approved format, like this:

<connectionStrings>
    <add name="BurnNotice_Database" connectionString="metadata=res://*/BurnNotice.csdl|res://*/BurnNotice.ssdl|res://*/BurnNotice.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=LocalHost;Initial Catalog=BurnNotice;Persist Security Info=True;User ID=fiona;Password=Sam!Axe*;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />
</connectionStrings>

From the Service Layer then, the code to call the Business Layer looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;
using BurnNotice.Model;
using BurnNotice.Business;
using BitFactory.Logging;

namespace BurnNotice.Service
{
    public class BurnNotice : IBurnNoticeService
    {

        #region IBurnNoticeService Members

        public Character Character_GetById(int characterId)
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.Character_GetById(characterId);
        }

        public IEnumerable<Character> Characters_GetAll()
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.Characters_GetAll();
        }

        public IEnumerable<Character> Characters_GetByEpisode(int episodeId)
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.Characters_GetByEpisode(episodeId);
        }

        public IEnumerable<Character> Characters_GetBySpySkill(int spySkillId)
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.Characters_GetBySpySkill(spySkillId);
        }

        public IEnumerable<SpySkill> SpySkills_GetByCharacterId(int characterId)
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.SpySkills_GetByCharacterId(characterId);
        }

        public Episode Episode_GetById(int episodeId)
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.Episode_GetById(episodeId);
        }

        public IEnumerable<Episode> Episode_GetAll()
        {
            Business.BurnNotice bn = new Business.BurnNotice();
            return bn.Episode_GetAll();
        }

        #endregion
    }
}

From the Business Layer, where we have a reference to the Data Layer, we can call the analogous methods in the Data Layer that actually run the queries against the Entity Framework. Once the data comes back, and before we send it along to the Service Layer, we can apply any rules that we want enforced (but don’t necessarily want reflected in the database) and change the data accordingly. In my example below, you will see that I have put in code to append the text “: Bad Guy” to a character’s description whenever it contains the word “evil”. This is dumb, I know, but it is only for illustration; I’m assuming that a real-life example would make more sense:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using BurnNotice.Model;
using BurnNotice.Data;

namespace BurnNotice.Business
{
    public class BurnNotice : IBurnNoticeService
    {
        #region IBurnNoticeService Members

        public Character Character_GetById(int characterId)
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            Character ch = bn.Character_GetById(characterId);

            if (ch.CharacterDescription.ToLower().Contains("evil"))
            { ch.CharacterDescription += ": Bad Guy"; }

            return ch;
        }

        public IEnumerable<Character> Characters_GetAll()
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            return bn.Characters_GetAll();
        }

        public IEnumerable<Character> Characters_GetByEpisode(int episodeId)
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            return bn.Characters_GetByEpisode(episodeId);
        }

        public IEnumerable<Character> Characters_GetBySpySkill(int spySkillId)
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            return bn.Characters_GetBySpySkill(spySkillId);
        }

        public IEnumerable<SpySkill> SpySkills_GetByCharacterId(int characterId)
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            return bn.SpySkills_GetByCharacterId(characterId);
        }

        public Episode Episode_GetById(int episodeId)
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            return bn.Episode_GetById(episodeId);
        }

        public IEnumerable<Episode> Episode_GetAll()
        {
            Data.BurnNotice bn = new Data.BurnNotice();
            return bn.Episode_GetAll();
        }

        #endregion
    }
}

So, it is in the Data Layer that the magic happens. Here we make calls to the Entity Framework model through its data context (called BurnNotice_Database() in the code below) and get all of the benefits of the Entity Framework, but the objects that we actually pass around are the POCO’s and not the actual Entity Framework objects. So, once the call reaches the BurnNotice class in the Data Layer, the calls to the database are done like so:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using BurnNotice.Model;

namespace BurnNotice.Data
{
    public class BurnNotice : IBurnNoticeService
    {
        #region IBurnNoticeService Members

        public Character Character_GetById(int characterId)
        {
            using (var context = new BurnNotice_Database())
            {
                Character character = context.Characters.Where(c => c.CharacterId == characterId).FirstOrDefault();
                return character;
            }
        }

        public IEnumerable<Character> Characters_GetAll()
        {
            using (var context = new BurnNotice_Database())
            {
                return context.Characters.ToList();
            }
        }

        public IEnumerable<Character> Characters_GetByEpisode(int episodeId)
        {
            using (var context = new BurnNotice_Database())
            {
                var characters = from c in context.Characters
                                 join e in context.EpisodeCharacters on c.CharacterId equals e.CharacterId
                                 where e.EpisodeId == episodeId
                                 select c;

                // Materialize the results before the context is disposed;
                // returning the deferred query would fail once the using block ends
                return characters.ToList();
            }
        }

        public IEnumerable<Character> Characters_GetBySpySkill(int spySkillId)
        {
            using (var context = new BurnNotice_Database())
            {
                var characters = from c in context.Characters
                                 join s in context.CharacterSpySkills on c.CharacterId equals s.CharacterId
                                 where s.SpySkillId == spySkillId
                                 select c;

                // Materialize before the context is disposed
                return characters.ToList();
            }
        }

        public IEnumerable<SpySkill> SpySkills_GetByCharacterId(int characterId)
        {
            using (var context = new BurnNotice_Database())
            {
                var spySkills = from s in context.SpySkills
                                join c in context.CharacterSpySkills on s.SpySkillId equals c.SpySkillId
                                where c.CharacterId == characterId
                                select s;

                // Materialize before the context is disposed
                return spySkills.ToList();
            }
        }

        public Episode Episode_GetById(int episodeId)
        {
            using (var context = new BurnNotice_Database())
            {
                Episode episode = context.Episodes.Where(e => e.EpisodeId == episodeId).FirstOrDefault();
                return episode;
            }
        }

        public IEnumerable<Episode> Episode_GetAll()
        {
            using (var context = new BurnNotice_Database())
            {
                return context.Episodes.ToList();
            }
        }

        #endregion
    }
}

If you set the Service project to be the Startup Project and then run the solution, the WCF Test Client will come up and, as shown in the screenshot below, I am able to see a list of all my service methods and then invoke them and receive data back from the service.

In part three of this tutorial, I will show how to do the last part of this process, which is to call the service from another project (in this case, a Silverlight application). I hope that will be useful as well, but Part 2 should be enough to show how using POCO’s enables us to implement a full n-tier solution with WCF and the Entity Framework.
