Wednesday, March 25, 2009

Entity Framework Patterns: Identity Map

ADO.Net Entity Framework is a different way of looking at persistence than most of us are used to. It wants us to add objects/entities to a data context instead of saving them directly, and it doesn’t write anything to the database until we call SaveChanges(). We don’t save specific entities ourselves; instead, EF tracks the entities we’ve loaded and then saves changes to the database for any entities it thinks have changed. My first reaction when I realized how different these concepts were from my standard way of saving data was that I hated it (this actually took place with LINQ to SQL, which I still don’t care for due to the way it handles sprocs). But the promise of rapid application development and more maintainable code kept me coming back. I started reading up on architectures using ORMs (mostly in the Java world) and discovered that most of the things I initially didn’t like about Entity Framework and LINQ to SQL are actually accepted design patterns from the ORM world, developed by people much smarter than me who have spent years working on the Impedance Mismatch problem. So I thought it might be helpful to talk about some of these patterns and how they are handled by Entity Framework. The first one we’ll look at is Identity Map.

Identity Map Definition

In Martin Fowler’s book Patterns of Enterprise Application Architecture, he defines Identity Map with the following two phrases:

Ensures that each object gets loaded only once by keeping every loaded object in a map. Looks up objects using the map when referring to them.

So what does this mean?  It’s probably better to demonstrate than to explain, so let’s look at the characteristics of Identity Map through some code examples.

There Can Be Only One

Let’s start by looking at the other way of doing things, the non-Identity Map example. If we have an app that uses a simple persistence layer that runs a database query and returns a DataTable, we might see code like the following:

DataTable personData1 = BAL.Person.GetPersonByEmail("bill@gates.com");
DataTable personData2 = BAL.Person.GetPersonByEmail("bill@gates.com");

if (personData1 != personData2)
{
    Console.WriteLine("We have 2 different objects");
}

In this example, personData1 and personData2 both contain separate copies of the data for person Bill Gates. If we change the data in personData2, it has no effect on personData1. They are totally separate objects that happen to contain the same data. If we make changes to both and then save them back to the database, there is no coordination of the changes; one just overwrites the changes of the other. Our persistence framework (ADO.Net DataTables) simply doesn’t know that personData1 and personData2 both contain data for the same entity. The thing to remember about this scenario is that multiple separate objects that all contain data for the same entity lead to concurrency problems when it’s time to save data.

Now let’s look at the Identity Map way of doing things. Below, we have some ADO.Net Entity Framework code where we create two different object queries that both get data for the same person, and then we use those queries to load three different person entity objects.

EFEntities context = new EFEntities();

var query1 = from p in context.PersonSet
             where p.email == "bill@gates.com"
             select p;
Person person1 = query1.FirstOrDefault<Person>();
Person person2 = query1.FirstOrDefault<Person>();

var query2 = from p in context.PersonSet
             where p.name == "Bill Gates"
             select p;
Person person3 = query2.FirstOrDefault<Person>();

if (person1 == person2 && person1 == person3)
{
    Console.WriteLine("Identity Map gives us 3 refs to a single object");
}

person1.name = "The Billster";
Console.WriteLine(person3.name); // writes The Billster

When I run the code above, all 3 entities are in fact equal. Plus, when I change the name property on person1, I get that same change on person3. What’s going on here? They’re all references to a single object that is managed by the ObjectContext. Entity Framework does some magic behind the scenes: regardless of how many times or how many different ways we load an entity, the framework ensures that only one entity object is created, and the multiple entities we load are really just multiple references to that one object. That means we can have 10 entity objects in our code and, if they represent the same entity, they will all be references to the same object. The result is that at save time we have no concurrency issues; all changes get saved. So how does this work?

Every entity type has a key that uniquely identifies that entity. If we look at one of our Person entities in the debugger, we notice that it has a property Entity Framework created for us named EntityKey. EntityKey contains the key values for our entity (for our Person entity the key field is PersonGuid), the entity set our entity belongs to, and basically all the information Entity Framework needs to uniquely identify and manage our Person entity.

The EntityKey property is used by the ObjectContext (or just context) that Entity Framework generates for us. In our example the context class is EFEntities. The context class does a number of things, and one of them is maintaining an Identity Map. Think of the map as a cache that contains one and only one instance of each object, identified by its EntityKey. In fact, you will probably never hear the term Identity Map used; most .Net developers just call it the object cache, or even just the cache. So, in our example, when we get person1 from our context, it runs the query, creates an instance of Person (which the context knows is uniquely identified by the PersonGuid field), stores that object in the cache, and gives us back a reference to it. When we get person2 from the context, the context does run the query again and pulls data from our database, but then it sees that it already has a Person entity with the same EntityKey in the cache, so it throws out the data and returns a reference to the entity that’s already in the cache. The same thing happens for person3.
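If you want to peek at this cache yourself, the context exposes it through its ObjectStateManager. The snippet below is just a quick sketch (ObjectStateEntry lives in System.Data.Objects); it looks up the cache entry behind person1 by its EntityKey and confirms that the entry holds the very same object:

// Ask the context's state manager for the cache entry behind person1.
ObjectStateEntry entry;
if (context.ObjectStateManager.TryGetObjectStateEntry(person1.EntityKey, out entry))
{
    Console.WriteLine(entry.State); // Unchanged, until we modify something
    Console.WriteLine(Object.ReferenceEquals(entry.Entity, person1)); // True
}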

Quiz: What Happens To Cached Entities When the Database Changes?

So here’s a question. Suppose we run the code sample above that loads person1, person2, and person3 from our context, but this time we use a breakpoint to pause execution right after we load person1, manually update the database by changing the phone_home field on Bill Gates’ record to “(999) 999-9999”, and then continue executing the rest of our code. What value will we see for phone_home when we look at person1, person2, and person3? Will it be the original value, or the new value? Remember that all 3 entities are really just 3 references to the same entity object in the cache, and our first db hit when we got person1 did pull the original phone_home value, but the queries for person2 and person3 also hit the database and pulled data. How does Entity Framework handle that? The answer is shown in the debugger watch window below. It throws the new data out.

[Debugger watch window: person1, person2, and person3 all still show the original phone_home value, not (999) 999-9999]

This can lead to some really unexpected behavior if you don’t know to look for it, especially if you have a long-running context that’s persisted and used over and over for multiple requests. It is very important to think about this when you’re deciding when to create a context, how long to keep it running, and what you want to happen when data on the backend is changed. We still need to remember and plan for this default behavior, but there is a way to modify it for individual queries by setting the ObjectQuery.MergeOption property.
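For example, if we wanted the repeated query from the earlier sample to pick up the new phone_home value instead of discarding it, something like the sketch below should work. This is just a sketch: the cast assumes the LINQ query really is an ObjectQuery&lt;Person&gt; (as it is in EF v1, with ObjectQuery and MergeOption living in System.Data.Objects), and the lowercase phone_home property name is my guess based on the lowercase email and name properties used above.

// Tell this specific query to overwrite cached values with whatever the
// database returns, instead of using the default AppendOnly behavior.
ObjectQuery<Person> refreshQuery = (ObjectQuery<Person>)query1;
refreshQuery.MergeOption = MergeOption.OverwriteChanges;

Person refreshedBill = refreshQuery.FirstOrDefault<Person>();
Console.WriteLine(refreshedBill.phone_home); // should now reflect the manual db change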

If There’s a Cache, Why Am I Hitting The Database? 

Remember the second part of Martin Fowler’s definition where he said that the Identity Map looks up objects using the map when referring to them?  The natural question that comes to mind is, if I’m loading an object that already exists in my cache, and Entity Framework is just going to return a reference to that cached object and throw away any changes it gets from the database query, can’t I just get the object directly from my cache and skip the database query altogether? That could really reduce database load.

Unfortunately the answer is kind of, but not really. In Entity Framework v1, you can get an entity directly from the cache without hitting the database, but only if you use a special method to get the entity by its EntityKey. Having to use the EntityKey is a big limitation, since most of the time you want to look up data by some other field. For example, in a login situation I need to get a person entity by email or username; I don’t have the PersonGuid. I’m hoping we get more options for loading entities from the cache in v2, but for now, if you do have the key field, this is how you do it:

Guid billsGuid = new Guid("0F3087DB-6A83-4BAE-A1C8-B1BD0CE230C0");
EntityKey key = new EntityKey("EFEntities.PersonSet", "PersonGuid", billsGuid);
Person bill = (Person)context.GetObjectByKey(key);

There are a couple of things I want to point out. First, when creating the key, the first parameter is the entity set name that we’re pulling from, and that name must include the name of our ObjectContext class. Second, you’ll notice that GetObjectByKey() returns an Object, so we have to cast the return value to Person.
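One related note: if the key doesn’t match anything in the cache or the database, GetObjectByKey() throws an ObjectNotFoundException. When that’s a possibility, TryGetObjectByKey() is the safer call. A quick sketch using the same key as above:

// TryGetObjectByKey returns false instead of throwing when nothing matches.
object cachedObject;
if (context.TryGetObjectByKey(key, out cachedObject))
{
    Person bill = (Person)cachedObject;
    Console.WriteLine(bill.name);
}
else
{
    Console.WriteLine("No Person found for that key");
}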

Conclusion

So that’s one pattern down.  Hopefully discussing some of these differences in approaching persistence helps ease your transition to using Entity Framework a bit.  Next time we’ll cover another key pattern, Unit of Work.


Saturday, March 14, 2009

What is the Difference Between a DTO and a POCO?

First off, I’m not the authority on DTOs, POCOs, object oriented architecture, or really anything now that I stop to think about it.  However, I do use a DTO / POCO centric architecture whenever I can and there’s at least one former client of mine who is now saddled with an entity class named DevicePoco (there was already a Device entity object that followed the Active Record pattern, otherwise I would never have named an object XXXPoco). When my client saw the new object with the crazy name in their BAL, their first reaction was of course to ask “What the heck is a POCO?”  Not too long ago I was at a Visual Studio User Group meeting where the question of POCOs and how they are different from DTOs came up.  The presenter, who quite honestly is a much better developer than me, stated confidently that POCOs and DTOs are the same thing.  I immediately clamped both hands over my mouth to keep from screaming “They are not!”.  So, there seems to be a lack of good information in the .Net community about what these objects are.  I’m going to try and clarify the issue.

What is a Data Transfer Object (DTO)?

Normally this is where I would say Wikipedia defines a DTO as...  Unfortunately, the current Wikipedia definition is pretty awful except for the line:  

“The difference between Data Transfer Objects and Business Objects or Data Access Objects is that a DTO does not have any behaviour except for storage and retrieval of its own data (accessors and mutators).”

That’s the key concept. A DTO stores data. It has no methods (behaviors) other than accessors and mutators, which are just used to get and set data. Why make an object that simple? Because DTOs make great, lightweight, strongly typed data containers when you want to move data from your DAL to your BAL or between the umpteen layers in your n-tier architecture. Below is the code for the PersonDTO that I’ve been using in many of my recent posts. You’ll notice that it really does nothing except store data.

public class PersonDTO : DTOBase
{
    public Guid PersonGuid { get; set; }
    public int PersonId { get; set; }
    public DateTime UtcCreated { get; set; }
    public DateTime UtcModified { get; set; }
    public string Password { get; set; }
    public string Name { get; set; }
    public string Nickname { get; set; }
    public string PhoneMobile { get; set; }
    public string PhoneHome { get; set; }
    public string Email { get; set; }
    public string ImAddress { get; set; }
    public int ImType { get; set; }
    public int TimeZoneId { get; set; }
    public int LanguageId { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public int ZipCode { get; set; }

    // Constructor
    // No parameters, and all value types are initialized to the
    // null values defined in CommonBase.
    public PersonDTO()
    {
        PersonGuid = Guid_NullValue;
        PersonId = Int_NullValue;
        UtcCreated = DateTime_NullValue;
        UtcModified = DateTime_NullValue;
        Name = String_NullValue;
        Nickname = String_NullValue;
        PhoneMobile = String_NullValue;
        PhoneHome = String_NullValue;
        Email = String_NullValue;
        ImAddress = String_NullValue;
        ImType = Int_NullValue;
        TimeZoneId = Int_NullValue;
        LanguageId = Int_NullValue;
        City = String_NullValue;
        State = String_NullValue;
        ZipCode = Int_NullValue;
        IsNew = true;
    }
}

So, to sum up, a DTO is just a collection of properties (or data members).  It has no validation, no business logic, no logic of any kind.  It’s just a simple, lightweight data container used for moving data between layers.

So What’s a POCO?

A POCO is not a DTO.  POCO stands for Plain Old CLR Object, or Plain Old C# Object.  It’s basically the .Net version of a POJO, Plain Old Java Object.  A POCO is your Business Object.  It has data, validation, and any other business logic that you want to put in there.  But there’s one thing a POCO does not have, and that’s what makes it a POCO.  POCOs do not have persistence methods.  If you have a POCO of type Person, you can’t have a Person.GetPersonById() method, or a Person.Save() method.  POCOs contain only data and domain logic, no persistence logic of any kind.  The term you’ll hear for this concept is Persistence Ignorance (PI).  POCOs are Persistence Ignorant. 

Below is the code for my BAL.Person class which is a POCO.  You’ll notice that it contains no persistence logic of any kind, just data and validation methods.  You’ll also notice that I don’t recreate a bunch of accessors and mutators for my person data.  That would clutter up my Person class, plus they would be redundant since they’ve already been defined in PersonDTO.  Instead I just have a single property named Data that is of type PersonDTO.  This approach makes getting and saving a person really easy.  When getting a person, I just get a PersonDTO from my DAL and then set person.Data = personDTO.  When saving, my save methods all take a PersonDTO as a parameter so I can just use my person.Data property for that as well.

public class Person : BALBase
{
    // Data
    // This property exists for all BAL objects, and it is
    // set to the DTO type for this entity.  This is the
    // mechanism that we use to implement "has a" inheritance
    // instead of "is a" inheritance.
    public PersonDTO Data { get; set; }

    // Person - default constructor
    public Person() { this.Data = new PersonDTO(); }
    // Person - takes a DTO
    public Person(PersonDTO dto) { this.Data = dto; }

    // Validate
    public override List<ValidationError> Validate()
    {
        // Call all validation functions
        Val_Name();
        Val_Email();
        Val_Password();
        Val_TimeZone();
        Val_City();
        Val_State();
        Val_ZipCode();
        Val_ImType();

        // If the ValidationErrors list is empty then
        // we passed validation.
        return this.ValidationErrors;
    }

    // Validation Methods:
    // There are only 2 requirements on validation methods.
    //  - They must handle adding a ValidationError to the
    //    ValidationErrors list if they find an error.
    //  - You must manually add a call to all validation methods
    //    to the Validate() function.
    //  When creating a new ValidationError object, remember
    //  that the first parameter is the exact name of the field
    //  that has the bad value, and the error message should
    //  not contain the field name, but instead the <FieldName>
    //  tag, which will be replaced by the UI or consuming app.

    // Val_Name
    public bool Val_Name()
    {
        // Name required
        if (this.Data.Name == DTOBase.String_NullValue)
        {
            this.ValidationErrors.Add(new ValidationError("Person.Name", "<FieldName> is required"));
            return false;
        }
        else
        {
            return true;
        }
    }

    // You get the idea.  I’m leaving out the rest of the validation code
    // so you don’t go blind reading the same lines over and over.
}

No persistence logic there, just data and validation logic. So you’re probably thinking, if the persistence logic doesn’t go in my entity class, then where does it go? The answer is: another class. POCOs must be hydrated by some other class that encapsulates the persistence logic for that entity, like a repository or a data controller. I typically use a repository. For this example I used a PersonRepository class that encapsulates the logic for getting a new person object, getting a PersonDTO from the DAL, and then setting person.Data = personDTO. The save works the same way: my PersonRepository class has a SavePerson() method that takes a full person object and then passes its person.Data value to the DAL to be persisted. Code for getting and saving a person entity in my UI looks like this:

hydrate from db:
Person person = PersonRepository.GetPersonByEmail(email);

save to db:
PersonRepository.SavePerson(ref person, true);
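To make that a little more concrete, here’s a rough sketch of what a repository like that might look like. This is not the exact class from my project: the PersonDb method names and the meaning of the second SavePerson() parameter (I’m treating it as a validate flag) are guesses for illustration, so adjust them to whatever your DAL actually exposes.

public static class PersonRepository
{
    // GetPersonByEmail
    // Ask the DAL for a PersonDTO and wrap it in a Person POCO.
    public static Person GetPersonByEmail(string email)
    {
        PersonDTO dto = PersonDb.GetPersonByEmail(email); // hypothetical DAL call
        return (dto == null) ? null : new Person(dto);
    }

    // SavePerson
    // Hand the POCO's Data property back to the DAL to be persisted.
    // Here the bool is treated as a "validate first" flag.
    public static bool SavePerson(ref Person person, bool validate)
    {
        if (validate && person.Validate().Count > 0)
        {
            return false; // caller can inspect person.ValidationErrors
        }
        PersonDb.SavePerson(person.Data); // hypothetical DAL call
        return true;
    }
}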

Why Would I Ever Want to Do This?

The next question you might ask is, “What’s the point? Why should I use these patterns instead of just using DataTables and putting all my persistence logic in my entity objects?” That answer is a little tougher. I prefer a POCO / Repository / DTO architecture, but it’s not the one right way to design an application. I think the benefits are that it is a very clean and easy to maintain architecture. It separates business logic from persistence logic, which is more in line with the Single Responsibility Principle. Using POCOs with DTOs and a well designed DAL is just about the best performing architecture you can build; see my series on High Performance DAL Architecture. But I think most .Net developers will be driven to use POCOs and repositories (though not DTOs) by ORMs. Entity Framework, nHibernate, and a lot of the other ORMs out there require or assume a POCO type architecture. In fact, Entity Framework has introduced an IPOCO interface, which I’m having trouble finding documentation on, but it sounds like something good. Also, if you want to get into Domain Driven Design, you’ve got to embrace the POCO. In his excellent book Applying Domain-Driven Design and Patterns, Jimmy Nilsson even has a section titled “POCO as a Lifestyle”.

So, in conclusion, learn to love the POCO, and make sure you don’t spread any misinformation about it being the same thing as a DTO.  DTOs are simple data containers used for moving data between the layers of an application.  POCOs are full fledged business objects with the one requirement that they are Persistence Ignorant (no get or save methods).  Lastly, if you haven’t checked out Jimmy Nilsson’s book yet, pick it up from your local university stacks.  It has examples in C# and it’s a great read.

Friday, March 6, 2009

High Performance Data Access Layer Architecture Part 3

This is the final post in a series that describes one design that I use for high performance data access.  In Part 1 we covered overall architecture and design of the PersonDb.  In Part 2 we covered the DALBase implementation.  If you haven’t read those posts I would recommend going back and looking them over before getting into today’s topic, the DTOParser classes. 

We’re implementing the architecture below.

[Architecture diagram from Part 1: the DAL design we are implementing]

So, at the beginning we decided that we’re using Data Transfer Objects (DTOs) to move data between our BAL and DAL, and that our DAL would have only two base return types: either a single DTO or a generic List<DTO>. At this point we’ve written our PersonDb, which encapsulates our data access methods, and we’ve written our DALBase, which encapsulates our GetSingleDTO() and GetDTOList() methods as well as helper methods for other things like getting a connection string and creating sproc parameters. Below is the code that we wrote for GetSingleDTO().

// GetSingleDTO
protected static T GetSingleDTO<T>(ref SqlCommand command) where T : DTOBase
{
    T dto = null;
    try
    {
        command.Connection.Open();
        SqlDataReader reader = command.ExecuteReader();
        if (reader.HasRows)
        {
            reader.Read();
            DTOParser parser = DTOParserFactory.GetParser(typeof(T));
            parser.PopulateOrdinals(reader);
            dto = (T)parser.PopulateDTO(reader);
            reader.Close();
        }
        else
        {
            // Whenever there's no data, we return null.
            dto = null;
        }
    }
    catch (Exception e)
    {
        // Throw a friendly exception that wraps the real
        // inner exception.
        throw new Exception("Error populating data", e);
    }
    finally
    {
        command.Connection.Close();
        command.Connection.Dispose();
    }

    // Return the DTO; it's either populated with data or null.
    return dto;
}

So this method encapsulates all of our repeatable logic for getting a single DTO from a reader, and we use .Net generics to create a generalized method that can return any type that inherits from DTOBase.  However, notice that the details of which data fields we’re getting from the reader and how these fields map to our DTO properties are delegated to another object, the DTOParser.

What are ordinals and why use them?

Before we get into the DTOParser, let’s take a minute to talk about ordinals.  An ordinal is basically just an index number that tells you where a given data field is in the stream that you’re accessing through your SqlDataReader.  Let’s say you have a data field “date_created” and you need to get the data for date_created out of a reader.  Most developers would use code that looks like this.

Object field = reader["date_created"];
DateTime dateCreated = (field == DBNull.Value) ? DateTime.MinValue : (DateTime)field;

We retrieve the data from the reader by field name, store it in an Object, do a DBNull check, and if the value passes the check we cast it to DateTime. This is pretty solid code and I like the fact that we’re always doing a null check, but there are still some problems with it from a performance perspective.

First, we’re getting our data from the reader by the name “date_created”. The reader doesn’t know which specific field “date_created” is; it has to find the index associated with that name, and only then can it use that index to access the data. That index value is the ordinal, and the SqlDataReader can work much more efficiently if we give it an ordinal to work with instead of a data field name.

Second, we’re getting a DateTime value but we’re first pulling it out as an Object. I’d rather not do that since I know I’m looking for DateTime data, but the reader[“field_name”] syntax returns an Object, plus I need to do a null check. What other choice do I have? If I’m using ordinals, the answer is that the SqlDataReader has a strongly typed GetDateTime() method that was made for this exact purpose. SqlDataReader has strongly typed GetXXX() methods for every data type, which allow us to avoid the cast to Object. SqlDataReader also has an IsDBNull() method which we can use to do our DBNull check. The catch is that these methods won’t accept data field names; they require you to use ordinals.

So, let’s write the same code assuming that we know the ordinal for “date_created” is 4.  The result would look like this.

DateTime dateCreated = reader.IsDBNull(4) ? DateTime.MinValue : reader.GetDateTime(4);

This code uses the most efficient method possible to get data from our SqlDataReader, and we save ourselves a cast to Object and a cast from Object (boxing and unboxing).  If we’re really serious about maximizing performance, this is the code we want to use.

How the DTOParser is Used

We covered this in the last post, but just to refresh our memories, let’s take a quick look at how our DTOParser object is used in DALBase. The code below is taken from our generic method DALBase.GetSingleDTO<T>(ref SqlCommand command). We create a return object of type T, and we use the command object that was passed in to get a reader. If the reader has any rows, we call Read() on it. Next we use the DTOParserFactory to get a parser object. The DTOParserFactory.GetParser() method takes the desired DTO type as a parameter and returns an instance of the appropriate concrete DTOParser class. At that point all we have to do is pass our reader to the parser, cast the returned DTO to type T, and do a little cleanup.

T dto = null;
command.Connection.Open();
SqlDataReader reader = command.ExecuteReader();
if (reader.HasRows)
{
    reader.Read();
    DTOParser parser = DTOParserFactory.GetParser(typeof(T));
    parser.PopulateOrdinals(reader);
    dto = (T)parser.PopulateDTO(reader);
    reader.Close();
}
else
{
    // Whenever there's no data, we return null.
    dto = null;
}
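Part 2 also covered the list counterpart, GetDTOList<T>(). I won’t repeat it in full here, but its core is a sketch like the one below (error handling and connection cleanup omitted): the ordinals are looked up once per result set and then reused for every row, which is exactly why PopulateOrdinals() is a separate method.

List<T> dtoList = new List<T>();
command.Connection.Open();
SqlDataReader reader = command.ExecuteReader();
if (reader.HasRows)
{
    DTOParser parser = DTOParserFactory.GetParser(typeof(T));
    parser.PopulateOrdinals(reader); // look up ordinals once per result set
    while (reader.Read())
    {
        dtoList.Add((T)parser.PopulateDTO(reader)); // reuse them for every row
    }
}
reader.Close();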

The DTOParser Base Class

Now we can finally get back to writing our DTOParser classes. We’re going to have a separate concrete DTOParser class for each DTO type in our application. There are two things that we need every concrete DTOParser to do. First, our parser needs a method that takes a SqlDataReader and then gets and saves the ordinals for all of our data fields. Second, the parser needs a method that takes a reader and returns a single DTO populated with the data for the reader’s current record. We’re going to define the interface for these two methods using a DTOParser abstract base class. All of our concrete DTOParser classes will inherit from DTOParser and implement these two methods. Note that the return type for PopulateDTO() is DTOBase, which is the base type for all of our DTOs.

abstract class DTOParser
{
    abstract public DTOBase PopulateDTO(SqlDataReader reader);
    abstract public void PopulateOrdinals(SqlDataReader reader);
}

The DTOParser_Person Concrete Class

Now we can finally get into our concrete parser class for the PersonDTO.  DTOParser_Person will encapsulate all of our logic for getting data field/column values from our data.  The class needs to do three things:

  1. Provide a private field to store the ordinal for each data field/column
  2. Implement the PopulateOrdinals() method
  3. Implement the PopulateDTO() method

To refresh your memory, this is what our PersonDTO class looks like:

[Class diagram: PersonDTO and its data members]

So we’ll start off by creating an Ord_DataMemberName field of type int to hold the ordinal value for each one of our PersonDTO data members. You may wonder why we’re bothering to create these fields at all instead of just getting the ordinals on the fly in our PopulateDTO() method. The answer is that we really don’t need them when we’re accessing a single DTO. However, when we’re getting a list of DTOs we want to be able to get an instance of our parser, call PopulateOrdinals() one time, and then call PopulateDTO() for each item in our list. In that situation we only populate the ordinals one time, and because we save them to fields, we can reuse them for each subsequent call to PopulateDTO(). The resulting DTOParser_Person class will look like this:

[Class diagram: DTOParser_Person with its Ord_XXX fields and the PopulateOrdinals() and PopulateDTO() methods]

Now we need to implement the PopulateOrdinals() method. The logic is pretty simple. We take a reference to the SqlDataReader as our only parameter. The reader has a GetOrdinal() method that we can use to look up the ordinal for each field/column by name. We just do this lookup for each field/column and store the result in the corresponding Ord_XXX field.

public override void PopulateOrdinals(SqlDataReader reader)
{
    Ord_PersonGuid = reader.GetOrdinal("person_guid");
    Ord_PersonId = reader.GetOrdinal("person_id");
    Ord_UtcCreated = reader.GetOrdinal("utc_created");
    Ord_UtcModified = reader.GetOrdinal("utc_modified");
    Ord_Password = reader.GetOrdinal("password");
    Ord_Name = reader.GetOrdinal("name");
    Ord_Nickname = reader.GetOrdinal("nickname");
    Ord_PhoneMobile = reader.GetOrdinal("phone_mobile");
    Ord_PhoneHome = reader.GetOrdinal("phone_home");
    Ord_Email = reader.GetOrdinal("email");
    Ord_ImAddress = reader.GetOrdinal("im_address");
    Ord_ImType = reader.GetOrdinal("im_type");
    Ord_TimeZoneId = reader.GetOrdinal("time_zone_id");
    Ord_LanguageId = reader.GetOrdinal("language_id");
    Ord_City = reader.GetOrdinal("city");
    Ord_State = reader.GetOrdinal("state_code");
    Ord_ZipCode = reader.GetOrdinal("zip_code");
}

The only other thing we need to do is implement the PopulateDTO() method. The logic for this is also simple. The first thing we do is create a new PersonDTO. Remember that PersonDTO, and all DTO types, inherit from DTOBase, so we can use a PersonDTO as our return value. Also, remember that the constructor for PersonDTO initializes all data members to the null value for their type (null values are defined in the CommonBase class). So, every data member starts off with a null value, which in our application means unassigned. That means that if a field doesn’t pass the DBNull check, we don’t have to do anything to the corresponding PersonDTO data member; it’s already set to its null value.

So, after we have our PersonDTO object created, we just need to do a simple conditional for each data member.  First use the corresponding ordinal to make sure the value returned by the reader isn’t null.  Second, if that value isn’t null, then use the reader’s typed GetXXX() method to get the value.  Once all of the PersonDTO’s data members have been set, we return it. The resulting code looks like this.

public override DTOBase PopulateDTO(SqlDataReader reader)
{
    // We assume the reader has data and is already on the row
    // that contains the data we need. We don't need to call Read().
    // As a general rule, assume that every field must be null
    // checked. If a field is null then the null value for that
    // field has already been set by the DTO constructor, so we
    // don't have to change it.

    PersonDTO person = new PersonDTO();

    // PersonGuid
    if (!reader.IsDBNull(Ord_PersonGuid)) { person.PersonGuid = reader.GetGuid(Ord_PersonGuid); }
    // PersonId
    if (!reader.IsDBNull(Ord_PersonId)) { person.PersonId = reader.GetInt32(Ord_PersonId); }
    // UtcCreated
    if (!reader.IsDBNull(Ord_UtcCreated)) { person.UtcCreated = reader.GetDateTime(Ord_UtcCreated); }
    // UtcModified
    if (!reader.IsDBNull(Ord_UtcModified)) { person.UtcModified = reader.GetDateTime(Ord_UtcModified); }
    // Password
    if (!reader.IsDBNull(Ord_Password)) { person.Password = reader.GetString(Ord_Password); }
    // Name
    if (!reader.IsDBNull(Ord_Name)) { person.Name = reader.GetString(Ord_Name); }
    // Nickname
    if (!reader.IsDBNull(Ord_Nickname)) { person.Nickname = reader.GetString(Ord_Nickname); }
    // PhoneMobile
    if (!reader.IsDBNull(Ord_PhoneMobile)) { person.PhoneMobile = reader.GetString(Ord_PhoneMobile); }
    // PhoneHome
    if (!reader.IsDBNull(Ord_PhoneHome)) { person.PhoneHome = reader.GetString(Ord_PhoneHome); }
    // Email
    if (!reader.IsDBNull(Ord_Email)) { person.Email = reader.GetString(Ord_Email); }
    // ImAddress
    if (!reader.IsDBNull(Ord_ImAddress)) { person.ImAddress = reader.GetString(Ord_ImAddress); }
    // ImType
    if (!reader.IsDBNull(Ord_ImType)) { person.ImType = reader.GetInt32(Ord_ImType); }
    // TimeZoneId
    if (!reader.IsDBNull(Ord_TimeZoneId)) { person.TimeZoneId = reader.GetInt32(Ord_TimeZoneId); }
    // LanguageId
    if (!reader.IsDBNull(Ord_LanguageId)) { person.LanguageId = reader.GetInt32(Ord_LanguageId); }
    // City
    if (!reader.IsDBNull(Ord_City)) { person.City = reader.GetString(Ord_City); }
    // State
    if (!reader.IsDBNull(Ord_State)) { person.State = reader.GetString(Ord_State); }
    // ZipCode
    if (!reader.IsDBNull(Ord_ZipCode)) { person.ZipCode = reader.GetInt32(Ord_ZipCode); }
    // IsNew
    person.IsNew = false;

    return person;
}

Summary

That’s it!  We have a DTO!  We now have a framework to easily create, parse, and return strongly typed DTOs, and thanks to the optimizations we made, like choosing a lightweight data container, using SqlDataReaders with ordinals, and minimizing casts, our DAL will perform like lightning.

Looking back over this code, I realize that there really are quite a few pieces. However, I also notice that most of the pieces are very small and easy to understand. I try to employ SOLID principles, especially the Single Responsibility Principle, and looking back over a DAL design like this I think it really pays off in terms of maintainability and code readability. When you look at methods like PopulateOrdinals() or PopulateDTO(), those methods do only one thing; it’s very obvious from the method names and from the code itself what they are designed to do, and it’s easy to see what needs to be done to implement this code for different DTO types. I think the clarity and understandability created by designing code in this way is well worth the extra effort it requires.

So that’s pretty much it.  The one thing I did not cover is the DTOParserFactory class.  It’s just a simple factory class and I’m including the code for it as well as the full code listing for the DTOParser_Person class below.

- rudy

 

internal static class DTOParserFactory
{
    // GetParser
    internal static DTOParser GetParser(System.Type DTOType)
    {
        switch (DTOType.Name)
        {
            case "PersonDTO":
                return new DTOParser_Person();
            case "PostDTO":
                return new DTOParser_Post();
            case "SiteProfileDTO":
                return new DTOParser_SiteProfile();
        }

        // If we reach this point then we failed to find a matching type.
        // Throw an exception.
        throw new Exception("Unknown Type");
    }
}

 

class DTOParser_Person : DTOParser
{
    private int Ord_PersonGuid;
    private int Ord_PersonId;
    private int Ord_UtcCreated;
    private int Ord_UtcModified;
    private int Ord_Password;
    private int Ord_Name;
    private int Ord_Nickname;
    private int Ord_PhoneMobile;
    private int Ord_PhoneHome;
    private int Ord_Email;
    private int Ord_ImAddress;
    private int Ord_ImType;
    private int Ord_TimeZoneId;
    private int Ord_LanguageId;
    private int Ord_City;
    private int Ord_State;
    private int Ord_ZipCode;

    public override void PopulateOrdinals(SqlDataReader reader)
    {
        Ord_PersonGuid = reader.GetOrdinal("person_guid");
        Ord_PersonId = reader.GetOrdinal("person_id");
        Ord_UtcCreated = reader.GetOrdinal("utc_created");
        Ord_UtcModified = reader.GetOrdinal("utc_modified");
        Ord_Password = reader.GetOrdinal("password");
        Ord_Name = reader.GetOrdinal("name");
        Ord_Nickname = reader.GetOrdinal("nickname");
        Ord_PhoneMobile = reader.GetOrdinal("phone_mobile");
        Ord_PhoneHome = reader.GetOrdinal("phone_home");
        Ord_Email = reader.GetOrdinal("email");
        Ord_ImAddress = reader.GetOrdinal("im_address");
        Ord_ImType = reader.GetOrdinal("im_type");
        Ord_TimeZoneId = reader.GetOrdinal("time_zone_id");
        Ord_LanguageId = reader.GetOrdinal("language_id");
        Ord_City = reader.GetOrdinal("city");
        Ord_State = reader.GetOrdinal("state_code");
        Ord_ZipCode = reader.GetOrdinal("zip_code");
    }

    public override DTOBase PopulateDTO(SqlDataReader reader)
    {
        // We assume the reader has data and is already on the row
        // that contains the data we need. We don't need to call Read().
        // As a general rule, assume that every field must be null
        // checked. If a field is null then the null value for that
        // field has already been set by the DTO constructor, so we
        // don't have to change it.

        PersonDTO person = new PersonDTO();

        // PersonGuid
        if (!reader.IsDBNull(Ord_PersonGuid)) { person.PersonGuid = reader.GetGuid(Ord_PersonGuid); }
        // PersonId
        if (!reader.IsDBNull(Ord_PersonId)) { person.PersonId = reader.GetInt32(Ord_PersonId); }
        // UtcCreated
        if (!reader.IsDBNull(Ord_UtcCreated)) { person.UtcCreated = reader.GetDateTime(Ord_UtcCreated); }
        // UtcModified
        if (!reader.IsDBNull(Ord_UtcModified)) { person.UtcModified = reader.GetDateTime(Ord_UtcModified); }
        // Password
        if (!reader.IsDBNull(Ord_Password)) { person.Password = reader.GetString(Ord_Password); }
        // Name
        if (!reader.IsDBNull(Ord_Name)) { person.Name = reader.GetString(Ord_Name); }
        // Nickname
        if (!reader.IsDBNull(Ord_Nickname)) { person.Nickname = reader.GetString(Ord_Nickname); }
        // PhoneMobile
        if (!reader.IsDBNull(Ord_PhoneMobile)) { person.PhoneMobile = reader.GetString(Ord_PhoneMobile); }
        // PhoneHome
        if (!reader.IsDBNull(Ord_PhoneHome)) { person.PhoneHome = reader.GetString(Ord_PhoneHome); }
        // Email
        if (!reader.IsDBNull(Ord_Email)) { person.Email = reader.GetString(Ord_Email); }
        // ImAddress
        if (!reader.IsDBNull(Ord_ImAddress)) { person.ImAddress = reader.GetString(Ord_ImAddress); }
        // ImType
        if (!reader.IsDBNull(Ord_ImType)) { person.ImType = reader.GetInt32(Ord_ImType); }
        // TimeZoneId
        if (!reader.IsDBNull(Ord_TimeZoneId)) { person.TimeZoneId = reader.GetInt32(Ord_TimeZoneId); }
        // LanguageId
        if (!reader.IsDBNull(Ord_LanguageId)) { person.LanguageId = reader.GetInt32(Ord_LanguageId); }
        // City
        if (!reader.IsDBNull(Ord_City)) { person.City = reader.GetString(Ord_City); }
        // State
        if (!reader.IsDBNull(Ord_State)) { person.State = reader.GetString(Ord_State); }
        // ZipCode
        if (!reader.IsDBNull(Ord_ZipCode)) { person.ZipCode = reader.GetInt32(Ord_ZipCode); }
        // IsNew
        person.IsNew = false;

        return person;
    }
}
