In this post we will derive REST from constraints. However, before we do that, I would recommend reading the first part of the series here.

Deriving REST

In his dissertation, Dr. Fielding describes the method he used to define REST. This method is constraints-driven rather than requirements-driven. A constraints-driven approach identifies the factors that influence system behavior and then shapes the design so that it works with those factors rather than against them.

 

Requirements vs. Constraints

 

Many software architectures are designed for a small set of requirements, and as new requirements arrive we grow the design to incorporate them. PC architectures follow this pattern because the domain of the architecture and the domain of the business are closely coupled. These designs solve programmer problems such as encapsulation. They are built and tested in a limited environment and then deployed to production, where we discover limitations that keep them from being usable beyond the environment for which they were designed.

 

REST was designed to solve this problem by first identifying the constraints of a distributed architecture that determine whether a design can be used broadly. REST then applies these constraints to a working design, shaping it incrementally. As a result, we end up mapping the business domain onto the architecture domain.

Since REST is defined by identifying the forces that act as barriers in distributed computing, knowing these barriers helps in understanding the significance of the individual constraints.

 

Fallacies of Distributed Computing

These are a set of assumptions, originally stated by L. Peter Deutsch at Sun Microsystems (now Oracle Corporation), that programmers unaccustomed to distributed applications invariably make. These assumptions ultimately prove false, resulting in the failure of the system, a considerable reduction in system scope, or large, unplanned expenses to revamp the system so it satisfies its original goals.

The 8 Fallacies of Distributed Computing are as below:

  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn’t change.
  • There is one administrator.
  • Transport cost is zero.
  • The network is homogeneous.

So we should design our architecture to work with these forces of nature rather than against them.

Constraints

Let's have a look at a few architectural constraints that define the RESTful style.

Client-Server constraint

This is one of the fundamental constraints and imposes a client-server architecture. It defines all communication between nodes in a distributed architecture as taking place between a client and a server. A server continuously listens for messages; when a client sends a message, the server processes it and returns a response. This constraint separates the concerns of server and client, mainly around the user interface, which allows different types of clients to work with the same server and lets clients evolve independently of the server.

Client Server


The guiding forces for the Client-Server constraint are as follows.

  • Network security is improved because scoping the connections between clients and servers lets us secure the boundary between them.
  • Administration is easier because scoping the connections limits the responsibilities of client and server, making each easier to manage.
  • A heterogeneous network becomes workable because any number of clients on multiple platforms can connect and disconnect with no impact on the server.

The properties of this constraint are:

  • Client portability improves because the client is independent of the server's internal structure.
  • Scalability is better because the server does not have to worry about user interface details.
  • Clients can evolve independently because the server and client are decoupled.

Stateless constraint

In distributed applications the stateless constraint is quite prominent. It does not mean that the application maintains no state; it applies to the communication between client and server. Each client-server interaction must be stateless, so that the server can process a request using only the information contained in that request, without any context stored on the server. In practice this means the application state is kept on the client. This design is well suited to environments where clients and servers are constantly being added, removed, or having their network identities modified.
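As a rough illustration of a self-contained request (the endpoint, query string, and token below are made up), a stateless client sends everything the server needs on every call:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class StatelessRequestExample
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Identity and paging information travel with the request itself,
        // so any server instance can process it without stored session state.
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api.example.com/orders?page=2&pageSize=20"); // hypothetical endpoint
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", "<token>");   // credentials sent with every call

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}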

Statelessness


The guiding forces for the Stateless constraint are as follows:

  • Network reliability is improved: because the state lives on the client and each interaction is stateless, the application can recover from network errors more easily.
  • Network topology can change more freely: since the client's state is not held on the server, we can add and remove clients and servers from the network without corrupting data.
  • Administration is simpler when interactions are stateless.

The properties of this constraint are:

  • Visibility is improved since the system never has to look further than the current request; the full nature of the request is immediately apparent.
  • Reliability improves because the system can recover from partial failures.
  • Scalability is better because the server does not have to maintain state across requests and server instances.

There are a couple of design trade-offs that we have to accept when following this constraint.

  • Network performance might decrease because we may send more, or repetitive, data in each request so that the server has enough information to process it independently.
  • Client consistency might suffer because state management is done on the client and the implementation may differ across platforms.

Cache constraint

According to REST, the response from the server should be implicitly or explicitly labeled as cacheable or non-cacheable. When a response is cacheable, the client is allowed to reuse it for equivalent requests. This lets our applications reap the benefits of caching at multiple levels (server, intermediary, or client) and greatly improves network efficiency.
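For example, a client (or an intermediary) can honour this labeling by inspecting the standard Cache-Control header; this is only a sketch, and the endpoint below is hypothetical:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CacheAwareClientExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        HttpResponseMessage response =
            await client.GetAsync("https://api.example.com/products/42"); // hypothetical endpoint

        // The server labels the response; the client decides whether it may be
        // reused for equivalent requests instead of calling the server again.
        var cacheControl = response.Headers.CacheControl;
        if (cacheControl != null && !cacheControl.NoStore)
        {
            Console.WriteLine("Response may be reused for " + cacheControl.MaxAge);
        }
    }
}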

Cache


The guiding forces for the Cache constraint are as follows:

  • Latency is reduced because some requests can be served from the client's own cache and others from intermediate caches.
  • Bandwidth consumption drops since some requests never reach the server and are served by a cache instead.
  • Transport cost is reduced because fewer requests travel across the network.

The properties of Cache constraint are:

  • Efficiency is improved since the application has lower latency and consumes less network bandwidth.
  • Scalability is improved since a more efficient application can handle more clients.
  • User-perceived performance improves when responses come from a cache.

The design trade-off that we might have to live with under this constraint is:

Decreased data reliability, if the cached data is stale and differs significantly from what the server would have returned had it been asked directly.

Uniform Interface constraint

This is the major differentiator between the REST architecture and other network-based architectures. The constraint emphasizes having a uniform interface between all components in the architecture, achieved by applying the principle of generality to component interfaces; this simplifies the overall system architecture and improves the visibility of interactions. Each component talks to the others through a standard mechanism. Decoupling implementations from the services they provide allows them to evolve independently.

Uniform Interface


To achieve the Uniform Interface constraint we need to include the following elements in our design:

  • Identification of resources
  • Manipulation of resources through representation
  • Self-descriptive messages
  • Hypermedia as an engine of application state (HATEOAS)
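A minimal, hypothetical sketch of what these elements amount to in a representation (the resource, fields, and link relations below are made up): the URI identifies the resource, the serialized object is its representation, the media type (say application/json) keeps the message self-descriptive, and the embedded links drive the next possible state transitions.

using System.Collections.Generic;

public class Link
{
    public string Rel { get; set; }    // e.g. "self", "cancel"
    public string Href { get; set; }   // e.g. "/orders/42"
}

public class OrderRepresentation
{
    public int Id { get; set; }
    public string Status { get; set; }
    public List<Link> Links { get; set; }
}

// A server might return:
// { "id": 42, "status": "pending",
//   "links": [ { "rel": "self",   "href": "/orders/42" },
//              { "rel": "cancel", "href": "/orders/42/cancel" } ] }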

The guiding forces for the Uniform Interface constraint are as follows:

  • Network reliability is improved when all the components of the design understand messages in the same way.
  • Network topology can be simpler and can evolve because clients and servers communicate with each other through the same interface.
  • Administration could be easier since we can introduce generic tools for network optimization.
  • A heterogeneous network is supported better because the communication interface is the same between different components.

The properties of Uniform Interface constraint are as below:

  • Visibility is greater when the same interface is used between all components of the architecture.
  • Evolvability of each component is easier since all components speak the same language.

The design trade-off that we might have to live with under this constraint is:

Decreased efficiency, since data is transferred in a standardized format rather than the exact format the application needs.

Layered System Constraint

The Layered System constraint states that a component in the system should only know about the components in the layer with which it is directly interacting.

Layered System


The guiding forces for the Layered System constraint are as follows:

  • Network topology could be simpler because communication is restricted to adjacent layers; when we change network components, only the elements that interact with that layer are impacted.
  • Security is better since layering lets us place trust boundaries at layer boundaries, where the possible component interactions are known.

The properties of Layered System constraint are as below:

  • Scalability is enormous in a layered system; the modern web is a living example of this.
  • Manageability is also great since each layer can be managed by different administrators and still remain operational and scalable. For example, my browser knows how to talk to the connection proxy, which is managed by my company, which in turn knows how to connect to the Internet, which is managed by the ISP, and so on. Each layer is managed by a different party with different policies.

The design trade-off that we might have to live with under this constraint is:

Increased latency, since data may travel through more layers than it would over a direct connection. We can mitigate this trade-off with shared caches and intermediate load balancers.

Code on Demand Constraint

This is listed as an optional constraint in Dr. Fielding's dissertation, which might be one of the reasons it is not talked about as much. Code on Demand states that, along with data and metadata, servers can also provide clients with executable code. The idea is to give the client ready-made features so that it does not need to write or rewrite them.

Code on Demand

The properties of the Code on Demand constraint are as below:

  • Simplicity is increased since the client needs fewer pre-written features; they can be supplied by the server instead.

The design trade-off that we might have to live with under this constraint is:

Reduced visibility, since clients download ready-made code and features, which can affect caching, manageability, and security. The key rule is to apply this constraint in such a way that clients which support it benefit from it and clients which do not support it do not break.

 

Any questions, comments and feedback are most welcome.

 

I have been learning and working with REST for a while now, but I have seen on many blogs that there is a disconnect between what REST actually is and how it is perceived. So I wanted to write an article based on my understanding of REST: what it actually is and how to design systems that follow its principles. I will cover the following topics in the series.

  • Components of modern distributed architecture
  • Properties of RESTful design
  • What REST is and what it is not?
  • The journey to RESTfulness
  • REST and the rest
  • RESTful Architecture
  • Elements of RESTful Architecture
  • Designing for RESTful Services
  • REST and Cloud

Components of modern distributed architecture

Distributed application development is more challenging in modern times because we are dealing with an explosion of everything: users, services, hardware, and so on. A few of the major problems that we face today are:

Interoperability between heterogeneous applications

In simple words, we want to integrate different applications that have been developed with different frameworks and may even run on different platforms. You must have seen the multiple ways to sign up to, or share content from, various websites; that is a live example of heterogeneous applications being integrated in one place. These are different service providers that cannot make assumptions about the applications and services provided by any other platform, and yet we want to use all of them in one place.

Share


 

Signup


 

 

We want these different integration pieces to be simple, consistent, and reliable.

Heterogeneous


Diversity in Devices

REST is based on the idea of a network-based API rather than a library-based API, which goes hand in hand with integrating the heterogeneous applications and services available. Today we also want that integration to be device independent. By devices we do not just mean smartphones and tablets; we include most electronic devices, such as cameras, navigation devices, watches, in-dash car systems, and more.

Device Integration


 

 

We want our services and applications to work seamlessly across all these devices. Since most of them run native apps rather than web-based applications, maintenance and updates become big challenges. You can also imagine the performance and efficiency issues we run into when working with multiple devices, given varying network availability and limits on the amount of data that can be transferred (with data often paid for per usage).

To be or not to be: Cloud

Apart from interoperability between different services and devices, a major problem we face is the number of users simultaneously accessing a service. We can only get the most out of scalable infrastructure if we have a scalable architecture. Many organizations are already taking advantage of the elastic infrastructure offered by various providers. Elastic infrastructure allows a business to automatically grow or shrink the computing power and storage capacity of its applications according to the number of users, paying only for the resources that are used.

 

Cloud


However, we need to understand that the cloud only provides the hardware and the capability to scale; we still need to develop our applications and services in a way that can utilize the capabilities the cloud offers. We need to build a scalable architecture to take advantage of scalable infrastructure.

We can design scalable architectures by depending neither on the middleware in our infrastructure nor on hardware that has inherent limitations of its own. We should also build applications with transparency in mind, so that when errors or failures occur we do not have to dig through the system for days.

This gives us the ability to eliminate situations like the one pictured below and leaves us with clean, maintenance-free hardware; the application is all we need to think about.

 

Server Rooms


Properties of RESTful design

The various properties of RESTful design align with solutions to the challenges discussed above.

  • Heterogeneity – The ability to seamlessly interoperate with other participants regardless of language or platform.
  • Scalability – The ability to limit complexity between components in a distributed system, efficiently handling requests and scaling out horizontally when needed.
  • Evolvability – The ability for client and services to evolve independently of one another.
  • Visibility – The ability for value added components such as intelligent gateways to operate correctly without needing access to any hidden or proprietary state such as session state.
  • Reliability – The ability for clients to recover more reliably from failures by developing rich compensation strategies.
  • Efficiency – The ability for multiple components such as proxy servers and caches to participate in the handling of requests taking load away from your server.
  • Performance – The ability to use caches, greatly improving the speed at which a response can be delivered and giving the impression of increased performance.
  • Manageability – The ability for simpler management, since interactions between components happen in a highly consistent and visible way.

One or more of these properties align with solving each of the challenges that we discussed previously.

What REST is and what it is not?

REST is not RPC – In RPC the design target of a network interaction is a remote function, and the goal of RPC is to abstract away all the network details so that the developer writing the code does not have to care that components are interacting over a network.

In REST, by contrast, the design target of a network interaction is a network resource, and the network semantics are an explicit part of the design.

REST is not just HTTP – HTTP is the underlying protocol that REST was designed around, but merely using HTTP verbs correctly does not make our services completely RESTful. Most RESTful systems do, however, use HTTP as the underlying platform.

REST is not just URIs – URIs hold an important place in RESTful design, but an extreme focus on URIs can push us back toward thinking about designs the RPC way.

REST is not just anything that is not SOAP – SOAP is more of an implementation detail, while REST is an architectural style. SOAP aligns itself with the RPC design style, and something not being SOAP does not imply that it is REST.

Representational State Transfer, better known as REST, is an architectural style defined in the 2000 dissertation of Dr. Roy Fielding at the University of California, Irvine. He designed REST to capture the larger architectural concepts on which the web was built. As per Fielding, the phrase "representational state transfer" describes how a well-designed application behaves: as a virtual state machine of web pages in which progress is made by following links.

The journey to RESTfulness – Richardson’s Maturity Model

This model is gaining attention and importance in the community and has been referenced by Martin Fowler and by books like The RESTful CookBook. It is a model we can use to grade our API against the constraints of REST: the more our APIs adhere to these constraints, the closer they are to RESTfulness.

The different steps in the image below represent the incremental steps towards REST. These are in no way the levels of REST.

 

RESTful

L0 – represents that we are following the RPC style with Plain Old XML (POX). This is the most elementary level of service maturity.

L1 – represents the use of differentiated resources.

L2 – represents the usage of HTTP verbs and HTTP status codes.

L3 – represents the use of hypermedia controls.
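As a rough illustration of the difference between L2 and L3 in client code (the endpoint and payload below are hypothetical), an L2 client uses the right verb and inspects the status code, while an L3 client follows the links the server hands back instead of hard-coding the next URI:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class MaturityLevelsExample
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // L2: POST to a differentiated resource and act on the HTTP status code.
        var body = new StringContent("{ \"item\": 7 }", Encoding.UTF8, "application/json");
        HttpResponseMessage created =
            await client.PostAsync("https://api.example.com/orders", body); // hypothetical endpoint
        Console.WriteLine((int)created.StatusCode); // e.g. 201

        // L3: follow the link the server returned (here the Location header;
        // links embedded in the response body would serve the same purpose).
        if (created.Headers.Location != null)
        {
            HttpResponseMessage order = await client.GetAsync(created.Headers.Location);
            Console.WriteLine(order.StatusCode);
        }
    }
}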

More details on Richardson's Maturity Model can be found on Martin Fowler's blog.

 

Any Questions, Comments and feedback are always welcome.

 

The Dependency Inversion Principle (DIP) states that higher-level modules should be coupled to lower-level modules only through abstractions. In other words:

  • High-level modules should not depend on low-level modules; both should depend on abstractions.
  • Abstractions should not depend on details; details should depend on abstractions.

Following DIP keeps our code loosely coupled by ensuring that high-level modules depend on abstractions rather than on the concrete implementations of low-level modules. Dependency Injection is a technique for implementing this principle.

Dependency Inversion Principle

The image above makes the point: we would never solder an appliance's wire directly to the power supply; instead we add a level of abstraction with plugs and sockets and simply plug in.

Dependencies

Before discussing the Dependency Inversion Principle in detail, let us first understand what dependencies are and which dependencies we add to our modules without even noticing.

We add many dependencies to our code during the course of development. If we are developing a .NET application, we will most probably have a dependency on the .NET Framework. That is not a major concern because the framework is unlikely to change during development. Our main concern should be the dependencies that might change, such as third-party libraries. Another common dependency in code is the database. We should make sure these dependencies are explicit rather than implicit, so that we can swap in replacement implementations easily. The following is a list of common dependencies we might have in our applications.

  • Framework
  • Third Party Libraries
  • Database
  • File System
  • Email
  • Web Services
  • System Resources (Clock)
  • Configuration
  • The new keyword
  • Static methods
  • Thread.Sleep
  • Random

We mostly add dependencies when higher-level modules call lower-level modules and instantiate them as needed. For example, the user interface logic might depend on the business logic, and the business logic might instantiate infrastructure classes, data access classes, and so on. Thus the user interface logic depends on the business logic, and the business logic depends on the infrastructure or data access logic.

 

Let us have a look at some code that violates DIP. Say we have an Order class that exposes a Checkout method, which calls two private methods: ProcessOrder() and ProcessPayment(). Looking at the class, there are no explicit dependencies declared for Order, and the Checkout method shows no implicit ones; neither ProcessPayment nor ProcessOrder appears to declare any dependencies either.

public class Order
{
    public void Checkout()
    {
        ProcessPayment();
        ProcessOrder();
    }
}

 

When we look at the implementations of these methods, however, we see that ProcessPayment depends on the PaymentGateway to charge the credit card and ProcessOrder depends on the InventorySystem to reserve the item in the inventory.

private void ProcessPayment()
{
    // Instantiate the PaymentGateway
    PaymentGateway paymentGateway = new PaymentGateway();
    // Charge the card with the amount
    paymentGateway.ChargeCreditCard();
}

private void ProcessOrder()
{
    // Instantiate the InventorySystem
    InventorySystem inventorySystem = new InventorySystem();
    // Reserve the item in inventory
    inventorySystem.ReserveInventory();
}
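The PaymentGateway and InventorySystem types, and the IPaymentGateway and IInventorySystem interfaces used in the injection examples later, are not shown in this excerpt; minimal stubs along these lines are assumed so the snippets compile:

// Hypothetical stubs assumed by the snippets in this post; the real classes would
// talk to an external payment provider and an inventory service.
public interface IPaymentGateway { void ChargeCreditCard(); }
public interface IInventorySystem { void ReserveInventory(); }

public class PaymentGateway : IPaymentGateway
{
    public void ChargeCreditCard() { /* charge the card via the payment provider */ }
}

public class InventorySystem : IInventorySystem
{
    public void ReserveInventory() { /* reserve the item in the inventory system */ }
}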

The problem starts when either of these systems (PaymentGateway or InventorySystem) is unavailable or changes. The issues with this type of implementation are:

  • Tight coupling between classes (Order is tightly coupled with PaymentGateway and InventorySystem)
  • Changing the implementation is not easy, because it requires modifying the Order class, which violates the Open/Closed Principle.
  • Difficult to test

Dependency Injection

Dependency Injection is a technique that lets calling code supply the dependencies a class needs when it is instantiated. It also goes by the name of the Hollywood Principle ("Don't call us, we'll call you!"). So instead of creating an instance of the PaymentGateway ourselves, we should be handed some service we can call to charge the credit card.

There are three popular techniques for Dependency Injection:

Constructor Injection

This is implemented using the strategy pattern, where the dependencies are passed in through the constructor of the class. The constructor thus specifies the dependencies the class needs to function completely, and the calling code is made aware of them.

 

public class OrderConstructorInjection
{
    private readonly IPaymentGateway _paymentGateway;
    private readonly IInventorySystem _inventorySystem;

    public OrderConstructorInjection(IPaymentGateway paymentGateway, IInventorySystem inventorySystem)
    {
        _paymentGateway = paymentGateway;
        _inventorySystem = inventorySystem;
    }

    public void Checkout()
    {
        ProcessPayment();
        ProcessOrder();
    }

    private void ProcessPayment()
    {
        // Charge the card using the dependency injected through the constructor
        _paymentGateway.ChargeCreditCard();
    }

    private void ProcessOrder()
    {
        // Reserve the item in inventory using the dependency injected through the constructor
        _inventorySystem.ReserveInventory();
    }
}

Pros

  • Class declares upfront what it needs to function properly
  • Class will always be in a valid state once constructed as it does not have any other dependency than the ones explicitly mentioned in the constructor.

Cons

  • Constructors might end up having too many parameters (design smell)
  • Some methods in the class might not use all the parameters passed in the constructor (design smell)
  • Some features like serialization might need a default constructor as well.

Property Injection

In this type of injection we pass the dependencies via properties. It is also known as setter injection.

public class OrderPropertyInjection
{
    public IPaymentGateway _paymentGateway { get; set; }
    public IInventorySystem _inventorySystem { get; set; }

    public void Checkout()
    {
        ProcessPayment();
        ProcessOrder();
    }

    private void ProcessPayment()
    {
        // Charge the card with the amount using the property-injected dependency
        _paymentGateway.ChargeCreditCard();
    }

    private void ProcessOrder()
    {
        // Reserve the item in inventory using the property-injected dependency
        _inventorySystem.ReserveInventory();
    }
}

Pros

  • Flexible as the dependency can be changed at any time.

Cons

  • Objects may be in inconsistent state between construction and setting of dependency.

Parameter Injection

In this type of injection we pass the dependencies in the method directly as parameters.

public class OrderParameterInjection
{
    public void Checkout(IPaymentGateway paymentGateway, IInventorySystem inventorySystem)
    {
        ProcessPayment(paymentGateway);
        ProcessOrder(inventorySystem);
    }

    private void ProcessPayment(IPaymentGateway paymentGateway)
    {
        // Charge the card with the amount using the dependency passed as a parameter
        paymentGateway.ChargeCreditCard();
    }

    private void ProcessOrder(IInventorySystem inventorySystem)
    {
        // Reserve the item in inventory using the dependency passed as a parameter
        inventorySystem.ReserveInventory();
    }
}

Pros

  • Gives us granular control over the dependencies we need to inject
  • More flexible, as we don't need to modify anything in the class other than the method we are changing.

Cons

  • The method itself might end up with many parameters (design smell)
  • If we change the method signature then we might need to make changes at the places where this method is being used.

Where to instantiate objects

Now that we have implemented the Order class without any instantiations, where do we instantiate the dependencies? Below are a few common places where we can do so.

Default Constructor

We could have a default constructor that instantiates the dependencies the class needs. This approach is referred to as poor man's IoC.
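A minimal sketch of this approach, reusing the interfaces and classes from earlier (the class name OrderWithDefaults is just for illustration):

public class OrderWithDefaults
{
    private readonly IPaymentGateway _paymentGateway;
    private readonly IInventorySystem _inventorySystem;

    // "Poor man's" IoC: the default constructor picks the concrete implementations itself...
    public OrderWithDefaults() : this(new PaymentGateway(), new InventorySystem())
    {
    }

    // ...while this constructor still lets callers (and tests) inject alternatives.
    public OrderWithDefaults(IPaymentGateway paymentGateway, IInventorySystem inventorySystem)
    {
        _paymentGateway = paymentGateway;
        _inventorySystem = inventorySystem;
    }

    public void Checkout()
    {
        _paymentGateway.ChargeCreditCard();
        _inventorySystem.ReserveInventory();
    }
}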

Main

We can instantiate the dependencies we need in the Main method of the application or startup routine of the application.

IoC Container

We could use an Inversion of Control (IoC) container. IoC containers are responsible for instantiating the object graph; this happens when the application starts, and containers generally use code or configuration to figure out which implementation to use when an interface is requested. We register the managed interfaces and their implementations with the container, and dependencies on those interfaces are then resolved at application startup or at runtime.

A few of the IoC containers available in .NET are:

  • Microsoft Unity
  • StructureMap
  • Ninject
  • Windsor
  • Funq / Munq
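As a rough sketch of the container approach, wiring up the earlier interfaces with Unity might look like the following (the exact namespace and registration API vary a little between container versions):

using Unity; // Microsoft.Practices.Unity in older versions

class CompositionRoot
{
    static void Main()
    {
        // Tell the container which concrete type to use when an interface is requested.
        var container = new UnityContainer();
        container.RegisterType<IPaymentGateway, PaymentGateway>();
        container.RegisterType<IInventorySystem, InventorySystem>();

        // The container builds the object graph, injecting the constructor dependencies.
        var order = container.Resolve<OrderConstructorInjection>();
        order.Checkout();
    }
}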

 

 

Find the complete source code for this post at googledrive or skydrive.

Any questions comments and feedback are most welcome.

 

The Interface Segregation Principle states that clients should not be forced to depend on methods they do not use.

 

Interface Segregation Principle

The image above depicts a complex interface of switches and buttons needed for the USB device to work, but the end user does not care about that complexity; the end user just needs to know where to plug the USB device in. An interface is a non-implementable type that specifies a public set of methods and properties, which are implemented by any type that chooses to implement that interface. An interface can also be the public surface of a class, that is, the public methods and properties the class exposes. If a client needs only part of the functionality, we should design our interfaces or sub-classes so that the client is not forced to take what it does not need.

Let us take the example of a store that takes orders both online and in the store. For online orders the store accepts credit cards, but in the store it accepts only cash. So we have an interface IOrder as shown below:

public interface IOrder
{
    void ProcessOrder();
    void ProcessCreditCard();
}

And both the OnlineOrder class and InStoreOrder class implement this interface.

public class OnlineOrder : IOrder
{
    public void ProcessOrder()
    {
        //Process the order placed
    }
    public void ProcessCreditCard()
    {
        //Process payment through credit card
    }
}

 

public class InStoreOrder : IOrder
{
    public void ProcessOrder()
    {
        //Process the order placed
    }
    public void ProcessCreditCard()
    {
        //Not Implemented
        throw new NotImplementedException();
    }
}

In the above implementation we are violating the Interface Segregation Principle because the InStoreOrder class implements the IOrder interface but cannot provide a meaningful implementation of ProcessCreditCard. Let's improve the solution so that it follows the Interface Segregation Principle. To do that, we will split the IOrder interface and create another interface, IOnlineOrder, which will carry the ProcessCreditCard method. The interfaces will now look like this:

public interface IOrder
{
    void ProcessOrder();
}

public interface IOnlineOrder : IOrder
{
    void ProcessCreditCard();
}

Our existing implementations can remain the same, and new implementations that need the online part of the functionality implement only the interface they require.

public class OnlineOrder : IOnlineOrder
{
    public void ProcessOrder()
    {
        //Process the order placed
    }
    public void ProcessCreditCard()
    {
        //Process payment through credit card
    }
}

 

public class InStoreOrder : IOrder
{
    public void ProcessOrder()
    {
        //Process the order placed
    }
}

 

The Liskov Substitution Principle can be considered an extension of the Open/Closed Principle: a reference to a base class should be replaceable by an instance of a child class without changing the behavior of the program.

Liskov Substitution Principle

Let us assume we have implemented a Rectangle class with width and height setters and a GetArea method. On its own it functions perfectly fine.

 

public class Rectangle
{
    protected double Width;
    protected double Height;

    public virtual void SetWidth(double width)
    {
        Width = width;
    }

    public virtual void SetHeight(double height)
    {
        Height = height;
    }

    public double GetWidth()
    {
        return Width;
    }

    public double GetHeight()
    {
        return Height;
    }

    public double GetArea()
    {
        return Height * Width;
    }
}

Now we would like similar functionality for a Square as well, so instead of reinventing the wheel we simply inherit from the Rectangle class and customize the behavior for a square.

public class Square : Rectangle
{
    public override void SetWidth(double width)
    {
        Width = width;
        Height = width;
    }
    public override void SetHeight(double height)
    {
        Width = height;
        Height = height;
    }
}

 

But now, if we replace a reference to the parent class with an instance of the child class, we no longer get the correct area for a rectangle, because we have changed the core methods (SetHeight, SetWidth) to set height and width to the same value, which does not hold for a rectangle. Hence we have violated the Liskov Substitution Principle.
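The snippets below call an AreaCalculator helper that is not shown in this excerpt; a minimal version, assumed here for illustration, simply delegates to the shape's own GetArea method:

public static class AreaCalculator
{
    // Hypothetical helper assumed by the snippets below.
    public static double CalculateArea(Rectangle rectangle)
    {
        return rectangle.GetArea();
    }
}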

var rectangleAsRectangle = new Rectangle();
rectangleAsRectangle.SetHeight(40);
rectangleAsRectangle.SetWidth(60);

Console.WriteLine("Area of the rectangle = " +
AreaCalculator.CalculateArea(rectangleAsRectangle) + " where Height = " +
rectangleAsRectangle.GetHeight() + " and Width = " +
rectangleAsRectangle.GetWidth());

Output:

Area of the rectangle = 2400 where Height = 40 and Width = 60

Rectangle squareAsRectangle = new Square();
squareAsRectangle.SetHeight(40);
squareAsRectangle.SetWidth(60);

Console.WriteLine("Area of the rectangle = " +
AreaCalculator.CalculateArea(squareAsRectangle) +
" where Height = " +squareAsRectangle.GetHeight() +
" and Width = " + squareAsRectangle.GetWidth());

Output:

Area of the rectangle = 3600 where Height = 60 and Width = 60

It is clear that the Square type is not substitutable for the Rectangle. LSP states that child classes should be able to extend their base classes without changing the base classes' existing behavior, and we are violating that here because our Square class changes the behavior of the Rectangle class.

Generally speaking, non-substitutable code breaks polymorphism.

We can fix this code by creating a class (Shape) from which both Rectangle and Square inherit. As we can see in the code below, we have created an abstract class Shape with an abstract method GetArea.

public abstract class Shape
{
    public abstract double GetArea();
}

We now derive Rectangle and Square from this class, each providing its own implementation of GetArea.

public class Rectangle : Shape
{
    private double _height;
    private double _width;

    public double Height
    {
        get { return _height; }
        set { _height = value; }
    }

    public double Width
    {
        get { return _width; }
        set { _width = value; }
    }

    public override double GetArea()
    {
        return Height * Width;
    }
}

public class Square : Shape
{
    private double _sideLength;
    public double SideLength
    {
        get
        {
            return _sideLength;
        }
        set
        {
            _sideLength = value;
        }
    }

    public override double GetArea()
    {
        return SideLength*SideLength;
    }
}

We could use the Shape class to get the area of the shapes.

static void Main()
{
    Shape shape = new Rectangle { Height = 40, Width = 60 };
    Console.WriteLine("Area of the rectangle shape = " + shape.GetArea());

    shape = new Square { SideLength = 40 };
    Console.WriteLine("Area of the square shape = " + shape.GetArea());

    Console.ReadLine();
}

Output

Area of the rectangle shape = 2400

Area of the square shape = 1600

Now the parent class is substitutable by its child classes without changing any existing functionality, so we are no longer violating the Liskov Substitution Principle.

 

Find the complete source code for this post at googledrive or skydrive.

Any questions comments and feedback are most welcome.