Debugging or Running an ASP.NET Application without Windows Azure Compute Emulator

Recently, one of my .NET developers who was involved in a Windows Azure Project came and asked me two questions:

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project? It takes about 15 to 30 seconds to debug a Windows Azure Project, but only 8 to 15 seconds to debug an ASP.NET project.

Figure 1 – Debugging a Windows Azure Project takes longer than an ASP.NET project


2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure application?

Figure 2 – Setting ASP.NET Web Application as Startup Project


I’ve been looking at the online discussions around these issues and have found that they’re very popular questions.

This article will answer and explain these two questions in more detail, including how it really works under the hood, tips and tricks to overcome the issue, and identified limitations.

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project?

Windows Azure development tools and SDK

First of all, I need to explain how the Windows Azure development tools and SDK work.

Microsoft enables developers to develop .NET applications targeting Windows Azure easily with the help of Windows Azure SDK (Software Development Kit). The SDK includes assemblies, samples, documentation, emulators, and command-line tools to build Windows Azure applications.

The emulator is designed to simulate the cloud environment, so developers don’t have to be connected to the cloud at all times. The two emulators are: Compute Emulator that simulates the Azure fabric environment and Storage Emulator that simulates the Windows Azure Storage. Apart from emulators, the two important command-line tools are CSPack that prepares and packages the application for deployment and CSRun that deploys and manages the application locally. Other command-line tools can be found here.

Apart from the SDK, there’s an add-in called Windows Azure Tools for Microsoft Visual Studio that extends Visual Studio 2010 to enable the creation, configuration, building, debugging, running, packaging, and deployment of scalable web applications and services on Windows Azure. After installing it, you will find a new “cloud” template (as can be seen in Figure 3) when adding a new project. Furthermore, it encapsulates the complexity of running the tools and other commands behind the scenes when we build, run, and publish a Windows Azure Project with Visual Studio.

Figure 3 – Windows Azure project template


The reason why it takes longer

It is true that it takes more time to debug or run a Windows Azure Project than a typical ASP.NET project.

In fact, there’s a reasonable rationale behind it. When we debug or run a Windows Azure cloud project, all the associated projects (Web / Worker Roles) are compiled and packaged into a csx directory. Afterwards, Visual Studio lets CSRun deploy and run your package. The Compute Emulator then sets up and hosts as many instances of your web application in IIS as we specify in the Instance Count property.

Figure 4 – Websites are being set up in IIS when running Windows Azure Project


As the Full IIS capability was introduced in SDK 1.3, web applications on Windows Azure involve two processes: w3wp.exe which runs your actual ASP.NET application, and WaIISHost.exe which runs your RoleEntryPoint in WebRole.cs / WebRole.vb.
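For context, the RoleEntryPoint hosted by WaIISHost.exe is typically a minimal class like the following. This is only a sketch; the WebRole.cs that Visual Studio generates for you may differ slightly, and the namespace here is illustrative:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

namespace AspNetWebApplication
{
    // Runs inside WaIISHost.exe, a separate process from the
    // w3wp.exe worker process that serves the actual ASP.NET requests.
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Role-wide initialization (diagnostics, config-change handlers, etc.)
            return base.OnStart();
        }
    }
}
```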

As can be seen, there are more steps involved when debugging or running a Windows Azure Project. This explains why it takes longer to debug or run a Windows Azure Project on the Compute Emulator compared to debugging or running an ASP.NET project on IIS or the ASP.NET Development Server (Cassini), which is more straightforward.

2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure Project?

Jumping into the next question: is it possible to debug or run the ASP.NET project instead of the Windows Azure project?

The answer is yes. You can do so simply by setting the ASP.NET project as the startup project. However, there are some caveats:

1. Getting configuration settings from Windows Azure Service Configuration

People often store settings in ServiceConfiguration.cscfg in their Windows Azure Project. You can get a setting value by calling RoleEnvironment.GetConfigurationSettingValue(“Setting1”). However, you will run into an error when debugging or running the ASP.NET project.

Figure 5 – Error when calling RoleEnvironment.GetConfigurationSettingValue in ASP.NET Project


The reason for this error is that the ASP.NET project is unable to recognize and call GetConfigurationSettingValue, as the settings belong to the Windows Azure Project.

The Resolution

To resolve this error, there’s a trick we can apply, as shown in the following code fragments. The idea is to encapsulate the settings retrieval in a get property. With RoleEnvironment.IsAvailable, we are able to determine whether the code is running in the Windows Azure environment or as a typical ASP.NET project. If it isn’t running in the Windows Azure environment, we can get the value from web.config instead of ServiceConfiguration.cscfg. Of course, we also need to store the setting somewhere else, such as appSettings in the web.config file.

<code class="language-csharp">public string Setting1
{
    get
    {
        if (RoleEnvironment.IsAvailable)
            return RoleEnvironment.GetConfigurationSettingValue("Setting1");
        else
            return ConfigurationManager.AppSettings["Setting1"];
    }
}</code>

Code Fragment 1.1 – Encapsulating the setting with Get property

<code class="language-xml"><Role name="AspNetWebApplication">
  <Instances count="3" />
  <ConfigurationSettings>
    <Setting name="Setting1" value="running on Windows Azure environment" />
  </ConfigurationSettings>
</Role></code>

Code Fragment 1.2 – Setting in ServiceConfiguration.cscfg

<code class="language-xml"><appSettings>
  <add key="Setting1" value="running as typical ASP.NET project"/>
</appSettings></code>

Code Fragment 1.3 – Setting in web.config

2. Loading a storage account

We normally store the Storage Account Connection String in a Service Configuration setting as well.

Figure 6 – Setting Storage Connection String in Service Configuration


As such, you might run into similar error again when running the ASP.NET project.

The Resolution

We use a similar technique to resolve this, but with a slightly different API. If RoleEnvironment.IsAvailable returns false, we will get the value from appSettings in web.config. If we find that it uses Development Storage, we will load CloudStorageAccount.DevelopmentStorageAccount; otherwise, we will parse the connection string that is loaded from appSettings in the web.config file. The following code fragments illustrate how you should write your code and configuration.

<code class="language-csharp">CloudStorageAccount storageAccount;
if (RoleEnvironment.IsAvailable)
{
    storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
}
else
{
    string cs = ConfigurationManager.AppSettings["DataConnectionString"];
    if (cs.Equals("UseDevelopmentStorage=true"))
        storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
    else
        storageAccount = CloudStorageAccount.Parse(cs);
}</code>

Code Fragment 2.1 – Loading the storage account

<code class="language-xml"><appSettings>
  <add key="DataConnectionString"
       value="DefaultEndpointsProtocol=https;AccountName={name};AccountKey={key}"/>
  <!--<add key="DataConnectionString" value="UseDevelopmentStorage=true"/>-->
</appSettings></code>

Code Fragment 2.2 – Setting in web.config

<code class="language-xml"><Role name="WebRole1">
  <Instances count="1" />
  <ConfigurationSettings>
    <Setting name="DataConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName={name};AccountKey={key}" />
    <!-- <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> -->
  </ConfigurationSettings>
</Role></code>

Code Fragment 2.3 – Setting in ServiceConfiguration.cscfg

An important note: you will still need to turn on Windows Azure Storage Emulator when using this technique.

Catches and Limitations

Although these tricks work in most cases, there are several catches and limitations identified:

  • The technique is only applicable to an ASP.NET Web Role, not a Worker Role.
  • Apart from the two issues identified, logging with Windows Azure Diagnostics may not work. This may not be a serious concern, as we are talking about the development phase, not production.
  • You are unable to simulate multiple instances when debugging or running the ASP.NET project.


To conclude, this article answers two questions.  We have identified some caveats as well as the tricks to overcome these issues.

Although this technique is useful to avoid debugging or running a Windows Azure Project, it doesn’t mean you never need to run it as a Windows Azure Project again. I would still recommend that you occasionally run the Windows Azure Project to ensure that your ASP.NET project targets Windows Azure perfectly.


Posted in Azure, Azure Development | 5 Comments

Installing Third Party Software on Windows Azure – What are the options?

I have seen this question asked many times now: “How do I install third party software on Windows Azure?” This is a reasonably important question to address as Windows Azure applications often need to use third party software components.

In some cases, using a software component can be as simple as adding a reference to it. You can also set the Copy Local property to True to bring the component along with your service package to the cloud. However, in some cases a proper installation is required. This is because the installation does more than just copy the component to the system (such as modifying the registry, registering components in the GAC, etc.). One example is installing Report Viewer on the Web Role to display reports.

This article will explain three techniques you can use to install third party software on Windows Azure. We will cover why and how to install third party software, and the catches that come with each technique.

Before diving into the specific techniques, let’s refresh the concept behind the current version of Windows Azure PAAS as it relates to what we’ll be discussing.

Design for Scale: Windows Azure Stateless VM

Windows Azure emphasizes the application philosophy of scaling-out (horizontally) instead of scaling-up (vertically). To achieve this, Windows Azure introduces the stateless virtual machine (VM). This means a VM’s local disks will not be used for storage since they are considered stateless or non-persistent. Any changes made after the VM is provisioned will be gone if the VM is re-imaged. This can happen if a hardware failure occurs on the machine where the VM is hosted.


Figure 1 – Windows Azure Stateless VM and Persistent Storage

Instead, the recommended approach is to store data to dedicated persistent storage such as SQL Azure or Windows Azure Storage.

Now, let’s discuss each technique to install software on Windows Azure in more detail.

Technique 1: Manual Installation through RDP

The first technique we discuss here is the fastest and easiest, but unfortunately also the most fragile. The idea is to remote desktop (RDP) into a specific instance and perform a manual installation. This might sound silly to some of you, as we just discussed the stateless VM above. Nonetheless, this technique is pretty useful in staging or testing environments, when we need to quickly assess whether a specific piece of software can run in the Windows Azure environment.

The Catch

The software installed will not be persistent.

NOTE: Do not use this technique in production.

Technique 2: Start-up Task

The second technique we cover here is the Start-up Task. In my opinion, this will probably be the best solution, depending on your circumstances. The idea of a Start-up Task is to execute a script (in the form of a batch file) prior to role initialization. As it is always executed prior to role initialization, it will still run even if the instance is re-imaged.

How to?

1. Preparing your startup script

Create a file named startup.cmd using Notepad or another ASCII editor. Copy the following example and save it.

powershell -c "(new-object System.Net.WebClient).DownloadFile('<url-of-installer>', 'ReportViewer.exe')"
ReportViewer.exe /passive
  • The first line is to download a file from the given URL to local storage.
  • The second line runs the installer “ReportViewer.exe” in passive mode. We should install using passive or silent mode so there aren’t any dialog pop-up screens. Please also note that each installer may have a different silent or passive mode installation parameter.

2. Including startup.cmd to Visual Studio

The next step is to include your startup.cmd script in Visual Studio. To do that, simply right-click on the project name and choose “Add Existing Item”. Browse to the startup.cmd file. Next, set “Copy to Output Directory” to “Copy always” to ensure that the script will be included inside your package when it is built.


Figure 2 – Including startup.cmd in the Service

3. Adding Startup Task on your ServiceDefinition.csdef file

The final step is to add a startup section in the ServiceDefinition.csdef file, specifically below the intended Role tag, as illustrated in the figure below.


Figure 3 – Adding Startup Task in ServiceDefinition.csdef

  • The commandLine attribute requires the path of our startup script
  • The executionContext attribute requires us to choose either:
    • elevated (which will run as admin-role) or
    • limited (non admin-role)
  • The taskType attribute has the following options:
    • Simple [Default] – System waits for the task to exit before any other tasks are launched
    • Background – System does not wait for the task to exit
    • Foreground – Similar to background, except role is not restarted until all foreground tasks exit
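The startup section shown in Figure 3 can be sketched roughly as follows. The service and role names here are illustrative, and the commandLine path assumes startup.cmd sits in the root of the package:

```xml
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <!-- Download and install the software before the role starts, with admin rights -->
      <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
    <!-- ... sites, endpoints, and other role settings ... -->
  </WebRole>
</ServiceDefinition>
```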

The Catches

Here are some situations where a startup task cannot be used:

1. Installation that cannot be scripted out

2. Installation that requires a lot of user involvement

3. Installation that takes a very long time to complete

Technique 3: VM Role

The final technique we are looking at is the VM Role. In fact, one of the reasons why Microsoft introduced the VM Role is to address scenarios that cannot be handled by a Startup Task.

In reality, the VM Role is another option amongst the Windows Azure Compute Roles. However, unlike Web and Worker Roles, you will have more responsibility when using a VM Role. People often make the mistake of treating the VM Role as IAAS. This is not accurate, as the VM Role still inherits behaviors from Web and Worker Roles: it can still be easily scaled out just like Web and Worker Roles, and, similarly, storing data on a VM Role’s local disk is considered non-persistent.

The following figure illustrates the lifecycle of VM Role.

Figure 4 – VM Role Lifecycle, from the Windows Azure Platform Training Kit

Let’s drill down into the first step, “Build VM Image”, in more detail. There are several tasks to be done here. First of all, create the VHD that contains the operating system. The next step is to install the Windows Azure Integration Components onto the image. Subsequently, you can install and configure the third party software. Finally, you run SysPrep to generalize the VM image.

The Catches

There are several catches when using VM Role:

1. You will have more responsibility when using VM Role, including: building, customizing, installing, uploading, and eventually maintaining the VM image.

2. Up to now, the only supported OS for VM Role is Windows Server 2008 R2.

3. At the time of writing, VM Role is still in beta. As we know, significant changes may happen to a beta product.


We have covered three techniques to install software on Windows Azure. Although the Startup Task remains the recommended option in most cases, it may not be the most suitable all the time. RDP and the VM Role can sometimes be advantageous depending on the scenario.


This post was also published at A Cloud Place blog.

Posted in Azure, Azure Development | 9 Comments

Moving applications to the cloud: Part 3 – The recommended solution

We illustrated Idelma’s case study in the last article. This article continues from where we left off, looking at how a partner, Ovissia, would provide a recommended solution. Just as a reminder, Idelma had some specific requirements for the migration, including cost-effectiveness, no functional changes for the user, and keeping the proprietary CRM system on-premise.

Having analyzed the challenges Idelma faces and the requirements it mentioned, Ovissia’s presales architect Brandon gets back to Idelma with the answers. In fact, some of the migration techniques are referenced from the first post in this series.

Cloud Architecture


Figure 1 – TicketOnline Cloud Architecture

Above is the recommended cloud architecture diagram when moving TicketOnline to the cloud. As can be seen from the figure, some portions of the system will remain similar to the on-premise architecture, while others shift towards the cloud-centric architecture.

Let’s take a look at each component in more detail.

1. Migrating a SQL Server 2005 database to SQL Azure

SQL Azure is a cloud-based database service built on SQL Server technologies. In fact, at the moment, the closest on-premise equivalent to SQL Azure is SQL Server 2008.

There are several ways to migrate a SQL Server database to SQL Azure. One of the simplest is to use the “Generate and Publish Scripts” wizard in SQL Server 2008 R2 / 2012 Management Studio.


Figure 2 – SQL Server Management Studio 2008 R2 (Generate Scripts Wizard)

Another option is to use a third party tool such as the SQL Azure Migration Wizard.

After the database has been successfully migrated to SQL Azure, connecting to it from the application is as straightforward as changing the connection string.
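As an illustration (the server name, database, and credentials below are placeholders, not Idelma’s actual values), a SQL Azure connection string in web.config generally takes this shape:

```xml
<connectionStrings>
  <!-- On-premise SQL Server (before):
  <add name="TicketOnlineDb"
       connectionString="Server=SQLSRV01;Database=TicketOnline;Integrated Security=True" /> -->
  <!-- SQL Azure (after): note the user@server login form and required encryption -->
  <add name="TicketOnlineDb"
       connectionString="Server=tcp:{server}.database.windows.net,1433;Database=TicketOnline;User ID={user}@{server};Password={password};Encrypt=True" />
</connectionStrings>
```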

2. Relaying on-premise WCF Service using Windows Azure Service Bus

One of Idelma’s requirements states that the CRM web service must remain on-premise and must be consumed securely. To satisfy this, Ovissia recommends Windows Azure Service Bus, which provides messaging and relay capabilities across multiple network hierarchies. With its relay capability, secured by the Access Control Service, it enables hybrid scenarios such that the TicketOnline web application is able to securely connect back to the on-premise CRM service.

2.1 Converting ASMX to WCF

Despite its powerful capabilities, Service Bus requires a WCF service instead of an ASMX web service. Thus, the current ASMX web service should be converted to a WCF service. The MSDN library provides a walkthrough on migrating an ASMX web service to WCF.
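The conversion is mostly mechanical. As a rough sketch (the contract and type names here are illustrative, not Idelma’s actual service), an ASMX [WebMethod] becomes a WCF contract plus implementation:

```csharp
using System.ServiceModel;

// ASMX (before):
// public class CrmService : System.Web.Services.WebService
// {
//     [System.Web.Services.WebMethod]
//     public CustomerProfile GetCustomerProfile(int customerId) { /* ... */ }
// }

// WCF (after): the contract is separated from the implementation.
[ServiceContract]
public interface ICrmService
{
    [OperationContract]
    CustomerProfile GetCustomerProfile(int customerId);
}

public class CrmService : ICrmService
{
    public CustomerProfile GetCustomerProfile(int customerId)
    {
        // The existing ASMX logic moves here unchanged.
        throw new System.NotImplementedException();
    }
}
```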

3. Converting to Web Role

Windows Azure Web Role is used to host web applications running on Windows Azure. Therefore, it is an ideal component to host the TicketOnline web application. Hosting on Windows Azure Web Role requires an ASP.NET web application, not an ASP.NET website. Please refer to this documentation for the difference between the two. The MSDN library also provides a detailed walkthrough on how to convert a web site to web application in Visual Studio.

When the website has been converted to a web application project, it is one step closer to a Web Role. In fact, there are only three differences between the two, as can be seen in the following figure.


Figure 3 – Differences between Web Role VS ASP.NET Web Application (Windows Azure Platform Training Kit – BuildingASPNETApps.pptx, slide 7)

4. Converting Windows Service Batch Job to Windows Azure Worker Role

Running Windows Service on Windows Azure can be pretty challenging. In fact, Windows Service is not available out-of-the-box on Windows Azure. The recommended approach is to convert the Windows Service to a Windows Azure Worker Role. You may refer to section 3 of the first article in this series for further explanation.
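As a hedged sketch of the shape such a conversion takes (the 10-minute cadence comes from the TicketOnline batch job; the method name is illustrative), the Windows Service’s timer loop typically becomes the Worker Role’s Run method:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Replaces the Windows Service's OnStart/timer:
        // the role instance simply loops forever.
        while (true)
        {
            ProcessPendingBookings();               // the former batch-job logic
            Thread.Sleep(TimeSpan.FromMinutes(10)); // same 10-minute cadence
        }
    }

    private void ProcessPendingBookings()
    {
        // Verify booking details, generate tickets, send emails, etc.
    }
}
```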

5. Conventional File System to Windows Azure Blob Storage

Idelma uses a conventional file server to store documents and images. When moving the application to Windows Azure, the ideal option is to store them in Windows Azure Storage, particularly Blob Storage. Not only is it cost-effective, but Windows Azure Storage also provides highly available and scalable storage services.

However, migrating from a conventional file system to Blob storage requires some effort:

  • First, the API – the way the application accesses Blob Storage. For a .NET application, Windows Azure provides a Storage Client Library for .NET which enables .NET developers to access Windows Azure Storage easily.
  • Second, migrating existing files – this can be done through explorer tools such as Cloud Xplorer or Cloud Storage Studio.
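As an example of the first point, with the Storage Client Library of that era, uploading an existing file to Blob Storage looks roughly like this (the container and file names are illustrative):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Parse the storage connection string (from ServiceConfiguration.cscfg or web.config)
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();

// Containers take the place of directories on the old file server
CloudBlobContainer container = blobClient.GetContainerReference("images");
container.CreateIfNotExist();

// Upload a local file as a blob
CloudBlob blob = container.GetBlobReference("show-poster.jpg");
blob.UploadFile(@"D:\files\show-poster.jpg");
```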

6. Configuration changes

Today, many applications (including TicketOnline) store settings such as application configuration and connection strings in .config files (app.config / web.config). The .config file is stored on each individual virtual machine (VM), and storing those settings in a .config file has a drawback: if you need to apply any changes to the settings, a re-deployment is required.

In the cloud, the recommended solution is to store the settings in an accessible and centralized medium such as a database. But if we just need to store the key-value pair setting, ServiceConfiguration.cscfg is actually a good choice. Changing settings in ServiceConfiguration.cscfg does not require a re-deployment and each VM will always get the latest updated settings.

There’s a little bit of work to do when changing a setting from .config to ServiceConfiguration.cscfg. The following snippet shows the difference between the two.

string settingFromConfigFiles = ConfigurationManager.AppSettings["name"];
//getting setting from .config files

string settingFromAzureConfig = RoleEnvironment.GetConfigurationSettingValue("name");
//getting setting from ServiceConfiguration.cscfg
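For the ServiceConfiguration.cscfg approach, the setting also has to be declared and given a value in the service files, roughly as follows (the setting name “name” matches the snippet above; everything else is illustrative):

```xml
<!-- ServiceDefinition.csdef: declare the setting -->
<ConfigurationSettings>
  <Setting name="name" />
</ConfigurationSettings>

<!-- ServiceConfiguration.cscfg: supply the value; changeable without re-deployment -->
<ConfigurationSettings>
  <Setting name="name" value="some value" />
</ConfigurationSettings>
```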

7. Sending Email on Windows Azure

The current architecture shows that email is sent through on-premise SMTP. If there is a requirement to continue using on-premise SMTP to send email, we could either propose a similar relay technique using Service Bus or use Windows Azure Connect to group the cloud VMs and the on-premise SMTP server together.

Another option is to use a third-party SMTP provider. Recently, Microsoft partnered with SendGrid to provide a special offer to Windows Azure subscribers of 25,000 free emails per month. This serves as a value-added service by Windows Azure without any extra charges.

8. Logging on the Cloud

Currently, TicketOnline stores its logs in a database. Although this works well with a SQL Azure database, it may not be the most cost-effective option, as SQL Azure charges approximately $10 per GB per month. Over time, the log will grow more and more, and might result in high running costs for the customer.

Remember, a workable solution is not enough; the solution should be cost-effective as well.

Windows Azure Storage is another option for storing diagnostic data and logs. In fact, Windows Azure Diagnostics makes Windows Azure Storage the best option for storing logs and diagnostic information. More details on Windows Azure Diagnostics can be found in section 4 of the first article in this series.


To conclude, this article provides a recommended solution to answer the challenges that Idelma faces. You can see the difference between the on-premise and cloud architectures. This article also explains the various components of the proposed solution.

Of course, this is not the only available solution as there may be other variations. There is no one-size-fits-all solution and there are always trade-offs among solutions. Finally, I hope this series on Moving Applications to the Cloud brings you some insight, especially for those who are considering moving applications to the cloud.

This post was also published at A Cloud Place blog.

Posted in Azure, Cloud | Leave a comment

Moving Applications to the Cloud: Part 2 – A Scenario-Based Example

In my last post, I discussed some of the key considerations when moving an application to the cloud. To provide a better understanding, I’m using a simple scenario-based example to illustrate how an application could be moved to the cloud.

This article will explain the challenges a company might face, the current architecture of the example application, and finally what the company should expect when moving an application to the cloud. My next article will discuss the recommended solution in more detail.


Disclaimer: the company name, logo, business, scenario, and incidents are used fictitiously. Any resemblance to an actual company is entirely coincidental.





Idelma is a ticket selling provider that sells tickets to concerts, sports events, and music gigs. Tickets are sold offline through ticket counters and online through a website called TicketOnline.

Customers visiting TicketOnline can browse the list of available shows, find out more information on each show, and finally purchase tickets online. When a ticket is purchased, it’s reserved but will not be processed immediately. Other processes, such as generating the ticket and sending it along with the receipt, are done asynchronously within a few minutes.

Current Challenges

During peak seasons (typically July and December), TicketOnline suffers from heavy traffic that causes slow response times. Off-peak traffic is normally about 100,000 to 200,000 hits per day, with an average of 8 to 15 on-going shows. In peak season, the traffic may reach five to seven times that of the off-peak season.

The following diagram illustrates the web server hits counter of TicketOnline over the last three years.

Figure 1 – TicketOnline web server hits counter for the last three years

Additionally, the current infrastructure setup is not designed to be highly-available. This results in several periods of downtime each year.

The options: on-premise vs cloud

Idelma’s IT Manager, Mr. Anthony, recognizes the issues and decides to make some improvements to bring better competitive advantages to the company. While reading an article online, he discovered that cloud computing may be a good solution to address the issues. Another option would be to purchase a more powerful set of hardware that could handle the load.

With that, he has done a pros and cons analysis of the two options:

  • On-premise hardware investment

There are at least two advantages of investing in more hardware. First, they will have full control over the infrastructure and can use the servers for other purposes when necessary. Second, there might be little or no modification needed in the application at all, depending on how it is architected and designed. If they decide to scale up (vertically), they might not need to make any changes. However, if they decide to scale out (horizontally) to a web farm model, a re-design would be needed.

On the other hand, there are also several disadvantages of on-premise hardware investment. For sure, the upfront investment in purchasing hardware and software is considered relatively expensive. Next, they would need to be able to answer the following questions: how much hardware and software should be purchased, and what are the hardware specifications? If the capacity planning is not properly done, it may lead to either wasted capacity or insufficient capacity. Another concern is that when adding more hardware, more manpower might be needed as well.

  • Cloud

For cloud computing, there’s almost no upfront investment required for hardware, and in some cases software doesn’t pose a large upfront cost either. Another advantage is that the cloud’s elastic nature fits TicketOnline’s periodic bursting very well. Remember, they face high load only in July and December. Yet another advantage is less responsibility: the administrator can have more time to focus on managing the application, since the infrastructure is managed by the provider.

Though there are a number of advantages, there are also some disadvantages to choosing a cloud platform. For one thing, they might have less control over the infrastructure. As discussed in the previous article, there might also be some architectural changes when moving an application to the cloud. However, these can be dealt with in a one-time effort.

The figure below summarizes the considerations between the two options:

Figure 2 – Considerations of an On-premise or Cloud solution

After looking at his analysis, Mr. Anthony believes that the cloud will bring more competitive advantages to the company. Understanding that Windows Azure offers various services for building internet-scale applications, and that Idelma is an existing Microsoft customer, Mr. Anthony decided to explore Windows Azure. After evaluating the pricing, he felt even more comfortable stepping ahead.

Quick preview of the current system

Now, let’s take a look at the current architecture of TicketOnline.

Figure 3 – TicketOnline Current Architecture

  • TicketOnline web application

The web application is hosted on a single-instance physical server. It runs Windows Server 2003 R2 as the operating system, with Internet Information Services (IIS) 6 as the web server and ASP.NET 2.0 as the web application framework.

  • Database

SQL Server 2005 is used as the database engine to store mainly relational data for the application. Additionally, it is also used to store logs such as trace logs, performance-counter logs, and IIS logs.

  • File server

Unstructured files such as images and documents are stored separately in a file server.

  • Interfacing with another system

The application needs to interface with a proprietary CRM system that runs on a dedicated server to retrieve customer profiles through an ASMX web service.

  • Batch Job

As mentioned previously, receipt and ticket generation happen asynchronously after a purchase is made. A scheduler-based batch job performs the asynchronous tasks every 10 minutes. The tasks include verifying booking details, generating tickets, and sending the ticket along with the receipt as an email to the customer. The intention of the asynchronous process is to minimize the concurrent access load as much as possible.

This batch job is implemented as a Windows Service installed on a separate server.

  • SMTP Server

An on-premise SMTP server is used to send email, initiated either from the batch job engine or the web application.

Requirements for migration

The application should be migrated to the cloud with the following requirements:

  • The customer expects a cost effective solution in terms of the migration effort as well as the monthly running cost.
  • There should not be any functional changes to the system; that is, the user (especially the front-end user) should not see any differences in terms of functionality.
  • As per policy, the proprietary CRM system will not be moved to the cloud. The web service should be consumed in a secure manner.

Calling for partners

As the in-house IT team does not have competency and experience with Windows Azure, Mr. Anthony contacted Microsoft to suggest a partner capable of delivering the migration.

Before a formal request for proposal (RFP) is made, he expects the partner to provide the following:

  • A high-level architecture diagram showing how the system will look after moving to the cloud.
  • An explanation of each component illustrated in the diagram.
  • The migration process, the effort required, and the potential challenges.

If Microsoft recommends you as the partner, how will you handle this case? What will the architecture look like in your proposed solution?

The most exciting part will come in the next article when I go into more detail on which solution is recommended and how the migration process takes place.

This post was also published at A Cloud Place blog.

Posted in Azure, Cloud | 2 Comments

Moving applications to the cloud: Part 1 – What are the considerations?

Windows Azure provides many remarkable services that benefit its customers. Assuming that you’ve already decided to hop on Windows Azure, some questions you might be asking include: What are the key considerations when moving applications to the cloud? How do you move an application to the cloud?

The goal of this article is to discuss several common considerations (including any changes that might apply) when moving your application to Windows Azure. Though there are also significant concerns from a business perspective, this article will focus on the technical aspects.

1. Architecture Change

The first and probably the most significant consideration is the architecture. Your current architecture may or may not work perfectly on the cloud. Some applications may be moved easily and without many changes, while others may require a certain degree of alignment to fit a cloud-centric architecture.

Designing an architecture that merely fits the cloud model is sometimes not enough.

More important is designing an architecture that brings optimal results: for instance, faster response times, an elastically scalable system, and a cost-effective solution.

Single instance vs Web farm

If your current application is deployed on multiple instances (a.k.a. a web farm), you are one step closer to a cloud-centric architecture. I recommend checking out this post on the web farm concept to see how it differs from single-instance deployment. The web farm architecture is naturally very similar to Windows Azure multiple-instance deployment.

Even though you can run a single instance for your Windows Azure deployment, it's recommended to have at least two instances per role to meet the 99.95% SLA. Instances sitting behind the Windows Azure load balancer are load-balanced in round-robin fashion.

In a web farm architecture, storing information in each individual instance will not work when the information must be shared across instances. The information could be session state, relational data, or unstructured files. Thus, a central repository is required to ensure that each request from the client is handled consistently. Figure 1 illustrates how multiple instances are deployed in Windows Azure.

multiple-instance architecture

Figure 1: Multi-instance architecture

What are the options for a central repository?

The following summarizes the options that best suit each type of shared information.

  • Session state: several options such as Windows Azure Caching, Windows Azure Storage, and SQL Azure can be used. The options are discussed in detail here.
  • Relational data: SQL Azure is the highly available cloud database service and is your best option. SQL Azure is built on top of SQL Server technologies, so migration from SQL Server is typically quite straightforward.
  • Unstructured files: Windows Azure Storage (particularly Blob Storage) is the preferable option to store unstructured documents or files.

2. Application-Level Security

The second aspect that should be taken into account is application-level security. This eventually leads to the question: how do you manage your user accounts and profiles? Many applications use a database or Active Directory to keep their user profiles. Some also rely on third-party identity providers.

The following describes how each method changes when moving the application to Windows Azure.

  • Database

Storing user accounts inside the database is perhaps the simplest method. As long as the database you are using is compatible with SQL Server 2008, migrating it to SQL Azure shouldn't be too much trouble. The user account tables should be migrated along with the other tables in your database.

If you are using the ASP.NET Membership Provider, migrating to SQL Azure is even easier thanks to the ASP.NET Universal Providers NuGet package.
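As a sketch, the membership provider registration in web.config might look like the following (the type names come from the System.Web.Providers package; the connection-string name and server placeholders are illustrative, not prescriptive):

```xml
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=tcp:[serverName].database.windows.net;Database=myDb;User ID=[login]@[serverName];Password=[password];Encrypt=True;" />
</connectionStrings>
<membership defaultProvider="DefaultMembershipProvider">
  <providers>
    <clear />
    <!-- Universal provider works against SQL Azure as well as SQL Server -->
    <add name="DefaultMembershipProvider"
         type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers"
         connectionStringName="DefaultConnection" />
  </providers>
</membership>
```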

  • Active Directory

Active Directory is a popular choice, especially for corporate applications: it avoids having one person manage different accounts across many applications, since a single user ID is used. With the release of ADFS (Active Directory Federation Services) 2.0, third-party applications, regardless of whether they reside on-premise or in the cloud, can authenticate against a corporate Active Directory account using claims-based authentication.

  • Third Party Identity Provider

Nowadays many applications, especially public-facing websites, rely on third-party identity providers (such as Google ID, Live ID, Facebook, etc.) to perform authentication. Fortunately, Windows Azure offers the Access Control Service, which simplifies the authentication process against multiple identity providers.

3. Overcoming the Shortcomings

Even though cloud solutions provide a wide range of services, there are also some limitations. Knowing what's available and what isn't is the responsibility of cloud architects when designing a cloud solution for their customers. For the features that are unavailable, the architects should provide alternative solutions that meet the requirements.

The following discusses an example of a potential limitation in Windows Azure and how it could be overcome.

Migrating Windows Service to Worker Role

  • Running a batch job as a Windows Service is common. However, installing a Windows Service in the Windows Azure environment can be pretty challenging. In fact, Windows Services are not available out-of-the-box on Windows Azure.
  • The recommended approach is to convert the Windows Service to a Windows Azure Worker Role. This can be implemented in several ways:
    • Some people prefer to migrate manually so that they have more control. The following code snippet illustrates the changes that should be made when migrating a Windows Service to a Worker Role.
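As a rough sketch (one of several possible approaches), the Windows Service's timer loop moves into the Worker Role's Run method; ProcessBookings is a hypothetical placeholder for the existing batch-job logic:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization, roughly equivalent to ServiceBase.OnStart.
        return base.OnStart();
    }

    public override void Run()
    {
        // Replaces the Windows Service timer/loop; Run should never return
        // unless you intend the instance to be recycled.
        while (true)
        {
            ProcessBookings();   // hypothetical: your existing batch-job logic
            Thread.Sleep(TimeSpan.FromMinutes(10));
        }
    }

    private void ProcessBookings()
    {
        // verify booking details, generate tickets, email receipts, etc.
    }
}
```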

4. Diagnostics: Logging and Monitoring

Logging and monitoring are important, as they can be used for tracing exceptions, monitoring performance, and planning for capacity.

Although configuring them is normally not difficult, there are some differences between performing these tasks on-premise and in the cloud. For one thing, you might have many instances in a cloud environment; the instances aren't persistent; and they might produce a massive amount of data.

Now, the goal is to store the diagnostic information persistently, accessibly, and cost-effectively so that the diagnostic information can be viewed and monitored easily.

Windows Azure Diagnostics to collect diagnostic information

Windows Azure Diagnostics (WAD) enables you to collect diagnostic information from your Windows Azure application. WAD transfers the diagnostic information to Windows Azure Storage to ensure its persistence. The transfer can happen either on a schedule or on demand. Since Windows Azure Storage is a highly accessible, competitively priced service, that goal can be accomplished.
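For instance, a scheduled transfer of trace logs can be configured from a role's OnStart method. This is a minimal sketch based on the SDK 1.x Diagnostics API; the connection-string name assumes the standard Diagnostics plugin setting, and the 5-minute period is just an example:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration and add a scheduled transfer.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Push trace logs to Windows Azure Storage every 5 minutes,
        // keeping only warnings and above.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```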

Viewing and Monitoring Diagnostic information with tools

Data transferred to Windows Azure Storage can be accessed either with tools or via the API. Some tools (such as Cerebrata's Azure Diagnostic Manager) enable us to view and monitor the diagnostic information easily through a GUI (graphical user interface), as shown in Figure 2. With that, we are able to take appropriate actions.

Cerebrata Azure Diagnostic Manager

Figure 2 Cerebrata Azure Diagnostic Manager


I haven’t discussed everything that needs to be taken into account, but the four points above are some of the key considerations when moving your applications to Windows Azure. Although some changes might apply, they are normally confined to the architecture and design; you don’t have to change the business logic.

In the next article, I will elaborate in more detail with a case study on moving an application to the cloud: starting from the current scenario, challenges that customer faced, architectural changes, and the final outcome.

This post was also published at A Cloud Place blog.

Posted in Azure, Cloud | 1 Comment

Managing session state in Windows Azure: What are the options?

One of the most common questions in developing ASP.NET applications on Windows Azure is how to manage session state. The intention of this article is to discuss several options to manage session state for ASP.NET applications in Windows Azure.

What is session state?

Session state is usually used to store and retrieve values for a user across ASP.NET pages in a web application. There are four available modes to store session values in ASP.NET:

  1. In-Proc, which stores session state in the individual web server’s memory. This is the default option if a particular mode is not explicitly specified.
  2. State Server, which stores session state in another process, called ASP.NET state service.
  3. SQL Server, which stores session state in a SQL Server database.
  4. Custom, which lets you choose a custom storage provider.

You can get more information about ASP.NET session state here.

In-Proc session mode does not work in Windows Azure

The In-Proc option, which uses an individual web server’s memory, does not work well in Windows Azure. If you have hosted an application in a multi-instance web-farm environment, this will sound familiar: the Windows Azure load balancer uses round-robin allocation across the instances.

For example, suppose you have three instances (A, B, and C) of a Web Role. The first time a page is requested, the load balancer will allocate instance A to handle the request. However, there’s no guarantee that instance A will always handle subsequent requests. Likewise, a value that you set in instance A’s memory can’t be accessed by the other instances.

The following picture illustrates how session state works in multi-instances behind the load balancer.

Figure 1 – WAPTK BuildingASP.NETApps.pptx Slide 10

The other options

1.     Table Storage

The Table Storage Session Provider is a subset of the Windows Azure ASP.NET Providers written by the Windows Azure team. It is, in fact, a custom provider compiled into a class library (.dll file), enabling developers to store session state inside Windows Azure Table Storage.

It works by storing each session as a record in Table Storage. Each record has an expiration column that describes when the session expires if there’s no further interaction from the user.

The advantage of Table Storage Session Provider is its relatively low cost: $0.14 per GB per month for storage capacity and $0.01 per 10,000 storage transactions. Nonetheless, according to my own experience, one of the notable disadvantages of Table Storage Session Provider is that it may not perform as fast as the other options discussed below.

The following code snippet should be applied in web.config when using Table Storage Session Provider.

<sessionState mode="Custom" customProvider="TableStorageSessionStateProvider">
  <providers>
    <clear />
    <add name="TableStorageSessionStateProvider"
         type="Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider" />
  </providers>
</sessionState>

You can get more detail on using Table Storage Session Provider step-by-step here.

2.     SQL Azure

As SQL Azure is essentially a subset of SQL Server, SQL Azure can also be used as storage for session state. With just a few modifications, SQL Azure Session Provider can be derived from SQL Server Session Provider.

You will need to apply the following code snippet in web.config when using SQL Azure Session Provider:

<sessionState mode="SQLServer"
              sqlConnectionString="Server=tcp:[serverName];Database=myDataBase;User ID=[LoginForDb]@[serverName];Password=[password];Trusted_Connection=False;Encrypt=True;"
              cookieless="false" timeout="20" allowCustomSqlDatabase="true" />

For details on how to use SQL Azure Session Provider, you can either:

The advantage of using SQL Azure as a session provider is that it’s cost-effective, especially when you have an existing SQL Azure database. Although it performs better than the Table Storage Session Provider in most cases, it requires you to clean up expired sessions manually by calling the DeleteExpiredSessions stored procedure. Another drawback is that Microsoft does not provide official support for this approach.

3.     Windows Azure Caching

Windows Azure Caching is probably the most preferable option available today. It provides a high-performance, in-memory, distributed caching service, and its session state provider is an out-of-process storage mechanism for ASP.NET applications. Since accessing RAM is much faster than accessing disk, Windows Azure Caching obviously provides the fastest access of all the available options.

Windows Azure Caching also comes with a .NET API that enables developers to easily interact with the Caching Service. You should apply the following code snippet in web.config when using Cache Session Provider:

<sessionState mode="Custom" customProvider="AzureCacheSessionStoreProvider">
  <providers>
    <add name="AzureCacheSessionStoreProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default" useBlobMode="true" dataCacheClientName="default" />
  </providers>
</sessionState>

A step-by-step tutorial for using Caching Service as session provider can be found here.

Besides high-performance access, another advantage of Windows Azure Caching is that it’s officially supported by Microsoft. Despite these advantages, the cost of Windows Azure Caching is relatively high, starting from $45 per month for 128 MB, all the way up to $325 per month for 4 GB.


I haven’t discussed all the available options for managing session state in Windows Azure, but the three I have discussed are the most popular options out there, and the ones that most people are considering using.

Windows Azure Caching remains the recommended option despite its cons, but developers and architects shouldn’t be afraid to decide on a different option if it’s more suitable for them in a given scenario.

This post was also published at A Cloud Place blog.

Posted in ASP.NET, Azure, Azure Development | 1 Comment

An Introduction to Windows Azure (Part 2)

This is the second article of a two-part introduction to Windows Azure. In Part 1, I discussed the Windows Azure data centers and examined the core services that Windows Azure offers. In this article, I will explore additional services available as part of Windows Azure which enable customers to build richer, more powerful applications.

Additional Services

1. Building Block Services

‘Building block services’ were previously branded ‘Windows Azure AppFabric’. The main objective of building block services is to enable developers to build connected applications. The three services under this category are:

(i) Caching Service

Generally, accessing RAM is much faster than accessing disk, including storage and databases. For that reason, Microsoft has developed an in-memory, distributed caching service to deliver low-latency, high-performance access: Windows Server AppFabric Caching. However, some activities, such as installation and management, as well as hardware requirements like investing in clustered servers, have to be handled by the end user.

Windows Azure Caching Service is a managed, distributed, in-memory caching service built on top of Windows Server AppFabric Caching. Developers no longer have to install and manage the caching service or clusters. All they need to do is create a namespace, specify the region, and define the cache size. Everything is provisioned automatically in just a few minutes.

Creating new Windows Azure Caching Service

Additionally, the Azure Caching Service comes with a .NET client library and session providers for ASP.NET, which allow developers to quickly use it in their applications.
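As a brief illustration, storing and retrieving a value with the .NET client library might look like this (a sketch assuming the cache client settings are defined in the dataCacheClient configuration section of web.config/app.config):

```csharp
using Microsoft.ApplicationServer.Caching;

// Reads the <dataCacheClient> settings from configuration.
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetDefaultCache();

// Store and retrieve a value by key.
cache.Put("greeting", "Hello from the cache!");
string greeting = (string)cache.Get("greeting");
```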

(ii) Access Control Service

Third Party Authentication

With the trend for federated identity / authentication becoming increasingly popular, many applications have relied on authentication from third party identity providers (IdPs) such as Live ID, Yahoo ID, Google ID, and Facebook.

One of the challenges developers face when dealing with different IdPs is that they use different standard protocols (OAuth, WS-Trust, WS-Federation) and web tokens (SAML 1.1, SAML 2.0, SWT).

Multiple ID Authentication

Access Control Service (ACS) allows application users to authenticate using multiple IdPs. Instead of dealing with different IdPs individually, developers just need to deal with ACS and let it take care of the rest.

AppFabric Access Control Services

(iii) Service Bus

Windows Azure’s Service Bus allows secure messaging and connectivity across multiple network hierarchies. It enables hybrid model scenarios, such as connecting cloud applications with on-premise systems. The Service Bus allows applications running on Windows Azure to call back to on-premise applications located behind firewalls and NATs.

Service Bus Diagram

Migrating from an on-premise Windows Communication Foundation (WCF) framework to the Service Bus is trivial as they use a similar programming approach.

2. Data Services

Data Services consists of SQL Azure Reporting and SQL Azure Data Sync, both of which are currently available as Community Technology Previews (CTPs).

(i)  SQL Azure Reporting

SQL Azure Reporting aims to provide developers with a service similar to that of the current SQL Server Reporting Service (SSRS), with the advantages of being in the cloud. Developers are still able to use familiar tools such as SQL Server Business Intelligence Development Studio. Migrating on-premise reports is also easy as SQL Azure Reporting is essentially built on top of SSRS architecture.

(ii) SQL Azure Data Sync

SQL Azure Data Sync is a cloud-based data synchronization service built on top of the Microsoft Sync Framework. It enables synchronization between a cloud database and either another cloud database or an on-premise database.

SQL Azure Data Sync

(from Windows Azure Bootcamp)

3. Networking

Three networking services are available today:

(i) Windows Azure CDN

The Content Delivery Network (CDN) caches static content such as video, images, JavaScript, and CSS at the node closest to the user. By doing so, it improves performance and provides the best user experience. There are currently 24 nodes available globally.

Windows Azure CDN Locations

(ii) Windows Azure Traffic Manager

Traffic Manager is designed to enable high performance and high availability of web applications, by providing load-balancing across multiple hosted services in the six available data centers. In its current CTP guise, developers can select one of the following rules:

  • Performance – detects the location of the user traffic and routes it to the best online hosted service based on network performance.
  • Failover – based on an ordered list of hosted services, traffic is routed to the online service highest on the list.
  • Round Robin – equally distributes traffic to all hosted services.

(iii) Windows Azure Connect

Windows Azure Connect supports secure network connectivity between on-premise resources and the cloud by establishing a virtual network environment between them. With Windows Azure Connect, cloud applications appear to reside on the same network environment as on-premise applications.

Windows Azure Connect

(from the Windows Azure Platform Training Kit)

Windows Azure Connect enables scenarios such as:

  • Using an on-premise SMTP Server from a cloud application.
  • Migrating enterprise apps which require an on-premise SQL Server to Windows Azure.
  • Domain-join a cloud application running in Azure to an Active Directory.

4. Windows Azure Marketplace

Windows Azure Marketplace is a centralized online market where developers are able to easily sell their applications or datasets.

(i) Marketplace for Data

Windows Azure Marketplace for Data is an information marketplace allowing ISVs to provide datasets (either free or paid) on any platform, available to the global market. For example, ‘Average House Prices, Borough’ provides annual and quarterly house prices based on Land Registry data in the UK. Developers can then subscribe to and utilize this dataset in their applications.

(ii) Marketplace for Applications

Windows Azure Marketplace for Applications enables developers to publish and sell their applications. Many, if not all, of these applications are SAAS applications built on Windows Azure. Applications submitted to the Marketplace must meet a set of criteria.


To conclude, we have examined the huge investment that Microsoft is making and will continue to make in Windows Azure, the core of its cloud strategy. Three fundamental services (Compute, Storage, and Database) are offered to developers to satisfy the basic needs of developing cloud applications. Additionally, with Windows Azure services, (Building Blocks Services, Data Services, Networking, and Marketplace) developers will find it increasingly easy to develop rich and powerful applications. The foundations of this cloud offering are robust and we should continue to look out for new features to be added to this platform.


This article was written using the following resources as references:

This post was also published at A Cloud Place blog.

Posted in Azure | Leave a comment

Editing your XML documents with Liquid XML Studio

As we know, XML is a popular file format and standard used for many purposes in the IT industry: storing configuration files, storing data, transferring data via web services, and many more.

Nonetheless, I believe most of you have at some point been frustrated with editing and manipulating XML documents.

Recently, I was introduced to a powerful XML editor, Liquid XML Studio, and have been trying it out. In fact, it is more than an editor. Liquid XML Studio comes with the following features:


Check out this link for more detail of Liquid XML Studio!

Leave a comment

An Introduction to Windows Azure (Part 1)

Windows Azure is the Microsoft cloud computing platform which enables developers to quickly develop, deploy, and manage their applications hosted in a Microsoft data center. As a PAAS provider, Windows Azure not only takes care of the infrastructure, but will also help to manage higher level components including operating systems, runtimes, and middleware.

This article will begin by looking at the Windows Azure data centers and will then walk through each of the available services provided by Windows Azure.

Windows Azure Data Centers

Map showing global location of datacenters

Slide 17 of WindowsAzureOverview.pptx (Windows Azure Platform Training Kit)

Microsoft has invested heavily in Windows Azure over the past few years. Six data centers across three continents have been developed to serve millions of customers. They have been built with an optimized power-efficiency mechanism, self-cooling containers, and hardware homogeneity, which differentiates them from other data centers.

The data centers are located in the following cities:

  • US North Central – Chicago, IL
  • US South Central – San Antonio, TX
  • West Europe – Amsterdam
  • North Europe – Dublin
  • East Asia – Hong Kong
  • South-East Asia – Singapore

Windows Azure Datacenters- aerial and internal views

Windows Azure data centers are vast and intricately sophisticated.

Images courtesy of Microsoft

Windows Azure Services

Having seen the data centers, let’s move on to discuss the various services provided by Windows Azure.

Microsoft previously categorized the Windows Azure Platform into three main components: Windows Azure, SQL Azure, and Windows Azure AppFabric. However, with the recent launch of the Metro-style Windows Azure portal, there have been some slight changes to the branding, though the functionality remains similar. The following diagram illustrates the complete suite of Windows Azure services available today.

The complete suite of Windows Azure services available today


A. Core Services

1. Compute

The Compute service refers to computation power, usually in the form of provisioned Virtual Machines (VMs). In Windows Azure, the compute containers are often referred to as ‘roles’. At the moment, there are three types of roles:

(i) Web Roles

Web Roles offer a predefined environment, set up to allow developers to easily deploy web applications. The IIS (Internet Information Services) web server comes preinstalled and preconfigured, ready to host your web application.

(ii) Worker Roles

Worker Roles allow the developer to run an application’s background processes that do not require user interface interaction. They are perfectly suited to processes such as scheduled batch jobs, asynchronous processing, and number-crunching jobs.

(iii) VM Roles

VM Roles enable developers to bring their customized Windows Server 2008 R2 VM to the cloud, and configure it. VM Roles are suitable for cases where the prerequisite software requires lengthy, manual installation.

Using VM Roles has one substantial drawback: unlike Web Roles and Worker Roles, where Windows Azure automatically manages the OS, VM Roles require developers to manage the OS themselves.

Apart from ‘roles’, there are two other essential terms, namely ‘VM Size’ and ‘Instance’.

  • VM Size denotes the predefined specifications that Windows Azure offers for the provisioned VM. The following diagram shows various Windows Azure VM Sizes.

Various Windows Azure VM Sizes, and the associated costs

Slide 21 of WindowsAzureOverview.pptx (Windows Azure Platform Training Kit)

  • Instance refers to the actual VM that is provisioned. Developers will need to specify how many instances they need after selecting the VM Size.

Screenshot showing VM size

2.     Storage

Windows Azure Storage is a cloud storage service that comes with the following characteristics:

The first step in using Windows Azure Storage is to create a storage account by specifying the storage account name and region:

Screenshot- creating a storage account

There are four types of storage abstraction that are available today:

(i) BLOB (Binary Large Object) Storage

Blob Storage provides a highly scalable, durable, and available file system in the cloud. Blob Storage allows customers to store any file type such as video, audio, photos, or text.

(ii) Table Storage

Table Storage provides structured storage that can be used to store non-relational tabular data. A Table is a set of entities, which contain a set of properties. An application can manipulate the entities and query over any of the properties stored in a Table.
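As a sketch using the SDK 1.x StorageClient library, defining an entity and inserting it might look like this (the CustomerEntity type, the "Customers" table name, and the connection-string placeholder are hypothetical):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// An entity is identified by its PartitionKey + RowKey pair.
public class CustomerEntity : TableServiceEntity
{
    public string Email { get; set; }
}

public class TableStorageExample
{
    public static void Insert()
    {
        CloudStorageAccount account =
            CloudStorageAccount.Parse("<storage connection string>");
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Customers");

        // The context tracks entities and persists them on SaveChanges.
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("Customers", new CustomerEntity
        {
            PartitionKey = "SG",
            RowKey = "42",
            Email = "customer@example.com"
        });
        context.SaveChanges();
    }
}
```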

(iii) Queue Storage

Queue Storage is a reliable, persistent message-delivery mechanism that can be used to bridge applications. Queues are often used to reliably dispatch asynchronous work.
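A minimal enqueue/dequeue sketch with the StorageClient library (the "orders" queue name, message format, and connection-string placeholder are hypothetical):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

CloudStorageAccount account =
    CloudStorageAccount.Parse("<storage connection string>");
CloudQueueClient queueClient = account.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("orders");
queue.CreateIfNotExist();

// Producer: dispatch work asynchronously.
queue.AddMessage(new CloudQueueMessage("process-order:1234"));

// Consumer (e.g. a Worker Role): receive, process, then delete.
CloudQueueMessage msg = queue.GetMessage();
if (msg != null)
{
    // ... process the work item ...
    queue.DeleteMessage(msg);
}
```

Note that a message only disappears from the queue once it is explicitly deleted; if the consumer crashes mid-processing, the message becomes visible again, which is what makes queue-based dispatch reliable.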

(iv) Azure Drive

Azure Drive (aka X-Drive) provides the capability to store durable data by using the existing Windows NTFS APIs. Azure Drive is essentially a VHD Page Blob mounted as an NTFS drive by a Windows Azure instance.

3.  Database

SQL Azure is a highly available database service built on existing SQL Server technology. Developers do not have to set up, install, configure, or manage any of the database infrastructure. All they need to do is define the database name, edition, and size, and they are ready to bring their objects and data to the cloud:

Screenshot- creating a database

SQL Azure uses the same T-SQL language, and the same tools such as SQL Server Management Studio, to manage databases. SQL Azure is likely to shift the responsibility of DBAs toward more logical administration, since SQL Azure handles the physical administration. For example, a SQL Azure database is replicated to three copies to ensure high availability.

Although some variations exist today, Microsoft plans to support the features unavailable in SQL Azure in the future. Users can always vote and provide feedback to the SQL Azure team for upcoming feature consideration.

Coming up in my next article, I will carry on the discussion with the additional services that Windows Azure offers, including ‘Building Block Services’, Data Services, Networking, and more, so make sure you keep an eye out for it!

This post was also published at A Cloud Place blog.

Posted in Azure | 4 Comments

Comparing IAAS and PAAS: A Developer’s Perspective

In my previous article, I discussed the basic concepts behind Cloud Computing including definitions, characteristics, and various service models. In this article I will discuss service models in more detail, and in particular the comparison between IAAS and PAAS from a developer’s standpoint.

I’m using two giant cloud players for illustrative purposes: Amazon Web Services representing IAAS and the Windows Azure Platform representing PAAS. Nonetheless, note that the emphasis is on the service models, not the actual cloud players.

Figure 1: IAAS VS PAAS

Infrastructure as a Service (IAAS)

IAAS refers to the cloud service model that provides on-demand infrastructure services to the customer. The infrastructure may comprise rentable resources such as computation power, storage, load balancers, and so on.

As you can see on the left-hand side of Table 1, the IAAS provider is responsible for managing physical resources, for example the network, servers, and clustered machines. Additionally, the provider typically manages the virtualization technology enabling customers to run VMs (virtual machines). When it comes to the operating system (OS), it is often arguable whether it’s managed by the provider or the customer. In most cases, the IAAS provider supplies VM images with a preloaded OS, but the customer will need to manage it subsequently. Using AWS as an example, an AMI (Amazon Machine Image) offers customers several operating systems such as Windows Server, SUSE Linux, and Red Hat Linux. Although the OS is preloaded, AWS will not maintain or update it.

Other software in the stack, including middleware (such as IIS, Tomcat, and caching services), runtimes (the JRE and .NET Framework), and databases (SQL Server, Oracle, MySQL), is normally not provided in the VM image. That’s because the IAAS provider doesn’t know, and doesn’t care, what customers are going to do with the VM; customers are responsible for taking care of these themselves. When all of the above-mentioned software has been set up, customers can finally deploy the application and data on the VM.

Step-by-step: Setting-up an Application on IAAS Environment

To convey a comprehensive explanation, I am going to illustrate the steps involved when setting up an application in an IAAS environment. For that, I’m borrowing a slide from a presentation by Mark Russinovich, at the BUILD conference. This illustration explains how a typical IAAS provisioning model works.


Figure 2: Setting up an App

Consider a common scenario: you have finished developing a multi-tier application, and you as the developer need to deploy it to the cloud. The application needs to be hosted on a web server and an RDBMS database. For IAAS, here are the typical steps:

1. Preparing Database Servers

Select a VM Image from the VM Image library. The VM will then get provisioned and launched. If the DBMS software is not preinstalled, you will need to install it yourself.

2. Preparing Web / Application Servers

Select VM Images from the library to get provisioned and launched. If the web server, application server, or runtime isn't installed, you'll need to install it yourself.

3. Provisioning a Database and Its Objects

The next step is provisioning the database, including configuring the data files, log files, security, and so on. Then you create the tables and add data to them.

4. Deploying Your Application

Next, you take the application you've developed and deploy it to the Web Server.

5. Configuring the Load-Balancer

When you need to host your application on multiple instances, you may also need to configure the IP address for each instance and the load-balancer.

6. Managing Your VMs and DBMS

The final step is managing the VMs. For example, when there's an update or service pack for the OS, the IAAS provider will not apply it for you automatically; you will need to do it yourself.
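The six steps above can be sketched as data, which makes the division of labor explicit (again, an illustration of my own; the step names paraphrase the list above):

```python
# Illustrative sketch of the IAAS setup steps, annotated with who performs
# each one. In IAAS, everything after raw VM provisioning falls to the customer.

IAAS_SETUP = [
    ("provision DB server VM from image",       "provider"),
    ("install DBMS software",                   "customer"),
    ("provision web/app server VM from image",  "provider"),
    ("install web server and runtime",          "customer"),
    ("provision database and objects",          "customer"),
    ("deploy application",                      "customer"),
    ("configure IP addresses / load-balancer",  "customer"),
    ("patch and update the OS over time",       "customer"),
]

def customer_tasks(steps):
    """Return only the steps the customer must perform themselves."""
    return [task for task, owner in steps if owner == "customer"]
```

Running `customer_tasks(IAAS_SETUP)` yields six of the eight steps, which is the point of this section: in IAAS, the provider's job largely ends once the VM is launched.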

Platform as a Service (PAAS)

Now, let’s jump into another cloud spectrum, “PAAS”, to see how it differs. In PAAS, the provisioning model is about an on-demand application hosting environment. Not only managing the component like an IAAS provider would, a PAAS provider will also help customers manage additional responsibilities such as OS, Middleware, Runtime, and even Databases, as can be seen on the right-hand side of  Table 1.

In other words, you can think of PAAS as renting a whole stack of software, hardware, and infrastructure. Customers just need to bring their application and data, and they are ready to go.

Step-by-step: Setting up an Application in a PAAS Environment

For PAAS, given that the database server VM and web server VM are already provisioned, you only need two steps, as illustrated by another slide from Mark Russinovich.


Figure 3: Provision and Deploy

1. Database Provisioning

You might need to indicate where (which region) your virtual DB Server is provisioned, but you don't have to install any DBMS software yourself. You still need to provision the database, create tables, and add data.

2. Deploying Your Application

This step is similar to IAAS: you still need to deploy your application to the PAAS cloud environment.

How about the load-balancer? Taking Windows Azure as an example, it is automatically configured and ready to take traffic, and everything else is automatically managed. You don't have to worry about IP addresses or the load-balancer.

How about maintaining VMs? The DBMS and Web Server VMs will be maintained by the provider. For example:

  • If the VM where your application is hosted has a hardware issue, the provider should detect the failure and rectify it immediately to make sure your application stays up and running. In Windows Azure, the Fabric Controller is the component that handles these kinds of issues.
  • If there are new updates or patches for the Operating System, the provider will make sure that the VM your application sits on is always up to date. For example, Windows Azure uses the "Guest OS Version" to differentiate OS updates. You can also choose to stick to a particular version or opt in to automatic updates.
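In Windows Azure, this Guest OS choice is expressed in the service configuration file. As a sketch (service and role names are illustrative), the `osFamily` attribute selects the OS generation and `osVersion="*"` opts in to automatic Guest OS updates:

```xml
<!-- ServiceConfiguration.cscfg (illustrative fragment) -->
<ServiceConfiguration
    serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="2"
    osVersion="*">
  <!-- osFamily="2"  : Windows Server 2008 R2-compatible Guest OS family -->
  <!-- osVersion="*" : let Azure apply Guest OS updates automatically;
       pin a specific version string instead to stay on one release -->
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```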


Figure 4: Configuration


To summarize, we have examined the service models and provisioning steps of IAAS and PAAS solutions. PAAS providers take on much more responsibility for your solution than an IAAS provider would. On the other hand, IAAS may offer more flexibility at a lower level (for example: public IP addresses, load-balancers, and so on).

There’s no one-size-fits-all here. As a developer or architect, you should understand a customer’s need and determine the correct model to get the best possible outcome.

This post was also published at A Cloud Place blog.
