Windows Azure Virtual Machine: A look at Windows Azure IaaS Offerings (Part 1)

This article looks at the journey Windows Azure has taken from its first launch as a PaaS to the newly announced IaaS offerings. In the latter part of this article, I’ll also provide a quick, hands-on tutorial on how to set up a Windows Azure Virtual Machine.

Starting with PaaS: the stateless VM model

As many of you might be aware, Microsoft started Windows Azure with the PaaS (Platform as a Service) model, which became generally available in February 2010.

With PaaS, Web and Worker Roles were introduced: customers only had to take care of the application and data, not the operating system and infrastructure. The stateless Virtual Machine (VM) concept was also brought into the picture. This means that, at runtime, a VM should not store data locally, as the data will be gone if the VM is reincarnated due to an unexpected event such as a hardware failure. Instead, data should be stored in persistent storage such as SQL Database or Windows Azure Storage.

One primary advantage of this model is that scaling out and in can be done easily. In fact, it’s just a matter of changing a parameter, and within a few minutes the VMs will be provisioned.
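For concreteness, the parameter in question is the role's instance count in the service configuration. A minimal sketch of what this looks like in ServiceConfiguration.cscfg (the service and role names here are illustrative, not from a real deployment):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Changing this count (e.g. from 2 to 4) and saving the configuration
         is all it takes; the fabric provisions or removes VMs to match. -->
    <Instances count="4" />
  </Role>
</ServiceConfiguration>
```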


Figure 1 – Scaling in Windows Azure PaaS “Cloud Services”

Challenges of PaaS

Since its launch, many customers have adopted Windows Azure as their cloud platform, but there have also been many unsuccessful deals because of various stumbling blocks, especially when migrating existing applications to the PaaS model. The following summarizes two major challenges:

1. Migration and portability

When talking about the effort involved in migration, a lot of it depends on the architecture of the application itself. I’ve written a series of articles on moving an application to the PaaS cloud model.

If you’ve decided to migrate your application to PaaS regardless of the effort, what about bringing it back on-premise? That might take considerable effort again. Alternatively, you could maintain two copies of your application source code.

2. Stateless virtual machine

Although there are some techniques to install third-party software on a Windows Azure stateless VM, the installation can only be done when setting up the VM; any changes at runtime won’t be persistent. This prevents customers from installing and running stateful applications on Windows Azure.

Introducing IaaS

With feedback from customers and communities, an initiative of supporting Infrastructure as a Service (IaaS) was finally announced on 7 June 2012 at the Meet Windows Azure event. This is an awesome move by Microsoft bringing more powerful capabilities to the platform and also competing with other cloud providers. Exciting news to customers!

The core of the IaaS support is the Windows Azure Virtual Machine (WAVM). The major difference between this newly launched IaaS VM and the PaaS so-called “Cloud Services” VM is persistence. Yes, the IaaS VM is now persistent, meaning that any change we make at runtime will remain durable, even if the VM is restarted or moved due to a hardware failure. Aside from WAVM, the IaaS offerings also include various new networking features. They offer a rich set of capabilities for establishing connections amongst cloud VMs, and between cloud VMs and on-premise network infrastructure.

Disclaimer: at the time this article was written, the Windows Azure IaaS offerings, including Virtual Machine, are still in Preview. As such, changes may occur before general availability (GA).

Windows Azure Virtual Machine

Windows Azure Virtual Machine stores its disks in the Windows Azure Storage backend. As such, it inherits Storage’s high-availability benefits: each VM image is replicated across three copies.


Figure 2 – Windows Azure Virtual Machine on Blob Storage
(Source: MS TechEd North America 2012 – AZR201.pptx – Slide 30)

The VM is represented in a standard, consistent form: a VHD file. Thus, the VHD can be moved effortlessly from an on-premise virtualized environment (Hyper-V) to Windows Azure, or the other way around, or even to other cloud providers. This gives the customer lots of mobility and portability, and a no-lock-in experience.

Image Mobility

Figure 3 – Image Mobility
(Source: Windows Azure Platform Training Kit – WindowsAzureVirtualMachines.pptx – Slide 11)

Supported OS Images in Windows Azure VM

Windows Azure supports several versions of Windows Server and several distros of Linux as can be seen in the figure below:

Figure 4 – Supported OS in Windows Azure Virtual Machine
(Source: Windows Azure Platform Training Kit – VirtualMachineOverview.pptx – Slide 7)

Some of you might be surprised to see Linux distros on the list. This shows that Microsoft is now heading in a more open direction to reach both Microsoft and open-source customers.

A hands-on tutorial

0. This tutorial requires a Windows Azure subscription. If you don’t have one, you can sign up for the free trial here. As Windows Azure IaaS is still in Preview at the moment, you are also required to request the preview features here. It might take some time for the preview features to be granted.

1. If you are ready with the subscription and preview features, log on to the new Windows Azure Management Portal with your Live ID and password. You will see the following screen once you’ve successfully logged in to the portal.

windows azure management portal

2. To create a Virtual Machine, click the “+ New” button located in the bottom left corner. When the pop-up menu shows up, select Virtual Machine in the left-hand menu and then select FROM GALLERY.

3. (VM OS Selection screen) It will then show the available OS images. Let’s choose Microsoft SQL Server 2012 Evaluation Edition. This is basically Windows Server 2008 R2 with SQL Server 2012 Evaluation Edition pre-installed.

VM OS Selection

4. (VM Configuration Screen) The subsequent step requires us to fill in the VM configurations. Please remember your password; you will need to use it again in later steps.

5. (VM Mode Screen) This screen allows you to define how and where your VM will be stored. Choose the STANDALONE VIRTUAL MACHINE option and enter your preferred DNS name for your service. As mentioned above, WAVM uses Blob Storage to store the VHD. This screen also allows you to choose the Storage Account, Affinity Group, and Subscription.

VM Mode Screen

6. (VM Options Screen) This screen requires you to define the Availability Set of your Virtual Machine. Simply click the accept button, leaving the configuration at its defaults. I will explain more about Availability Sets in a subsequent article.

7. If everything goes well, you will see the VM is being provisioned.

It might take a few minutes for the VM to be ready; you will see the status change to Running. You can then click on the VM to see its details.

8. Clicking “Connect” will download an RDP file. Open the RDP file and you should see the Windows Security pop-up. Enter the password that you specified in step 4.

9. When it prompts you with a certificate error, simply accept it by clicking “Yes”.

10. As can be seen, I’ve successfully RDP’d into the VM. Most importantly, any changes that we make now (at runtime) will be persistent.

You can also see that SQL Server 2012 is pre-installed for us.

Coming Up Next

In the next article, we will continue to look at Windows Azure Virtual Machine in more detail, including disk and images concepts, networking features, the combination of PaaS and IaaS, and so on. Stay tuned.


An Independent Review of Explorer Tools for Windows Azure Blob Storage

Windows Azure Blob Storage

Windows Azure Storage is one of the core components of Windows Azure, offering a scalable, highly available, and competitively priced storage option. Amongst the abstractions in Azure Storage (alongside Table Storage and Queue Storage), Blob Storage is perhaps the most widely used service. Blob Storage allows us to store any unstructured text or binary data, such as video, audio, and images.

Blob Storage can be accessed either programmatically through the API or interactively through explorer tools. This article discusses and reviews several popular explorer tools for Blob Storage.

Reviews and Ratings

Disclaimer

The reviews and ratings are entirely my own opinion and preference, based on my personal experience using each product and on what I consider important.

Measurement Criteria

This review will examine these products using the following four dimensions:

  • User interface and experience
    I’ll look at how usable the product is. Have the user interface and experience been designed to be comfortable and user friendly?
  • Basic features
    This category covers the standard, basic functionality for dealing with Blob Storage, including operations such as copying/moving files and managing security and access.
  • Advanced settings
    This dimension measures how flexible and configurable the product is. This includes the ability to adjust settings or preferences such as defining block size, retry policy, bandwidth settings, and so on.
  • Other notable features
    This metric is about supplementary features that enrich the product, making the product more powerful and beneficial for users. This might include innovative features such as multi-language support, directory comparison, graphical user interface for logging, etc.

For each measurement, I’ll provide a brief description and a rating ranging from 1 to 5, where 1 means the product provides a poor experience or lacks capability and 5 means the product provides an awesome experience. Additionally, I’ll give N/A (not applicable) where a product doesn’t have any applicable features.

1. Cloud Storage Studio 2 by Cerebrata

We start the review with Cloud Storage Studio 2 (CSS2) from Cerebrata, a company acquired by Red Gate in October 2011. CSS2 is an explorer tool not only for Blob Storage, but also for Table and Queue Storage.

The product costs $195 for a Professional License (volume discounts apply). Customers are encouraged to try it out with a 30-day free trial.

a) User interface and experience

CSS2 provides a powerful UI grouping concept and navigation, enabling users to group related storage accounts and subscriptions together, as can be seen in Figure 1.


Figure 1 – Cloud Storage Studio UI

The Navigation (in red) and Tabs (in yellow) areas look good to me. However, I find the Explorer Area (in blue) tedious. Copying files and directories prompts a dialog box that only allows us to copy blobs within the same container, as can be seen in Figure 2. I believe there should be a more intuitive way to implement this.


Figure 2 – Cloud Storage Studio Copying Blobs

Rating: 3.5

b) Basic features

I would say it satisfies most basic needs when dealing with Blob Storage. From managing containers and displaying directories all the way down to the individual blob level, everything is properly supported.

Rating: 5.0

c) Advanced settings

CSS2 provides powerful settings that enable users to easily define the configuration settings.


Figure 3 – Cloud Storage Studio Configuration Settings

Rating: 4.5

d) Other notable features

One of the features I like most in CSS2 is the graphical UI for Storage Analytics logging and metrics. It provides a really expressive experience and has a good look and feel.


Figure 4 – View Storage Analytics Data

Rating: 4.0

2. CloudXplorer by ClumsyLeaf

CloudXplorer is a lightweight yet handy explorer tool from ClumsyLeaf Software. It is very popular and has been used by many people, including Microsoft folks at various events.

CloudXplorer is entirely free-of-charge, downloadable from here.

a) User interface and experience

CloudXplorer comes with a Windows Explorer-like user interface, providing a friendly experience, especially for Windows users. Uploading and downloading blobs are implemented with a “Copy / Cut and Paste” experience, just as when dealing with local files.


Figure 5 – CloudXplorer User Interface

Rating: 5.0

b) Basic features

I would say it satisfies most of the basic needs.

Rating: 5.0

c) Advanced settings

I didn’t find any options for users to define advanced configuration or settings.

Rating: N/A

d) Other notable features

Unfortunately, I also didn’t find any fancy features in CloudXplorer.

Rating: N/A

3. CloudBerry Explorer for Azure Blob Storage by CloudBerry Lab

The last product I’m reviewing is CloudBerry Explorer. CloudBerry Lab offers many great products focusing on explorer tools and online backup for various cloud providers such as Amazon AWS, Windows Azure, and Rackspace.

Furthermore, CloudBerry Explorer supports multiple languages: English, Chinese, and Japanese. The product comes in two versions:

  • A free version
  • A PRO version, purchasable at $39.99

Check out the following for a comparison between the two.

a) User interface and experience


Figure 6 – CloudBerry Explorer User Interface

Rating: 4.5

b) Basic features

Like the other two tools, I would say it satisfies most needs.

Rating: 5.0

c) Advanced settings

CloudBerry Explorer also provides powerful and flexible options for users to configure settings such as bandwidth, chunk size, encryption, etc. However, I noticed that a few features, such as encryption and compression, are only available in the PRO version.


Figure 7 – CloudBerry Explorer Options

Rating: 5.0

d) Other notable features

My favorite feature of CloudBerry Explorer is Compare and Sync Folders. This is an extremely useful feature enabling us to compare and sync cloud or local folders. As seen in the screenshot below, the tool shows the comparison result in two panes. We can then choose to sync left to right, right to left, or in both directions.

Figure 8 – CloudBerry Explorer Compare and Sync Folders

Rating: 4.5

Conclusion

We have gone through three explorer tools for Windows Azure Blob Storage. I would say all three products are pretty awesome, and each has advantages over the others. The following table summarizes the reviews and ratings we’ve come across.

In conclusion, if you need a simple and lightweight explorer, CloudXplorer is probably the way to go. However, if you need more flexible settings and innovative features, you should consider Cloud Storage Studio or CloudBerry Explorer.


Debugging or Running an ASP.NET Application without Windows Azure Compute Emulator

Recently, one of my .NET developers who was involved in a Windows Azure Project came and asked me two questions:

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project? It takes about 15 to 30 seconds to debug a Windows Azure Project, but only 8 to 15 seconds to debug an ASP.NET project.


Figure 1 – Debugging a Windows Azure Project takes longer than an ASP.NET project

2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure application?


Figure 2 – Setting ASP.NET Web Application as Startup Project

I’ve been looking at the online discussions around these issues and have found that they’re very popular questions.

This article will answer and explain these two questions in more detail, including how it really works under the hood, tips and tricks to overcome the issue, and identified limitations.

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project?

Windows Azure development tools and SDK

First of all, I need to explain how the Windows Azure development tools and SDK work.

Microsoft enables developers to develop .NET applications targeting Windows Azure easily with the help of Windows Azure SDK (Software Development Kit). The SDK includes assemblies, samples, documentation, emulators, and command-line tools to build Windows Azure applications.

The emulator is designed to simulate the cloud environment, so developers don’t have to be connected to the cloud at all times. The two emulators are: Compute Emulator that simulates the Azure fabric environment and Storage Emulator that simulates the Windows Azure Storage. Apart from emulators, the two important command-line tools are CSPack that prepares and packages the application for deployment and CSRun that deploys and manages the application locally. Other command-line tools can be found here.

Apart from the SDK, there’s an add-in called Windows Azure Tools for Microsoft Visual Studio that extends Visual Studio 2010 to enable the creation, configuration, building, debugging, running, packaging, and deployment of scalable web applications and services on Windows Azure. After installing it, you will find a new “cloud” template (as can be seen in Figure 3) when adding a new project. Furthermore, it encapsulates the complexity of running the tools and other commands behind the scenes when we build, run, and publish a Windows Azure Project with Visual Studio.


Figure 3 – Windows Azure project template

The reason why it takes longer

It is true that it takes more time to debug or run a Windows Azure Project than a typical ASP.NET project.

In fact, there’s a reasonable rationale behind it. When we debug or run a Windows Azure cloud project, all the associated projects (Web / Worker Roles) are compiled and packed into a csx directory. Afterwards, Visual Studio lets CSRun deploy and run your package. The Compute Emulator then sets up and hosts as many web applications in IIS as we specify in the Instance Count property.


Figure 4 – Websites are being set up in IIS when running Windows Azure Project

Since the Full IIS capability was introduced in SDK 1.3, web applications on Windows Azure involve two processes: w3wp.exe, which runs your actual ASP.NET application, and WaIISHost.exe, which runs your RoleEntryPoint in WebRole.cs / WebRole.vb.

As can be seen, there are more steps involved in debugging or running a Windows Azure Project. This explains why it takes longer to debug or run a Windows Azure Project on the Compute Emulator than to debug or run an ASP.NET project on IIS or the ASP.NET Development Server (Cassini), which is more straightforward.

2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure Project?

Jumping into the next question, is it possible to debug or run ASP.NET project instead of Windows Azure project?

The answer is yes. You can do so simply by setting the ASP.NET project as the startup project. However, there are some caveats:

1. Getting configuration settings from Windows Azure Service Configuration

People often store settings in ServiceConfiguration.cscfg in their Windows Azure Project. You can get a setting value by calling RoleEnvironment.GetConfigurationSettingValue(“Setting1”). However, you will run into an error when debugging or running the ASP.NET project.


Figure 5 – Error when calling RoleEnvironment.GetConfigurationSettingValue in ASP.NET Project

The reason for this error is that the ASP.NET project is unable to call GetConfigurationSettingValue, as the setting belongs to the Windows Azure Project and the role environment is not available.

The Resolution

To resolve this error, there’s a trick we can apply, as shown in the following code fragments. The idea is to encapsulate the settings retrieval in a get property. With RoleEnvironment.IsAvailable, we can determine whether the code is running in the Windows Azure environment or as a typical ASP.NET project. If it isn’t running in the Windows Azure environment, we get the value from web.config instead of ServiceConfiguration.cscfg. Of course, we also need to store the setting somewhere else, such as appSettings in the web.config file.

<code class="language-csharp">public string Setting1
{
    get
    {
        if (RoleEnvironment.IsAvailable)
            return RoleEnvironment.GetConfigurationSettingValue("Setting1");
        else
            return ConfigurationManager.AppSettings["Setting1"];
    }
}</code>

Code Fragment 1.1 – Encapsulating the setting with Get property

<code class="language-xml"><Role name="AspNetWebApplication">
  <Instances count="3" />
  <ConfigurationSettings>
    <Setting name="Setting1" value="running on Windows Azure environment" />
  </ConfigurationSettings>
</Role></code>

Code Fragment 1.2 – Setting in ServiceConfiguration.cscfg

<code class="language-xml"><appSettings>
  <add key="Setting1" value="running as typical ASP.NET project"/>
</appSettings></code>

Code Fragment 1.3 – Setting in web.config

2. Loading a storage account

We normally store the Storage Account connection string in the Service Configuration settings as well.


Figure 6 – Setting Storage Connection String in Service Configuration

As such, you might run into a similar error when running the ASP.NET project.

The Resolution

We use a similar technique to resolve this, but with a slightly different API. If RoleEnvironment.IsAvailable returns false, we get the value from appSettings in web.config. If the value indicates Development Storage, we load CloudStorageAccount.DevelopmentStorageAccount; otherwise, we parse the connection string loaded from appSettings in the web.config file. The following code fragments illustrate how you should write your code and configuration.

<code class="language-csharp">CloudStorageAccount storageAccount;
if (RoleEnvironment.IsAvailable)
{
    storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
}
else
{
    string cs = ConfigurationManager.AppSettings["DataConnectionString"];
    if (cs.Equals("UseDevelopmentStorage=true"))
        storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
    else
        storageAccount = CloudStorageAccount.Parse(cs);
}</code>

Code Fragment 2.1 – Loading the storage account

<code class="language-xml"><appSettings>
  <add key="DataConnectionString"
       value="DefaultEndpointsProtocol=https;AccountName={name};AccountKey={key}"/>
  <!--<add key="DataConnectionString" value="UseDevelopmentStorage=true"/>-->
</appSettings></code>

Code Fragment 2.2 – Setting in web.config

<code class="language-xml"><Role name="WebRole1">
  <Instances count="1" />
  <ConfigurationSettings>
    <Setting name="DataConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName={name};AccountKey={key}" />
    <!--<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />-->
  </ConfigurationSettings>
</Role></code>

Code Fragment 2.3 – Setting in ServiceConfiguration.cscfg

An important note: you will still need to turn on Windows Azure Storage Emulator when using this technique.

Catches and Limitations

Although these tricks work in most cases, there are several catches and limitations identified:

  • The technique is only applicable for ASP.NET Web Role, but not Worker Role.
  • Apart from the two issues identified, logging with Windows Azure Diagnostics may not work. This may not be a serious concern, as we are talking about the development phase, not production.
  • You are unable to simulate multiple instances when debugging or running ASP.NET project.

Conclusion

To conclude, this article has answered two questions. We have identified some caveats as well as tricks to overcome these issues.

Although this technique is useful for avoiding debugging or running a Windows Azure Project, it doesn’t mean you never need to run it as a Windows Azure Project again. I would still recommend occasionally running the Windows Azure Project to ensure that your ASP.NET project targets Windows Azure perfectly.



Installing Third Party Software on Windows Azure – What are the options?

I have seen this question asked many times now: “How do I install third party software on Windows Azure?” This is a reasonably important question to address as Windows Azure applications often need to use third party software components.

In some cases, using a software component can be as simple as adding a reference to it. You can also set the Copy Local property to True to bring the component along with your service package to the cloud. However, in some cases a proper installation is required, because the installation does more than just copy the component to the system (for example, modifying the registry, registering the components in the GAC, etc.). One example is installing Report Viewer on a Web Role to display reports.

This article will explain three techniques you can use to install third party software on Windows Azure. We will cover why and how to install third party software, and the catches that come with each technique.

Before diving into the specific techniques, let’s refresh the concept behind the current version of Windows Azure PaaS as it relates to what we’ll be discussing.

Design for Scale: Windows Azure Stateless VM

Windows Azure emphasizes the application philosophy of scaling out (horizontally) instead of scaling up (vertically). To achieve this, Windows Azure introduced the stateless virtual machine (VM). This means a VM’s local disks are not used for persistent storage: any changes made after the VM is provisioned will be gone if the VM is re-imaged, which can happen if a hardware failure occurs on the machine hosting the VM.

Windows Azure persistent storage

Figure 1 – Windows Azure Stateless VM and Persistent Storage

Instead, the recommended approach is to store data to dedicated persistent storage such as SQL Azure or Windows Azure Storage.

Now, let’s discuss each technique to install software on Windows Azure in more detail.

Technique 1: Manual Installation through RDP

The first technique we discuss here is the fastest and easiest, but unfortunately also the most fragile. The idea is to remote desktop (RDP) into a specific instance and perform the installation manually. This might sound silly to some of you given the stateless VM concept we just discussed. Nonetheless, this technique is pretty useful in staging or testing environments, when we need to quickly assess whether specific software can run in a Windows Azure environment.

The Catch

The software installed will not be persistent.

NOTE: Do not use this technique in production.

Technique 2: Start-up Task

The second technique we cover here is the Start-up Task. In my opinion, this will probably be the best solution, depending on your circumstances. The idea of a Start-up Task is to execute a script (in the form of a batch file) prior to role initialization. As it is always executed prior to role initialization, it will still be executed even if the instance is re-imaged.

How to?

1. Preparing your startup script

Create a file named startup.cmd using Notepad or another ASCII editor. Copy the following example and save it.

powershell -c "(new-object system.net.webclient).downloadfile('http://download.microsoft.com/download/E/A/1/EA1BF9E8-D164-4354-8959-F96843DD8F46/ReportViewer.exe', 'ReportViewer.exe')"
ReportViewer.exe /passive
  • The first line downloads a file from the given URL to local storage.
  • The second line runs the installer “ReportViewer.exe” in passive mode. We should install using passive or silent mode so that no dialog screens pop up. Note that each installer may have a different silent or passive mode installation parameter.

2. Including startup.cmd to Visual Studio

The next step is to include your startup.cmd script in Visual Studio. To do that, simply right-click on the project name and choose “Add Existing Item”. Browse to the startup.cmd file. Next, set “Copy to Output Directory” to “Copy always” to ensure that the script will be included in your package when it is built.


Figure 2 – Including startup.cmd in the Service

3. Adding a Startup Task in your ServiceDefinition.csdef file

The final step is to add a startup section to the ServiceDefinition.csdef file, specifically below the intended Role tag, as illustrated in the figure below.

Adding Startup Task in ServiceDefinition.csdef

Figure 3 – Adding Startup Task in ServiceDefinition.csdef

  • The commandLine attribute requires the path of our startup script
  • The executionContext attribute requires us to choose either:
    • elevated (which will run as admin-role) or
    • limited (non admin-role)
  • The taskType attribute has the following options:
    • Simple [Default] – System waits for the task to exit before any other tasks are launched
    • Background – System does not wait for the task to exit
    • Foreground – Similar to background, except role is not restarted until all foreground tasks exit
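Putting the three attributes together, a startup section in ServiceDefinition.csdef might look like the following sketch (the service and role names here are illustrative; the script path assumes startup.cmd sits at the root of the role project):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <!-- Run startup.cmd with administrator rights before the role starts;
           taskType "simple" blocks role start until the task exits. -->
      <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```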

The Catches

Here are some situations where a startup task cannot be used:

1. Installation that cannot be scripted out

2. Installation that requires a lot of user involvement

3. Installation that takes a very long time to complete

Technique 3: VM Role

The final technique we are looking at is the VM Role. In fact, one of the reasons Microsoft introduced the VM Role is to address scenarios that couldn’t be handled by a Startup Task.

In reality, VM Role is just another option amongst the Windows Azure Compute Roles. However, unlike Web and Worker Roles, you have more responsibility when using VM Role. People often make the mistake of treating VM Role as IaaS. This is not appropriate, as VM Role still inherits behaviors from Web and Worker Roles: it can still be easily scaled out, and, similarly, data stored on a VM Role’s local disk is considered non-persistent.

The following figure illustrates the lifecycle of VM Role.

Figure 4 – VM Role Lifecycle from the Windows Azure Platform Training Kit. Find the whole PowerPoint presentation here: http://acloudyplace.com/wp-content/uploads/2012/05/MovingApplicationsToTheCloudWithVMRole.pptx

Let’s drill down into the first step, “Build VM Image”, in more detail. Several tasks should be done here. The first is to create the VHD that contains the operating system. The next step is to install the Windows Azure Integration Components onto the image. Subsequently, you can install and configure the third-party software. Finally, you run SysPrep to generalize the VM image.

The Catches

There are several catches when using VM Role:

1. You will have more responsibility when using VM Role, including: building, customizing, installing, uploading, and eventually maintaining the VM image.

2. Up to now, the only supported OS for VM Role is Windows Server 2008 R2.

3. At the time of writing this article, VM Role is still in beta. As we know, significant changes may happen to a beta product.

Conclusion

We have covered three techniques for installing software on Windows Azure. Although the Startup Task remains the recommended option in most cases, it may not always be the most suitable. RDP and VM Role can be advantageous depending on the scenario.


This post was also published at A Cloud Place blog.


Moving applications to the cloud: Part 3 – The recommended solution

We illustrated Idelma’s case study in the last article. This article continues from where we left off, looking at how a partner, Ovissia, would provide a recommended solution. Just as a reminder, Idelma had some specific requirements for migration including: cost-effectiveness, no functional changes for the user, and the proprietary CRM system stays on-premise.

Having analyzed the challenges Idelma faces and the requirements it mentioned, Ovissia’s presales architect Brandon gets back to Idelma with the answers. In fact, some of the migration techniques are referenced from the first post in this series.

Cloud Architecture


Figure 1 – TicketOnline Cloud Architecture

Above is the recommended cloud architecture diagram when moving TicketOnline to the cloud. As can be seen from the figure, some portions of the system will remain similar to the on-premise architecture, while others shift towards the cloud-centric architecture.

Let’s take a look at each component in more detail.

1. Migrating a SQL Server 2005 database to SQL Azure

SQL Azure is a cloud-based database service built on SQL Server technologies. At the moment, the version of SQL Server most similar to SQL Azure is SQL Server 2008.

There are several ways to migrate a SQL Server database to SQL Azure. One of the simplest is to use the “Generate and Publish Scripts” wizard from SQL Server 2008 R2 / 2012 Management Studio.


Figure 2 – SQL Server Management Studio 2008 R2 (Generate Scripts Wizard)

Another option is to use a third-party tool such as the SQL Azure Migration Wizard.

After the database has been successfully migrated to SQL Azure, connecting to it from the application is as straightforward as changing the connection string.
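For illustration, the change might look like this in web.config (the server name, database name, and credentials below are placeholders):

```xml
<connectionStrings>
  <!-- Before: on-premise SQL Server 2005 -->
  <!--
  <add name="TicketOnlineDb"
       connectionString="Server=SQLSRV01;Database=TicketOnline;Integrated Security=True;" />
  -->
  <!-- After: SQL Azure; note the tcp: prefix, user@server login, and Encrypt=True -->
  <add name="TicketOnlineDb"
       connectionString="Server=tcp:myserver.database.windows.net;Database=TicketOnline;User ID=login@myserver;Password=secret;Trusted_Connection=False;Encrypt=True;" />
</connectionStrings>
```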

2. Relaying on-premise WCF Service using Windows Azure Service Bus

One of Idelma’s requirements states that the CRM web service must remain on-premise and also must be consumed securely. To satisfy this, Ovissia recommends using Windows Azure Service Bus which provides messaging and relay capability across multiple network hierarchies. With the relay capability and secured by Access Control Service, it enables the hybrid model scenario such that the TicketOnline web application is able to securely connect back to the on-premise CRM Service.

2.1 Converting ASMX to WCF

Despite its powerful capabilities, Service Bus requires a WCF service rather than an ASMX web service. Thus, the current ASMX web service should be converted to a WCF service. The MSDN library provides a walkthrough on migrating an ASMX web service to WCF.
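Conceptually, the conversion replaces the ASMX [WebService]/[WebMethod] attributes with an explicit WCF contract. A minimal sketch, in which ICrmService, CustomerProfile, and CustomerRepository are hypothetical names standing in for Idelma’s real CRM types:

```csharp
using System.ServiceModel;

// Before (ASMX): a [WebService] class exposing [WebMethod] operations.
// After (WCF): an explicit service contract plus an implementation class.

[ServiceContract]
public interface ICrmService
{
    [OperationContract]
    CustomerProfile GetCustomerProfile(string customerId);
}

public class CrmService : ICrmService
{
    public CustomerProfile GetCustomerProfile(string customerId)
    {
        // The same business logic that used to live in the ASMX code-behind.
        return CustomerRepository.Find(customerId);
    }
}
```

Once the contract exists, exposing it over a Service Bus relay binding is mostly a configuration change.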

3. Converting to Web Role

Windows Azure Web Role is used to host web applications running on Windows Azure. Therefore, it is an ideal component to host the TicketOnline web application. Hosting on Windows Azure Web Role requires an ASP.NET web application, not an ASP.NET website. Please refer to this documentation for the difference between the two. The MSDN library also provides a detailed walkthrough on how to convert a web site to web application in Visual Studio.

When the website has been converted to a web application project, it is one step closer to a Web Role. In fact, there are only three differences between the two, as can be seen in the following figures.


Figure 3 – Differences between Web Role VS ASP.NET Web Application (Windows Azure Platform Training Kit – BuildingASPNETApps.pptx, slide 7)

4. Converting Windows Service Batch Job to Windows Azure Worker Role

Running Windows Service on Windows Azure can be pretty challenging. In fact, Windows Service is not available out-of-the-box on Windows Azure. The recommended approach is to convert the Windows Service to a Windows Azure Worker Role. You may refer to section 3 of the first article in this series for further explanation.

5. Conventional File System to Windows Azure Blob Storage

Idelma uses a conventional file server to store documents and images. When moving the application to Windows Azure, the ideal option is to store them in Windows Azure Storage, particularly Blob Storage. Not only is it cost-effective, but Windows Azure Storage also provides highly available and scalable storage services.

However, migrating from a conventional file system to Blob storage requires some effort:

  • First, the API – the way the application accesses Blob Storage. For a .NET application, Windows Azure provides a Storage Client Library for .NET which enables .NET developers to access Windows Azure Storage easily.
  • Second, migrating existing files – this can be done through explorer tools such as Cloud Xplorer or Cloud Storage Studio.
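As a rough sketch of the first point, uploading a file with the Storage Client Library for .NET of that SDK generation might look like this (the account credentials, container name, and file paths are placeholders):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Connect using a storage connection string (placeholder account and key).
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
CloudBlobClient client = account.CreateCloudBlobClient();

// Containers play a role similar to top-level folders on the old file server.
CloudBlobContainer container = client.GetContainerReference("images");
container.CreateIfNotExist();

// Upload an existing file from the conventional file system.
CloudBlob blob = container.GetBlobReference("shows/concert-banner.jpg");
blob.UploadFile(@"D:\FileServer\shows\concert-banner.jpg");
```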

6. Configuration changes

Today, many applications (including TicketOnline) store settings such as application configuration and connection strings in .config files (app.config / web.config). The .config file is stored on each individual virtual machine (VM), and storing settings there has a drawback: if you need to change any of them, a re-deployment is required.

In the cloud, the recommended solution is to store settings in an accessible, centralized medium such as a database. But if you just need to store key-value pair settings, ServiceConfiguration.cscfg is a good choice: changing settings in ServiceConfiguration.cscfg does not require a re-deployment, and each VM will always get the latest settings.

There’s a little bit of work to do when moving a setting from .config to ServiceConfiguration.cscfg. The following snippet shows the difference between the two.

// Getting a setting from .config files (app.config / web.config);
// AppSettings values are already strings, so no ToString() is needed
string settingFromConfigFiles = ConfigurationManager.AppSettings["name"];

// Getting a setting from ServiceConfiguration.cscfg
string settingFromAzureConfig = RoleEnvironment.GetConfigurationSettingValue("name");
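For completeness, a setting read this way must be declared in the service definition and given a value in the service configuration. A minimal fragment, reusing the setting name "name" from the snippet above:

```xml
<!-- ServiceDefinition.csdef: declare the setting -->
<ConfigurationSettings>
  <Setting name="name" />
</ConfigurationSettings>

<!-- ServiceConfiguration.cscfg: supply the value; editable without redeployment -->
<ConfigurationSettings>
  <Setting name="name" value="some value" />
</ConfigurationSettings>
```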

7. Sending Email on Windows Azure

The current architecture shows that email is sent through an on-premise SMTP server. If there is a requirement to keep using on-premise SMTP to send email, we could propose either a similar relay technique using Service Bus or Windows Azure Connect to group the cloud VMs and the on-premise SMTP server together.

Another option is to use a third-party SMTP provider. Recently, Microsoft partnered with SendGrid to give Windows Azure subscribers a special offer of 25,000 free emails per month. This serves as a value-added service for Windows Azure customers at no extra charge.
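Because SendGrid exposes a standard SMTP endpoint, the application can keep using System.Net.Mail and only swap the host and credentials. A sketch with placeholder addresses and credentials:

```csharp
using System.Net;
using System.Net.Mail;

// Send the generated ticket and receipt through SendGrid's SMTP relay.
SmtpClient smtp = new SmtpClient("smtp.sendgrid.net", 587)
{
    Credentials = new NetworkCredential("sendgridUser", "sendgridPassword")
};

MailMessage message = new MailMessage(
    "noreply@ticketonline.example", "customer@example.com",
    "Your TicketOnline receipt", "Please find your ticket attached.");

smtp.Send(message);
```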

8. Logging on the Cloud

Currently, TicketOnline stores its logs in a database. Although this works well with a SQL Azure database, it may not be the most cost-effective option, as SQL Azure charges approximately $10 per GB per month. Over time, the log will keep growing and might result in a high running cost for the customer.

Remember, a workable solution is not enough; the solution should be cost-effective as well.

Windows Azure Storage is another option to store diagnostic data and logs. In fact, Windows Azure Diagnostic makes Windows Azure Storage the best option to store logs and diagnostic information. More details on Windows Azure Diagnostic can be found in section 4 of the first article in this series.

Conclusion

To conclude, this article provides a recommended solution to the challenges Idelma faces. You can see the difference between the on-premise and cloud architectures. This article also explains the various components of the proposed solution.

Of course, this is not the only available solution as there may be other variations. There is no one-size-fits-all solution and there are always trade-offs among solutions. Finally, I hope this series on Moving Applications to the Cloud brings you some insight, especially for those who are considering moving applications to the cloud.



Moving Applications to the Cloud: Part 2 – A Scenario-Based Example

In my last post, I discussed some of the key considerations when moving an application to the cloud. To provide a better understanding, I’m using a simple scenario-based example to illustrate how an application could be moved to the cloud.

This article will explain the challenges a company might face, the current architecture of the example application, and finally what the company should expect when moving an application to the cloud. My next article will discuss the recommended solution in more detail.

Disclaimer

The company name, logo, business, scenario, and incidents are used fictitiously. Any resemblance to an actual company is entirely coincidental.


Background

Idelma is a ticket-selling provider that sells tickets to concerts, sports events, and music gigs. Tickets are sold offline through ticket counters and online through a website called TicketOnline.

Customers visiting TicketOnline can browse the list of available shows, find out more information on each show, and finally purchase tickets online. When a ticket is purchased, it is reserved but not processed immediately. Other processes, such as generating the ticket and sending it along with the receipt, are done asynchronously a few minutes later.

Current Challenges

During peak season (typically July and December), TicketOnline suffers from heavy traffic that causes slow response times. Off-peak traffic is normally about 100,000 to 200,000 hits per day, with an average of 8 to 15 ongoing shows. In peak season, traffic may reach five to seven times the off-peak level.

The following diagram illustrates the web server hits counter of TicketOnline over the last three years.

Figure 1 – TicketOnline web server hits counter for the last three years

Additionally, the current infrastructure setup is not designed to be highly-available. This results in several periods of downtime each year.

The options: on-premise vs cloud

Idelma’s IT Manager, Mr. Anthony, recognizes the issues and decides to make some improvements to bring better competitive advantage to the company. While reading an article online, he discovered that cloud computing may be a good solution to address the issues. Another option would be to purchase a more powerful set of hardware that could handle the load.

With that, he has done a pros and cons analysis of the two options:

  • On-premise hardware investment

There are at least two advantages to investing in more hardware. First, they will have full control over the infrastructure and can use the servers for other purposes when necessary. Second, little or no modification to the application might be needed, depending on how it is architected and designed: if they decide to scale up (vertically), they might not need any changes, but if they decide to scale out (horizontally) to a web-farm model, a re-design would be needed.

On the other hand, there are also several disadvantages to on-premise hardware investment. For one, the upfront investment in hardware and software is relatively expensive. Next, they would need to answer questions such as: How much hardware and software should be purchased? What are the hardware specifications? If capacity planning is not done properly, it leads either to wasted capacity or to insufficient capacity. Another concern is that adding more hardware might require more manpower as well.

  • Cloud

For cloud computing, there’s almost no upfront investment required for hardware, and in some cases software doesn’t pose a large upfront cost either. Another advantage is that the cloud’s elastic nature fits TicketOnline’s periodic bursts very well; remember, they face high load only in July and December. A further advantage is reduced responsibility: the administrator has more time to focus on managing the application, since the infrastructure is managed by the provider.

Though there are a number of advantages, there are also some disadvantages to choosing a cloud platform. For one thing, they will have less control over the infrastructure. As discussed in the previous article, there might also be some architectural changes needed when moving an application to the cloud; however, these can be dealt with as a one-time effort.

The figure below summarizes the considerations between the two options:

Figure 2 – Considerations of an On-premise or Cloud solution

After looking at his analysis, Mr. Anthony believes the cloud will bring more competitive advantage to the company. Understanding that Windows Azure offers various services for building internet-scale applications, and that Idelma is an existing Microsoft customer, Mr. Anthony decided to explore Windows Azure. After evaluating the pricing, he is even more comfortable moving ahead.

Quick preview of the current system

Now, let’s take a look at the current architecture of TicketOnline.

Figure 3 – TicketOnline Current Architecture

  • TicketOnline web application

The web application is hosted on a single physical server running Windows Server 2003 R2 as the operating system, with Internet Information Services (IIS) 6 as the web server and ASP.NET 2.0 as the web application framework.

  • Database

SQL Server 2005 is used as the database engine, mainly storing relational data for the application. Additionally, it also stores logs such as trace logs, performance-counter logs, and IIS logs.

  • File server

Unstructured files such as images and documents are stored separately in a file server.

  • Interfacing with another system

The application needs to interface with a proprietary CRM system running on a dedicated server to retrieve customer profiles through an ASMX web service.

  • Batch Job

As mentioned previously, receipt and ticket generation happen asynchronously after a purchase is made. A scheduler-based batch job performs these asynchronous tasks every 10 minutes, including verifying booking details, generating tickets, and emailing the ticket along with the receipt to the customer. The intention of the asynchronous process is to minimize the concurrent access load as much as possible.

This batch job is implemented as a Windows Service installed on a separate server.

  • SMTP Server

An on-premise SMTP server is used to send email, initiated either from the batch job engine or from the web application.

Requirements for migration

The application should be migrated to the cloud with the following requirements:

  • The customer expects a cost-effective solution, in terms of both the migration effort and the monthly running cost.
  • There should not be any functional changes to the system; the user (especially the front-end user) should not see any difference in functionality.
  • As per policy, the proprietary CRM system will not be moved to the cloud, and its web service must be consumed in a secure manner.

Calling for partners

As the in-house IT team does not have competency or experience with Windows Azure, Mr. Anthony contacted Microsoft to suggest a partner capable of delivering the migration.

Before a formal request for proposal (RFP) is made, he expects the partner to provide the following:

  • A high-level architecture diagram of how the system will look when moved to the cloud.
  • Explanation of each component illustrated on the diagram.
  • The migration process, the effort required, and potential challenges.

If Microsoft recommends you as the partner, how will you handle this case? What will the architecture look like in your proposed solution?

The most exciting part comes in the next article, when I go into more detail on the recommended solution and how the migration process takes place.



Moving applications to the cloud: Part 1 – What are the considerations?

Windows Azure provides many remarkable services that benefit its customers. Assuming that you’ve already decided to hop on Windows Azure, some questions you might be asking include: What are the key considerations when moving applications to the cloud? How do you move an application to the cloud?

The goal of this article is to discuss several common considerations (including any changes that might apply) when moving your application to Windows Azure. Though there are also significant concerns from a business perspective, this article focuses on the technical aspects.

1. Architecture Change

The first and probably the most significant consideration is the architecture. Your current architecture may or may not work perfectly on the cloud. Some applications may be moved easily and without many changes, while others may require a certain degree of alignment to fit a cloud-centric architecture.

Designing an architecture that fits the cloud model is sometimes not enough. More important is designing an architecture that brings optimal results: faster response times, an elastically scalable system, and a cost-effective solution.

Single instance vs Web farm

If your current application is deployed on multiple instances (a.k.a. a web farm), you are one step closer to a cloud-centric architecture. I recommend checking out this post on the web farm concept to see how it differs from single-instance deployment. The web farm architecture is naturally very similar to a Windows Azure multiple-instance deployment.

Even though you can run a single instance in your Windows Azure deployment, it’s recommended to have at least two instances per role to meet the 99.95% SLA. The instances sitting behind the Windows Azure load balancer are load-balanced in a round-robin fashion.

In a web farm architecture, storing information on an individual instance will not work when the information should be shared across instances, whether it is session state, relational data, or unstructured files. Thus, a central repository is required to ensure that each request from the client is handled consistently. Figure 1 illustrates how multiple instances are deployed in Windows Azure.


Figure 1: Multi-instance architecture

What are the options for a central repository?

The following summarizes the options that best suit each type of shared information.

  • Session state: several options, such as Windows Azure Caching, Windows Azure Storage, and SQL Azure, could be used. The options are discussed in detail here.
  • Relational data: SQL Azure is the highly available cloud database service and is your best option. SQL Azure is built on top of SQL Server technologies, so migration from SQL Server is typically quite straightforward.
  • Unstructured files: Windows Azure Storage (particularly Blob Storage) is the preferable option to store unstructured documents or files.

2. Application-Level Security

The second aspect that should be taken into account is application-level security, which eventually leads to the question: how do you manage your user accounts and profiles? Many applications use a database or Active Directory to keep their user profiles; some rely on third-party identity providers.

The following describes how each method changes when moving the application to Windows Azure.

  • Database

Storing user accounts in the database is perhaps the simplest method. As long as the database you are using is compatible with SQL Server 2008, migrating it to SQL Azure shouldn’t be too much trouble. The user account tables are simply migrated along with the other tables in your database.

If you are using the ASP.NET Membership Provider, migrating to SQL Azure is even easier with the availability of the ASP.NET Universal Providers NuGet package.
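With the Universal Providers package installed, pointing membership at the migrated database is mostly a web.config change. A sketch (the connection string name is illustrative):

```xml
<membership defaultProvider="DefaultMembershipProvider">
  <providers>
    <clear />
    <!-- System.Web.Providers works against SQL Azure as well as SQL Server -->
    <add name="DefaultMembershipProvider"
         type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers"
         connectionStringName="DefaultConnection"
         enablePasswordReset="true" />
  </providers>
</membership>
```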

  • Active Directory

Active Directory is a popular choice, especially for corporate applications. It avoids having one person (with a single user ID) manage different accounts across many applications. With the release of ADFS (Active Directory Federation Services) 2.0, third-party applications, regardless of whether they reside on-premise or in the cloud, can authenticate against corporate Active Directory accounts using claims-based authentication.

  • Third Party Identity Provider

Nowadays, many applications, especially public facing websites, rely on third-party identity providers (such as Google ID, Live ID, Facebook, etc.) to perform authentication. Fortunately, Windows Azure offers Access Control Service which simplifies the authentication process with multiple identity providers.

3. Overcoming the Shortcomings

Even though cloud solutions provide a wide range of services, there are also some limitations. Knowing what is and isn’t available is the responsibility of cloud architects when designing a cloud solution for their customers. For the features that are unavailable, architects should provide alternative solutions that meet the requirements.

The following discusses an example of a potential limitation in Windows Azure and how it could be overcome.

Migrating Windows Service to Worker Role

  • Running a batch job as a Windows Service is common. However, installing a Windows Service in a Windows Azure environment can be pretty challenging. In fact, Windows Services are not available out-of-the-box on Windows Azure.
  • The recommended approach is to convert the Windows Service to a Windows Azure Worker Role. This can be implemented in several ways:
    • Some people prefer to migrate it manually so that they have more control. The following code snippet illustrates the changes that should be made when migrating a Windows Service to a Worker Role.
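As a minimal sketch of that manual conversion, the timer-driven service body becomes an infinite loop inside the Run() method of a RoleEntryPoint subclass (DoScheduledTasks is a hypothetical stand-in for the batch job’s real logic):

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class BatchJobWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Replaces the Windows Service timer: loop forever, pausing between runs.
        while (true)
        {
            DoScheduledTasks();               // formerly the service's timer handler
            Thread.Sleep(TimeSpan.FromMinutes(10));
        }
    }

    private void DoScheduledTasks()
    {
        // Verify booking details, generate tickets, send emails, etc.
    }
}
```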

4. Diagnostics: Logging and Monitoring

Logging and monitoring are important, as they can be used for tracing exceptions, monitoring performance, and capacity planning.

Although configuring them is normally not difficult, there are some differences between performing these tasks on-premise and in the cloud. For one thing, you might have many instances in a cloud environment; those instances aren’t persistent; and they might produce a massive amount of data.

The goal, then, is to store the diagnostic information persistently, accessibly, and cost-effectively, so that it can be viewed and monitored easily.

Windows Azure Diagnostic to collect diagnostic information

Windows Azure Diagnostic (WAD) enables you to collect diagnostic information from your Windows Azure application. WAD transfers the diagnostic information to Windows Azure Storage to ensure its persistence; the transfer can happen either on a schedule or on demand. Since Windows Azure Storage is a highly accessible and competitively priced service, the goal above can be accomplished.
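Configuring WAD typically happens in the role’s OnStart method. A minimal sketch using the diagnostics API of that SDK generation (the five-minute transfer period is an arbitrary example):

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;

// Inside RoleEntryPoint.OnStart():
DiagnosticMonitorConfiguration config =
    DiagnosticMonitor.GetDefaultInitialConfiguration();

// Transfer trace logs to Windows Azure Storage every five minutes
// so they survive instance recycling.
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
```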

Viewing and Monitoring Diagnostic information with tools

Data transferred to Windows Azure Storage can be accessed either with tools or through the API. Some tools (such as Cerebrata’s Azure Diagnostic Manager) let us view and monitor the diagnostic information easily through a GUI (graphical user interface), as shown in Figure 2, so that we can take appropriate action.


Figure 2 – Cerebrata Azure Diagnostic Manager

Conclusion

I haven’t discussed everything that needs to be taken into account, but the four points above are some of the key considerations when moving your applications to Windows Azure. Although some changes might apply, they are normally confined to architecture and design; you don’t have to change the business logic.

In the next article, I will go into more detail with a case study on moving an application to the cloud: the current scenario, the challenges the customer faced, the architectural changes, and the final outcome.



Managing session state in Windows Azure: What are the options?

One of the most common questions in developing ASP.NET applications on Windows Azure is how to manage session state. The intention of this article is to discuss several options to manage session state for ASP.NET applications in Windows Azure.

What is session state?

Session state is usually used to store and retrieve values for a user across ASP.NET pages in a web application. There are four available modes to store session values in ASP.NET:

  1. In-Proc, which stores session state in the individual web server’s memory. This is the default option if a particular mode is not explicitly specified.
  2. State Server, which stores session state in another process, called ASP.NET state service.
  3. SQL Server, which stores session state in a SQL Server database
  4. Custom, which lets you choose a custom storage provider.

You can get more information about ASP.NET session state here.

In-Proc session mode does not work in Windows Azure

The In-Proc option, which uses an individual web server’s memory, does not work well in Windows Azure. This will be familiar to those of you who host applications in a multi-instance web-farm environment: the Windows Azure load balancer uses round-robin allocation across the instances.

For example, say you have three instances (A, B, and C) of a Web Role. The first time a page is requested, the load balancer might allocate instance A to handle the request. However, there’s no guarantee that instance A will always handle subsequent requests. Similarly, a value that you set in instance A’s memory can’t be accessed by the other instances.

The following picture illustrates how session state works in multi-instances behind the load balancer.

Figure 1 – WAPTK BuildingASP.NETApps.pptx Slide 10

The other options

1.     Table Storage

The Table Storage Session Provider is a subset of the Windows Azure ASP.NET Providers written by the Windows Azure team. It is, in fact, a custom provider compiled into a class library (.dll file), enabling developers to store session state inside Windows Azure Table Storage.

It works by storing each session as a record in Table Storage. Each record has an expiry column that describes when the session expires if there is no further interaction from the user.

The advantage of the Table Storage Session Provider is its relatively low cost: $0.14 per GB per month for storage capacity and $0.01 per 10,000 storage transactions. Nonetheless, in my experience, one notable disadvantage is that it may not perform as fast as the other options discussed below.

The following code snippet should be applied in web.config when using Table Storage Session Provider.

<sessionState mode="Custom" customProvider="TableStorageSessionStateProvider">   <providers>     <clear/>    <add name="TableStorageSessionStateProvider"         type="Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider" />   </providers>
</sessionState>

You can get more detail on using Table Storage Session Provider step-by-step here.

2.     SQL Azure

As SQL Azure is essentially a subset of SQL Server, SQL Azure can also be used as storage for session state. With just a few modifications, SQL Azure Session Provider can be derived from SQL Server Session Provider.

You will need to apply the following code snippet in web.config when using SQL Azure Session Provider:

<sessionState mode="SQLServer"
sqlConnectionString="Server=tcp:[serverName].database.windows.net;Database=myDataBase;User ID=[LoginForDb]@[serverName];Password=[password];Trusted_Connection=False;Encrypt=True;"
cookieless="false" timeout="20" allowCustomSqlDatabase="true"
/>

There are a couple of resources that explain in detail how to use the SQL Azure Session Provider.

The advantage of using SQL Azure as a session provider is that it’s cost-effective, especially when you have an existing SQL Azure database. Although it performs better than the Table Storage Session Provider in most cases, it requires you to clean up expired sessions manually by calling the DeleteExpiredSessions stored procedure. Another drawback is that Microsoft does not officially support this approach.
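The cleanup call itself is plain ADO.NET and could be scheduled from, say, a Worker Role. A sketch in which sessionDbConnectionString is a placeholder for the session database’s connection string:

```csharp
using System.Data;
using System.Data.SqlClient;

// Periodically purge expired sessions: SQL Azure has no SQL Agent
// to run the cleanup job that on-premise SQL Server would use.
using (SqlConnection connection = new SqlConnection(sessionDbConnectionString))
using (SqlCommand command = new SqlCommand("DeleteExpiredSessions", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    connection.Open();
    command.ExecuteNonQuery();
}
```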

3.     Windows Azure Caching

Windows Azure Caching is probably the most preferable option available today. It provides a high-performance, in-memory, distributed caching service, and its session state provider is an out-of-process storage mechanism for ASP.NET applications. Since accessing RAM is much faster than accessing disk, Windows Azure Caching offers the highest-performance access of all the available options.

Windows Azure Caching also comes with a .NET API that enables developers to easily interact with the Caching Service. You should apply the following code snippet in web.config when using Cache Session Provider:

<sessionState mode="Custom" customProvider="AzureCacheSessionStoreProvider">   <providers>     <add name="AzureCacheSessionStoreProvider"           type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"           cacheName="default" useBlobMode="true" dataCacheClientName="default" />   </providers>
</sessionState>

A step-by-step tutorial for using Caching Service as session provider can be found here.

Besides high-performance access, another advantage of Windows Azure Caching is that it’s officially supported by Microsoft. Despite these advantages, the cost of Windows Azure Caching is relatively high, starting at $45 per month for 128 MB and going all the way up to $325 per month for 4 GB.

Conclusion

I haven’t discussed all the available options for managing session state in Windows Azure, but the three I have discussed are the most popular options out there, and the ones that most people are considering using.

Windows Azure Caching remains the recommended option despite its cost, but developers and architects shouldn’t be afraid to decide on a different option if it’s more suitable for a given scenario.



An Introduction to Windows Azure (Part 2)

This is the second article of a two-part introduction to Windows Azure. In Part 1, I discussed the Windows Azure data centers and examined the core services that Windows Azure offers. In this article, I will explore additional services available as part of Windows Azure which enable customers to build richer, more powerful applications.

Additional Services

1. Building Block Services

‘Building block services’ were previously branded ‘Windows Azure AppFabric’. The main objective of building block services is to enable developers to build connected applications. The three services under this category are:

(i) Caching Service

Generally, accessing RAM is much faster than accessing disk, including storage and databases. For that reason, Microsoft developed an in-memory, distributed caching service to deliver low-latency, high-performance access: Windows Server AppFabric Caching. However, some activities, such as installation and management, and some hardware requirements, like investing in clustered servers, still have to be handled by the end user.

Windows Azure Caching Service is a distributed, in-memory caching service built on top of the Windows Server AppFabric Caching Service, but managed for you. Developers no longer have to install and manage the caching service or clusters; all they need to do is create a namespace, specify the region, and define the cache size. Everything gets provisioned automatically in just a few minutes.

Creating new Windows Azure Caching Service

Additionally, Azure Caching Service comes along with a .NET client library and session providers for ASP.NET, which allow the developer to quickly use them in the application.

(ii) Access Control Service

Third Party Authentication

With federated identity and authentication becoming increasingly popular, many applications rely on authentication from third-party identity providers (IdPs) such as Live ID, Yahoo ID, Google ID, and Facebook.

One of the challenges developers face when dealing with different IdPs is that they use different standard protocols (OAuth, WS-Trust, WS-Federation) and web tokens (SAML 1.1, SAML 2.0, SWT).

Multiple ID Authentication

Access Control Service (ACS) allows application users to authenticate using multiple IdPs. Instead of dealing with different IdPs individually, developers just need to deal with ACS and let it take care of the rest.
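To give a flavor of what happens under the hood: a Simple Web Token (SWT) is a set of claims with an HMAC-SHA256 signature appended, which the relying party verifies using a key it shares with ACS. A minimal sketch of that sign-and-verify step (the claim names here are made up; real SWTs also carry issuer, audience, and expiry claims):

```python
import base64
import hashlib
import hmac

def sign_token(claims: str, key: bytes) -> str:
    """Append an HMAC-SHA256 signature to a claim string, SWT-style."""
    sig = base64.b64encode(hmac.new(key, claims.encode(), hashlib.sha256).digest()).decode()
    return f"{claims}&HMACSHA256={sig}"

def validate_token(token: str, key: bytes) -> bool:
    """Recompute the signature over the claims and compare in constant time."""
    claims, _, sig = token.rpartition("&HMACSHA256=")
    expected = base64.b64encode(hmac.new(key, claims.encode(), hashlib.sha256).digest()).decode()
    return hmac.compare_digest(sig, expected)
```

Any tampering with the claims invalidates the signature, which is why the relying party can trust claims that arrived via the browser.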

AppFabric Access Control Services

(iii) Service Bus

Windows Azure’s Service Bus allows secure messaging and connectivity across multiple network hierarchies. It enables hybrid model scenarios, such as connecting cloud applications with on-premise systems. The Service Bus allows applications running on Windows Azure to call back to on-premise applications located behind firewalls and NATs.

Service Bus Diagram

Migrating from an on-premise Windows Communication Foundation (WCF) service to the Service Bus is straightforward, as the two share a similar programming model.
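The key idea behind the relay is that the on-premise listener connects *outbound* to the Service Bus, so no inbound firewall ports need to be opened; the cloud application then addresses the listener through the relay. A toy in-process model of that pattern (the class, endpoint name, and message shape are all hypothetical):

```python
import queue

class Relay:
    """Toy relay: listeners register by name, senders address them by name,
    so neither side needs an inbound firewall port open."""
    def __init__(self):
        self._endpoints = {}

    def register_listener(self, name):
        # The on-premise service connects outbound to the relay and
        # receives messages through that connection.
        inbox = queue.Queue()
        self._endpoints[name] = inbox
        return inbox

    def send(self, name, message):
        # The cloud application sends to the relay, which forwards the
        # message to the registered listener.
        self._endpoints[name].put(message)

# On-premise side registers; cloud side sends through the relay.
relay = Relay()
inbox = relay.register_listener("orders")
relay.send("orders", {"order_id": 42})
```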

2. Data Services

Data Services consists of SQL Azure Reporting and SQL Azure Data Sync, both of which are currently available as Community Technology Previews (CTPs).

(i) SQL Azure Reporting

SQL Azure Reporting aims to provide developers with a service similar to that of the current SQL Server Reporting Service (SSRS), with the advantages of being in the cloud. Developers are still able to use familiar tools such as SQL Server Business Intelligence Development Studio. Migrating on-premise reports is also easy as SQL Azure Reporting is essentially built on top of SSRS architecture.

(ii) SQL Azure Data Sync

SQL Azure Data Sync is a cloud-based data synchronization service built on top of the Microsoft Sync Framework. It enables synchronization between two cloud databases, or between a cloud database and an on-premise database.
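At its core, synchronization means detecting which replica holds the newer version of each row and copying it across so both sides converge. A simplified last-writer-wins sketch (each database is modeled as a dict of key → (value, version); the real service tracks changes through the Sync Framework's metadata rather than a bare version number):

```python
def sync(db_a, db_b):
    """Last-writer-wins merge: for each key, both replicas converge to
    the row carrying the higher version number."""
    for key in set(db_a) | set(db_b):
        row_a, row_b = db_a.get(key), db_b.get(key)
        if row_a is None or (row_b is not None and row_b[1] > row_a[1]):
            db_a[key] = row_b  # B is newer (or A never had the row)
        elif row_b is None or row_a[1] > row_b[1]:
            db_b[key] = row_a  # A is newer (or B never had the row)
```

After a call to `sync`, both dicts hold the same rows; rows with equal versions are left untouched, since they are already consistent.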

SQL Azure Data Sync

(from Windows Azure Bootcamp)

3. Networking

Three networking services are available today:

(i) Windows Azure CDN

The Content Delivery Network (CDN) caches static content such as video, images, JavaScript, and CSS at the node closest to users. By doing so, it improves performance and delivers a better user experience. There are currently 24 nodes available globally.

Windows Azure CDN Locations

(ii) Windows Azure Traffic Manager

Traffic Manager is designed to enable high performance and high availability for web applications by load balancing across multiple hosted services in the six available data centers. In its current CTP guise, developers can select one of the following rules:

  • Performance – detects the location of the user traffic and routes it to the best online hosted service based on network performance.
  • Failover – based on an ordered list of hosted services, traffic is routed to the online service highest on the list.
  • Round Robin – equally distributes traffic to all hosted services.
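The three rules amount to simple selection policies over an ordered list of hosted services. A toy model of each (service names, the `online` flag, and the latency table are all made up for illustration; the real Traffic Manager makes these decisions at the DNS level):

```python
import itertools

def performance(services, latency_ms):
    """Route to the online service with the lowest measured latency."""
    online = [s for s in services if s["online"]]
    return min(online, key=lambda s: latency_ms[s["name"]])

def failover(services):
    """Route to the first online service in the ordered list."""
    return next(s for s in services if s["online"])

# A shared counter gives the round-robin policy its rotating state.
_rr_counter = itertools.count()

def round_robin(services):
    """Distribute traffic evenly across all online services."""
    online = [s for s in services if s["online"]]
    return online[next(_rr_counter) % len(online)]
```

Note how all three policies first filter out offline services, which is why Traffic Manager also improves availability, not just performance.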

(iii) Windows Azure Connect

Windows Azure Connect supports secure network connectivity between on-premise resources and the cloud by establishing a virtual network environment between them. With Windows Azure Connect, cloud applications appear to reside on the same network environment as on-premise applications.

Windows Azure Connect

(from the Windows Azure Platform Training Kit)

Windows Azure Connect enables scenarios such as:

  • Using an on-premise SMTP Server from a cloud application.
  • Migrating enterprise apps which require an on-premise SQL Server to Windows Azure.
  • Domain-joining a cloud application running in Azure to an Active Directory domain.
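The first scenario above can be sketched briefly: the cloud application composes the mail as usual, and because Connect makes the on-premise SMTP host reachable as if it were on the same network, the send step is ordinary SMTP. The addresses and host name below are hypothetical:

```python
import smtplib
from email.message import EmailMessage

def build_notification(recipient, body):
    """Compose a plain-text notification mail (addresses are hypothetical)."""
    msg = EmailMessage()
    msg["From"] = "noreply@cloudapp.example"
    msg["To"] = recipient
    msg["Subject"] = "Order confirmation"
    msg.set_content(body)
    return msg

# With Windows Azure Connect in place, the on-premise SMTP host is
# reachable by machine name as if both sides shared a network:
#
# with smtplib.SMTP("onprem-smtp-01") as smtp:   # hypothetical host name
#     smtp.send_message(build_notification("user@example.com", "Thanks!"))
```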

4. Windows Azure Marketplace

Windows Azure Marketplace is a centralized online market where developers are able to easily sell their applications or datasets.

(i) Marketplace for Data

Windows Azure Marketplace for Data is an information marketplace allowing ISVs to provide datasets (either free or paid) on any platform, and available to the global market. For example, Average House Prices, Borough provides annual and quarterly house prices based on Land Registry data in the UK. Developers can then subscribe and utilize this dataset to develop their application.
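Consuming such a dataset amounts to fetching rows from the feed and querying them in code. A sketch using a hypothetical, simplified excerpt of a response (the real Marketplace exposes OData feeds; the figures below are invented for illustration, not actual Land Registry data):

```python
import json

# A hypothetical excerpt of a Marketplace dataset response, mimicking
# the verbose OData JSON shape with made-up figures.
payload = json.loads("""
{"d": {"results": [
    {"Borough": "Camden",  "Year": 2010, "AveragePrice": 455000},
    {"Borough": "Hackney", "Year": 2010, "AveragePrice": 320000}
]}}
""")

rows = payload["d"]["results"]
most_expensive = max(rows, key=lambda r: r["AveragePrice"])
```

In a real application the JSON string would be replaced by an authenticated HTTP request to the dataset's feed URL.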

(ii) Marketplace for Applications

Windows Azure Marketplace for Applications enables developers to publish and sell their applications. Many, if not all, of these applications are SaaS applications built on Windows Azure. Applications submitted to the Marketplace must meet a set of criteria.

Conclusion

To conclude, we have examined the huge investment that Microsoft is making, and will continue to make, in Windows Azure, the core of its cloud strategy. Three fundamental services (Compute, Storage, and Database) satisfy the basic needs of developing cloud applications. Additionally, with the further Windows Azure services (Building Block Services, Data Services, Networking, and the Marketplace), developers will find it increasingly easy to develop rich and powerful applications. The foundations of this cloud offering are robust, and we should continue to look out for new features being added to the platform.


This post was also published at A Cloud Place blog.

Posted in Azure | Leave a comment

Editing your XML documents with Liquid XML Studio

As we know, XML is a popular file format and standard that has been used for many purposes in the IT industry: storing configuration files, storing data, transferring data via web services, and much more.

Nonetheless, I believe most of you have, at some point, been frustrated with editing and manipulating XML documents.
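For context, here is the kind of edit that quickly gets tedious by hand, sketched programmatically with Python's standard-library xml.etree.ElementTree (the configuration keys are made up):

```python
import xml.etree.ElementTree as ET

# A small configuration-style document, parsed from a string.
doc = ET.fromstring(
    "<appSettings>"
    "<add key='Timeout' value='30'/>"
    "<add key='RetryCount' value='3'/>"
    "</appSettings>"
)

# Update one setting and append a new one.
for node in doc.findall("add"):
    if node.get("key") == "Timeout":
        node.set("value", "60")
ET.SubElement(doc, "add", key="CacheEnabled", value="true")

updated = ET.tostring(doc, encoding="unicode")
```

A dedicated editor makes this sort of change, plus validation against a schema, far less error-prone.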

Recently, I was introduced to a powerful XML editor, Liquid XML Studio. In fact, it is more than an editor: it comes with a rich set of features beyond basic editing.


Check out http://www.liquid-technologies.com/xml-studio.aspx for more details on Liquid XML Studio!

Posted in Uncategorized | Leave a comment