Restricting access between multi-tenant App Services with Service Endpoint and Access Restriction

The Requirement

Recently, one of my partners came to me with the following requirement:

  • Three micro-services are deployed as App Service (Web App for Containers) within one App Service Plan.
  • They’d like to control the traffic so that App X can access App Y, but no other app (including App Z or any external party) can access App Y. Refer to the following diagram:

Figure 1. Access Requirement

  • They’d like to achieve this without App Service Environment.

The solution

This requirement can be achieved in 3 primary steps as can be seen in this diagram:

Figure 2. The solution

Step 1. Virtual Network and Service Endpoint

Create a Virtual Network and Subnet.

Create a Service Endpoint of type Microsoft.Web, which corresponds to App Service (Web App).

Figure 3. Create a service endpoint in Virtual Network
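The same setup can be sketched with the Azure CLI; the resource group, VNet, and subnet names below are placeholders for your own:

```shell
# Create a VNet with one subnet (names and address ranges are placeholders)
az network vnet create \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefix \
  --subnet-name apps-subnet \
  --subnet-prefix

# Enable the Microsoft.Web service endpoint on the subnet
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name apps-subnet \
  --service-endpoints Microsoft.Web
```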

Step 2. VNet Integration in “source” App Service

Use the VNet Integration feature of the source / origin app to integrate it with the virtual network.

To do that, in your source App Service (App X in my diagram), navigate to Networking and click on “Click here to configure” in the VNet Integration section.

Figure 4. Configure VNet Integration in App Service

Click the “+ Add VNet” button and choose the respective Virtual Network and Subnet that you created earlier.

Figure 5. Choose your Virtual Network and Subnet in VNet Integration
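If you prefer the CLI, the equivalent step (with the same placeholder names as before) would be roughly:

```shell
# Integrate the source app (App X) with the subnet created earlier
az webapp vnet-integration add \
  --resource-group my-rg \
  --name app-x \
  --vnet my-vnet \
  --subnet apps-subnet
```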

Step 3. Access Restriction in the “target” App Service

Access Restriction is a feature in App Service that lets you allow or deny incoming traffic to your App Service.

The next step is to configure the Access Restriction in your target App Service (illustrated as App Y in my diagram).

To do that, go to your target App Service, click Networking, and choose Configure Access Restriction. We will be creating 2 rules:

  • The first one is to deny all traffic.

    Click on Add Rule, name it “deny all traffic”, choose “Deny” as the Action, leave the type as IPv4, and fill in the IP Address Block with “”. Finally, click “Add rule”.

Figure 6. Configure Access Restriction to deny all traffic

  • The second rule is to ONLY allow traffic from the Virtual Network (with the Service Endpoint enabled), which in turn allows the traffic from App X to flow in.

    Click on Add Rule, name it “allow traffic from App X”, and choose “Allow” as the Action. Make sure that the priority number for this rule is lower than that of the “deny all” rule; rules are evaluated in ascending priority order and the first match wins, so this ensures the allow rule takes effect before the “deny all” rule.

    Change the type to Virtual Network, and choose the respective Virtual Network and Subnet that you configured in Step 1. Finally, click “Add rule”.

Figure 7. Configure Access Restriction to allow traffic from a Virtual Network
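The two rules can also be sketched with the Azure CLI. The names are placeholders, and I’m using as the catch-all IPv4 range for the deny rule:

```shell
# Allow traffic from the service-endpoint subnet (lower number = higher priority)
az webapp config access-restriction add \
  --resource-group my-rg \
  --name app-y \
  --rule-name "allow traffic from App X" \
  --action Allow \
  --vnet-name my-vnet \
  --subnet apps-subnet \
  --priority 100

# Explicitly deny everything else
az webapp config access-restriction add \
  --resource-group my-rg \
  --name app-y \
  --rule-name "deny all traffic" \
  --action Deny \
  --ip-address \
  --priority 200
```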

All set now!

Testing against the setup

Let’s now perform some tests to ensure that it works as expected.

Test 1. Accessing App Y from your local browser.

This can be done easily by just browsing App Y’s URL in your local browser. Here’s my result, which returns Error 403 (Forbidden).

Figure 8. Forbidden access to App Y from local browser

Test 2. Accessing App Y from App Z.

Let’s recall that App Z is deployed within the same App Service Plan as App X and App Y, but our rules prevent it from accessing App Y.

To test it, let’s navigate to App Z’s App Service, then choose SSH, and click “Go ->”.

Figure 9. SSH into the App Z

Another browser tab will be opened with a web-based SSH terminal ready to take your action.

Type “curl https://[app-y]”. Remember to change [app-y] to your target app’s hostname.

As can be seen, I am also getting the Error 403 from curl, just like in Test 1.

Figure 10. Accessing App Y from App Z’s SSH
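If you only care about the status code (rather than the full HTML error page), a handy curl variation from the SSH session might be the following; replace [app-y] with your target app’s hostname before running:

```shell
# Print just the HTTP status code of the response
curl -s -o /dev/null -w "%{http_code}\n" https://[app-y]
```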

Test 3. Accessing App Y from App X.

Let’s recall that App X is the one which we’ve configured Service Endpoint and VNet Integration, which we’d expect it to work.

With the similar technique as Test 2, perform the same curl command from the App X’s SSH session.

Tada! It works! (Even though it just returns a bunch of HTML tags; that’s just what curl gives you.)

Figure 11. Successfully access App Y from App X


You’ve seen how I make use of Service Endpoint, VNet Integration, and Access Restriction to meet the requirement. This is a rather inexpensive way to achieve the goal. Note that with Access Restriction, the traffic is blocked at the web server level, not the network level.

Another (more powerful) alternative is to deploy the apps in an App Service Environment, then make use of NSG rules to restrict the traffic at the network level. However, I reckon that it would be a more expensive (and complicated) setup.

You can find more details about App Service Networking features here.

I want to thank Stefan Schackow and Christina Compy for the tips on the solution.

Hope this post is useful for you!

Posted in App Services, Azure | Leave a comment

Implementing Multi-region Deployment and Disaster Recovery in Azure


Every application has its own criticality level; some are mission critical in nature, while others aren’t. Mission critical apps typically require very high uptime or availability. Many Azure PaaS services such as App Service and SQL Database have built-in HA (high availability) features to tolerate failures within the region, with minimal or even zero configuration required. However, most of the HA features are in-region, meaning the redundancy takes place within the same region or datacenter. Unfortunate events such as large-scale disasters (earthquake, flooding, etc.) may affect the whole region and bring down your applications.

One common strategy when dealing with a mission critical application is to have a multi-region deployment with a DR (disaster recovery) strategy. Nonetheless, it’s very difficult and complicated to set up such an architecture.

Solution Demo Video Series

Join me and my colleague Adityo Setyonugroho for this video series: Implementing multi-region deployment and disaster recovery in Azure. We implemented the demo from the reference architecture on Azure within two regions serving as primary and secondary, in an active/passive model.

We cover 4 different scenarios of how failover takes place:

  • Scenario 1: the happy flow, when the deployments in both regions are healthy
  • Scenario 2: when the Web App in primary region is down
  • Scenario 3: when the Database in the primary region is down
  • Scenario 4: when the entire primary region is down; we could fail over to the secondary region in less than 3 minutes!

Video 1 – Introduction to multi-region deployment and disaster recovery on Azure

This video covers the background of this initiative, the fundamentals of HA vs DR in general, and how we plan to implement the solution architecture in practice.

Video 2 – Crash Course: App Service, SQL Database, Traffic Manager, Event Hubs, Stream Analytics, Functions, Logic App

This video covers the fundamentals and an introduction of some of the Azure services and tools (Postman and JMeter) used in this video series.


Video 3 – Sample app, database, and deployment on Azure

This video walks through the sample application and database used for this demo series. We also show how they are deployed on Azure.

Video 4 – Everything goes well. Oops… suddenly the Web App in the primary region is down

This video first shows the happy flow, when everything is running well in Region A. Suddenly, the Web App in Region A goes down… How will Azure Traffic Manager handle this?

Video 5 – Database in Region A is down

This video shows scenario 3: what happens when the database in Region A goes down? We perform a database failover without having to change the application at all.

Video 6 – The setup for Scenario 4 (entire primary region is down)

This demo shows the setup to anticipate scenario 4 (when the entire primary region goes down). We design a solution to streamline the failover process for the app and database.

Video 7 – The demo of failing over to secondary region when primary region is down

This video demonstrates the actual failover to the secondary region when the entire primary region goes down.


What’s next?

We are not done yet! Adityo and I are currently working on automating the deployment of the solution to achieve single-button deployment. We will also be sharing the sample code along with the ARM template on GitHub.

Hope you enjoy this video series. Stay tuned!

Posted in Uncategorized | 7 Comments

Learning materials and 8 tips to pass CKAD (Certified Kubernetes Application Developer) exam

Recently I passed the CKAD (Certified Kubernetes Application Developer) exam. I must admit, this is the most challenging IT exam that I’ve ever taken. The main reason is that the exam is 100% hands-on: no multiple choice, no true/false. But I also feel this was the most fulfilling exam. Here’s my cert.

There are two variants of Kubernetes certification, namely CKAD and CKA. In a nutshell, CKAD is designed for software developers who’d like to develop and deploy their apps on Kubernetes, while CKA is designed for IT administrators who manage Kubernetes clusters. In general, CKA covers a broader set of topics than CKAD does. You can learn about the similarities and differences between the two exams here.

In this article, I’d like to share with you some tips on how to pass it.

But before talking about the tips, I wanna start with the learning material first.

Learning Materials

There are a lot of learning materials online. Some of them are free, while others are not. I’d recommend enrolling in Introduction to Kubernetes if you are new to K8S, but I don’t think this introductory course alone is enough to pass the exam.

The official training material is from CNCF: Kubernetes for Developers. You can check out the offers here, such as learning material + exam at a discounted price.

I mainly leveraged three resources:

  1. The Kubernetes Learning Path, which can be downloaded here. It has a 50-day, zero-to-hero learning plan, including videos, hands-on exercises, documentation, etc. It covers the basic Kubernetes concepts and uses Azure Kubernetes Service as the example. The content used in this learning path is mostly free.

  2. I think it’s an excellent paid course. Not only does Mumshad explain the topics really well (with a nicely animated deck), but there are also hands-on exercises at the end of each topic, as can be seen below. The left-hand side displays the quiz portal while the right-hand side is the terminal where you have to type in commands to get the right answer.

    I also appreciate the Lightning Exams and the Mock Exams (along with the solutions) at the end of the course; they give you a feel for the real exam. It’s helpful.

  3. This was to cross-validate what I’d learnt so far and become more fluent in dealing with the questions.

Tips of passing the exam

Now, let’s move on to some tips on how to conquer the exam!

The main challenge of this exam is TIME MANAGEMENT so you have to manage it very well.

1) You need to be fluent with vi/vim in editing YAML! 

You will have to deal with a lot of YAML in the exam. As such, vi/vim is the go-to option. Vi/vim is no doubt a very powerful and popular text editor in the Linux world but, in my opinion, the learning curve is steep (especially for folks coming from the “Windows” world). It has lots of “secret / magic” commands. I would strongly encourage you to invest your time in vi/vim if you are not familiar with it, because it will save you significant time during the exam.

Here are my top 5 useful commands for yaml editing:

  1. [in command mode] ZZ (Shift + zz) to save and exit (I think it’s quicker than :wq!)
  2. [in command mode] dd to cut, or yy to copy the whole line, and p to paste
  3. [in command mode] :set number to display line number
  4. [in command mode] u to undo
  5. [in command mode] :%s/foo/bar/g to search all occurrences of foo and replace them with bar

You can find many vim tutorials online, but please be mindful not to overload your brain with vim; leave some space for the CKAD content.

2) You need to invest some effort and time in Unix / Linux command

During the exam, you will be using a web-based Unix/Linux terminal (yeah, NO GUI). As such, you will need rudimentary Linux knowledge. Similar to the first tip, a Linux engineer shouldn’t find this challenging, but if you’re coming from the Windows world, make sure that you invest some time and effort in it. Other than the super basic commands such as cd, ls, cat, cp, mv, mkdir, rm, and rmdir, I found these pretty useful too:

  • grep, which is mainly used to filter some portion of the output

  • diff, which is used to compare files (line by line). The following diagram shows how I used diff with the -c option to compare 2 yaml files.

You can learn more about other Linux commands here.
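As a sketch of the kind of grep/diff workflow I mean (the two manifests here are made up for illustration):

```shell
# Two small pod manifests that differ only in the image tag
cat > /tmp/pod-a.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.19
EOF
sed 's/nginx:1.19/nginx:1.21/' /tmp/pod-a.yaml > /tmp/pod-b.yaml

# grep: quickly pull out just the image lines
grep 'image:' /tmp/pod-a.yaml /tmp/pod-b.yaml

# diff -c: show exactly which lines differ, with context
diff -c /tmp/pod-a.yaml /tmp/pod-b.yaml || true
```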

3) Get familiar with Kubernetes docs especially the example. 

During the exam, you can browse the official Kubernetes docs, but you’re prohibited from accessing other links. Even though you can access the Kubernetes docs, time is a big challenge and I bet you won’t have enough time to read them thoroughly. As such, get familiar with the structure of the docs, including the examples, so that you can copy-paste the appropriate example to your terminal with minimum modification.

As an example, if you search for the keyword “persistent volumes” (aka pv) in the docs, you will find the following result. It’s important to understand the objective of each link / result.

  • The first link shows many details including the concept of pv and pvc
  • The third link provides a more holistic example of how we first create a pv, then a pvc, and finally consume it from a pod. It’s admittedly not as detailed as the first link about pv, but you’ll notice that the yaml examples are very helpful as an end-to-end lifecycle.


4) Get familiar with the Kubernetes’ object’s short names.

If you type ‘kubectl api-resources’ you will see:

I’d encourage you to memorize them at least the commonly used ones such as:

  • no for nodes
  • po for pods
  • deploy for deployments
  • rs for replicasets
  • pv for persistentvolumes
  • pvc for persistentvolumeclaims
  • ns for namespaces

It will save you a few seconds on each command, but when you add them all up, you will save a good amount of time.
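If you forget a short name mid-exam, you can also look it up on the fly rather than memorizing everything; for example:

```shell
# Look up the short name for a resource
kubectl api-resources | grep persistentvolumeclaims

# Then use it: these two commands are equivalent
kubectl get persistentvolumeclaims
kubectl get pvc
```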

5) Create your own short-forms aka alias

This is another time-saving tip. Create your own aliases to save typing the long commands. Do this at the beginning of the exam.

Here are some of mine:

  • alias k='kubectl'
  • alias kgp='kubectl get pods'
  • alias kgpan='kubectl get pods --all-namespaces'
  • alias kdp='kubectl delete pods'
  • alias kd='kubectl describe'

You may find out how other folks improve their productivity here.
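One extra shorthand worth adding alongside the aliases above (the $do variable is my own convention, not an exam requirement) is an environment variable for manifest generation:

```shell
alias k='kubectl'

# Usage example: k create deploy web --image=nginx $do > web.yaml
export do='--dry-run=client -o yaml'
```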

6) Imperative followed by declarative (-o yaml --dry-run)

There are typically 2 ways of creating Kubernetes objects: the imperative way (through kubectl) or the declarative way (through yaml, then kubectl apply/create). Obviously, you don’t have to hand-write every single yaml line; you may copy and paste the examples from the Kubernetes docs and modify them accordingly.

  • Generally speaking, it’s faster to do it the imperative way, but not all the options can be specified imperatively.
  • On the other hand, declarative is more powerful as you can specify basically any option. But navigating and editing a yaml file (especially a long one) is NOT FUN. Ouch, you get an error when there’s just an extra space; you know what I mean!


    “YAML is very readable but not very writeable.” – Wely Lau

Wouldn’t it be great if we could combine the two techniques? Yes, you can, and here’s how you would do it with the help of the --dry-run and -o yaml options.

  • The --dry-run option is commonly used to validate that the command (and parameters) are correctly specified, without really running it
  • The -o yaml option is used to produce the output in yaml format

The idea is to use an imperative command (with any options it supports) to produce a yaml file, then edit in the options which can’t be specified imperatively.

Suppose you are asked to create a deployment with 4 replicas using the nginx image. You may do it as follows:

$ k create deploy nginx-deploy --image=nginx --dry-run -o yaml > nginx-deploy.yaml

This produces a yaml file named nginx-deploy.yaml. Then you use vi/vim to edit the replicas option in the yaml file, because we are unable to specify the replica option in the imperative kubectl create deploy command.

Then finally run kubectl apply -f nginx-deploy.yaml.
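Note that on recent kubectl versions the bare --dry-run flag is deprecated in favor of --dry-run=client (and kubectl create deployment now also accepts --replicas directly, though the generate-then-edit pattern still applies to any field without an imperative flag). The full workflow looks roughly like this:

```shell
# Generate the manifest without creating anything on the cluster
kubectl create deployment nginx-deploy --image=nginx \
  --dry-run=client -o yaml > nginx-deploy.yaml

# Edit the field that has no imperative flag (vim works too; sed shown for brevity)
sed -i 's/replicas: 1/replicas: 4/' nginx-deploy.yaml

# Apply the edited manifest
kubectl apply -f nginx-deploy.yaml
```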

For me, this technique saves more time than copy-pasting from the examples, especially when there are many things to be edited; heavy manual editing increases the risk of yaml syntax errors.

7) Be extra careful with namespace

This sounds simple but it’s a very common pitfall. Some of the questions require you to place the resource in a specific namespace.

But if you forget to specify it, the resource will be placed in the “default” namespace as the default behavior, and you’ll be penalized with 0 marks for that answer.

You can specify the namespace either through the imperative kubectl command with the -n (or --namespace) option such as:

Or do it declaratively in the metadata section, as can be seen here:
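A sketch of both options (my-namespace and the pod name are placeholders):

```shell
# Imperative: create the pod directly in the target namespace
kubectl run web --image=nginx -n my-namespace

# Declarative equivalent: the namespace lives under metadata
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: my-namespace
spec:
  containers:
  - name: web
    image: nginx
EOF
```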

8) Pick the “right” questions strategically.

The priority is to attempt the questions which you have high confidence that you can complete. Keep in mind that the weight of each question is displayed at the top. This will help you determine whether it’s worthwhile to spend time on a question or skip over it and attempt the next one.

For sure, don’t spend too much time on 1 question. This is a pitfall for many engineers (me included), especially when troubleshooting. The curiosity of getting it fixed! Argh… you know what I mean. But please, practice restraint during the exam: move on to attempt other questions; you can come back if you have time.

Sum it up

That’s all for this post, folks. I shared with you the 3 learning materials that I used to prepare for the exam, and 8 tips on how to manage your time strategically during the exam.

Finally, good luck with your CKAD/CKA exam!

Posted in CKAD | 4 Comments

Should you upgrade from App Service Standard Plan to Premium v2?

Updated (17 August 2020)!

Things change fast in the cloud!
I mentioned in the original post that redeployment was required to move to Premium v2. Now, I’m glad to see that the App Service Engineering team has simplified the experience so that you can move to Premium v2 just like any other tier, without redeployment.
However, there’s a catch: the IP addresses (both inbound and outbound) will change, as can be seen in the diagram below.

IP addresses will change when upgrading to Premium v2

As such, you may need to make the necessary arrangements, such as updating any IP address whitelist, should there be any dependency on it.


The App Service Premium (v2) Plan has been generally available since Oct 2017. This premium plan, which uses Dv2-series VMs, promises faster processors, SSD storage, and double the memory-to-core ratio compared to the older plan, which uses “the legacy” A-series VMs. However, I couldn’t find any article or exercise showing an actual comparison of how much improvement it would bring. For this reason, I decided to perform an experiment and write this blog.

At a glance, peeking at the Azure App Service pricing page and comparing core counts, Pv2 seems to sit at a higher price point. As such, it might be a disappointment for existing customers who are looking to upgrade.

The big questions for the current App Service customers are:

  • “Should we upgrade to this new plan?”
  • “Is the upgrade worth the effort? Note that it will be a re-deploy as there is no One-Click-Upgrade for this.”

The objective of this article is to show that it’s actually more cost-effective once performance is taken into account. This is backed by my synthetic load testing results.

Here’s a specs and pricing screenshot from the App Service pricing page:

Specs and pricing from the App Service pricing page

For the comparison, should we compare S2 with P1v2 or P2v2? I’ll be comparing S2 with P1v2 in this article since they sit at the same price point.

Methodology: how the tests were being performed

  • I used the cloud-based load testing feature of Azure DevOps (formerly VSTS)
  • The app used for this exercise was NopCommerce (ASP.NET based) which is available from the marketplace.
  • The architecture is rather simple, WebApp accessing SQL Database.
  • The first app (let’s call it NopS2App) ran on an S2 App Service Plan (2-core CPU and 3.5 GB RAM). It accessed a SQL database with the GP_Gen5_2 spec (2 vCores; 32 GB RAM)
  • The second app (let’s call it NopP1v2App) ran on a P1v2 App Service Plan (1-core CPU, 3.5 GB RAM, and SSD storage). Like the first app, it accessed a SQL database with the GP_Gen5_2 spec (2 vCores; 32 GB RAM)
  • I captured 2 sets of outputs. First, performance under load, including the number of successful requests and RPS; I consider these my primary measures. Second, the memory and CPU percentage of the App Service Plan during the load testing period; these are more informational measures, as we’d like to observe what the utilization looks like during the load tests.
  • For a fair comparison, both apps were freshly deployed in their own resource groups with identical setups.
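If you want to reproduce a rough version of such a test yourself without Azure DevOps, a quick-and-dirty alternative with Apache Bench might look like this (the URL is a placeholder for your own deployment):

```shell
# ~250 concurrent connections for up to 5 minutes against one endpoint
ab -c 250 -t 300 https://my-nop-app.azurewebsites.net/
```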

Test #1

This is a very simple test, as I just load a URL endpoint (which shows a product category):

  • Duration of test: 5 minutes
  • Number of users: 250
  • Load was generated from Southeast Asia Azure region

Here’s the result:

Load Test Result of Test #1
Load Test Result of Test #1
Load Test Result of Test #1

You’ll notice that P1v2 was able to handle about 27% more successful requests than S2, and likewise about 27% more RPS.

Test #2

Test #2 is similar to #1, except with a higher load and a slightly longer duration.

  • Duration of test: 10 minutes
  • Number of users: 500
  • Load was generated from Southeast Asia Azure region


Load Test Result of Test #2
Load Test Result of Test #2
Load Test Result of Test #2

You’ll notice that P1v2 was able to handle about 35% more successful requests than S2, and likewise about 34% more RPS. That’s a more significant improvement than in Test #1.

Test #3

Test #3 and #4 are different from #1 and #2, in the sense that I used Visual Studio Web Test and Load Test to record a scenario, basically simulating a user who browses a product and eventually checks out and purchases.

  • Duration of test: 5 minutes
  • Number of users: starting with 10 users, with an increment of 10 additional users per 10 seconds, up to a maximum of 200 users.
  • Load was generated from East US 2 Azure region
Load Test Result of Test #3
Load Test Result of Test #3
Load Test Result of Test #3

As can be seen, there is about an 18% improvement.

Test #4

Test #4 used the same scenario as #3 except with a longer duration and higher load.

  • Duration of test: 10 minutes
  • Number of users: starting with 50 users, with an increment of 20 additional users per 5 seconds, up to a maximum of 500 users.
  • Load was generated from East US 2 Azure region

Let’s review the result.

Load Test Result of Test #4
Load Test Result of Test #4
Load Test Result of Test #4

Test #4 shows about a 4% improvement.


From these 4 rounds of tests, we can conclude the following:

  1. We’ve seen RPS and successful-request improvements ranging from 4% to 35%.
  2. We observed that both memory and CPU consumption slightly increased, by about 7% to 18%, except the CPU for Tests #3 and #4. I think this is very reasonable.
  3. Obviously this is a synthetic test, and results may vary depending on other factors such as the application, etc.

I hope this exercise would give you more confidence to upgrade to Premium v2 App Service plan.

Posted in App Services, Azure, Uncategorized | Leave a comment

Using Your Smartphone’s Camera to Live Stream Through Azure Media Services

You might have seen many examples of Azure Media Services (AMS) Live Streaming demo through Wirecast installed on the laptop as shown in below links:

Now, I’d like to share a different way to live stream, by using your smartphone’s camera. Interesting, isn’t it?

Mingfei has a post leveraging Wirecast’s iOS app here. The idea of that approach is to leverage the camera on your phone while still requiring Wirecast on the desktop.

In this post, I’ll be showing a different approach: having a lightweight encoder installed on our smartphone (Windows Phone) and pushing the feed directly to an AMS Live Channel.

Azure Media Capture in Windows Phone

I’m leveraging Azure Media Services Capture, which you can download from the Store for free. If you need to integrate this capability into your mobile application, you may download the source code and SDK from Codeplex.

I assume you are familiar with how to do live streaming through an on-premises encoder like Wirecast. But if you’re not, no issue at all; please check the 3rd video of this post, where I recorded how to do live streaming step-by-step.

I’ll be using the Azure Media Services Explorer tool to manage the live channel, similar to the above mentioned video. The only difference in this approach is that you should create a live channel with Fragmented MP4 (Smooth) as the Input Protocol.


Figure 1. Creating Live Channel with Live Encoding and Smooth Protocol

Optionally, you may select Live (cloud) Encoding, which makes a lot of sense as it offloads the multi-bitrate encoding from your phone to the cloud, as shown in the diagram below.

*It’s not mandatory to enable live (cloud) encoding in this demo. Enabling live/cloud encoding will result in a much longer channel starting time.*


Figure 2. Architecture of Live Streaming (with Live Encoding) via Windows Phone

Once the channel is running, copy the Primary Input URL of that channel.


Figure 3. Copy the Input URL of the Live Channel

Next, open the Azure Media Capture app on your Windows Phone. Click the setting icon and paste the Primary Input URL to the “Channel Ingest URL”.

*Notice that you can actually push multiple bitrates / resolutions from your phone if you prefer to, but your phone will suffer, as encoding is generally a very processor-intensive task.*


Figure 4. Azure Media Capture Settings

Click the Start Broadcast “Red Dot” button when you’re ready. When live/cloud encoding is enabled, anticipate a longer delay (about 45 seconds).

Go back to Azure Media Services Explorer, right click on the channel, and play back the preview.


Figure 5. Playback the Preview URL

And if everything goes well, you should be able to see the live stream pushed from your phone:


Figure 6. Multi-bitrates result from phone

Updates (5 Feb 2016)

The post above shows how to play back the “preview” at the channel level. That is really good for testing, making sure that the right stream is coming in and gets played correctly.

Once you’re ready to publish the URL to your end customers (with a player), you should create a program on the channel. This will enable capabilities like Dynamic Packaging (to reach various delivery protocols), Dynamic Encryption, Dynamic Manifest, etc.


Figure 7. Create Program and Get Output URL

What about Android?

Theoretically, you can apply a similar concept with an Android phone. There are several RTMP encoders for Android such as Nano Cosmos and Broadcaster for Android.

I tried Nano Cosmos and it worked well with the AMS Live Channel (via RTMP).

Hope this helps.

Posted in Azure | Tagged | 3 Comments

Using Dynamic Manifest for Bitrates Filtering in Azure Media Services: Scenario-based walkthrough

I’m very excited about the release of this feature in Azure Media Services. In fact, in the past few months there have been several asks for it from customers I personally engaged with.

Jason and Cenk from the Media Services team have explained how the feature works in technical detail. In this post, I’ll explain it differently, from a scenario-driven perspective, followed by the “how-to” with the UI-based Azure Media Services Explorer and also through the .NET SDK.

Customer Requirement: Bitrates filtering for different clients (browser-based and native mobile-based)

Imagine that, as an OTT provider, I’ve encoded all my video library with the H264 Adaptive Bitrates MP4 Set 720p preset, which has 6 video bitrates (3400, 2250, 1500, 1000, 650, 400, all in kbps).

And here is what I’d like to achieve:

  • Users connecting through browsers – larger screens (PC-based browsers, which typically have bigger screens) – should only see the highest four bitrates (3400, 2250, 1500, 1000 kbps). This is because I want to avoid giving end-users a “blocky” video experience (with 400 kbps).
  • Users connecting through native apps – smaller screens (Android, iOS, or Windows Phone) – should only see the lowest four bitrates (1500, 1000, 650, and 400 kbps).
    • This could be because the mobile phones are not capable of playing back the highest bitrates due to screen-size limitations.
    • Or it could be because I’d like to save end-users’ bandwidth, especially when they’re connecting via a 3G or 4G network on their data plan.


Figure 1: Larger screen vs. smaller screen playback experience

How do we design the most effective media workflow to handle such scenario?

You could definitely produce / encode different assets to serve the different purposes: one asset for larger screens (encoded with the 4 highest bitrates), another one for smaller screens (encoded with the 4 lowest bitrates).

Although it works, I don’t think it’s a great idea, since you’d face these challenges:

  1. Management overhead, as you’d have different physical files / assets / locator URLs
  2. Redundant storage, which causes higher storage cost
  3. Not future-proof: imagine that in the future you add a “paid silver tier” whose users can watch 5 bitrates; you’d need to re-encode your library again, which can be a cumbersome process.

A. Using Dynamic Manifest for Bitrate Filtering through Azure Media Services Explorer (AMSE)

Let me show how you can leverage the Dynamic Manifest capability (with the AMSE tool) to achieve this. The following step-by-step guide will cover how this can be done in a more “elegant” way.

1. Download and install AMSE here if you haven’t done so. The feature has been available since version 3.24, but I’d still recommend using the latest version.

2. Connect to your Azure Media Services account.

3. Prepare your content and encode them with H264 Adaptive Bitrates MP4 Set 720p

(for step 2 and 3, you may refer to Video 1 in this post on how it can be done)

4. Navigate to the “Global filters” tab, right click, and select “Create a global filter…”.

Note: there are 2 types of filters, asset-level and global-level. We’re using a global filter in this tutorial.


Figure 2 – Creating a global filter

5. Give the global filter a name, in my example “smallerscreen”, and then navigate to the “Tracks filtering” tab.


Figure 3 – Track Filtering in creating global filter

6. Although you may add the track rules and conditions manually, I’d recommend inserting the “track filtering example” and modifying it from there. To do so, click “Insert tracks filtering example”.


Figure 4 – Defining bitrates in tracks filtering

Notice that in Rule1, the bitrate condition is 0-1500000, which matches the 1500 kbps encoding profile I’ve set. Of course, you may adjust it according to the bitrates that you’re expecting. Click “Create Filter”.

7. Go back to the Asset tab. Navigate to the asset that you encoded earlier and publish a locator if you haven’t done so. Then right click on the asset and select Playback – with Azure Media Player – with a global filter – smallerscreen.


Figure 5 – Playing back the video with global filter

8. Now you can see that the video is played through Azure Media Player with the following URL:

http://<media services account><GUID>/<video>.ism/manifest(filter=smallerscreen)

Navigate to the “quality” selection button left of the “sound” icon and notice the qualities that the “smaller screen” user can select.


Figure 6 – Playback experience for smaller screen user

With that URL, you will see that the “smallerscreen” user can only watch the lowest 4 qualities (bitrates). Likewise, you may create another filter for the “largerscreen” user.
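Applying a filter at playback time is just a matter of appending `(filter=<name>)` to the manifest URL, as the URL above shows. A small Python sketch (the host and asset path are placeholders, not a real endpoint):

```python
def filtered_manifest_url(base_manifest_url, filter_name):
    """Append a Dynamic Manifest filter to a streaming manifest URL."""
    return f"{base_manifest_url}(filter={filter_name})"

# Placeholder account and asset path, for illustration only.
base = "http://myaccount.streaming.example.net/locator-guid/video.ism/manifest"
print(filtered_manifest_url(base, "smallerscreen"))
```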

The interesting thing to note here is that we store only one set of assets in Media Services, without having to store multiple copies.

B. Using Dynamic Manifest for Bitrate Filtering through .NET SDK

<to be updated>


Although Dynamic Manifest can be used in other use cases (such as timeline trimming), this post will focus on rendition, specifically on bitrate filtering for the “larger screen vs. smaller screen” scenario.

The later part of the post also covers how to create a filter with a UI-based tool (Azure Media Services Explorer) and the .NET SDK, although you can also achieve this with the REST API.


Please see the following post by Jason and Cenk explaining the Dynamic Manifest feature.


This article is reviewed by

  • Jason Suess – Principal PM Manager, Azure Media Services
  • Cenk Dingiloglu – Senior Program Manager, Azure Media Services
Posted in Azure

Windows Azure On-boarding Guide for Dev / Test

This post is intended for customers who are considering (or have decided on) Windows Azure for their dev/test environment.

Windows Azure’s Values for Different Stakeholders in Dev / Test Scenario



Application sponsor

  • Faster time to market: Faster infrastructure provisioning and rollout times on Windows Azure enable your application teams to make changes faster.
  • Lower cost: Minimize your investment and pay only for what you use on Windows Azure for testing and development.
  • Less risk: Minimize your upfront investment using Windows Azure, with the option to expand rapidly as required.

BU IT / Developers

  • Faster time to market: Instantly provision any amount of test/development resources, when you need them.
  • Lower cost: Only pay for what you use with metered charge-back for all resources on the public cloud.
  • Less risk: Moving test/dev to Windows Azure gives you access to capacity when you need it, while complying with governance policies set by central IT.

Central IT / Infrastructure Ops

  • Faster time to market: Allow your users to self-provision based on a set of policies and rules that you set upfront.
  • Lower cost: Free up on-premises DC capacity by moving test/development to Windows Azure.
  • Less risk: Get back control over your IT environment, while giving your end-users the same benefits as public cloud / infrastructure ownership.

The Solution

What does the solution look like for dev/test? Well, it could be as simple as spinning up a VM that the developer manages from on-premises, as you can see in Solution 1 below. Or it might be more advanced, as shown in Solution 2, which involves a Virtual Network with a site-to-site VPN.

Solution 1 – Simple


Solution 2 – Advanced


Get Started Resources

Here are some of the on-boarding guides to get you started with dev/test in Windows Azure:

Creating and preparing Infrastructure Services

Managing Infrastructure Services via Scripting


A recording session that’s worth checking out: Building Your Lab, Dev, and Test Scenarios in Windows Azure Infrastructure Services (IaaS).

Hope this helps.

Posted in Azure, IaaS

Windows Azure stands out among 5 large IaaS providers in an independent comparative analysis by Cloud Spectator

Recently, I found an analysis paper about cloud server performance conducted by an independent cloud performance metrics company, Cloud Spectator.

This post summarizes the paper, and I definitely encourage you to read the full report over here:….pdf

Objective of analysis study

The objective of the paper is to determine the price-performance value of the cloud providers, providing some valuable insight for customers when selecting their preferred cloud vendor.


Figure 1 – Principle of value proposition [figure from the paper]

Which providers were compared

The study (done in June 2013) compared five large IaaS providers in the industry:



The tests were run three times over 5 consecutive days: May 25, 2013 – May 29, 2013.

VM Size

The most common cloud server size, Medium (or an equivalent/similar setup), was chosen from each of the 5 cloud vendors:


Figure 2 – Medium VM Spec [figure from the paper]


The tests used Unixbench 5.1.3 to benchmark the performance of a Linux OS running on virtualized infrastructure, producing a rating out of 10 stars. Details of Unixbench can be found here:


Two important pieces of info are collected:

  • Performance: how well the provider scores on Unixbench, and how consistent the scores are.
  • Price-Performance: after the performance scores are established, cost is factored in to understand how much performance a user can expect in return for the money spent, i.e., the value.

The Results

Performance Only

The performance result shows that Windows Azure provides the best performance and notably 3 times higher than AWS EC2 on average!


Figure 3 – Performance Only Result [figure from the paper]


Figure 4 – Average Unixbench Score, derived from Figure 3 [figure from the paper]


The retail hourly prices of the cloud providers were captured on a pay-as-you-go basis as of the date of the experiment.


Figure 5 – Pay-per-hour price [figure from the paper]

By taking each score and dividing it by the price, we get a relative price-performance score for each provider. Here are the scores (the higher, the better):


Figure 6 – Price-Performance Result [figure from the paper]

CloudSpecs Score

The CloudSpecs score is a further normalized value derived from Figure 6, scaling the highest value to 100. Here are the scores:


With the CloudSpecs scores, the ratios between the providers are formed as follows:
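The arithmetic behind the price-performance and CloudSpecs scores is straightforward: divide each Unixbench score by the hourly price, then scale so the best provider gets 100. A Python sketch with placeholder scores and prices (not the paper’s actual numbers):

```python
# Illustrative Unixbench scores and hourly prices; NOT the paper's data.
providers = {
    "Provider A": {"score": 1200.0, "price": 0.12},
    "Provider B": {"score": 400.0,  "price": 0.12},
    "Provider C": {"score": 600.0,  "price": 0.24},
}

# Price-performance: Unixbench score divided by hourly price.
price_perf = {name: p["score"] / p["price"] for name, p in providers.items()}

# CloudSpecs-style normalization: scale so the best provider scores 100.
best = max(price_perf.values())
cloudspecs = {name: round(100 * v / best) for name, v in price_perf.items()}
print(cloudspecs)  # Provider A -> 100
```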



While acknowledging that Unixbench is just one test, customers may always consider other factors when selecting their cloud vendor.

To conclude, Amazon EC2 and Windows Azure offer the lowest price at $0.12 per hour. However, Windows Azure performs much better than EC2 in this experiment (approximately 3 times better). The experiment also shows that Rackspace scores worst in terms of price-performance.

Posted in Azure, Cloud

SQL Database Automated Backup–Before and Now

SQL Database and its three replicas

You might have heard that SQL Database (formerly SQL Azure) is a scalable and highly durable database service in the cloud, and that multiple replicas are automatically provisioned when we create a database. It’s true that three replicas are stored for each database. This is, in fact, purely for HA purposes, in case one of the machines hosting the SQL Database service goes down.

These three replicas are transparent to and inaccessible by customers. In other words, if we accidentally delete a table (or the entire database), it’s really gone! (Luckily, mine was only a demo database.)

I experienced this once and tried to contact Azure Support. There was no way to revive our deleted database.

Designing and archiving it on our own

As cloud architects, we should really be aware of this. In fact, for many projects I’ve worked on over the last three years, an archival or backup mechanism has always been part of my design. This is because, at that time, there was no built-in automated backup in SQL Database for customers.

How did I do that?

V1. sqlcmd and bcp + Worker Role = Automated Backup

In the early days, we used sqlcmd to back up the schema script and bcp to back up the data. This may sound a bit surprising to some of you, but that was really all we could do at the time. We created a worker role that ran on a schedule (typically daily) to perform the backup and push the data to Azure Blob Storage.

The output is one .tsql file, plus one .dat file per database table.
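To illustrate the data-export half of this V1 approach, here is a Python sketch that builds one `bcp ... out` command per table (the server, credentials, database, and table names are hypothetical; the real worker role also produced the .tsql schema script and uploaded the output to Blob Storage):

```python
def build_bcp_commands(database, tables, server, user, password):
    """Build one 'bcp ... out' command per table, exporting
    native-format .dat files (the -n switch)."""
    return [
        f"bcp {database}.dbo.{table} out {table}.dat -n -S {server} -U {user} -P {password}"
        for table in tables
    ]

# Hypothetical database, tables, and server, for illustration only.
cmds = build_bcp_commands(
    "ShopDb", ["Orders", "Customers"],
    "myserver.database.windows.net", "backupuser", "***",
)
for c in cmds:
    print(c)
```

A scheduler in the worker role would then run each command and push the resulting .dat files to Blob Storage.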

V2. bacpac + Worker Role = Automated Backup

Later, Microsoft introduced bacpac as part of the import and export solution for both SQL Server and SQL Azure. The output of this technique is a .bacpac file, similar to the .bak file we’re familiar with.

There was also a UI in the management portal that allowed us to export and import the database to Azure Storage on an on-demand basis, but it still lacked automation. Alternatively, there was an exe (command-line interface) that eventually calls a WCF service to perform the backup. We switched our design from sqlcmd + bcp to simply using that command line.

Now, it’s supported out of the box!

Finally, I noticed that it’s now built into the management portal, under SQL Database – Configuration. You can find it by setting Export Status to Automatic.

You can further specify the frequency of the backup (every N days). You can also specify the retention period to keep only the last N days of backups (so that your storage account won’t grow too big over time).


After the configuration, you can see that the bacpac is finally pushed to my storage account.


Posted in Azure, SQL Azure Database

Invitation – Community Technology Update 2013, Singapore

Community Technology Update (CTU) 2013 will be held on 27th July 2013, organised by the Community Leads from various Singapore based User Groups and MVPs. We’re putting together some of the best talents from the island (and our closest neighbour, Malaysia), in order to share our experiences across the series of Microsoft Technologies that we believe all of us truly care about.

Register now!

How do I sign up?

Follow the instructions in the URL to register –

How much does it cost?

For early bird registration, it’ll cost you $12.00.

For walk-ins on actual day, it’ll cost you $20.00. So we strongly encourage you to register beforehand so that we can cater sufficient food for everyone.

What is CTU?

CTU is in its 10th iteration. We’re proud to be organised by the Community, for the Community. In the true spirit of sharing, our speakers are all volunteers from the field, like any of you within the Microsoft ICT industry. CTU is held bi-annually and is the biggest community event in Singapore.

Who should Attend?

Anyone who’s interested in Microsoft technologies. We have a range of topics meant for:

  • IT Professionals
  • Developers
  • Database administrators

And it’s reserved specially for user group members!

Session Information

0830 – 0900  Registration
0900 – 0930  Keynote

Rooms: Level 22CF-15 (WAV track), Level 22CF-12 (ITP track), Level 22BR-01 (DEV track)

0945 – 1100
  • WAV01: Technical Overview of SVC video in Lync 2013 (Level 200). Speaker: Brenon Kwok
  • ITP01: Accelerate your Windows XP Deployment via Application Compatibility Testing with Citrix AppDNA (Level 200). Speaker: Jay Paloma
  • DEV01: Customizing SharePoint 2013 Search Experiences. Speaker: Mohd Faizal

1115 – 1230
  • WAV02: Discover the new Exchange 2013 and benefit from its improvements (Level 200). Speaker: Triston Woon
  • ITP02: Windows 8.1. Speaker: Desmond Tan
  • DEV02: What’s new: branding in SharePoint 2013. Speaker: Loke Kit Kai

1230 – 1330  Lunch Break

1330 – 1445
  • WAV03: Microsoft IO (Infrastructure Optimization) and Microsoft Technologies (Level 200). Speaker: Sarbjit Singh
  • ITP03: Secure, Centralised Administration Using PowerShell Web Access (Level 200). Speaker: Matt Hitchcock
  • DEV03: Building on the new SharePoint 2013 Apps Model? 10 things to look out for. Speaker: Patrick Yong

1500 – 1615
  • WAV04: Microsoft Business Intelligence with Excel and SharePoint 2013 (Level 200). Speaker: Tian Ann
  • ITP04: Evaluating options for tiered storage in the enterprise – a look at the options, benefits, features and use cases (Level 200). Speaker: Daniel Mar
  • DEV04: Changes on SharePoint Workflow Authoring Tools. Speaker: Emerald Tabirao

1630 – 1700  Closing Address & Lucky Draw (Level 21 Auditorium)

Useful Links

Track Information

Frequently Asked Questions

Lucky Draw

Stand a chance to win a Microsoft Surface Pro (128GB w Type Cover) worth close to $1500 in the LUCKY DRAW!!!

Surface Pro

Posted in Invitation