Using Your Smartphone’s Camera to Live Stream Through Azure Media Services

You might have seen many examples of Azure Media Services (AMS) live streaming demos done through Wirecast installed on a laptop.

Now, I’d like to share a different way to live stream, by using your smartphone’s camera. Interesting, isn’t it?

Mingfei has a post leveraging Wirecast’s iOS app here. The idea of that approach is to use the camera on your phone while still requiring Wirecast on the desktop.

In this post, I’ll show a different approach: installing a lightweight encoder on your smartphone (Windows Phone) and pushing the feed directly to an AMS live channel.

Azure Media Capture in Windows Phone

I’m leveraging the Azure Media Capture app, which you can download free from the Store. If you need to integrate this capability into your own mobile application, you can download the source code and SDK from CodePlex.

I assume you are familiar with how to do live streaming through an on-premises encoder like Wirecast. If you’re not, no issue at all – please check the third video of this post, where I recorded how to do live streaming step by step.

I’ll be using the Azure Media Services Explorer tool to manage the live channel, similar to the above-mentioned video. The only difference in this approach is that you should create the live channel with Fragmented MP4 (Smooth) as the input protocol.


Figure 1. Creating Live Channel with Live Encoding and Smooth Protocol

Optionally, you may select live (cloud) encoding, which makes a lot of sense to offload the multi-bitrate encoding from your phone to the cloud, as shown in the diagram below.

*It’s not mandatory to enable live (cloud) encoding for this demo. Enabling live/cloud encoding will make the channel take much longer to start.*


Figure 2. Architecture of Live Streaming (with Live Encoding) via Windows Phone
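If you’d rather script the channel instead of clicking through AMSE, here’s a minimal sketch of what that could look like with the v2 Media Services .NET SDK (windowsazure.mediaservices). The account credentials, channel name, and encoding preset are placeholders, and the property and enum names are from my recollection of that SDK, so treat it as an outline rather than drop-in code. The last line grabs the same Primary Input URL you’d otherwise copy from AMSE:

// Sketch: create a live channel that ingests Fragmented MP4 (Smooth), as the phone app requires.
// Credentials, names, and the preset are placeholders; API names may differ in your SDK version.
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.MediaServices.Client;

var context = new CloudMediaContext("accountName", "accountKey");

var allowAll = new ChannelAccessControl
{
    // Open ingest/preview for the demo; lock this down in production
    IPAllowList = new List<IPRange>
    {
        new IPRange { Name = "AllowAll", Address = IPAddress.Parse("0.0.0.0"), SubnetPrefixLength = 0 }
    }
};

IChannel channel = context.Channels.Create(new ChannelCreationOptions
{
    Name = "phonechannel",
    Input = new ChannelInput { StreamingProtocol = StreamingProtocol.FragmentedMP4, AccessControl = allowAll },
    Preview = new ChannelPreview { AccessControl = allowAll },

    // Optional: offload multi-bitrate encoding to the cloud (slower channel start-up)
    EncodingType = ChannelEncodingType.Standard,
    Encoding = new ChannelEncoding { SystemPreset = "Default720p" }
});

channel.Start();

// This is the "Primary Input URL" you paste into the phone app
var ingestUrl = channel.Input.Endpoints.First().Url;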

Once the channel is running, copy the Primary Input URL of that channel.


Figure 3. Copy the Input URL of the Live Channel

Next, open the Azure Media Capture app on your Windows Phone. Tap the settings icon and paste the Primary Input URL into the “Channel Ingest URL” field.

*Note that you can actually push multiple bitrates/resolutions from your phone if you prefer to, but your phone will suffer, as encoding is generally a very processor-intensive task.*


Figure 4. Azure Media Capture Settings

Tap the Start Broadcast (“red dot”) button when you’re ready. When live/cloud encoding is enabled, anticipate a longer delay (about 45 seconds).

Go back to Azure Media Services Explorer, right-click the channel, and play back the preview.


Figure 5. Playback the Preview URL

And if everything goes well, you should be able to see the live stream pushed from your phone:


Figure 6. Multi-bitrate result from the phone

What about Android?

Theoretically, you can apply a similar concept with an Android phone. There are several RTMP encoder apps for Android, such as Nano Cosmos and Broadcaster for Android.

I tried Nano Cosmos and it worked well with an AMS live channel (via RTMP).

Hope this helps.

Posted in Azure | Tagged | Leave a comment

Using Dynamic Manifest for Bitrates Filtering in Azure Media Services: Scenario-based walkthrough

I’m very excited about the release of this feature in Azure Media Services. In fact, in the past few months there have been several asks for it from customers I’ve personally engaged with.

Jason and Cenk from the Media Services team have explained how the feature works in technical detail. In this post, I’ll explain it differently, from a scenario-driven perspective, followed by the “how-to” with the UI-based Azure Media Services Explorer and also with the .NET SDK.

Customer Requirement: Bitrates filtering for different clients (browser-based and native mobile-based)

Imagine that, as an OTT provider, I’ve encoded my entire video library with H264 Adaptive Bitrate MP4 Set 720p, which produces 6 video bitrates (3400, 2250, 1500, 1000, 650, and 400 kbps).

And here is what I’d like to achieve:

  • Users connecting through browsers – larger screens (PC-based browsers, which typically mean bigger screens) – should see only the highest four bitrates (3400, 2250, 1500, and 1000 kbps). This is because I want to spare end users a “blocky” video experience (at 400 kbps).
  • Users connecting through native apps – smaller screens (Android, iOS, or Windows Phone) – should see only the lowest four bitrates (1500, 1000, 650, and 400 kbps).
    • This could be because the mobile phones aren’t capable of playing back the highest bitrates due to screen-size limitations.
    • Or it could be because I’d like to save end users’ bandwidth, especially when they’re connecting via a 3G or 4G data plan.


Figure 1: Larger screen vs. smaller screen playback experience

How do we design the most effective media workflow to handle such a scenario?

You could certainly produce / encode different assets to serve the different purposes: one asset for larger screens (encoded with the 4 highest bitrates) and another for smaller screens (encoded with the 4 lowest bitrates).

Although that works, I don’t think it’s a great idea, since you’d face these challenges:

  1. Management overhead, as you’d have different physical files / assets / locator URLs
  2. Redundant storage, which causes higher storage cost
  3. Not future-proof: imagine that in the future you introduce a “paid silver tier” in which the user can watch 5 bitrates – you’d need to re-encode your library again, which can be a cumbersome process.

A. Using Dynamic Manifest for Bitrate Filtering through Azure Media Services Explorer (AMSE)

Let me show how you can leverage the Dynamic Manifest capability (with the AMSE tool) to achieve this. The following step-by-step guide covers how this can be done in a more “elegant” way.

1. Download and install AMSE here if you haven’t done so. The feature has been available since version 3.24, but I’d still recommend you use the latest version.

2. Connect to your Azure Media Services account.

3. Prepare your content and encode it with H264 Adaptive Bitrate MP4 Set 720p.

(For steps 2 and 3, you may refer to Video 1 in this post to see how it’s done.)

4. Navigate to the “Global filters” tab, right-click, and select “Create a global filter…”.

Note: there are 2 types of filters, asset-level and global-level. We’re using a global filter in this tutorial.


Figure 2 – Creating a global filter

5. Give the global filter a name – “smallerscreen” in my example – and then navigate to the “Tracks filtering” tab.


Figure 3 – Track Filtering in creating global filter

6. Although you may add the track rules and conditions manually, I’d recommend inserting the track filtering example and modifying it from there. To do so, click “Insert tracks filtering example”.


Figure 4 – Defining bitrates in tracks filtering

Notice that in Rule1 the bitrate condition is 0–1500000 (bps), which corresponds to the bitrates up to 1500 kbps in the encoding profile I’ve set. Of course, you may adjust it according to the bitrates you’re expecting. Click “Create Filter”.

7. Go back to the Asset tab. Navigate to the asset you encoded earlier and publish a locator if you haven’t done so. Then right-click the asset and select Playback – with Azure Media Player – with a global filter – smallerscreen.


Figure 5 – Playing back the video with global filter

8. Now you can see that the video is played through Azure Media Player with the following URL:

http://<media services account><GUID>/<video>.ism/manifest(filter=smallerscreen)

Navigate to the quality-selection button to the left of the sound icon and notice the quality levels that the “smaller screen” user can select.


Figure 6 – Playback experience for smaller screen user

With that URL, you will see that the “smallerscreen” user can only watch the lowest 4 quality levels (bitrates). Likewise, you may create another filter for “largerscreen” users.

The interesting thing to note here is that we store only one set of assets in Media Services, rather than storing the content multiple times – only the manifest URL differs, as illustrated below.
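To make that concrete, here’s a tiny illustrative snippet – the host, GUID, and asset name are placeholders – showing that both audiences play the very same asset and locator; only the filter suffix on the manifest URL changes:

// One asset, one locator – the dynamic manifest filter is just a URL suffix.
// The host, GUID, and asset names below are placeholders.
string manifest = "http://myaccount.streaming.mediaservices.windows.net/"
                + "00000000-0000-0000-0000-000000000000/video.ism/manifest";

string smallerScreenUrl = manifest + "(filter=smallerscreen)"; // lowest 4 bitrates
string largerScreenUrl  = manifest + "(filter=largerscreen)";  // highest 4 bitrates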

B. Using Dynamic Manifest for Bitrate Filtering through .NET SDK

<to be updated>
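While that section is still marked as to be updated, here’s a minimal sketch of what creating the “smallerscreen” global filter could look like with the v2 Media Services .NET SDK. The class and method names are based on my recollection of the SDK’s filter API and may differ slightly, so treat it as an outline:

// Sketch: create a global filter that keeps audio plus video tracks up to 1500 kbps.
// Class and method names follow my recollection of the v2 SDK's filter API and may differ.
using System.Collections.Generic;
using Microsoft.WindowsAzure.MediaServices.Client;

var context = new CloudMediaContext("accountName", "accountKey");   // placeholder credentials

var trackConditions = new List<FilterTrackSelectStatement>
{
    // Keep all audio tracks
    new FilterTrackSelectStatement
    {
        PropertyConditions = new List<IFilterTrackPropertyCondition>
        {
            new FilterTrackTypeCondition(FilterTrackType.Audio)
        }
    },
    // Keep only video tracks whose bitrate falls between 0 and 1,500,000 bps
    new FilterTrackSelectStatement
    {
        PropertyConditions = new List<IFilterTrackPropertyCondition>
        {
            new FilterTrackTypeCondition(FilterTrackType.Video),
            new FilterTrackBitrateRangeCondition(new FilterTrackBitrateRange(0, 1500000))
        }
    }
};

// No presentation time range here (null is assumed to mean "no timeline trimming")
IStreamingFilter filter = context.Filters.Create("smallerscreen", null, trackConditions);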


Although Dynamic Manifest can be used for other use cases (such as timeline trimming), this post focuses on rendition filtering, specifically bitrate filtering for the “larger screen vs. smaller screen” scenario.

The later part of the post also covers how to create a filter with the UI-based tool (Azure Media Services Explorer) and with the .NET SDK, although you can also achieve this with the REST API.


Please see the following post by Jason and Cenk explaining the Dynamic Manifest feature.


This article was reviewed by:

  • Jason Suess – Principal PM Manager, Azure Media Services
  • Cenk Dingiloglu – Senior Program Manager, Azure Media Services
Posted in Azure | Tagged | Leave a comment

Azure Media Services Demo Videos

Hey folks, recently I created 3 Azure Media Services videos, hosted on Office Mix.

Please check them out:

  1. Simple Video On Demand Workflow (14 mins)
    This is ideally where you should start with Azure Media Services. This demo shows you how to create a simple on-demand media workflow with Azure Media Services Explorer.

  2. Transcript Generation with Media Indexer (16 mins)
    Media Indexer is a powerful speech-to-text engine, using similar technology to Xbox and Cortana. This demo shows you how to generate a transcript with Media Indexer to improve searchability and provide captions for video.

  3. Live Streaming (17 mins)
    Heard about the Sochi Olympics and FIFA World Cup digital streaming? Yes, both of these events were digitally streamed through Azure Media Services Live Streaming. Check out the video to see how easily you can live-stream your feed to the world.
Posted in Azure | Tagged | 3 Comments

Windows Azure On-boarding Guide for Dev / Test

This post provides an on-boarding guide for customers who are considering (or have decided on) Windows Azure for their dev/test environment.

Windows Azure’s Values for Different Stakeholders in Dev / Test Scenario



Faster time to market

  • Application sponsor: Faster infrastructure provisioning and rollout times on Windows Azure enable your application teams to make changes faster
  • BUIT / Developers: Instantly provision any amount of test/development resources, when you need them
  • Central IT / Infrastructure Ops: Allow your users to self-provision based on a set of policies and rules that you set upfront

Lower cost

  • Application sponsor: Minimize your investment and pay only for what you use on Windows Azure for testing and development
  • BUIT / Developers: Only pay for what you use with metered charge-back for all resources on the public cloud
  • Central IT / Infrastructure Ops: Free up on-premises DC capacity by moving test/development to Windows Azure

Less risk

  • Application sponsor: Minimize your upfront investment using Windows Azure, with the option to expand rapidly as required
  • BUIT / Developers: Moving test/dev to Windows Azure gives you access to capacity when you need it, while complying with governance policies set by central IT
  • Central IT / Infrastructure Ops: Get back control over your IT environment, while giving your end-users the same benefits as public cloud / infrastructure ownership


The Solution

What does the solution look like for dev/test? Well, it could be as simple as spinning up a VM that the developer manages from on-premises, as you can see in Solution 1 below. Or it might be more advanced, as shown in Solution 2, which involves a Virtual Network with a site-to-site VPN.

Solution 1 – Simple


Solution 2 – Advanced


Get Started Resources

Here are some of the on-boarding guides to get you started with dev/test in Windows Azure:

Creating and preparing Infrastructure Services

Managing Infrastructure Services via Scripting


A recorded session that’s worth checking out: Building Your Lab, Dev, and Test Scenarios in Windows Azure Infrastructure Services (IaaS)

Hope this helps.

Posted in Azure, IaaS | Tagged | Leave a comment

Windows Azure stands out among 5 large IaaS providers in an independent comparative analysis by Cloud Spectator

Recently, I found an analysis paper about cloud server performance conducted by an independent cloud performance metrics company, Cloud Spectator.

This post summarizes the paper, and I definitely encourage you to read the full report here: ….pdf

Objective of analysis study

The objective of the paper is to determine the price-performance value of the cloud providers, providing some valuable insight for customers when selecting their preferred cloud vendor.


Figure 1 – Principle of value proposition [figure from the paper]

Who is being compared

The study (done in June 2013) compared five large IaaS providers in the industry:



The tests were run 3 times over 5 consecutive days: May 25, 2013 – May 29, 2013.

VM Size

The most common size of cloud server, a Medium size (or an equivalent/similar setup), was chosen from each of the 5 cloud vendors:


Figure 2 – Medium VM Spec [figure from the paper]


The tests used Unixbench 5.1.3 to benchmark the performance of a Linux OS running on virtualized infrastructure, producing a rating out of 10 stars. Details of Unixbench can be found here:


Two important pieces of information are collected:

  • Performance: how well the provider scores on Unixbench, and how consistent the scores are.
  • Price-performance: after performance scores are established, cost is factored in to understand how much performance a user can expect in return for the money spent, i.e., the value.

The Results

Performance Only

The performance result shows that Windows Azure provides the best performance – notably about 3 times higher than AWS EC2 on average!


Figure 3 – Performance Only Result [figure from the paper]


Figure 4 – Average Unixbench Score, derived from Figure 3 [figure from the paper]


The retail hourly prices of the cloud providers were captured on a pay-as-you-go basis as of the date of the experiment.


Figure 5 – Pay-per-hour price [figure from the paper]

By taking each score and dividing it by the price, we get a relative price-to-performance score for each provider. Here are the scores (the higher, the better):


Figure 6 – Price-Performance Result [figure from the paper]

CloudSpecs Score

The CloudSpecs score is a further normalized value derived from Figure 6, scaling the highest value to 100. And here are the scores:


With the CloudSpecs scores, the ratio between the providers is formed as follows:
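To make the arithmetic concrete, here’s a small illustrative calculation of a price-performance figure and a CloudSpecs-style normalized score. The numbers are made up for illustration and are not the report’s actual results:

using System;
using System.Linq;

class PricePerformanceDemo
{
    static void Main()
    {
        // Hypothetical average benchmark scores and pay-as-you-go hourly prices (not the report's data)
        string[] providers = { "Provider A", "Provider B", "Provider C" };
        double[] scores    = { 1500, 500, 900 };
        double[] prices    = { 0.12, 0.12, 0.16 };

        // Price-performance = benchmark score divided by hourly price
        double[] pricePerf = scores.Select((s, i) => s / prices[i]).ToArray();

        // CloudSpecs-style score: normalize so the best provider scores 100
        double best = pricePerf.Max();
        for (int i = 0; i < providers.Length; i++)
        {
            Console.WriteLine("{0}: {1:F0}", providers[i], pricePerf[i] / best * 100);
        }
    }
}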



While acknowledging that Unixbench is just one test, customers may always consider other factors when selecting their cloud vendor.

To conclude, Amazon EC2 and Windows Azure offer the lowest price at $0.12 per hour. However, Windows Azure performs much better than EC2 in this experiment (approximately 3 times better). The experiment also shows that Rackspace scores worst in terms of price-performance.

Posted in Azure, Cloud | 2 Comments

SQL Database Automated Backup–Before and Now

SQL Database and its three replicas

You might have heard that SQL Database (formerly SQL Azure) is a scalable and highly durable database service in the cloud, and that multiple replicas are automatically provisioned when we create a database. It’s true that three replicas are stored for each database. This is, in fact, purely for HA purposes, in case one of the machines hosting the SQL Database service goes down.

These three replicas are transparent to and inaccessible by customers. In other words, if we accidentally delete one of the tables (or the entire database), it’s really gone! (Luckily it was only a demo database.)

I experienced that before and tried contacting Azure Support; there was no way to revive our deleted database.

Design and archive it on our own

As cloud architects, we should really be aware of this. In fact, for many projects I’ve worked on over the last three years, an archival or backup mechanism has always been part of my design. This is because, at that time, there was no built-in automated backup in SQL Database for customers.

How did I do that?

V1. sqlcmd and bcp + Worker Role = Automated Backup

In the early days, we used sqlcmd to back up the schema scripts and bcp to back up the data. This may sound a bit surprising to some of you, but that was really all we could do at the time. We created a worker role that ran on a schedule (typically daily) to perform the backup and push the data to Azure Blob Storage.

The output is one .tsql file plus one .dat file per database table.
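Here’s a minimal sketch of that V1 approach, assuming the classic Azure Storage client library; the bcp arguments, server, credentials, and container names are placeholders:

// Sketch of V1: shell out to bcp per table, then push the .dat file to blob storage.
// Connection details, paths, and names below are placeholders.
using System;
using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.Storage;

public class SqlDatabaseBackup
{
    public void BackupTable(string table)
    {
        string outFile = Path.Combine(Path.GetTempPath(), table + ".dat");

        // Export the table data with bcp (the schema is scripted separately via sqlcmd)
        var bcp = Process.Start("bcp.exe", string.Format(
            "MyDb.dbo.{0} out \"{1}\" -n -S myserver.database.windows.net -U user -P password",
            table, outFile));
        bcp.WaitForExit();

        // Push the .dat file into a dated folder in blob storage
        var account = CloudStorageAccount.Parse("<storage connection string>");
        var container = account.CreateCloudBlobClient().GetContainerReference("sqlbackups");
        container.CreateIfNotExists();

        var blob = container.GetBlockBlobReference(
            DateTime.UtcNow.ToString("yyyyMMdd") + "/" + table + ".dat");
        using (var stream = File.OpenRead(outFile))
        {
            blob.UploadFromStream(stream);
        }
    }
}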

V2. bacpac + Worker Role = Automated Backup

Later, Microsoft introduced bacpac as part of the import/export solution for both SQL Server and SQL Azure. The output of this technique is a .bacpac file, which is similar to the .bak file we’re familiar with.

There was also a UI in the management portal that allowed us to export and import the database to Azure Storage on an on-demand basis, but it still lacked an automated way. Alternatively, there was an exe (command-line interface) that eventually calls a WCF service to perform the backup. We tweaked our design from sqlcmd + bcp to simply using that command line.

Now, it’s supported out of the box!

Finally, I noticed that it’s now provided built-in in the management portal, under SQL Database – Configuration. You can enable it by setting Export Status to Automatic.

You can further specify the frequency of the backup (every N days) and the retention, to keep only the last N days (so that your storage account won’t grow too big over time).


After the configuration, you can see that the bacpac files are pushed to my storage account.


Posted in Azure, SQL Azure Database | Leave a comment

Invitation – Community Technology Update 2013, Singapore

Community Technology Update (CTU) 2013 will be held on 27th July 2013, organised by the Community Leads from various Singapore-based user groups and MVPs. We’re putting together some of the best talent from the island (and our closest neighbour, Malaysia), in order to share our experiences across the series of Microsoft technologies that we believe all of us truly care about.

Register now!

How do I sign up?

Follow the instructions in the URL to register –

How much does it cost?

For early bird registration, it’ll cost you $12.00.

For walk-ins on actual day, it’ll cost you $20.00. So we strongly encourage you to register beforehand so that we can cater sufficient food for everyone.

What is CTU?

CTU is in its 10th iteration – we’re proud to be organised by the community, for the community. In the true spirit of sharing, our speakers are all volunteers from the field, like any of you within the Microsoft ICT industry. CTU is held bi-annually and is the biggest community event in Singapore.

Who should Attend?

Anyone who’s interested in Microsoft technologies – we have a range of topics meant for:

  • IT Professionals
  • Developers
  • Database administrators

And it’s reserved specially for user group members!

Session Information

0830 – 0900: Registration

0900 – 0930: Keynote

0945 – 1100
  • Level 22CF-15: WAV01 – Technical Overview of SVC video in Lync 2013 (Level 200). Speaker: Brenon Kwok
  • Level 22CF-12: ITP01 – Accelerate your Windows XP Deployment via Application Compatibility Testing with Citrix AppDNA (Level 200). Speaker: Jay Paloma
  • Level 22BR-01: DEV01 – Customizing SharePoint 2013 Search Experiences. Speaker: Mohd Faizal

1115 – 1230
  • Level 22CF-15: WAV02 – Discover the new Exchange 2013 and benefit from its improvements (Level 200). Speaker: Triston Woon
  • Level 22CF-12: ITP02 – Windows 8.1. Speaker: Desmond Tan
  • Level 22BR-01: DEV02 – What’s new: branding in SharePoint 2013. Speaker: Loke Kit Kai

1230 – 1330: Lunch Break

1330 – 1445
  • Level 22CF-15: WAV03 – Microsoft IO (Infrastructure Optimization) and Microsoft Technologies (Level 200). Speaker: Sarbjit Singh
  • Level 22CF-12: ITP03 – Secure, Centralised Administration Using PowerShell Web Access (Level 200). Speaker: Matt Hitchcock
  • Level 22BR-01: DEV03 – Building on the new SharePoint 2013 Apps Model? 10 things to look out for. Speaker: Patrick Yong

1500 – 1615
  • Level 22CF-15: WAV04 – Microsoft Business Intelligence with Excel and SharePoint 2013 (Level 200). Speaker: Tian Ann
  • Level 22CF-12: ITP04 – Evaluating options for tiered storage in the enterprise – a look at the options, benefits, features and use cases (Level 200). Speaker: Daniel Mar
  • Level 22BR-01: DEV04 – Changes on SharePoint Workflow Authoring Tools. Speaker: Emerald Tabirao

1630 – 1700: Closing Address & Lucky Draw (Level 21 Auditorium)

Useful Links

Track Information

Frequently Asked Questions

Lucky Draw

Stand a chance to win a Microsoft Surface Pro (128GB w Type Cover) worth close to $1500 in the LUCKY DRAW!!!

Surface Pro

Posted in Invitation | Leave a comment

ASP.NET Bad Practices: What you shouldn’t do in ASP.NET (Part 4)

So far I’ve covered 15 bad practices in the past three posts, and I truly hope that all ASP.NET developers are aware of them, including the consequences of each.

Today, I’ll cover another 5 in this fourth part.

16. Style Properties on Controls


  • Relying on the four thousand control-specific style properties, e.g.
    • EditItemTemplate-AlternateItem-Font-ForeColor-Opacity-Level :S


  • Harder maintainability
  • Bigger page size, resulting in slower performance, since the inline styling is not cached


  • Use CSS stylesheets instead


17. Filtering records at the application level, not the database level


  • Bringing the whole list of records back from the database and filtering them at the application level

using (NorthwindEntities ent = new NorthwindEntities())
{
    // Pulls every product over the wire, then filters in memory
    var productList = ent.Products.ToList();
    foreach (var product in productList)
    {
        if (product.UnitPrice > 1000) { /* process the product */ }
    }
}


  • Unnecessary network traffic
  • Unnecessary processing resources


  • Write a proper query (or LINQ query) against the database
  • Get only what you need

using (NorthwindEntities ent = new NorthwindEntities())
{
    // The filter is translated to SQL, so only matching rows are returned
    var productList = ent.Products.Where(x => x.UnitPrice > 1000).ToList();
    foreach (var product in productList) { /* process the product */ }
}

18. Cookieless Form Auth & Session


  • Enabling cookieless forms authentication or session state


  • It could make your users the victims of session-hijacking attacks


  • Enable “require cookies” for these features
  • Consider using only secure (SSL) cookies for sites serving sensitive information


19. Missing “!IsPostback” check


  • Forget the !IsPostBack check if you’re not expecting the execution on every postbacks.
  • You can say that, this is so fundamental.
  • Yes it is, but I’ve still seen quite couple of developers make this mistake!
protected void Page_Load(object sender, EventArgs e)
    //initialize the code here


  • Overhead from unnecessary calls may occur
  • Values may be reset to incorrect / unexpected values on postback


  • Understand what you’re really trying to achieve
  • Add the !IsPostBack check if you only want to set the value the first time the page loads.

protected void Page_Load(object sender, EventArgs e)
{
    // Runs the initialization only on the first load, not on postbacks
    if (!IsPostBack) { /* initialize here */ }
}

20. Putting Non-common scripts in MasterPages


  • Putting unnecessary / non-common scripts or code in master pages


  • All pages using the master page inherit the scripts
  • Inappropriate usage causes inefficiency
  • Huge page size


  • Put only what really needs to be shared across child pages in the master page
  • Consider using nested master pages when only some of the child pages need to inherit the scripts


That’s all for today’s 5 bad practices. I hope to compile some more and share them with you in future posts.

See you!

Posted in ASP.NET, Bad Practices | Tagged | Leave a comment

ASP.NET Bad Practices: What you shouldn’t do in ASP.NET (Part 3)

Hello everyone! I hope the first and second articles were useful to you. This is the third article in the ASP.NET Bad Practices: What you shouldn’t do in ASP.NET series. The next five bad practices are just as important as those discussed earlier.

Some of them are related to web.config. They are as follows:

11. Turning “off” Custom Error in Production


  • Setting Custom Errors to OFF in production


  • Source code, stack traces, and other info will be exposed
  • Versions of ASP.NET, servers, etc. are exposed


  • Of course, set it to On or RemoteOnly
  • Consider using a “friendly” custom error page


12. Setting EnableViewStateMac=false in production


  • Setting EnableViewStateMac = false
  • Don’t set it to false even if you’re not using viewstate




  • Always set it to TRUE



13. Turning Off Request validation


  • Turning off RequestValidation
  • RequestValidation warns the developer when potentially dangerous (XSS – Cross Site Scripting) input is submitted; turning it off removes that protection.
  • Here’s a screenshot of the warning:



  • It’s acceptable only if you know what you’re doing
  • Make sure that everything is properly HTML-encoded


  • It creates an opportunity for cross-site scripting attacks


  • It’s actually on by default – leave it on.
  • Use a rich text editor with a built-in HTML-encoding feature
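If you do turn request validation off for a specific page, every piece of user input must be encoded before it’s rendered back. A small sketch (the control and handler names are illustrative):

protected void SubmitButton_Click(object sender, EventArgs e)   // illustrative handler and control names
{
    // HttpUtility lives in System.Web; Server.HtmlEncode is an equivalent shortcut on the page.
    // Encoding makes <script> and similar payloads render as harmless text.
    commentLabel.Text = HttpUtility.HtmlEncode(commentTextBox.Text);
}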


14. Too Much “inline” javascript / css


  • Writing too much inline JavaScript / CSS in the ASPX / HTML pages


  • Lack of caching
  • Harder code maintenance


  • Put the scripts and styles in separate files
  • The files will be cached by browsers
  • *Make use of a CDN (Content Delivery Network) to improve performance further


15. Impersonation: do you really need to do so?


  • Overuse / improper use of impersonation
  • Especially impersonating an “admin” user


  • Poses a security risk
  • Prevents the efficient use of connection pooling when accessing downstream databases
  • Performance degradation


  • Clarify:
    • Do you really need to impersonate?
  • If you do, remember these:
    • Consider using programmatic instead of declarative impersonation (see the sketch below)
    • When impersonating programmatically, be sure to revert to the original context
  • Alternative approaches depend on the scenario
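If you conclude that impersonation really is required, here’s a minimal sketch of the programmatic form. It assumes Windows authentication (so the current user is a WindowsIdentity); the using block guarantees the original context is restored:

protected void AccessDownstreamResource()   // illustrative method on a page using Windows authentication
{
    // Requires System.Security.Principal and System.Web
    var windowsIdentity = (WindowsIdentity)HttpContext.Current.User.Identity;

    using (WindowsImpersonationContext impersonation = windowsIdentity.Impersonate())
    {
        // Access the downstream resource as the impersonated user here
    }   // Dispose() reverts to the original application identity
}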

Some References On This Point:


I’ll continue updating this series in the future, making it an awesome set of posts. Stay tuned.

Posted in ASP.NET, Bad Practices | Tagged | Leave a comment

ASP.NET Bad Practices: What you shouldn’t do in ASP.NET (Part 2)

This is the second article in the ASP.NET Bad Practices: What you shouldn’t do in ASP.NET series of blog posts. You may want to take a look at the first article here. Now let’s carry on and discuss the next 5 taboos in ASP.NET.

6. Leftover Debug Code



*Leaving debug=“true” in the production environment


  • Longer compilation time
  • Longer code execution
  • More memory usage at runtime
  • Images and scripts are not cached

You can read the post by Scott Gu and Scott Hanselman on how serious this bad practice is.


Always set debug=“false” in the production

*By default, the web.config transformation will set debug to false during publishing, so don’t try to be naughty by turning it back to true.


7. Improper usage of static Variable


Using static variables in ASP.NET improperly

UNLESS you know what you’re doing and understand the impact


  • Inconsistent values during concurrent access
  • The value is shared across other requests / users
  • The value might be overwritten by one request or another


  • Read-only scenario => use const or readonly
  • Maintaining a value across postbacks => use viewstate
  • Maintaining a value across multiple pages => use session
  • Caching data => use cache


Static variables are useful for:

  • A lock object for synchronizing multi-threaded writes
  • Application-wide object sharing (static vs. the Application object)
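As a quick sketch of both the pitfall and the legitimate use, compare a per-user value wrongly kept in a static field with a static field used purely as a lock object (class and field names are illustrative):

public class HitCounter
{
    // BAD: a static field holding per-user data – it is shared by every request in the app domain
    public static string CurrentUserName;

    // GOOD: a static field used purely as a lock object for synchronizing writes
    private static readonly object SyncRoot = new object();
    private static int _totalHits;   // genuinely application-wide state

    public static void RecordHit()
    {
        lock (SyncRoot)
        {
            _totalHits++;   // serialized so concurrent requests don't lose updates
        }
    }
}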


8. Performing heavy tasks in databound event


  • Calling heavy tasks in each data-bound event
  • Heavy tasks:
    • SQL calls
    • Web service calls
  • Data-bound events:
    • RowDataBound in GridView
    • ItemDataBound in Repeater
  • *Especially when there is NO PAGING


It will be called N times, where N denotes the page size.


It’s less of a problem if your page size is relatively small – but still be careful!


  • Use a custom data source to enable server-side paging
  • Generate the data from a view (if you’re using EF)
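A common fix, sketched below with hypothetical entity and control names, is to fetch the lookup data once before binding instead of issuing a SQL or web service call inside RowDataBound for every row:

// Hypothetical sketch: load lookup data once, not once per bound row.
// Requires System.Collections.Generic, System.Linq and System.Web.UI.WebControls.
private Dictionary<int, string> _categoryNames;

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        using (var ent = new ShopEntities())   // hypothetical EF context
        {
            // One query for the lookup table, kept in memory for the duration of the bind
            _categoryNames = ent.Categories.ToDictionary(c => c.CategoryID, c => c.CategoryName);
            productsGridView.DataSource = ent.Products.ToList();
            productsGridView.DataBind();
        }
    }
}

protected void productsGridView_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        var product = (Product)e.Row.DataItem;
        // Dictionary lookup instead of a SQL or web service call per row
        e.Row.Cells[1].Text = _categoryNames[product.CategoryID];
    }
}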


9. Breaking the stack unnecessarily when you re-throw an exception


Breaking the stack trace unnecessarily by re-throwing an exception with “throw ex”


  • You lose the original stack trace
  • It’s harder to trace back / debug which code really caused the error in production


It’s acceptable only if you really expect that outcome


  • Use “throw” on its own
  • Or wrap the exception in another exception while retaining the original as the inner exception
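A quick sketch of the three variants (the method, exception type, and logging helper are hypothetical); only the first two preserve useful diagnostics:

try
{
    ProcessOrder();                    // hypothetical method that may fail
}
catch (Exception ex)
{
    Log(ex);                           // hypothetical logging helper

    throw;                             // GOOD: re-throws and keeps the original stack trace

    // throw new OrderProcessingException("Order processing failed", ex);
    //                                  // GOOD: wraps it; original kept as InnerException
    // throw ex;                       // BAD: resets the stack trace to this line
}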


10. Storing clear-text password in config files


Storing clear-text passwords in config files


  • Easy to get stolen
  • Unauthorized access


  • Encrypt the sensitive configuration sections, e.g.:

aspnet_regiis -pe "connectionStrings" -app "/SampleApplication"

  • Combine this with another mechanism, such as a certificate
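One nice property of aspnet_regiis encryption is that application code doesn’t change at all – ASP.NET decrypts the protected section transparently at runtime. For example (the connection string name is illustrative):

// Requires System.Configuration (ConfigurationManager).
// The section is decrypted transparently at runtime when it was protected with aspnet_regiis.
string connectionString =
    ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString;   // illustrative name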


OK, I think we’re good to stop here today. I’ll continue this in the next post. See you!

Posted in ASP.NET, Bad Practices | Tagged | 1 Comment