Learning materials and 8 tips to pass CKAD (Certified Kubernetes Application Developer) exam

Recently I passed the CKAD (Certified Kubernetes Application Developer) exam. I must admit, this is the most challenging IT exam that I've ever taken, mainly because it is 100% hands-on: no multiple choice, no true/false. But I also feel it was the most fulfilling exam. Here's my cert.

There are two variants of Kubernetes certification, namely CKAD and CKA. In a nutshell, CKAD is designed for software developers who'd like to develop and deploy their apps on Kubernetes, while CKA is designed for the IT administrators who manage Kubernetes clusters. In general, CKA covers a broader range of topics than CKAD does. You can learn about the similarities and differences between the two exams here.

In this article, I'd like to share some tips on how to pass it.

But before getting to the tips, let's start with the learning materials.

Learning Materials

There are a lot of learning materials online; some are free, while others are not. I'd recommend enrolling in Introduction to Kubernetes if you are new to K8S, but I don't think that introductory course alone is enough to pass the exam.

The official training material is from CNCF: Kubernetes for Developers. You can check out the bundle offer here, which includes the learning material plus the exam at a discounted price.

I mainly relied on three resources:

  1. Kubernetes Learning Path, which can be downloaded here. It is a 50-day, zero-to-hero learning plan including videos, hands-on labs, documentation, etc. It covers the basic Kubernetes concepts and uses Azure Kubernetes Service as an example. The content used in this learning path is mostly free.

  2. I think it's an excellent paid course. Not only does Mumshad explain the topics really well (with nicely animated decks), but each topic also ends with hands-on exercises, as can be seen below. The left-hand side displays the quiz portal while the right-hand side is the terminal where you type in commands to arrive at the right answer.

    I also appreciate the Lightning Exams and the Mock Exams (along with their solutions) at the end of the course, which give you a feel for the real exam. They're very helpful.

  3. This helped me cross-validate what I'd learnt so far and become more fluent in dealing with the questions.

Tips for passing the exam

Now, let’s move on to some tips on how to conquer the exam!

The main challenge of this exam is TIME MANAGEMENT, so you have to manage your time very well.

1) You need to be fluent with vi/vim in editing YAML! 

You will have to deal with a lot of YAML in the exam. As such, vi/vim is the go-to option. Vi/Vim is no doubt a very powerful and popular text editor in the Linux world, but in my opinion the learning curve is steep (especially for folks coming from the "Windows" world), and it has lots of "secret / magic" commands. I would strongly encourage you to invest your time in vi/vim if you are not familiar with it, because it will save you significant time during the exam.

Here are my top 5 useful commands for YAML editing:

  1. [in command mode] SHIFT+ZZ to save and exit (I think it's more efficient than :wq!)
  2. [in command mode] dd to cut the whole line, yy to copy it, and p to paste
  3. [in command mode] :set number to display line numbers
  4. [in command mode] u to undo
  5. [in command mode] :%s/foo/bar/g to search for all occurrences of foo and replace them with bar

You can find many vim tutorials online, but be mindful not to overload your brain with vim; leave some room for the CKAD content.
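
One small optional convenience (my own suggestion, not from the exam material): a couple of vim settings so that Tab inserts two spaces, which keeps YAML indentation consistent:

$ cat >> ~/.vimrc <<'EOF'
set expandtab
set tabstop=2
set shiftwidth=2
EOF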

2) You need to invest some effort and time in Unix / Linux commands

During the exam, you will be using a web-based Unix/Linux terminal (yes, NO GUI). As such, you will need rudimentary Linux knowledge. Again, similar to the first tip, a Linux engineer shouldn't find this challenging, but if you're coming from the Windows world, make sure that you invest some time and effort in it. Other than the super basic commands such as cd, ls, cat, cp, mv, mkdir, rm, and rmdir, I found these pretty useful too:

  • grep, which is mainly used to filter some portion of the output

  • diff, which is used to compare files line by line. Here's how I used diff with the -c option to compare two YAML files, as shown in the example below.
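
For example (the file names here are hypothetical):

$ kubectl get pods --all-namespaces | grep nginx    # keep only the lines mentioning nginx
$ diff -c pod-v1.yaml pod-v2.yaml                   # compare two YAML files with context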

You can learn more about other Linux commands here.

3) Get familiar with the Kubernetes docs, especially the examples.

During the exam, you can browse the official docs (https://kubernetes.io and https://github.com/kubernetes) but you're prohibited from accessing any other links. Even though you can access the Kubernetes docs, time is a big challenge and I bet you won't have enough time to read them thoroughly. As such, get familiar with the structure of the docs, including the examples, so that you can copy and paste the appropriate example into your terminal with minimal modification.

As an example, if you search for the keyword "persistent volumes" (aka pv) on http://kubernetes.io you will find the following results. It's important to understand the purpose of each link / result.

  • The first link gives many details, including the concepts of PV and PVC
  • The third link provides a more holistic example of how to first create a PV, then a PVC, and finally consume it from a Pod. It's not as detailed as the first link about PVs, but you'll notice that the YAML examples are very helpful for understanding the end-to-end lifecycle.

    

4) Get familiar with Kubernetes objects' short names.

If you type ‘kubectl api-resources’ you will see:

I’d encourage you to memorize them at least the commonly used ones such as:

  • no for nodes
  • po for pods
  • deploy for deployments
  • rs for replicasets
  • pv for persistentvolumes
  • pvc for persistentvolumeclaims
  • ns for namespaces

It will save you a few seconds on each command, but when you add them all up, you save a good amount of time.
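
For instance, the two commands below are equivalent; the short form just saves keystrokes (the namespace name is made up):

$ kubectl get persistentvolumeclaims --namespace dev-ns
$ kubectl get pvc -n dev-ns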

5) Create your own short-forms aka alias

This is another time-saving tip. Create your own alias to save typing the long command. Always do these at the beginning of the exam.

Here are some of mine (see the usage example after the list):

  • alias k='kubectl'
  • alias kgp='kubectl get pods'
  • alias kgpan='kubectl get pods --all-namespaces'
  • alias kdp='kubectl delete pods'
  • alias kd='kubectl describe'
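
Once these are defined at the start of the session, every subsequent command gets shorter, for example (the pod name below is made up):

$ kgpan | grep -i error
$ kd pod my-pod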

You may find out how other folks improve their productivity here:

6) Imperative followed by declarative (-o yaml --dry-run)

There are typically 2 ways of creating Kubernetes objects: the imperative way (through kubectl) or the declarative way (through YAML, then kubectl apply/create). Obviously, you don't have to hand-write every single YAML line; you may copy and paste an example from the Kubernetes docs and modify it accordingly.

  • Generally speaking, the imperative way is faster, but not all options can be specified imperatively.
  • On the other hand, the declarative way is more powerful, as you can specify basically any option. But navigating and editing YAML (especially a long YAML file) is NOT FUN. Ouch, you get an error just because of an extra space; you know what I mean!

     

    “YAML is very readable but not very writeable.” – Wely Lau

Wouldn't it be great if we could combine the two techniques? Yes, you can, and here's how to do it with the help of the --dry-run and -o yaml options.

  • The --dry-run option is commonly used to validate that the command (and its parameters) are correctly specified without actually running it
  • The -o yaml option is used to produce the output in YAML format

The idea is to use an imperative command (with whatever options it supports) to produce a YAML file, then edit in the options that can't be specified imperatively.

Suppose you are asked to create a deployment with 4 replicas using the nginx image. You may do it as follows:

$ k create deploy nginx-deploy --image=nginx --dry-run -o yaml > nginx-deploy.yaml

This produces a YAML file named nginx-deploy.yaml. Then you use vi/vim to edit the replicas option in the YAML file, because we are unable to specify the replicas option in the imperative kubectl create deploy command.

Then finally run kubectl apply -f nginx-deploy.yaml.
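
As a rough sketch of the whole flow (using sed here instead of opening vim, and assuming the generated file contains the default replicas: 1):

$ k create deploy nginx-deploy --image=nginx --dry-run -o yaml > nginx-deploy.yaml
$ sed -i 's/replicas: 1/replicas: 4/' nginx-deploy.yaml     # bump the replica count
$ kubectl apply -f nginx-deploy.yaml                        # create the deployment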

For me, this technique saves more time than copying and pasting from the docs examples, especially when there are many things to edit, since heavy editing increases the risk of YAML syntax errors.

7) Be extra careful with namespace

This sounds simple, but it's a very common pitfall. Some of the questions require you to place the resource in a specific namespace.

If you forget to specify it, the resource will be placed in the "default" namespace (the default behavior), and you'll be penalized with 0 marks for that answer.

You can specify the namespace either imperatively, through the kubectl command with the -n (or --namespace) option,

or declaratively in the metadata section. Both options are sketched below:
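
(The namespace and pod names here are made up, purely for illustration.)

# imperative: -n / --namespace puts the resource in the right namespace
$ kubectl run nginx --image=nginx -n my-namespace

# declarative: the namespace lives under metadata
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: my-namespace
spec:
  containers:
  - name: nginx
    image: nginx
EOF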

8) Pick the “right” questions strategically.

Prioritize the questions you are highly confident you can complete. Keep in mind that the weight of each question is displayed at the top. This will help you determine whether it's worthwhile to spend time on a question or skip it and attempt the next one.

Above all, don't spend too much time on one question. This is a pitfall for many engineers (me included), especially when troubleshooting: the urge to get it fixed! Argh... you know what I mean. But please, practice this discipline during the exam. Move on and attempt the others; you can come back if you have time.

Sum it up

That's all for this post, folks. I shared the 3 learning materials that I used to prepare for my exam, as well as 8 tips on how to manage your time strategically during the exam.

Finally, good luck with your CKAD/CKA exam!

Posted in CKAD | 1 Comment

Should you upgrade from App Service Standard Plan to Premium v2?

Background

The App Service Premium (v2) plan has been generally available since Oct 2017. This premium plan, which uses Dv2-series VMs, promises faster processors, SSD storage, and double the memory-to-core ratio compared to the older plan, which uses "legacy" A-series VMs. However, I couldn't find any article or exercise showing an actual comparison of how much improvement it brings, so I decided to run an experiment and write this blog post.

At a glance, peeking at the Azure App Service pricing page and comparing core counts, Pv2 seems to sit at a higher price point. As such, it might be a disappointment for existing customers who are looking to upgrade.

The big questions for the current App Service customers are:

  • “Should we upgrade to this new plan?”
  • “Is the upgrade worth the effort? Note that it will be a re-deploy as there is no One-Click-Upgrade for this.”

The objective of this article is to show that, once performance is taken into account, the new plan is actually more cost-effective. This will be backed by my synthetic load testing results.

Here’s a specs and pricing screenshot from the App Service pricing page:


For the comparison, should we compare S2 with P1v2 or P2v2? I'll be comparing S2 with P1v2 in this article since they sit at the same price point.

Methodology: how the tests were being performed

  • I used the cloud-based load testing of Azure DevOps (aka VSTS).
  • The app used for this exercise was NopCommerce (ASP.NET based), which is available from the marketplace.
  • The architecture is rather simple: a Web App accessing a SQL Database.
  • The first app (let's call it NopS2App) ran on an S2 App Service Plan (2-core CPU and 3.5 GB RAM); it accessed a SQL database with the GP_Gen5_2 spec (2 vCores; 32 GB RAM).
  • The second app (let's call it NopP1v2App) ran on a P1v2 App Service Plan (1-core CPU, 3.5 GB RAM, and SSD storage); like the first app, it also accessed a SQL database with the GP_Gen5_2 spec (2 vCores; 32 GB RAM).
  • I captured 2 sets of outputs. First, performance under load, including the number of successful requests and RPS (requests per second); I consider these my primary measures. Second, the memory and CPU percentage of the App Service plan during the load testing period; these are more informational, as we'd like to observe what utilization looks like during the load tests.
  • For a fair comparison, both apps were freshly deployed in their own resource groups with identical setups.

Test #1

This is a very simple test: I just loaded a single URL endpoint, https://app.azurewebsites.net/camera-photo (which shows a product category).

  • Duration of test: 5 minutes
  • Number of users: 250
  • Load was generated from the Southeast Asia Azure region
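
Purely for illustration (and not the tool used in these tests, which ran on Azure DevOps cloud-based load testing), a rough local equivalent of hammering a single URL with 250 concurrent users for 5 minutes would be an Apache Bench run along these lines:

$ ab -c 250 -t 300 https://app.azurewebsites.net/camera-photo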

Here’s the result:

Load Test Result of Test #1

You'll notice that P1v2 was able to handle about 27% more successful requests than S2, and likewise about 27% more RPS.

Test #2

Test #2 is similar to #1, except with a higher load and a slightly longer duration.

  • Duration of test: 10 minutes
  • Number of users: 500
  • Load was generated from the Southeast Asia Azure region

Result:

Load Test Result of Test #2

You'll notice that P1v2 was able to handle about 35% more successful requests than S2, and likewise about 34% more RPS. That's a more significant improvement than in Test #1.

Test #3

Tests #3 and #4 are different from #1 and #2 in the sense that I used Visual Studio Web Test and Load Test to record a scenario, basically simulating a user who browses a product and eventually checks out and purchases it.

  • Duration of test: 5 minutes
  • Number of users: starting with 10 users, adding 10 users every 10 seconds, up to a maximum of 200 users
  • Load was generated from the East US 2 Azure region

Load Test Result of Test #3

As can be seen, there is about an 18% improvement.

Test #4

Test #4 used the same scenario as #3 except with a longer duration and higher load.

  • Duration of test: 10 minutes
  • Number of users: starting with 50 users, adding 20 users every 5 seconds, up to a maximum of 500 users
  • Load was generated from the East US 2 Azure region

Let’s review the result.

Load Test Result of Test #4

Test #4 shows about a 4% improvement.

Conclusion

After 4 rounds of tests, we can conclude the following:

  1. We've seen RPS and successful-request improvements ranging from 4% to 35%.
  2. We observed that both memory and CPU consumption increased slightly, by about 7% to 18%, except for CPU in Tests #3 and #4. I think this is very reasonable.
  3. Obviously this is a synthetic test, and results may vary depending on other factors such as the application itself.

I hope this exercise gives you more confidence to upgrade to the Premium v2 App Service plan.

Posted in App Services, Azure, Uncategorized | Leave a comment

Using Your Smartphone’s Camera to Live Stream Through Azure Media Services

You might have seen many examples of Azure Media Services (AMS) live streaming demos using Wirecast installed on a laptop, as shown in the links below:

Now, I’d like to share a different way to live stream, by using your smartphone’s camera. Interesting, isn’t it?

Mingfei has a post leveraging Wirecast's iOS app here. The idea of that approach is to use the camera on your phone while still requiring Wirecast on the desktop.

In this post, I'll show a different approach: installing a lightweight encoder on your smartphone (Windows Phone) and pushing the feed directly to an AMS live channel.

Azure Media Capture in Windows Phone

I'm using the Azure Media Capture app, which you can download from the Store for free. If you need to integrate this capability into your own mobile application, you may download the source code and SDK from CodePlex.

I assume you are familiar with how to do live streaming through an on-premises encoder like Wirecast. But if you're not, no issue at all: please check the 3rd video of this post, where I recorded how to do live streaming step by step.

I'll be using the Azure Media Services Explorer tool to manage the live channel, similar to the above-mentioned video. The only difference in this approach is that you should create a live channel with Fragmented MP4 (Smooth) as the input protocol.


Figure 1. Creating Live Channel with Live Encoding and Smooth Protocol

Optionally, you may select live (cloud) encoding, which makes a lot of sense to offload the multi-bitrate encoding from your phone to the cloud, as shown in the diagram below.

*It's not mandatory to enable live (cloud) encoding in this demo. Enabling live/cloud encoding will make the channel take much longer to start.*


Figure 2. Architecture of Live Streaming (with Live Encoding) via Windows Phone

Once the channel is running, copy the Primary Input URL of that channel.


Figure 3. Copy the Input URL of the Live Channel

Next, open the Azure Media Capture app on your Windows Phone. Click the settings icon and paste the Primary Input URL into the "Channel Ingest URL" field.

*Note that you can actually push multiple bitrates / resolutions from your phone if you prefer, but your phone will suffer, as encoding is generally a very processor-intensive task.*


Figure 4. Azure Media Capture Settings

Click the Start Broadcast ("red dot") button when you're ready. When live/cloud encoding is enabled, anticipate a longer delay (about 45 seconds).

Go back to Azure Media Services Explorer, right-click the channel, and play back the preview.


Figure 5. Playback the Preview URL

And if everything goes well, you should be able to see the live stream pushed from your phone:


Figure 6. Multi-bitrates result from phone

Updates (5 Feb 2016)

The post above shows how to play back the "preview" at the channel level. That is really good for testing, making sure that the right stream is coming in and playing correctly.

Once you're ready to publish the URL to your end customers (with a player), you should create a program on the channel. This enables capabilities like dynamic packaging (to reach various delivery protocols), dynamic encryption, dynamic manifests, etc.


Figure 7. Create Program and Get Output URL

What about Android?

Theoretically, you can apply a similar concept with an Android phone. There are several RTMP encoders for Android, such as Nano Cosmos and Broadcaster for Android.

I tried Nano Cosmos and it worked well with an AMS live channel (via RTMP).

Hope this helps.

Posted in Azure | Tagged | 3 Comments

Using Dynamic Manifest for Bitrates Filtering in Azure Media Services: Scenario-based walkthrough

I'm very excited about the release of this feature in Azure Media Services. In fact, in the past few months there have been several asks for it from customers I've personally engaged with.

Jason and Cenk from the Media Services team have explained how the feature works in technical detail. In this post, I'll explain it differently, from a scenario-driven perspective, followed by the "how-to" with the UI-based Azure Media Services Explorer and also through the .NET SDK.

Customer Requirement: Bitrates filtering for different clients (browser-based and native mobile-based)

Imagine that, as an OTT provider, I've encoded my entire video library with H264 Adaptive Bitrates MP4 Set 720p, which has 6 video bitrates (3400, 2250, 1500, 1000, 650, 400, all in kbps).

And here is what I’d like to achieve:

  • Users connecting through browsers on larger screens (PC-based browsers, which typically mean bigger screens) should only see the highest four bitrates (3400, 2250, 1500, 1000 kbps). This is because I want to prevent end-users from having a "blocky" video experience (with 400 kbps).
  • Users connecting through native apps on smaller screens (Android, iOS, or Windows Phone) should only see the lowest four bitrates (1500, 1000, 650, and 400 kbps).
    • This could be because the mobile phones are not capable of playing back the highest bitrates due to screen-size limitations.
    • Or it could be because I'd like to save end-users' bandwidth, especially when they're connecting via a 3G or 4G network on their data plan.

image

Figure 1: Larger screen vs. smaller screen playback experience

How do we design the most effective media workflow to handle such a scenario?

You could definitely produce / encode different assets to serve different purposes: one asset for larger screens (encoded with the 4 highest bitrates) and another for smaller screens (encoded with the 4 lowest bitrates).

Although it works, I don't think it's a great idea, since you face these challenges:

  1. Management overhead, as you'd have different physical files / assets / locator URLs
  2. Redundant storage, which causes higher storage costs
  3. Not future-proof: imagine that in the future you introduce a "paid silver tier" where the user can watch 5 bitrates; you'd need to re-encode your library again, which can be a cumbersome process.

A. Using Dynamic Manifest for Bitrate Filtering through Azure Media Services Explorer (AMSE)

Let me show how you can leverage the dynamic manifest capability (with the AMSE tool) to achieve this. The following step-by-step guide covers how this can be done in a more "elegant" way.

1. Download and install AMSE here if you haven't done so. The feature has been available since version 3.24, but I'd still recommend using the latest version.

2. Connect to your Azure Media Services account.

3. Prepare your content and encode them with H264 Adaptive Bitrates MP4 Set 720p

(for step 2 and 3, you may refer to Video 1 in this post on how it can be done)

4. Navigate to the "Global filters" tab, right-click, and select "Create a global filter…".

Note: there are 2 types of filters, asset-level and global-level. We're using a global filter in this tutorial.


Figure 2 – Creating a global filter

5. Give the global filter a name, in my example "smallerscreen", and then navigate to the "Tracks filtering" tab.


Figure 3 – Track Filtering in creating global filter

6. Although you may add the track rules and conditions manually, I'd recommend inserting the "tracks filtering example" and modifying it from there. To do so, click "Insert tracks filtering example".


Figure 4 – Defining bitrates in tracks filtering

Notice that in Rule1 the bitrate condition is 0-1500000, which corresponds to the 1500 kbps encoding profile I've set. Of course, you may adjust it according to the bitrates you're expecting. Click "Create Filter".

7. Go back to the Asset tab. Navigate to the asset that you encoded earlier and publish a locator if you haven't done so. Then right-click the asset and select Playback – with Azure Media Player – with a global filter – smallerscreen.


Figure 5 – Playing back the video with global filter

8. Now you can see that the video is played through Azure Media Player with the following URL:

http://<media services account>.origin.mediaservices.windows.net/<GUID>/<video>.ism/manifest(filter=smallerscreen)

Navigate to the "quality" selection button to the left of the "sound" icon and notice the qualities that the "smaller screen" user can select.


Figure 6 – Playback experience for smaller screen user

With that URL, you will see that the "smallerscreen" user can only watch the lowest 4 qualities (bitrates). Likewise, you may create another filter for the "largerscreen" user.

The interesting thing to note here is that we store only one set of assets in Media Services, without having to store it multiple times.

B. Using Dynamic Manifest for Bitrate Filtering through .NET SDK

<to be updated>

Conclusion

Although dynamic manifests can be used in other use cases (such as timeline trimming), this post focuses on rendition filtering, specifically bitrate filtering for the "larger screen vs. smaller screen" scenario.

The latter part of the post also covers how to create a filter with the UI-based tool (Azure Media Services Explorer) and the .NET SDK, although you can also achieve this with the REST API.

References

Please see the following posts by Jason and Cenk explaining the dynamic manifest feature.

Reviewers

This article is reviewed by

  • Jason Suess – Principal PM Manager, Azure Media Services
  • Cenk Dingiloglu – Senior Program Manager, Azure Media Services
Posted in Azure | Tagged | Leave a comment

Windows Azure On-boarding Guide for Dev / Test

This post is for customers who are considering (or have decided on) Windows Azure for their dev / test environment.

Windows Azure’s Values for Different Stakeholders in Dev / Test Scenario

 

 

Faster time to market

  • Application sponsor: Faster infrastructure provisioning and rollout times on Windows Azure enable your application teams to make changes faster.
  • BUIT / Developers: Instantly provision any amount of test/development resources, when you need them.
  • Central IT / Infrastructure Ops: Allow your users to self-provision based on a set of policies and rules that you set upfront.

Lower cost

  • Application sponsor: Minimize your investment and pay only for what you use on Windows Azure for testing and development.
  • BUIT / Developers: Only pay for what you use with metered charge-back for all resources on the public cloud.
  • Central IT / Infrastructure Ops: Free up on-premises DC capacity by moving test/development to Windows Azure.

Less risk

  • Application sponsor: Minimize your upfront investment using Windows Azure, with the option to expand rapidly as required.
  • BUIT / Developers: Moving test/dev to Windows Azure gives you access to capacity when you need it, while complying with governance policies set by central IT.
  • Central IT / Infrastructure Ops: Get back control over your IT environment, while giving your end-users the same benefits as public cloud / infrastructure ownership.

The Solution

What does the solution look like for dev / test? Well, it could be as simple as the developer spinning up a VM and managing it from on-premises, as you can see in Solution 1 below. Or it might be more advanced, as shown in Solution 2, which involves a virtual network with a site-to-site VPN.

Solution 1 – Simple

devtest_sol1 

Solution 2 – Advanced

devtest_sol2

Get Started Resources

Here are some of the onboarding guides to get you started with dev / test in Windows Azure:

Creating and preparing Infrastructure Services

Managing Infrastructure Services via Scripting

 

A recorded session that's worth checking out: Building Your Lab, Dev, and Test Scenarios in Windows Azure Infrastructure Services (IaaS)

Hope this helps.

Posted in Azure, IaaS | Tagged | Leave a comment

Windows Azure stands out among 5 large IaaS providers in an independent comparative analysis by Cloud Spectator

Recently, I found an analysis paper about cloud server performance conducted by an independent cloud performance metrics company, Cloud Spectator.

This post summarizes the paper, and I definitely encourage you to read the full report here: http://www.iqcloud.net/wp-content/uploads/2013/07/Cloud-Computing-Performance-A-Comparative-Analysis-of-5-Large-Cloud-IaaS….pdf

Objective of analysis study

The objective of the paper is to determine the price-performance value of the cloud providers, providing valuable insight for customers when selecting their preferred cloud vendor.


Figure 1 – Principle of value proposition [figure from the paper]

Who are being compared

The study (done in June 2013) compared five large IaaS providers in the industry:

Methodology

Timeframe

The tests were run 3 times over 5 consecutive days: May 25, 2013 – May 29, 2013.

VM Size

The most common cloud server size, Medium (or an equivalent / similar setup), was chosen from each of the 5 cloud vendors:


Figure 2 – Medium VM Spec [figure from the paper]

Benchmark

The tests used UnixBench 5.1.3 to benchmark the performance of a Linux OS running on virtualized infrastructure, producing a rating out of 10 stars. Details of UnixBench can be found here: https://code.google.com/p/byte-unixbench/
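
For reference, reproducing a UnixBench run yourself on a test VM is straightforward once you've downloaded and extracted the 5.1.3 tarball from the project page above (a rough sketch, not taken from the paper):

$ cd UnixBench    # the directory created by extracting the tarball
$ make            # build the benchmark binaries
$ ./Run           # run the full suite; the overall index score is printed at the end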

Information

Two important pieces of info are collected:

  • Performance: how well the provider scores on Unixbench, and how consistent the scores are.
  • Price-Performance: after performance scores are established, we factor in cost to understand how much performance a user can expect in return for the money spent, i.e., the value.

The Results

Performance Only

The performance results show that Windows Azure provides the best performance, notably about 3 times higher than AWS EC2 on average!


Figure 3 – Performance Only Result [figure from the paper]


Figure 4 – Average Unixbench Score, derived from Figure 3 [figure from the paper]

Price-Performance=Value

The retail hourly prices of the cloud providers were captured on a pay-as-you-go basis as of the date of the experiment.


Figure 5 – Pay-per-hour price [figure from the paper]

By taking each score and dividing it by the price, we get a relative price-to-performance score for each provider. Here are the scores (the higher the score, the better):


Figure 6 – Price-Performance Result [figure from the paper]

CloudSpecs Score

The CloudSpecs score is a normalized value derived from Figure 6, scaling the highest value to 100. Here are the scores:

cloudspec-score

With the CloudSpecs scores, the ratios between the providers are formed as follows:

cloudspec-score-ratio

Conclusion

While acknowledging that UnixBench is just one test, customers may still want to consider other factors when selecting their cloud vendor.

To conclude, Amazon EC2 and Windows Azure offer the lowest price at $0.12 per hour. However, Windows Azure performed much better than EC2 in this experiment (approximately 3 times better). The experiment also shows that Rackspace scored worst in terms of price-performance.

Posted in Azure, Cloud | 2 Comments

SQL Database Automated Backup–Before and Now

SQL Database and its three replicas

You might have heard that SQL Database (formerly SQL Azure) is a scalable and highly durable database service in the cloud, and that multiple replicas are automatically provisioned when we create a database. It's true that three replicas are stored for each database. However, this is purely for HA purposes, in case one of the machines hosting the SQL Database service goes down.

These three replicas are transparent to, and inaccessible by, customers. In other words, if we accidentally delete a table (or the entire database), it's really gone! (Luckily, mine was only a demo database.)

I experienced that before and contacted Azure Support; there was no way to revive the deleted database.

Design and archive it on our own

As cloud architects, we should really be aware of this. In fact, for many projects I've worked on over the last three years, an archival or backup mechanism has always been part of my design, because at the time there was no built-in automated backup in SQL Database for customers.

How did I do that?

V1. sqlcmd and bcp + Worker Role = Automated Backup

In the early days, we used sqlcmd to back up the script and bcp to back up the data. This may sound a bit surprising to some of you, but that was really all we could do at the time. We created a worker role that ran on a schedule (typically daily) to perform the backup and push the data to Azure Blob Storage.

The output was one .tsql file plus one .dat file per database table.
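
For illustration, the bcp half of that daily job boiled down to per-table commands along these lines (the server, database, table, and credentials here are all made up):

# export one table's rows in native format (-n) to a .dat file
$ bcp Northwind.dbo.Products out Products.dat -S myserver.database.windows.net -U myuser@myserver -P 'MyP@ssw0rd' -n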

V2. bacpac + Worker Role = Automated Backup

Later, Microsoft introduced bacpac as part of the import/export solution for both SQL Server and SQL Azure. The output of this technique is a .bacpac file, which is similar to the .bak file we're all familiar with.

There was also a UI in the management portal that allowed us to export and import the database to Azure Storage on demand, but it still lacked an automated way. Alternatively, there was an exe (command-line interface) that eventually calls a WCF service to perform the backup. We switched our design from sqlcmd + bcp to simply using that command line.

Now, it's supported out of the box!

Finally, I noticed that it's now built into the management portal, under SQL Database – Configuration. You can enable it by setting Export Status to Automatic.

You can further specify the backup frequency (every N days) and the retention (only keep the last N days, so that your storage account won't grow too big over time).

automated_backup

After the configuration, you can see that the bacpac files are being pushed to my storage account.


Posted in Azure, SQL Azure Database | Leave a comment

Invitation – Community Technology Update 2013, Singapore

Community Technology Update (CTU) 2013 will be held on 27 July 2013, organised by the community leads from various Singapore-based user groups and MVPs. We're putting together some of the best talent from the island (and our closest neighbour, Malaysia) to share our experiences across the range of Microsoft technologies that we believe all of us truly care about.

Register now!

How do I sign up?

Follow the instructions in the URL to register – http://www.sgdotnet.org/Pages/Registration.aspx

How much does it cost?

For early bird registration, it’ll cost you $12.00.

For walk-ins on the actual day, it'll cost you $20.00, so we strongly encourage you to register beforehand so that we can cater enough food for everyone.

What is CTU?

CTU is in its 10th iteration, and we're proud that it is organised by the community, for the community. In the true spirit of sharing, our speakers are all volunteers from the field, just like any of you within the Microsoft ICT industry. CTU is held bi-annually and is the biggest community event in Singapore.

Who should Attend?

Anyone who's interested in Microsoft technologies. We have a range of topics meant for:

  • IT Professionals
  • Developers
  • Database administrators

And it’s reserved specially for user group members!

Session Information

0830 – 0900  Registration
0900 – 0930  Keynote

Rooms: Level 22CF-15 (WAV track), Level 22CF-12 (ITP track), Level 22BR-01 (DEV track)

0945 – 1100

  • WAV01: Technical Overview of SVC video in Lync 2013 (Level 200) – Speaker: Brenon Kwok
  • ITP01: Accelerate your Windows XP Deployment via Application Compatibility Testing with Citrix AppDNA (Level 200) – Speaker: Jay Paloma
  • DEV01: Customizing SharePoint 2013 Search Experiences – Speaker: Mohd Faizal

1115 – 1230

  • WAV02: Discover the new Exchange 2013 and benefit from its improvements (Level 200) – Speaker: Triston Woon
  • ITP02: Windows 8.1 – Speaker: Desmond Tan
  • DEV02: What's new, branding in SharePoint 2013 – Speaker: Loke Kit Kai

1230 – 1330  Lunch Break

1330 – 1445

  • WAV03: Microsoft IO (Infrastructure Optimization) and Microsoft Technologies (Level 200) – Speaker: Sarbjit Singh
  • ITP03: Secure, Centralised Administration Using PowerShell Web Access (Level 200) – Speaker: Matt Hitchcock
  • DEV03: Building on the new SharePoint 2013 Apps Model? 10 things to look out for – Speaker: Patrick Yong

1500 – 1615

  • WAV04: Microsoft Business Intelligence with Excel and SharePoint 2013 (Level 200) – Speaker: Tian Ann
  • ITP04: Evaluating options for tiered storage in the enterprise – a look at the options, benefits, features and use cases (Level 200) – Speaker: Daniel Mar
  • DEV04: Changes on SharePoint Workflow Authoring Tools – Speaker: Emerald Tabirao

1630 – 1700  Closing Address & Lucky Draw (Level 21 Auditorium)

Useful Links

Track Information

Frequently Asked Questions

Lucky Draw

Stand a chance to win a Microsoft Surface Pro (128GB w Type Cover) worth close to $1500 in the LUCKY DRAW!!!

Surface Pro

Posted in Invitation | Leave a comment

ASP.NET Bad Practices: What you shouldn’t do in ASP.NET (Part 4)

I've covered 15 bad practices so far in the past three posts, and I truly hope that all ASP.NET developers are aware of them, including the consequences of each.

Today, I'll cover another 5 in part four.

16. Style Properties on Controls

AVOID

  • The four thousand specific control style properties, e.g.
    • EditItemTemplate-AlternateItem-Font-ForeColor-Opacity-Level :S

WHY

  • Maintainability
  • Bigger page size, resulting in slower performance since inline styles are not cached

PREFER

  • CSS stylesheets

 

17. Filtering records on app level, not database level

TRY TO AVOID

  • Bringing the whole list of records from the database and filtering them at the application level
using (NorthwindEntities ent = new NorthwindEntities())
{  
    var productList = ent.Products.ToList();

    foreach (var product in productList)
    {
        if (product.UnitPrice > 1000)
            Export(product);
    } 
}

WHY

  • Unnecessary traffic
  • Unnecessary processing resources

DO

  • Write proper query (or LINQ Query) to database
  • Get only what you need
using (NorthwindEntities ent = new NorthwindEntities())
{  
    var productList = ent.Products.Where(x => x.UnitPrice > 1000).ToList();
    foreach (var product in productList)
    { 
        Export(product);
    } 
}

18. Cookieless Form Auth & Session

DO NOT

  • Enable cookieless forms authentication or session

WHY

  • It could make your users victims of session hijacking attacks

DO

  • Enable “require cookies” for these features
  • Consider using only secure (SSL) cookies for sites serving sensitive information

 

19. Missing “!IsPostback” check

DO NOT

  • Forget the !IsPostBack check when you don't expect the code to execute on every postback.
  • You could say this is fundamental.
  • Yes it is, but I've still seen quite a few developers make this mistake!
protected void Page_Load(object sender, EventArgs e)
{
    //initialize the code here
}

WHY

  • Overhead from unnecessary calls may occur
  • It may produce incorrect / unexpected values

DO

  • Understand what you’re really trying to achieve
  • Put the !IsPostBack check in if you only want to set the value the first time the page loads.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        //initialize the code here
    }
}

20. Putting Non-common scripts in MasterPages

DO NOT

  • Put unnecessary / non-common scripts or code in master pages

WHY

  • All pages using the master page will inherit the scripts
  • Inappropriate usage may cause inefficiency
  • Huge page size

DO

  • Put only what really needs to be shared across child pages
  • Consider using nested master pages when only part of the scripts needs to be inherited

 

That's all for today's 5 bad practices. I hope to compile some more and share them with you in future posts.

See you!

Posted in ASP.NET, Bad Practices | Tagged | Leave a comment

ASP.NET Bad Practices: What you shouldn’t do in ASP.NET (Part 3)

Hello everyone! I hope the first and second articles have been useful to you. This is the third article of ASP.NET Bad Practices: What you shouldn't do in ASP.NET. The next five bad practices are just as important as those discussed earlier.

Some of them are related to web.config. They are as follows:

11. Turning “off” Custom Error in Production

DO NOT

  • Set Custom Error to OFF in Production

WHY

  • Source code, stack traces, and other info will be exposed
  • The versions of ASP.NET, servers, etc. will be exposed

DO

  • Of course, set it to ON or RemoteOnly
  • Consider using a "friendly" custom error page

 

12. Setting EnableViewStateMac=false in production

DO NOT

  • Set EnableViewStateMac = false
  • Do not set it to false even if you're not using viewstate

WHY

enableviewstatemac

DO

  • Always set it to TRUE

viewstate_mac

 

13. Turning Off Request validation

TRY TO AVOID

  • Turning off request validation
  • Request validation helps warn the developer that a potential XSS (Cross-Site Scripting) attack is occurring; you lose that warning when it's turned off.
  • Here's a screenshot of the warning:

request_validate

UNLESS

  • You know what you're doing
  • Make sure that everything is properly HTML-encoded

WHY

  • It creates an opportunity for cross-site scripting

DO

  • Leave it on (it's on by default)
  • Use a rich editor with a built-in HTML-encoding feature

 

14. Too Much “inline” javascript / css

TRY TO AVOID

  • Writing too much inline javascript / css on the ASPX / HTML pages

WHY

  • Lack of caching
  • Code maintenance

PREFER

  • Put them in separate files
  • The files will be cached by browsers
  • *Make use of a CDN (Content Delivery Network) to improve performance further

 

15. Impersonation: do you really need to do so?

TRY TO AVOID

  • Overuse / improper usage of impersonation
  • Especially impersonating an "admin" user

WHY

  • It poses a security risk
  • It prevents the efficient use of connection pooling when accessing downstream databases
  • Performance degradation

DO

  • Clarify:
    • Do you really need to impersonate?
  • If you do, remember these:
    • Consider using programmatic instead of declarative impersonation
    • When impersonating programmatically, be sure to revert to the original context
  • Alternative approaches depend on the scenario

Some References On This Point:

 

I'll continue updating this series in the future, making it an awesome one. Stay tuned.

Posted in ASP.NET, Bad Practices | Tagged | 1 Comment