Enabling Auto Scaling in Windows Azure Virtual Machines & Adding Linked Resources

Recently I faced a few scenarios involving auto scaling strategies for Windows Azure Virtual Machines. This demo is a lap around setting up autoscaling for virtual machines, whether a SharePoint 2013 VM, a SQL Server 2012/2014 CTP1 VM, or a Linux VM.

In this demo, I selected a SQL Server 2014 CTP1 VM running Windows Server 2012 R2 in order to demonstrate effective auto scaling; the deployment also contains HDInsight clusters as a proof of concept for SQL Server 2014 CTP1 (a Big Data Lambda Architecture).

Let's first create the VM from the Azure Management portal itself.


  • Add configuration details such as the VM name, including choosing an availability set.


  • Add a cloud service & an associated storage account for storing the VHD(s).


  • Alternatively, you can specify the endpoints of your virtual machine on Azure for HTTP/HTTPS/MSSQL, etc.


  • After configuring the VM, open the Dashboard from the management portal, select the ‘Scale’ tab & specify your schedule settings for the constraint rules.


  • Configure the scale settings for scheduled times (weekday/weeknight/weekend) based on time zones, selecting the scale metrics (CPU, queues) & scale-up limits.


  • Save your configuration rules for the VM availability set.
  • Next, open the dashboard & select the ‘Linked Resources‘ tab in order to add an additional storage account/SQL database configured for the VM.





  • Specify your storage credentials; it’s always advisable to keep your diagnostics storage account separate from your other storage accounts.



  • Next, connect to the SQL Server 2014 CTP1 VM on Windows Server 2012 R2 & open SSMS 2014.
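The schedule-plus-metric rules configured above can be sketched as plain logic. The thresholds and schedule windows below are illustrative assumptions (the portal stores the real rules for you), not Azure's actual autoscale schema:

```python
# Illustrative sketch of schedule-based autoscale rules.
# All thresholds and schedule windows are hypothetical examples.

def target_instances(hour, is_weekend, avg_cpu, current):
    """Decide the instance count from a schedule window and a CPU metric."""
    # Constraint rules: min/max instance counts per schedule window.
    if is_weekend:
        lo, hi = 1, 2          # weekend
    elif 8 <= hour < 20:
        lo, hi = 2, 6          # weekday
    else:
        lo, hi = 1, 3          # weeknight
    # Metric rules: scale up above 80% average CPU, down below 60%.
    if avg_cpu > 80:
        current += 1
    elif avg_cpu < 60:
        current -= 1
    # The schedule's constraints always clamp the metric-driven decision.
    return max(lo, min(hi, current))

print(target_instances(10, False, 85, 2))  # weekday, busy: scales 2 -> 3
```

The key point the portal UI expresses the same way: the schedule constraints bound whatever the CPU/queue metric rules decide.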




A Lap around Big Data with Microsoft HDInsight

Big Data is synonymous with the three Vs: Volume, Velocity & Variety. From traditional e-commerce systems to modern social networks, data retention in all of these systems depends on this platform. Let's check a scenario of modern e-commerce analytics after integration with Big Data.



  • A Big Data platform typically works by first storing data in clusters, then processing it through MapReduce workflows: the Map phase splits the input data into independent chunks processed by appropriate algorithms, the output of the Map phase moves to a Shuffle/Sort phase, & finally the output of the Shuffle phase feeds the Reduce phase as input.
  • Let's check a typical Big Data MapReduce workflow.
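The three phases above can be simulated in a few lines; this word-count sketch is the canonical illustration, run locally rather than on a cluster:

```python
# Minimal word-count sketch of the Map -> Shuffle/Sort -> Reduce phases.
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Map: turn each independent input chunk into (key, 1) pairs.
    return [(word, 1) for line in lines for word in line.split()]

def shuffle_sort(pairs):
    # Shuffle/Sort: bring all pairs with the same key together.
    pairs.sort(key=itemgetter(0))
    return {k: [v for _, v in grp]
            for k, grp in groupby(pairs, key=itemgetter(0))}

def reduce_phase(grouped):
    # Reduce: aggregate each key's values into a final count.
    return {word: sum(counts) for word, counts in grouped.items()}

counts = reduce_phase(shuffle_sort(map_phase(["big data big", "data velocity"])))
print(counts)  # {'big': 2, 'data': 2, 'velocity': 1}
```

On a real cluster the Map and Reduce calls run in parallel across Data Nodes, and the Shuffle/Sort phase moves data across the network; the data flow, however, is exactly this.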




  • Microsoft’s Big Data platform works in exactly the same way, as a collaborative solution with Hortonworks named Microsoft HDInsight, which greatly simplifies running complex batch scripts. Let's cover a little insight into the HDInsight/Hadoop ecosystem.


  • Microsoft’s Big Data platform unveils solutions spanning from storing data in HDFS to query processing with Hive, up to implementing Business Intelligence analytics with Excel PowerPivot, SSAS & SSRS.


  • Storing data in HDFS: petabytes to zettabytes of data are stored in HDFS clusters via a Name Node followed by Data Nodes; in Azure HDInsight, each Data Node is integrated with worker roles & a compute cluster. Alternatively, you can leverage Azure Blob Storage, which utilizes a Front End layer (attaches an OAuth/security layer for authentication), a Partition layer (maps to Azure queue, table & blob storage) & a Stream layer (three-replica high availability for scaled-out data streams).


  • For programming on HDInsight, you can opt for Java, C#, F#, .NET, JavaScript or LINQ to Hive APIs, which let you code against Hadoop ecosystem components including Pig, Hive, Mahout, Cascading & Pegasus.
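Beyond those APIs, Hadoop Streaming lets any language that reads stdin and writes stdout act as a mapper or reducer. A minimal word-count mapper might look like the sketch below (the stdin/stdout parameters are added here just to make it easy to test locally; job submission to HDInsight is a separate step not shown):

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming-style mapper: reads input lines from stdin
# and emits tab-separated "word<TAB>1" pairs on stdout.
import sys

def run_mapper(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:
        for word in line.strip().split():
            stdout.write("%s\t1\n" % word)

if __name__ == "__main__":
    run_mapper()
```

The Hadoop Streaming framework handles the shuffle/sort between this mapper and a matching reducer, grouping lines by the key before the tab.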


Microsoft's Hadoop Vision
