PowerShell Script for Assigning Public IP to Azure Virtual Machine


Recently, Azure virtual machines provisioned with the 'Resource Manager/Classic' deployment model come with a virtual IP & an internal dynamic IP, but no public IP address. A public IP address is necessary for accessing services deployed on the VM from a local browser.

For example, on an Azure Linux VM where you provision Apache Hadoop (Hortonworks Data Platform or Cloudera's Hadoop distribution), you may need to access the Hadoop services from a local browser at addresses like the HDFS services at http://<ip address of VM>:50070 and http://<ip address of VM>:50075, the MapReduce service at http://<ip address of VM>:8080, etc.

In order to assign a public IP to an Azure Linux VM, execute the following PowerShell script.

Get-AzureVM -ServiceName 'azfilestorage' -Name 'azfilestorage' | Set-AzurePublicIP -PublicIPName 'linuxvmip' | Update-AzureVM

Next, update the VM with an endpoint for port 22 so that the VM can be accessed through SSH.

Get-AzureVM -Name 'azfilestorage' -ServiceName 'azfilestorage' | Set-AzureEndpoint -Protocol tcp -Name 'SSH' -PublicPort 22 -LocalPort 22 | Update-AzureVM

PS

 

Next, you can check the public IP address on the Azure Portal under the 'IP Address' section of the virtual machine, as in the screenshot.

IP-addr
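You can also verify the assignment from PowerShell itself. Below is a minimal sketch: it lists the VM's endpoints and reads the instance-level public IP off the VM object; the PublicIPAddress property is an assumption based on the classic Azure module of that period, so check it against your SDK version.

# Verify the endpoints & the instance-level public IP (classic Azure PowerShell)
$vm = Get-AzureVM -ServiceName 'azfilestorage' -Name 'azfilestorage'
$vm | Get-AzureEndpoint | Format-Table Name, Protocol, Port, LocalPort
$vm.PublicIPAddress   # property name assumed; may differ across module versions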

Azure Stream Analytics & Machine Learning Integration With RealTime Twitter Sentiment Analytics Dashboard on PowerBI


Recently, the integration of Azure Stream Analytics (ASA) & Azure Machine Learning (AML) was introduced as a preview update; it's now possible to add an AML web service URL & API key as a 'custom function' over an ASA input. In this demo, real-time tweets are collected based on keywords like '#HappyHolidays2016', '#MerryChristmas' & '#HappyNewYear2016' and stored directly in a .csv file saved on OneDrive. Here goes the solution architecture diagram of the POC.

SolutionArc

 

 

Now, add the Service Bus Event Hub endpoint as the input to the ASA job. Meanwhile, deploy the 'Twitter Predictive Sentiment Analytics Model' by clicking 'Open in Studio'. Don't forget to run the experiment before deploying.

AML

 

Once the model is deployed, open the 'Web Service' dashboard page to get the model URL & API key; click on the default endpoint to download the Excel 2010 (or earlier) workbook. Collect the URL & API key to apply as the ASA function credentials for the AML deployment.

DeployedAML

Next, create an ASA job & add the Event Hub credentials where the real-world tweets are being pushed, then click on the 'Functions' tab of the ASA job to add the AML credentials. Provide the model name, URL & API key of the model and, once it's added, click Save.

ASA-Functions

 

Now, add the following ASA SQL to aggregate the sentiment scores of the real-time tweets coming out of the predictive Twitter sentiment model.

Query

 

Provide Azure Blob storage as the output, add a container name, set the serialization type to CSV & start the ASA job. Then start importing data into PowerBI Desktop from the Azure Blob storage account holding the ASA output.

Output

 

 

PowerBI Desktop contains built-in Power Query features to start preparing the ASA output data & setting the data types. Set the AML model sentiment score to the Decimal type & TweetTexts to the Text (String) type.

PBI-AML

 

Start building the 'Twitter Sentiment Analytics' dashboard powered by @AzureStreaming & the Azure Machine Learning API over real-world tweet streaming; there are some cool custom visuals available in PowerBI. I've used visuals like the 'word cloud' chart, which depicts some of the tweets with the highest positive sentiment scores & the most frequent keywords like 'happynewyear2016', 'MerryChristmas', 'HappyHolidays' etc.

PBI-visuals

 

In the donut chart, the top 10 tweets with the highest positive sentiment counts are portrayed along with the specific sentiment scores coming from the AML predictive model experiment integrated with the ASA job.

PBI-dashboard

~Wish you HappyHolidays 2016!

What’s new in Azure Data Catalog


The Azure Data Catalog (previously known as the PowerBI Data Catalog) was released in public preview last Monday (July 13th) at @WPC15; it opens up a new world of storing & connecting #Data across on-premises & Azure SQL databases. Let's hop into a quick jumpstart on it.

Connect to Azure Data Catalog through the URL https://www.azuredatacatalog.com/, making sure you log in with your organizational ID & a valid Azure subscription. Currently, it's free for the first 50 users & up to 5,000 registered data assets, while the Standard edition supports up to 100 users & up to 1M registered data assets.

Provision

 

Let's start by signing in to the portal with the organizational ID.

Signin

Once it's provisioned, you will be redirected to this page to launch the Azure Data Catalog Windows app.

AzureDC

 

It will start downloading the app from the ClickOnce deployment server.

ADCapp

 

After it is downloaded, it prompts you to select a server; at this point it can pull data from SQL Server Analysis Services, Reporting Services, on-premises/Azure SQL Database & Oracle DB.

Servers

For this demo, we used an on-premises SQL Server database to connect to Azure Data Catalog.

Catalog

We selected the 'AdventureWorksLT' database here & pushed a total of 8 tables like 'Customer', 'Product', 'ProductCategory', 'ProductDescription', 'ProductModel', 'SalesOrderDetail' etc. Also, you can add tags to identify the datasets on the Data Catalog portal.

metadata-tag

Next, click on 'REGISTER' to register the dataset; optionally, you can include a preview of the data definition as well.

Object-registration

 

Once the object registration is done, the objects can be viewed on the portal. Click on 'View Portal' to check the data catalogs.

Portal

Once you click it, you are redirected to the Data Catalog homepage, where you can search for your data by object metadata name.

Search

 

SearchData

In the Data Catalog object portal, all of the registered metadata & objects are visible with property tags.

Properties

You can also open the registered object datasets in Excel to start importing into PowerBI.

opendata

Click on 'Excel' or 'Excel (Top 1000)' to start importing the data into Excel. The resultant data definition file is in .odc format.

SaveCustomer

 

Once you open it in Excel, you will be prompted to enable the custom extension. Click on 'Enable'.

Security

From Excel, the dataset is imported into the latest Microsoft PowerBI Designer Preview app to build up a custom dashboard.

ADC-PowerBI

Log in to https://app.powerbi.com & click on 'File' to get data from a .pbix file.

PowerBI

Import the .pbix file of the 'AdventureWorks' customer details & product analytics into PowerBI reports & build up a dashboard.

Uploading

The PowerBI preview portal dashboard has some updates on the tile details filter, like the extension of custom links.

PowerBI-filter

 

The PowerBI app for Android is available now, which is useful for a quick glance at real-time analytics dashboards, especially those connected with Stream Analytics & updating in real time.

WP_20150715_14_07_48_Pro

WP_20150715_14_13_33_Pro

AdventureWorks-ADC

 

 

 

Deployment of Apache Hadoop 2.7.0 on Ubuntu Vivid 15.04 on Azure Linux VM


Recently, on April 21st, the first Apache Hadoop release of 2015 was committed; version 2.7.0 came out as a dev edition. Lots of new updates have been added.

  • This release drops support for the JDK 6 runtime & works with JDK 7+ only.
  • This release is not yet ready for production use; production users should wait for the 2.7.1/2.7.2 releases.

In Hadoop Common, for the first time there is support for Azure Blob storage – blob as a file system for Azure.

Other than that, Hadoop HDFS has got support for file truncate, quotas per storage type & files with variable-length blocks. For YARN & MapReduce, some new pluggable features are added, like pluggable YARN authorization, global caching of YARN localized resources & the ability to limit the running MapReduce tasks of a job.

Here goes a step-by-step guide on the installation of Apache Hadoop 2.7.0 on an Azure Linux Virtual Machine (Ubuntu 15.04).

 

 

 

What’s new in Azure SDK 2.5 & Visual Studio 2013 Update 4


Recently, after playing enough with Azure Stream Analytics, it's time to move on to Azure .NET development, & a new version of the Azure SDK has been published. Let's have a quick overview of the latest Azure SDK.

First of all, let's download the SDK from the WebPI console, listed as 'Microsoft Azure SDK 2.5 for .NET (VS 2013)'.

webpi

In this edition, a few new components are added, such as:

i) EnvironmentTools.VS.msi

ii) HiveODBC32.msi

iii) HiveODBC64.msi

iv) Microsoft.Azure.HDInsightTools-x64.msi

v) Microsoft.Azure.HDInsightTools-x86.msi

so on…

Components

Now, after installing SDK 2.5, let's start with Visual Studio 2013.

Vs2013-sdk2.5

Expand 'QuickStart' under 'Cloud' & start exploring options to create App Services, Compute & Data Services directly from VS 2013/2012 itself.

sdk2.5

 

The default 'DataBlobStorage1' sample would be created in VS to create a blob container, create a block blob/page blob, upload a new blob & delete a blob (all basic CRUD operations on blobs using REST).

BlobStorage-VS
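For reference, the same basic blob operations can also be scripted with the classic Azure PowerShell storage cmdlets (this is not the VS sample itself, just a rough equivalent; the account, key & container names are hypothetical placeholders).

# Rough PowerShell equivalent of the blob CRUD operations in the sample
$ctx = New-AzureStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey '<storage-key>'
New-AzureStorageContainer -Name 'samplecontainer' -Context $ctx                          # create container
Set-AzureStorageBlobContent -File 'C:\temp\sample.txt' -Container 'samplecontainer' -Blob 'sample.txt' -Context $ctx   # upload a block blob
Get-AzureStorageBlob -Container 'samplecontainer' -Context $ctx                          # list blobs
Remove-AzureStorageBlob -Container 'samplecontainer' -Blob 'sample.txt' -Context $ctx    # delete the blob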

Next, a major improvement has been made to the Azure HDInsight shell integration in Visual Studio, with which you can now run your custom Hive table queries on the HDFS of HDInsight clusters. Let's create a sample Hive query file in VS 2013.

Let's move to the HDInsight tab on the left side of the installed VS menu, select 'HDInsight' & then 'Hive Application' to start with a new Hive-QL file. For this demo, I am selecting the Hive sample from VS.

HDI

 

On selecting the Hive sample, I am able to open the sample Hive queries 'weblogAnalysis.hql' & 'sensordataAnalysis.hql' for the Azure HDInsight cluster.

Here goes a sample weblogAnalysis.hql:

DROP TABLE IF EXISTS weblogs;
-- create table weblogs on space-delimited website log data.
-- In this sample we will use the default container. You could also use 'wasb://[container]@[storage account].blob.core.windows.net/Path/To/Data/' to access the data in other containers.
CREATE EXTERNAL TABLE IF NOT EXISTS weblogs(s_date date, s_time string, s_sitename string, cs_method string, cs_uristem string,
cs_uriquery string, s_port int, cs_username string, c_ip string, cs_useragent string,
cs_cookie string, cs_referer string, cs_host string, sc_status int, sc_substatus int,
sc_win32status int, sc_bytes int, cs_bytes int, s_timetaken int)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
STORED AS TEXTFILE LOCATION '/HdiSamples/WebsiteLogSampleData/SampleLog/'
TBLPROPERTIES ('skip.header.line.count'='2');

 

Before proceeding with the real-time Hive queries, we need to make sure that the Azure HDI cluster is already provisioned; it might be a plain Hadoop HDI cluster, an HBase HDI cluster or a Storm HDI cluster on which to build the Hive tables.

sensorhql-vs

A new option has come out for Azure HDI clusters to add custom PowerShell scripts while provisioning an HDI cluster using the Azure portal. Other new additions to HDI clusters are the exploration of R (official CRAN packages) & Apache Spark on the HDInsight HDFS cluster, which will be covered with a demo next.

An Overview of Latest Components of Azure HDInsight – Apache Tez, Yarn (MapReduce 2.0) Apache Storm & Kafka with HDP 2.1


Azure HDInsight 3.1, built on Hortonworks HDP 2.1, consists of lots of important components of Hadoop 2.x, like the data-streaming component 'Apache Tez', the next-generation (ngen) MapReduce 2.0 or 'YARN' running on top of HDFS, along with the real-time stream processing engine 'Apache Storm' & the distributed message processing framework 'Apache Kafka'. In this demo, we'll check a little configuration info on each component running on an Azure HDI cluster (3.1).

First, provision an HBase-type HDI cluster through Azure PowerShell.

HBase
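The provisioning script itself is only shown in the screenshot; a minimal sketch with the classic Azure PowerShell HDInsight cmdlets might look like the following. The cluster, storage & size values are hypothetical, and the -ClusterType/-Version parameters should be verified against the module version you have installed.

# Sketch: provision an HBase-type HDInsight 3.1 cluster (classic cmdlets)
$clusterCreds = Get-Credential    # admin username/password for the cluster
New-AzureHDInsightCluster -Name 'myhbasecluster' -ClusterType HBase -Version '3.1' `
    -Location 'Southeast Asia' `
    -DefaultStorageAccountName 'mystorageacct.blob.core.windows.net' `
    -DefaultStorageAccountKey '<storage-key>' `
    -DefaultStorageContainerName 'myhbasecluster' `
    -ClusterSizeInNodes 4 -Credential $clusterCreds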

Next, you can check the provisioned HBase HDI cluster on the Azure Portal & enable RDP on it.

RDP

Next, on the HDI cluster, first check the Hadoop components by browsing the directory 'C:\apps\dist', where you should see all components of HDP 2.1 prepared except Apache Storm.

Tez

Now, Tez 0.4.0.2.1.5.0-2057 comes configured with the HDI 3.1 HBase cluster, so you can check the Hadoop configuration page to run Hive queries with Tez. For that, on the cluster desktop, check the YARN configuration page, which shows the YARN node status.

tez-hive

Now, similarly, check tez-site.xml for configuration-level & DAG node status purposes.

tez-site

Next, jump back to the parent directory 'C:\apps\' & type 'Storm' in the search pane of Windows Explorer. Copy 'storm-0.9.1.2.1.5.0-2057.zip', paste it into 'C:\apps\dist\' & then unzip it. Under the .\bin directory, find the storm.cmd file, which is needed for running the Storm Zookeeper, Nimbus, Supervisor & UI daemons.

First, configure storm.yaml with the IPv4 address of the HDI cluster, then start the Storm Zookeeper node first, followed by the master & slave daemons.
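The daemons are started with storm.cmd roughly as in the sketch below; each command blocks its console, so run them in separate command windows, and the exact sub-commands (especially for Zookeeper) can vary by distribution, so treat this as an outline rather than the verbatim steps.

# Sketch: start the Storm daemons from the Storm bin directory
cd C:\apps\dist\storm-0.9.1.2.1.5.0-2057\bin
.\storm.cmd dev-zookeeper    # local Zookeeper (single-node test setup)
.\storm.cmd nimbus           # master daemon
.\storm.cmd supervisor       # worker daemon
.\storm.cmd ui               # web UI, served on port 8080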

zookeeper

nimbus

Start the Supervisor (Worker) daemon job.

supervisor

And, at last start the UI job.

storm

The Storm UI can be viewed via the web interface in a browser on port 8080.

Storm UI

Next, to configure Apache Kafka for distributed message processing, we first need to download a stable version of Kafka; I used Kafka 0.8 here. You can download it as a .zip from the GitHub repository: https://github.com/apache/kafka

Now, after unzipping it, paste it into the same directory 'C:\apps\dist\' with the other components & start the installation of Apache Kafka 0.8 on Azure HDI.

Before doing that, replace the Windows .bat files under 'C:\apps\dist\kafka-0.8\bin\windows\' with the latest Kafka batch files for Windows, which can be downloaded from here.

Set the Java path on the Hadoop command line or PowerShell as 'Set Path=C:\apps\dist\java\bin'.

Next, update Scala & the packages through the following commands.

.\sbt.bat update

kafka-sbt

Then run the following list of commands:

.\sbt.bat package
.\sbt.bat assembly-package-dependency

After that, start the Zookeeper server before starting the Kafka server.

.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

kafka-zookeeper-start

Now, start the Kafka server by running the following command.

.\bin\windows\kafka-server-start.bat .\config\server.properties

kafka-server-start

Next, create a topic to post messages to, using the following command.

.\bin\windows\kafka-create-topic.bat --zookeeper localhost:2181 --replica 1 --partition 1 --topic test

kafka-topic

You can check the list of topics by using the following command.

.\bin\windows\kafka-list-topic.bat --zookeeper localhost:2181

List-topics

On getting the success message, start posting messages to the Kafka cluster. Before that, start the console producer by using the following command.

.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test


SendMessage
Next, start the console-consumer by executing the following command.

.\bin\windows\kafka-console-consumer.bat --zookeeper localhost:2181 --topic test --from-beginning

Kafka-HDI

The preceding screenshot displays a demo of running Apache Kafka 0.8 clusters (producers & consumers) on the Azure HBase HDI 3.1 cluster.
 

An Overview of HDInsight (Hadoop + HBase) with Integrated PowerShell along with R


Recently, while starting work on predictive analytics with machine learning & R, I felt the necessity of integrating Azure HDInsight/HBase with Azure ML features. In this demo, we'll go through a few basic operations on HDInsight (Hadoop) on Azure with PowerShell 0.8.6.

To start with, we first need to create an Azure storage account, which must be in the same data center (e.g. Southeast Asia for this demo) as the HDInsight cluster.

 

StorageAccount
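The script in the screenshot essentially boils down to one cmdlet; here is a minimal sketch with the classic Azure module (the account name is a hypothetical placeholder).

# Sketch: create the storage account in the same data center as the cluster
New-AzureStorageAccount -StorageAccountName 'hdidemostorage' -Location 'Southeast Asia'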

You also need to create a blob container & a storage context object in order to copy raw data (e.g. clickstream data, log data, machine-sensor data) from a local drive to the Azure storage account.

 

StorageAcc
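A minimal sketch of that step, assuming the storage account created above (the 'hdidemostorage' account & the container name are hypothetical placeholders):

# Sketch: build the storage context from the account key & create the blob container
$storageAccount = 'hdidemostorage'
$storageKey = (Get-AzureStorageKey -StorageAccountName $storageAccount).Primary
$ctx = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageKey
New-AzureStorageContainer -Name 'hdidemodata' -Context $ctx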

 

To copy data from a local drive to the Azure storage container, use the following script.

CopyDataToBlob
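A sketch of the copy step, reusing the storage context from above (the local path & blob name are hypothetical):

# Sketch: upload a local raw-data file into the container
Set-AzureStorageBlobContent -File 'C:\data\clickstream.log' -Container 'hdidemodata' `
    -Blob 'input/clickstream.log' -Context $ctx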

 

 

Next, we need to provision the HDInsight cluster; for that, execute the following script.

ProvisioningCluster
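The provisioning script is only shown as a screenshot; a minimal sketch with the classic New-AzureHDInsightCluster cmdlet might look like this (the cluster name, size & storage values are hypothetical, reusing the account & container from the earlier steps):

# Sketch: provision the HDInsight cluster against the default storage account/container
$clusterCreds = Get-Credential    # the username & password you assign manually
New-AzureHDInsightCluster -Name 'hdidemocluster' -Location 'Southeast Asia' `
    -DefaultStorageAccountName "$storageAccount.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageKey `
    -DefaultStorageContainerName 'hdidemodata' `
    -ClusterSizeInNodes 2 -Credential $clusterCreds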

Upon executing the script, the cluster provisioning moves through the accepted, configuring & provisioning phases. You need to assign the username & password manually.

HDInsightProvision

ClusterProvisioned

 

Next, check the Azure management portal after a few minutes; the provisioning will have started.

Portal

Details of HDInsight cluster provisioning, along with running HQL queries, are stored in my GitHub repository. You can get them here.

Now, HBase columnar storage is available as part of the Hadoop cluster in the HDInsight offerings, so while provisioning a cluster from the portal, you need to choose the corresponding cluster type – HBase or Hadoop.

HBase

Both cluster types (HBase & Hadoop) of HDInsight 3.1 are completely based on pure Hortonworks HDP 2.1 clusters, which contain the Hadoop components of the following versions.

  • Apache Hadoop 2.4
  • Apache HBase 0.98.0
  • Apache Pig 0.12.1
  • Apache Hive 0.13.0
  • Apache Tez 0.4
  • Apache ZooKeeper 3.4.5
  • Hue 2.3.1
  • Storm 0.9.1
  • Apache Oozie 4.0.0
  • Apache Falcon 0.5
  • Apache Sqoop 1.4.4
  • Apache Knox 0.4
  • Apache Flume 1.4.0
  • Apache Accumulo 1.5.1
  • Apache Phoenix 4.0.0
  • Apache Avro 1.7.4
  • Apache Mahout 0.9.0
  • Third party components:
    • Ganglia 3.5.0
    • Ganglia Web 3.5.7
    • Nagios 3.5.0

     

    For the Big Data analytics world, one of the most fine-grained languages now supported with Azure ML is R. You can install the official R packages for Windows, Linux & OS X; for official project purposes, use an R IDE.

    R Packages:

    R packages are self-contained units of R functionality that can be invoked as functions. A good analogy would be a .jar file in Java. There is a vast library of
    R packages available for a very wide range of operations ranging from statistical operations and machine learning to rich graphic visualization and plotting. Every package will consist of one or more R functions. An R package is a re-usable entity that can be shared and used by others. R users can install the package that contains the functionality they are looking for and start calling the functions in the package. A comprehensive list of these packages can be found at http://cran.r-project.org/ called Comprehensive R Archive Network (CRAN).

    Data Modelling with R:

    Regression: In statistics, regression is a classic technique to identify the scalar relationship between two or more variables by fitting a straight line on the variable values. That relationship helps to predict the variable value for future events. For example, any variable y can be modeled as a linear function of another variable x with the formula y = mx + c. Here, x is the predictor variable, y is the response variable, m is the slope of the line, and c is the intercept. Sales forecasting of products or services and predicting the price of stocks can be achieved through regression. R provides this regression feature via the lm method, which is present in R by default.

    Classification: This is a machine-learning technique used for labeling the set of observations provided as training examples. With this, we can classify the observations into one or more labels. The likelihood of sales, online fraud detection, and cancer classification (for medical science) are common applications of classification problems. Google Mail uses this technique to classify e-mails as spam or not. Classification features can be served by glm, glmnet, ksvm, svm, and randomForest in R.

    Clustering: This technique is all about organizing similar items into groups from the given collection of items. User segmentation and image compression are the most common applications of clustering. Market segmentation, social network analysis, organizing computer clusters, and astronomical data analysis are applications of clustering. Google News uses these techniques to group similar news items into the same category. Clustering can be achieved through the knn, kmeans, dist, pvclust, and Mclust methods in R.

    Recommendation: The recommendation algorithms are used in recommender systems, which are the most immediately recognizable machine learning techniques in use today. Web content recommendations may include similar websites, blogs, videos, or related content. Also, recommendation of online items can be helpful for cross-selling and up-selling. We have all seen online shopping portals that attempt to recommend books, mobiles, or any items that can be sold on the Web based on the user's past behavior. Amazon is a well-known e-commerce portal that generates 29 percent of sales through recommendation systems. Recommender systems can be implemented via Recommender() with the recommenderlab package in R.

     

A Lap around the New Microsoft Azure Management Portal & Preview Programs


Well, a few months back during Build 2014, the new Microsoft Azure Management Portal (Preview) was announced for public access. Quite a lot of interesting features have been included to make it more appealing & enchanting. One of the unique features is 'Parts', quite similar to Metro 'Tiles', to make developers' lives easier by conglomerating all services under one single UI pane.

Portal

 

  • Select the Virtual Machine pane, where the list of all standard VM images along with the available data centers is shown.

VM

 

  • Similarly, for Web, Mobile & Developer Services you can see the respective services, while the Data, Cache & Storage + Backup services consist of the latest Azure Redis Cache (Preview), HDInsight, MySQL, Mongo Labs, SQL Server 2014 Standard, Oracle DB 12.1.0.1 Standard & 11g R2, WebLogic Server, Azure Backup & Hyper-V Recovery Manager.

Storage

  • Now, check App + Data Services; a lot of new app services including 'Notification Hub', 'Pusher' & 'SendGrid' are available in the new Azure Portal.

App

 

  • The 'Parts' of the new Azure Portal are self-customizable, so when you right-click on any of the tiles, you get the 'Customize' option from which you can select the appropriate block size for 'Service Health', 'Gallery', 'What's New' etc.

Tile

 

  • There is a 'Notifications' hub added in the new portal which summarizes any 'Service Health' reports/errors related to services over the last 24 hours.

Notification

  • While, from the 'Browse' option, you can see the list of 'Resource Group Services', 'Virtual Machines', 'Websites' & 'Team Server projects'.

Browse

  • Click on the 'What's New' tile for a brilliant grid view of the latest updates from the Azure blog.

Azure Blog

 

  • Now, let's focus on the latest 'Preview Features' available, which are 'Windows Azure Automation', 'New Service Tiers for SQL Database', 'Azure RemoteApp', 'Visual Studio Online Account Access', 'Windows Azure Files' & 'Billing Alert Service'.

First, let's check out the 'Azure Automation API'.

  • Windows Azure Automation allows you to automate the creation, monitoring, deployment, and maintenance of resources in your Windows Azure environment using a highly-available workflow execution engine. Orchestrate time-consuming, error-prone, and frequently repeated tasks against Windows Azure and third party systems to decrease time to value for your cloud operations.
  • SQL DB Premium tiers (P1 & P2) have been in preview mode since 2013; now there are three tiers available in this mode – Basic, Standard & Premium – since the Web & Business editions of SQL Azure DB will be deprecated from April 2015 onwards.

Details of each tier:

  • Basic (Preview): Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable hourly transaction rate.
  • Standard (Preview): Standard is the go-to option for getting started with cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per minute transaction rates.
  • Premium (Preview): Designed for mission-critical databases, Premium offers the highest performance levels for SQL Database and access to advanced business continuity features. Performance objectives for Premium deliver predictable per second transaction rates.

 

  • Azure RemoteApp helps employees stay productive anywhere, and on a variety of devices – Windows, Mac OS X, iOS, or Android. Your company’s applications run on Windows Server in the Azure cloud, where they’re easier to scale and update. Employees install Microsoft Remote Desktop clients on their Internet-connected laptop, tablet, or phone—and can then access applications as if they were running locally. You need to sign up here

 

  • You’ll need to have Azure Active Directory (Azure AD) set up in order to work with Visual Studio Online Access. You’ll also need an organizational account, which is an email address associated with either your directory or an Office 365 subscription. If you don’t use Azure AD, go here to find out how to set it up. If you use Azure AD but can’t access your organization’s directory, work with your system administrator to get set up.

 

  • Windows Azure Files allows VMs in a Windows Azure Data Center to mount a shared file system using the SMB protocol. These VMs will then be able to access the file system using standard Windows file APIs (CreateFile, ReadFile, WriteFile, etc). Many VMs (or PaaS roles) can attach to these file systems concurrently, allowing you to share persistent data easily between various roles and instances. In addition to accessing your files through the Windows file APIs, you can access your data using the file REST API, which is similar to the familiar blob interface (a minimal mount sketch follows this list).

 

  • If you're the Account Administrator for a Windows Azure subscription, you can set up email alerts when a subscription reaches a spending threshold you choose. Alerts are currently not available for subscriptions associated with a commitment plan using Billing Services. To view and set up alerts, go to the Account Center, click Subscriptions, select the subscription you want to work with, and then click Alerts. You can set up a total of five billing alerts per subscription, with a different threshold and up to two email recipients for each alert.

Preview
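As referenced in the Windows Azure Files item above, mounting a share from a VM in the same data center is a one-liner; the storage account and share names below are hypothetical placeholders.

# Sketch: mount an Azure Files share over SMB (run inside the VM)
net use z: \\mystorageacct.file.core.windows.net\myshare /u:mystorageacct <storage-account-key>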

 

 

Automated Provisioning of Azure Virtual Machines with PowerShell using Runbook


Recently, I have been putting a lot of energy into the latest additions to the Azure family, like the Automation API, Scheduler, Machine Learning (ML) on HDInsight & StorSimple (checking it out from today in the management portal). With utmost curiosity, I researched & noted down a few points to be taken care of while writing custom IaaS PowerShell scripts to provision a fresh Azure VM image using the traditional Azure cmdlets.

$adminPassword = '[YOUR-PASSWORD]'
$vmname = 'mytestvm1'
New-AzureQuickVM -Windows -ServiceName $cloudSvcName -Name $vmname -ImageName $image -Password $adminPassword


Most of us are familiar with this script, while an issue can happen when providing the $imagename; for Windows Server 2012 Datacenter, the command would be like this:

$cloudSvcName = '[Your Cloud Service Name]'
$vmname = '[Name of VM]'
$availabilityset = '[Name of Availability set]'   # optional
$admin = '[Your Username]'
$password = '[Your Password]'
New-AzureQuickVM -Windows -ServiceName $cloudSvcName -AvailabilitySetName $availabilityset -Name $vmname `
    -ImageName "bd507d3a70934695bc2128e3e5a255ba__RightImage-Windows-2012-x64-v5.8.8.12" `
    -AdminUsername $admin -Password $password

After provisioning, you would be able to see the default endpoints.

Remote Endpoint

EndPoint

The default configuration of the VM would be A1 (1 core, 1.75 GB memory) with the Standard tier, in order to put multiple VMs on the same load-balanced endpoint & ease autoscaling.
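As a sketch of that load-balanced setup, the same endpoint can be added to each VM in the cloud service under a common load-balancer set (the endpoint, probe & LB set names here are hypothetical):

# Sketch: put two VMs behind one load-balanced endpoint
foreach ($vm in 'mytestvm1','mytestvm2') {
    Get-AzureVM -ServiceName $cloudSvcName -Name $vm |
        Add-AzureEndpoint -Name 'HttpIn' -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName 'WebFarm' -ProbePort 80 -ProbeProtocol http -ProbePath '/' |
        Update-AzureVM
}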

In the next article, I will explore PowerShell automation scripts using Runbooks, utilizing Azure VMs, Storage & Cloud Services.