Deployment considerations in Azure – Cloud Services

Manage Deployments in Azure

Note: this is a work in progress

The staging area is not designed to be a QA environment, but only a holding area before a deployment is promoted to production.
If you need a Testing environment, create a separate cloud service with its own Production/Staging slots. In that case you will want to maintain multiple sets of configuration files, one set per deployment environment (Production, Testing, etc.).
Staging is a temporary deployment slot used mainly for no-downtime upgrades and for the ability to roll back an upgrade.
Azure provides production and staging environments within which you can create a service deployment. When a service is deployed to either the production or staging environments, a single public IP address, known as a virtual IP address (VIP), is assigned to the service in that environment. The VIP is used for all input endpoints associated with roles in the deployment. Even if the service has no input endpoints specified in the model, the VIP is still allocated and used as the source address assigned to outbound traffic coming from each role.

What happens when a service is promoted from staging to production?

Typically a service is deployed to the staging environment to test it before deploying the service to the production environment. When it is time to promote the service in staging to the production environment, you can do so without redeploying the service. This can be done by swapping the deployments.
The deployments can be swapped by calling the Swap Deployment Service Management API or by swapping the VIPs in the portal; both result in the same underlying operation on the hosted service. For more information on swapping the VIPs, see How to Manage Cloud Services.
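If you prefer to script the swap rather than click through the portal, the classic Azure PowerShell module exposes the same Swap Deployment operation. Here is a minimal sketch, assuming you have already logged in with Add-AzureAccount and selected a subscription; MyCloudService is a placeholder for your hosted service name:

# Swap the staging and production deployments of a hosted service (same operation as the portal's VIP swap).
# 'MyCloudService' is a placeholder - substitute your own cloud service name.
Move-AzureDeployment -ServiceName 'MyCloudService'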
Screen shot from Azure Portal – 2015-09-28

When the service is deployed, a VIP is assigned to the environment to which it is deployed. In the case of the production environment, the service can be accessed by its URL, <dns-prefix>.cloudapp.net, or by the VIP. When a service is deployed to the staging environment, a VIP is assigned to the staging environment and the service can be accessed by its own <dns-prefix>.cloudapp.net URL, or by the assigned VIP. The assigned VIPs can be viewed in the portal or by calling the Get Deployment Service Management API.
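The same information is available from the classic Azure PowerShell cmdlets, which call the Get Deployment API under the covers. A quick sketch, assuming you are already logged in; MyCloudService is a placeholder and the exact properties returned depend on the module version:

# Show the deployment details (including URL and assigned VIP) for each slot of a hosted service.
Get-AzureDeployment -ServiceName 'MyCloudService' -Slot Production
Get-AzureDeployment -ServiceName 'MyCloudService' -Slot Staging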
When the service is promoted to production, the VIP and URL that were assigned to the production environment are assigned to the deployment that is currently in the staging environment, thus “promoting” the service to production. The VIP and URL assigned to the staging environment are assigned to the deployment that was in the production environment.
It is important to remember that neither the production public IP address nor the service URL changes during the promotion.
To examine how this works, consider a scenario in which Deployment A is deployed to the production environment and Deployment B is deployed to the staging environment. The following table shows the VIPs after the initial deployment of the services to production and staging:

Deployment      VIP     URL                           Environment
Deployment A    VIP1    <dns-prefix>.cloudapp.net     Production
Deployment B    VIP2    <dns-prefix>.cloudapp.net     Staging

Once Deployment B is promoted to production, the VIPs are as follows:

Deployment      VIP     URL                           Environment
Deployment B    VIP1    <dns-prefix>.cloudapp.net     Production
Deployment A    VIP2    <dns-prefix>.cloudapp.net     Staging
When the deployments are swapped, the deployment that was in the production environment (associated with the production VIP and URL) becomes associated with the staging VIP and URL. Likewise, the deployment that was in the staging environment becomes associated with the production VIP and URL.

Only new incoming connections are connected to the newly promoted service. Existing connections are not swapped during a deployment swap.

Persistence of VIPs in Windows Azure

Throughout the lifetime of a deployment, the VIP assigned to it will not change, regardless of the operations performed on the deployment, including updates, reboots, and reimaging the OS. The VIP for a given deployment persists until that deployment is deleted. When a customer swaps the VIPs between the staging and production deployments of a single hosted service, both deployment VIPs are persisted. A VIP is associated with the deployment, not with the hosted service. When a deployment is deleted, the VIP associated with it returns to the pool and is re-assigned accordingly, even if the hosted service is not deleted. Windows Azure currently does not support reserving a VIP beyond the lifetime of a deployment.

Managing ASP.NET machine keys for IIS

Azure automatically manages the ASP.NET machineKey for services deployed using IIS. If you routinely use the VIP Swap deployment strategy, you should manually configure the ASP.NET machine keys. For information on configuring the machine key, see Configuring Machine Keys in IIS 7.
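Explicit keys matter here because a VIP swap can move traffic to a deployment whose auto-generated keys differ, which would invalidate forms-authentication cookies and view state issued by the other deployment. If you want to generate key material to paste into an explicit <machineKey> element in web.config, here is a minimal PowerShell sketch; the helper name New-MachineKeyHex is just for this example, and the key lengths assume HMACSHA256 validation and AES decryption (adjust them to match your configured algorithms):

# Generate random hex strings suitable for an explicit machineKey configuration.
function New-MachineKeyHex {
    param([int]$ByteCount)
    $bytes = New-Object byte[] $ByteCount
    [System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
    ($bytes | ForEach-Object { $_.ToString('X2') }) -join ''
}

"validationKey = $(New-MachineKeyHex 64)"   # 64 bytes for the validation key
"decryptionKey = $(New-MachineKeyHex 32)"   # 32 bytes for the decryption key (AES-256)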

More on machine keys later …

Questions for later:

Is there any way to deploy different instance sizes for test/production?

Note that the image above shows multiple cscfg files, but only one csdef file. The cscfg files hold the role names, instance counts, configuration values, and so on. The one csdef file is used with whichever configuration you select when you publish. It contains the list of configuration settings (but not their values), startup tasks (if applicable), the size of the VM to be used, and so on. The value to note especially is the VM size.

With this approach of multiple configuration files in one cloud project, you have only one place to set the size of the VM, regardless of whether you are publishing to staging or production. You may not want to use the same sizes for both, especially if you are using medium or larger VMs in production and small VMs in staging. In that case, you either have to change the size every time you publish, or you need another solution.

Note: See the heading “Multiple cloud projects with their own configuration settings”:

http://blogs.msdn.com/b/microsoft_press/archive/2015/03/12/guest-article-microsoft-azure-dev-test-scenario-considerations.aspx

See your Azure VM deployment succeed or fail!

Morning,

Yesterday we were having deployment issues due to an Azure WebRole startup task (more on that in my next post.)

We rolled back the changes and all was fine, but I wanted to find the information that was logged on the server so I can troubleshoot it in the future if it happens again.

I just did a fresh deployment as a baseline, to prove that I was working with a successful deployment.

As it was deploying, I could see log records appearing in the Windows Azure event log when logged on to my Cloud Service VM:

As this was a successful deployment, the messages in the above-mentioned Windows Azure event log showed that nothing went wrong.

I could see the log message stating that the web site installed into IIS successfully.
I could see the successful OnStart() and OnRun() events.
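If you would rather query this log from a PowerShell prompt on the instance than scroll through Event Viewer, something like the following should work; I am assuming the log is exposed under the name "Windows Azure", as it appears in Event Viewer on the role instance:

# Show the most recent entries from the Windows Azure event log on the role instance.
Get-WinEvent -LogName 'Windows Azure' -MaxEvents 50 |
    Sort-Object TimeCreated |
    Format-Table TimeCreated, LevelDisplayName, Message -AutoSize -Wrap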

Here are some screen shots:

Note that if we had diagnostics turned on, we could probably see the same information inside the Visual Studio Server Explorer for our cloud service.

Not very useful when everything goes well. I'll post more if and when I get a failed deployment.

thanks
Russ

An alternative way to remotely debug Azure Web Sites


Afternoon,

I have been having trouble connecting to my Azure instances with the normal attach-to-debugger method:

This never works for me, even when debugging is enabled.

Here is a link to an approach that does work; it worked for me the first time:

https://samlman.wordpress.com/2015/02/27/another-cool-way-to-remotely-debug-azure-web-sites/

I have only tested this with an Azure Web App, so I'm not sure about WCF and other services yet.
(More on this later)

thanks
Russ

Stop an Azure VM using Azure Automation with a schedule

Hello,

UPDATE: The script mentioned in this post is now here:
https://gallery.technet.microsoft.com/scriptcenter/Stop-Azure-VM-with-OrgID-41a79d91

In this post I will show you how to use Azure Automation to schedule a PowerShell script that stops and deallocates a VM running in Azure.


The reason I am blogging this is that I have spent a couple of days looking at other people's blogs and the information seems to be not quite correct. In particular, a self-signed certificate from your Azure box is no longer required.


The reason you might want to do this is to save some money: while your Azure VM is stopped and deallocated, you are not charged for it.


Firstly, I created a VM to play with called tempVMToStop as follows:



It required a username and password so I used my name. 

Once you have the VM you can remote desktop to it using the link at the bottom of the Azure portal and the username and password created in the previous step.


The next step is to add our automation script.

Now we go to automation in Azure:



Remember, the goal of this post is to automatically stop the VM we just created.
First we need to create a user in Azure Active Directory that is allowed to run our automation, as shown here:

Create the user to be used for automation:

Then go back into the automation section and choose Assets:


and add the automation user you just created here:

This is reasonably new: previously you needed to create a self-signed certificate on your VM and import the .pfx file as an Asset => Credential, but that is no longer necessary.

Now go to the automationDemo automation account and choose Runbooks:

Click to create a new runbook:

Once it is created click on Author and write your script as follows:
workflow tempVMToStopRunBook
{
    Param
    (
        [Parameter(Mandatory=$true)]
        [String]
        $vmName,

        [Parameter(Mandatory=$true)]
        [String]
        $cloudServiceName
    )

    # Specify the Azure subscription name
    $subName = 'XXX- Base Visual Studio Premium with MSDN'

    # Authenticate with the credential asset created earlier and select the subscription
    $cred = Get-AutomationPSCredential -Name "automationuser"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName $subName

    $vm = Get-AzureVM -ServiceName $cloudServiceName -Name $vmName

    Write-Output "VM NAME: $vmName"
    Write-Output "vm.InstanceStatus: $($vm.InstanceStatus)"

    # Only stop the VM if it is currently running
    if ($vm.InstanceStatus -eq 'ReadyRole') {
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    }
}

Note that the subscription name shown as XXX- Base Visual Studio Premium with MSDN will need to be replaced with your own subscription name.

Also, the workflow name must be the same as the runbook name.

Save it, and then you can choose to test it or just publish it.
I will skip straight to publishing as I have already tested it.

Once it is published you can click Start and enter the two parameters that the script is expecting:

    Param
    (
        [Parameter(Mandatory=$true)]
        [String]
        $vmName,

        [Parameter(Mandatory=$true)]
        [String]
        $cloudServiceName
    )

Now we want to confirm that our VM actually stops, so here was mine before running the runbook:

Once you run it, you will see some output when you click Jobs on the runbook:

And then if you look back at your VM it should be stopped:

Note that because we are completely deallocating the resources, the VM will get a new IP address the next time you start it up, but this is all shown in the VM section of the portal.

The next step, obviously, is to schedule what we just did, and also to schedule a matching start script, so that we could, for example, stop our VM at the end of the business day and start it at 8am so it is ready for us to use in the morning.

This will save some money, as the VM will not be using resources overnight.
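A matching start runbook would be almost identical to the stop script above; only the status check and the final cmdlet change. Here is a sketch I have not run myself, reusing the same credential asset and subscription name, and assuming 'StoppedDeallocated' is the status the classic Get-AzureVM reports for a deallocated VM:

workflow tempVMToStartRunBook
{
    Param
    (
        [Parameter(Mandatory=$true)]
        [String]
        $vmName,

        [Parameter(Mandatory=$true)]
        [String]
        $cloudServiceName
    )

    $subName = 'XXX- Base Visual Studio Premium with MSDN'

    $cred = Get-AutomationPSCredential -Name "automationuser"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName $subName

    $vm = Get-AzureVM -ServiceName $cloudServiceName -Name $vmName

    # Only start the VM if it is currently stopped and deallocated
    if ($vm.InstanceStatus -eq 'StoppedDeallocated') {
        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
    }
}

You would publish and schedule it in exactly the same way as the stop runbook.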

Go back to the root of your automation and add a new asset for your schedule:

Here's one I created that will run the PowerShell runbook we created every day:

That’s all there is to it. 

Note that I am no expert on Azure automation so all comments and constructive criticism are welcome.

thanks
Russ

Debugging a custom object using PowerShell in the Package Manager Console window

All,

I was just trying to find out what data my collection of custom objects was getting hydrated with. The context doesn't matter, but for the record, I was hitting a Sitecore index and duplicates were being rendered in my UI.

I started with the Immediate window, but it had limitations that were preventing me from getting what I needed.

I wanted to see all of the hydrated objects whose names contained the word "test", hence the "test*" pattern.

Anyway, here is what I came up with:

for ($i = 0; $i -lt 1000; $i++) {   # 1000 is a bit high, so adjust for your needs
    # Ask the Visual Studio debugger to evaluate the expression for each element in the collection
    $a = $dte.Debugger.GetExpression("results[$i].Name")

    if ($a.Value -match "test*") {
        Write-Host $a.Value
    }
}

Here are the results:

  • “testcampaignspeed73” 
  • “testcampaignspeed74”
  • “testcampaignspeed2”
  • “testcampaignspeed23”


And here is the PMC window:

I just wanted to put this out there for myself to remember and for anyone else who needs it.

thanks
Russ 

Add SendGrid email to SQL database mail

How to configure database mail with SendGrid and use it for SSIS agent jobs.

Note: This is more for me as a reminder for the future.

-- Enable Database Mail on the instance
USE master;
EXECUTE sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXECUTE sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

-- Create a mail profile
USE msdb;
EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'EmailAdmin',
    @description = 'Profile for sending Automated DBA Notifications';

-- Create the SendGrid SMTP account (fill in your address and SendGrid credentials)
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'SendGridSQLAlerts',
    @description = 'Account for Automated DBA Notifications',
    @email_address = '',
    @display_name = 'SendGrid SQL Alerts',
    @mailserver_name = 'smtp.sendgrid.net',
    @username = '',
    @password = '',
    @port = 25;

-- Attach the account to the profile
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'EmailAdmin',
    @account_name = 'SendGridSQLAlerts',
    @sequence_number = 1;

-- Point SQL Server Agent at the profile and add an operator (fill in the recipient address)
USE [msdb];
EXEC msdb.dbo.sp_set_sqlagent_properties @databasemail_profile = N'EmailAdmin';

EXEC msdb.dbo.sp_add_operator @name = N'EmailOperator',
    @enabled = 1,
    @pager_days = 0,
    @email_address = N'';
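To check that the profile actually sends mail, a quick smoke test from PowerShell is handy. This is just a sketch: it assumes the SqlServer (or older SQLPS) module that provides Invoke-Sqlcmd is installed, and the recipient address is a placeholder.

# Send a test message through the EmailAdmin profile created above; adjust -ServerInstance as needed.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'msdb' -Query @"
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'EmailAdmin',
    @recipients   = 'you@example.com',
    @subject      = 'Database Mail test',
    @body         = 'SendGrid + Database Mail smoke test';
"@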

thanks
Russ

Using Azure PowerShell to attach to your Azure subscription and create a new website

Hello,

As I haven't posted for a while, I wanted to quickly post my findings on using Azure PowerShell.

My aim here, using only PowerShell, is to:

  • Connect to my Azure subscription
  • Create a website in Azure
  • Stop the website
  • Remove the website

Ok here goes.
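For reference, here is the whole sequence in one block before we walk through each step. This is a sketch using the classic Azure PowerShell cmdlets; MyPowerShellTest is just the example site name used in this post:

# You may need to relax the execution policy first (run PowerShell as administrator):
# Set-ExecutionPolicy RemoteSigned

Add-AzureAccount                              # log in to the subscription
Get-AzureWebsite                              # list existing sites
New-AzureWebsite -Name MyPowerShellTest       # create the new site
Stop-AzureWebsite -Name MyPowerShellTest      # stop it
Remove-AzureWebsite -Name MyPowerShellTest    # delete it (you will be asked to confirm)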

Firstly, you will need to install Azure PowerShell, so go here and work it out:

Then fire it up. It should look like this, and you can find it in your Windows Start menu.


Then we need to tell PowerShell about our Azure subscription:


Just type Add-AzureAccount and you will be prompted to log in to Azure.
Look here for info on this command:

After that you will see some confirmation that it found your Azure account:


You may also need to change the PowerShell execution policy before you can run these commands locally:


So here is my Azure website list before I add a new site:


So next I run New-AzureWebsite -Name MyPowerShellTest:

You can see it looks like it did something and returned info about my new site. If you look in Azure you can see the new site:



Nice!

OK, so for fun, let's stop the site:
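The cmdlet that does this is the counterpart of New-AzureWebsite:

Stop-AzureWebsite -Name MyPowerShellTest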



So it worked.

Now I'll delete the website completely using:
Remove-AzureWebsite -Name MyPowerShellTest

And it’s gone:


Well, you will have to trust me, as I can't show you anything!

thanks
Russ

What I found in 4 minutes in Visual Studio 2013 in an MVC5 project

I installed Visual Studio 2013 this morning, and here are a few initial things I have found.

I used the following version:

Firstly, after installation it asked me a couple of questions, including what colour scheme I would like to use for Visual Studio. As you can see, I chose Dark.
I thought this was a nice little welcome surprise, instead of burning my eyes out for weeks until I realised I could change the colour scheme.

Secondly, and I'm not sure if this is a good thing, I am logged into MSDN through Visual Studio:

I suppose it feels more personalised, but let's see what happens down the track!

Another thing I noticed straight away is a little hint above each method that shows you how many times that method is referenced:

When you click on it, it shows you the code that references the method:

Another thing I think I like is that the old ASP.NET membership provider has been replaced with claims-based identity. I suppose it's still a custom database supplied by Microsoft, but having used claims identity before
(https://github.com/brockallen/BrockAllen.MembershipReboot), it should be more loosely coupled and allow you to write cleaner code without being totally locked inside the membership provider. As far as I remember, I was using dependency injection and the old membership provider kept making me write ugly code.

They have also added some example controller tests by default. This is great, as I remember having no examples when I started writing controller tests.
It should motivate developers to write more tests from the start, which is good.

I have always hated the Visual Studio test GUI, but it now looks a little better and is more like ReSharper's:

I am not sure if this was here before, but there seems to be a new notifications window that gives you updates on what is new, for example an update to the NuGet Package Manager.

Code coverage is also included, which is something I would have had to pay for before:

The final thing I found was that in a controller, for example, I can right-click and select Code Map:

And then I can drag classes onto the canvas to see their relationships:

That’s my 2 cents worth and that’s all for now.

thanks
RuSs