Using the On-Premises Data Gateway to process data with Logic Apps

Summary:

This post outlines how to:
  • Set up the on-premises data gateway on an on-premises machine (I will be using my laptop.
    Note: You shouldn’t install a gateway on a computer, such as a laptop, that may be turned off, asleep, or not connected to the Internet, because the gateway can’t run under any of those circumstances. In addition, gateway performance might suffer over a wireless network.)
  • Build a Logic App that consumes Azure Service Bus queue messages and sends them as text files, through the On-Premises Data Gateway, to a folder on my local machine.

Set up the On-Premises Data Gateway

  • The gateway runs as a Windows service and, as with any other Windows service, you can start and stop it in multiple ways. For example, you can open a command prompt with elevated permissions on the machine where the gateway is running, and then run either of these commands:
    • To stop the service, run this command:
      NET STOP PBIEGWSERVICE

    • To start the service, run this command:
      NET START PBIEGWSERVICE
      
  • Here is the Windows service that was installed:
  • Note that I changed the Windows service to run as a newly set up Windows account (Administrator), as this is the only way I could get the Logic App to authenticate with my local service:

  • You may need to ensure your firewall is open. Run the following in PowerShell in administrator mode:
    Test-NetConnection -ComputerName watchdog.servicebus.windows.net -Port 9350

    And here is the outcome:

  • See this link for other firewall options and whitelisting:
    https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-gateway-install

  • Log onto Azure with the same account you used to set up the on-premises gateway, and create an on-premises data gateway resource in Azure, connecting it to the one on your machine using the drop-down at the bottom of the next picture:
  • Here is my newly created on-prem-gateway:
  • Now go and create a local windows share. I just opened it up to “everyone” for the purposes of my exercise:

Set up the Logic App

  • Create the logic app in Azure:
  • I used the peek-lock and complete template to make it easier:
  • Note that I have already created a service bus namespace and queue for this demo.
  • Now add the File System connector and tell it to use the on-premises data gateway:

    NOTE: The above gateway name 'MondayOnPremGW' does not match the name of the gateway created at the top of this demo. I had hours of trouble getting the above setting to connect. In the end, even though I had uninstalled the on-premises gateway many times and cleaned up the resources in Azure many times, I could still see a list of on-premises data gateways in Azure. So, finally, I just selected one that had worked for me a few weeks ago. In theory, you should not have the same issue if you create one gateway and then use that one. I suspect there are remnants from previous installs left locally.

Send some messages to the service bus

  • I have some C# code to send a number of messages to the Service Bus (a minimal sketch of it follows this list).
  • For the purposes of this post I will disable the Logic App and send 10 messages to the Service Bus so they are visible:
  • And the content of 1 message:
  • After enabling the Logic App, the messages in the Service Bus queue are consumed:
  • Here is a sample of 1 successful Logic App run:
  • And the resulting files on my local file system:
  • And here is the content of a file created from a Service Bus message:
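
For reference, here is a minimal sketch of the kind of sender code I mean (assumptions: the older WindowsAzure.ServiceBus SDK, a queue called demoqueue, and a connection string with send rights – swap in your own values):

using System;
using System.IO;
using System.Text;
using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus NuGet package

class Program
{
    static void Main()
    {
        // Placeholder values - replace with your own namespace, key and queue name.
        var connectionString = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=Send;SharedAccessKey=<key>";
        var queueName = "demoqueue";

        var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

        for (var i = 1; i <= 10; i++)
        {
            // Send the body as a raw text stream so the Logic App can write it straight to a file.
            var body = new MemoryStream(Encoding.UTF8.GetBytes($"Message {i} sent at {DateTime.UtcNow:o}"));
            client.Send(new BrokeredMessage(body, true) { ContentType = "text/plain" });
            Console.WriteLine($"Sent message {i}");
        }

        client.Close();
    }
}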

Issues

As mentioned above, I had hours of trouble getting the Logic App on-premises file connector to connect to my machine.
In the end, even though I had uninstalled the on-premises gateway many times and cleaned up the resources in Azure many times, I could still see a list of on-premises data gateways in Azure. So, finally, I just selected one that had worked for me a few weeks ago.
In theory, you should not have the same issue if you create one gateway and then use only that one.
I suspect there are remnants from previous installs left locally.

We also saw quite a lot of throttling, and have been informed by Microsoft that a less limited model is on the way soon:

The average run time was not too good, but this might be due to me being on a Wi-Fi network:

Final Thoughts

Overall it was easy to work with and set up, apart from the issue mentioned above. I think more work may need to be done on the on-premises gateway installer, as it seems to hold onto data between uninstalling and reinstalling.

How to auto-heal your web app so you hopefully get your weekends back

Summary

This document outlines how to set up auto-healing for your web app. At times you may be enjoying your weekend and get a call telling you an app is constantly running at 100% CPU. 
Autoheal is a way to automatically recycle your app pool based on certain triggers.
I have created a POC web app with 3 instance VMs. I have turned on all diagnostic logs and set them to verbose.
Here is the Event Log before enabling auto-heal in the web.config (there were more records but I removed them as they were all similar):

And here is the new web.config. This will recycle the app pool when we get 20 or more requests in a period of 30 seconds:
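
The relevant section looks something like the following sketch (an assumption on my part: this uses the system.webServer/monitoring schema that the App Service auto-heal feature reads from web.config – check the current documentation for the exact element names):

<configuration>
  <system.webServer>
    <monitoring>
      <triggers>
        <!-- Fire when 20 or more requests arrive within 30 seconds -->
        <requests count="20" timeInterval="00:00:30" />
      </triggers>
      <!-- Recycle the app pool when the trigger fires -->
      <actions value="Recycle" />
    </monitoring>
  </system.webServer>
</configuration>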

So, once this is deployed, I will call the endpoint 100 times in less than 30 seconds, which should cause the trigger to run its action – Recycle.
Here is the calling PowerShell code (this just simulates a client):

$URL = 'http://autohealingpoc.azurewebsites.net/api/values';
$contentType = "application/json";

for ($i = 1; $i -le 100; $i++) {
    Write-Host "i is: $i";
    $response = Invoke-RestMethod -Method Get -Uri $URL -ContentType $contentType;
    $response;
}


After I ran 100 requests in less than 30 seconds, you can see in the event log that instance RD0003FF857039 recycled:

When I really loaded it up, all 3 instances auto-healed:

Thanks
Russ

Monitor your Azure API Management Instance with PowerBI


Summary

This document outlines the steps involved to monitor your Azure API management instance with PowerBI.

Note: I will refer to Azure API management as APIM in this document.

Steps will include:

  • Add a logger, using the APIM REST API, to your APIM instance to send events to an event hub
  • Set up a Stream Analytics job – this consists of one or more input data sources, a query expressing the data transformation, and one or more output targets that results are written to. Together these enable data analytics processing for streaming data scenarios.
  • Build a PowerBI dashboard to see your APIM data in a format that suits your business requirements.

Adding a logger to APIM

First thing you need to do is add a logger to your APIM instance using the APIM REST API. I will use Postman to do this.

Firstly, in your APIM instance, enable the REST API:

Secondly, go to the bottom of the security page where you enabled the REST API and generate a shared access key:





You will use this later in Postman after we first create an Azure Event hub.


Create an Event Hub

Go into your Azure portal and create an event hub.





Ensure you create two Event Hub shared access policies: one for sending to the event hub and one for receiving. This gives you more granular control over your hub.

Creating the logger in Postman

Now that you have an Event Hub, the next step is to configure a Logger in your API Management service so that it can log events to the Event Hub.
API Management loggers are configured using the API Management REST API.
To create a logger, make an HTTP PUT request using the following URL template.

https://{your service}.management.azure-api.net/loggers/{new logger name}?api-version=2014-02-14-preview

Replace {your service} with the name of your API Management service instance.
Replace {new logger name} with the desired name for your new logger. You will reference this name when you configure the log-to-eventhub policy.


Add the following headers to the request: an Authorization header containing the SharedAccessSignature token you generated on the security page earlier, and Content-Type: application/json.
Specify the request body using the following template.

{
  "type" : "AzureEventHub",
  "description" : "Sample logger description",
  "credentials" : {
    "name" : "Name of the Event Hub from the Azure Classic Portal",
    "connectionString" : "Endpoint=Event Hub Sender connection string"
  }
}

 Here is mine:

You will see that it returned 201 Created which means we now have a logger.


Now we go into APIM and add a policy on the built-in Echo API.

This will send an event to our event hub when the following API operation is hit:

  • Product: Unlimited
  • API: Echo API
  • Operation: Retrieve resource
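
The policy itself is a log-to-eventhub policy in the inbound section. Here is a rough sketch (the logger-id must match the logger name you used in the PUT request above, and the logged fields here are just the documentation sample expression – log whatever suits your reporting needs, e.g. param1 and param2):

<policies>
  <inbound>
    <base />
    <log-to-eventhub logger-id="apim-logger">
      @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress, context.Operation.Name) )
    </log-to-eventhub>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>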


View the Event Hub Events

If you want to view the event hub events for your own sanity check then download Service Bus Explorer and listen to your event hub:

You will notice that the event hub data shown in the listener is the same data written by our APIM policy.

Create Stream Analytics Job to send data to PowerBI

Next, go to your Azure portal and create a new stream analytics job.

Once it is created then create an input:

And a PowerBI output: Note that you will be prompted to authorize your PowerBI account.

Also create a query to transform the data. My query doesn't do anything special:
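
Something like the following pass-through query is all that is needed (assuming the input alias is apim-input and the output alias is powerbi-output – use the names you gave your own input and output):

SELECT
    *
INTO
    [powerbi-output]
FROM
    [apim-input]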

Then start your stream analytics job.

Now go back to your APIM instance and hit the API endpoint a few times. Make sure it is the API operation with the policy on it.

Also, change param1 and param2 on the operation to a few different values so we get somewhat useful data:

Now look at the trace for that operation in APIM and you will see the log to event hub event has fired:

Now log into PowerBI on the web and you will see (hopefully) a new streaming dataset:

Create a dashboard


We will create a PowerBI dashboard to visualise our APIM data using param1 and param2

I will drag a pie chart onto the workspace and set the following: all I did was add param2 as a count. As you can see, we get a great visualisation of the number of times param2 was used on the APIM operation.

So you can see that I sent a request with param2=5 a lot more times than I did for the other calls.


Obviously you can use your imagination as to what you can use this for.

Here we see a Tree Map, Pie Chart and Funnel displaying data from my APIM. The funnel shows distinct calls by IP address.





thanks
Russ

Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project.

Morning,

When deploying a cloud service / web role you may experience the following message:

2016-04-27T08:16:41.2709354Z ##[error]C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\Windows Azure Tools\2.9\Microsoft.WindowsAzure.targets(373,5): Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project.

On your VSTS build screen:

It means it is basically looking for a default .cscfg file as we haven't specified an alternative.

In my build I went and set the Target Profile to match the build configuration and this fixed the issue.

So it will be looking for one of the files shown below that matches the BuildConfiguration currently being built.

And after those changes all builds perfectly:

Thanks
Russ

Moving nuget package folder to a different location (1 level higher)

Hi,

Note: thanks to this post by Sebastian Belczyk for some help:
http://belczyk.com/2013/02/moving-nuget-packages-directory-outside-solutionss-directory/

This morning I needed to relocate my NuGet packages folder one level higher to allow an offshore team to work in parallel with us. Basically, the original solution was cloaked and branched. In the original solution we realised we had two places for NuGet packages, so we had to merge them into one. This meant that the branched solution needed to match the original, otherwise we would have lots of reintegration issues.

Firstly, I needed to update my nuget.config file. We had our packages located here:
../packages/
But we really needed them here:
../../packages/ – up another level.
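
Assuming the location is controlled by the repositoryPath setting (which is how I would expect it to be configured), the nuget.config change looks something like this:

<configuration>
  <config>
    <!-- packages folder now lives one level above the solution -->
    <add key="repositoryPath" value="..\..\packages" />
  </config>
</configuration>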

Then I needed to close the solution and reopen it for this change to take effect. I found this out because changing the file without restarting meant that nothing changed and my packages were still restoring to the old ../packages/ directory.

Then right-click and choose Manage NuGet Packages for Solution…

Then Visual Studio will detect that there is no nuget packages folder on the file system at the location specified:

It will load them when you click restore:

So in summary, I had this file structure:
Soln
— Project
—- packages Folder

And now I have this structure:
Soln
— packages Folder
— Project

thanks
Russ

Move your IIS App pools from one machine to another

So,

Today I got a new machine and didn’t want to have to set my IIS up again.

So I ran the following from a command prompt run as administrator:

%windir%\system32\inetsrv\appcmd list apppool /config /xml > c:\apppools.xml


This created an XML file with all my app pools in it. I deleted the 6 nodes that held the default IIS app pools:

  • DefaultAppPool
  • Classic .NET AppPool
  • .NET v2.0 Classic
  • .NET v2.0
  • .NET v4.5 Classic
  • .NET v4.5
Then I went to my new machine after copying over the edited apppools.xml file and ran this command:

%windir%\system32\inetsrv\appcmd add apppool /in < c:\apppools.xml

And all my app pools were added:

To export all my sites I ran this:

%windir%\system32\inetsrv\appcmd list site /config /xml > c:\sites.xml

It exported all my sites to an xml file. I edited this and removed the default web site as it was already present on the destination machine.

Then I ran this on the destination machine:

 %windir%\system32\inetsrv\appcmd add site /in < c:\sites.xml

And it imported all my sites into IIS:

That is all,

Russ

Fix already installed nuget packages

Hello,

Sometimes I create a branch and, for some reason, one or two projects' packages, and hence DLLs, are wrong. I even try using "Restore NuGet Packages" at solution level from the context menu, but this doesn't always fix it.

The best fix I have found is this:
Update-Package PackageName -ProjectName MyProject -reinstall

So if you have Entity Framework 6.1.2 installed and it got broken in a branch then you could run:


Update-Package EntityFramework -ProjectName MyProject -reinstall

And it will reinstall EF 6.1.2

That is all,

thanks
Russ

Service Fabric Reliable Actors and Reliable Services


Note: This is a note to self.  

Actors

Actors are isolated, single-threaded components that encapsulate both state and behavior. They are similar to .NET objects, so they provide a natural programming model. Every actor is an instance of an actor type, similar to the way a .NET object is an instance of a .NET type. For example, an actor type may implement the functionality of a calculator, and many actors of that type could be distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.

Stateless actors

Stateless actors, which are derived from the StatelessActor base class, do not have any state that is managed by the Actors runtime. Their member variables are preserved throughout their in-memory lifetime, just as with any other .NET type. However, when they are garbage-collected after a period of inactivity, their state is lost. Similarly, the state can be lost due to failovers, which can occur during upgrades or resource-balancing operations, or as the result of failures in the actor process or its hosting node.

The following is an example of a stateless actor:
class HelloActor : StatelessActor, IHello
{
    public Task SayHello(string greeting)
    {
        return Task.FromResult("You said: '" + greeting + "', I say: Hello Actors!");
    }
}

Stateful actors

Stateful actors have a state that needs to be preserved across garbage collections and failovers. They derive from StatefulActor<TState>, where TState is the type of the state that needs to be preserved. The state can be accessed in the actor methods via the State property on the base class.
The following is an example of a stateful actor accessing the state:
class VoicemailBoxActor : StatefulActor<VoicemailBox>, IVoicemailBoxActor
{
    public Task<List<Voicemail>> GetMessagesAsync()
    {
        return Task.FromResult(State.MessageList);
    }
    ...
}


Actor state is preserved across garbage collections and failovers when it is persisted on disk and replicated across multiple nodes in the cluster. This means that, as with method arguments and return values, the actor state's type must be data contract serializable.


Actor state providers

The storage and retrieval of the state are provided by an actor state provider. State providers can be configured per actor or for all actors within an assembly by the state provider specific attribute. When an actor is activated, its state is loaded in memory. When an actor method finishes, the Actors runtime automatically saves the modified state by calling a method on the state provider. If failure occurs during the Save operation, the Actors runtime creates a new actor instance and loads the last consistent state from the state provider.
By default, stateful actors use the key-value store actor state provider, which is built on the distributed key-value store provided by the Service Fabric platform. For more information, see the topic on state provider choices.

Reliable Services

Reliable Services gives you a simple, powerful, top-level programming model to help you express what is important to your application. With the Reliable Services programming model, you get:

  • For stateful services, the Reliable Services programming model allows you to consistently and reliably store your state right inside your service by using Reliable Collections. This is a simple set of highly available collection classes that will be familiar to anyone who has used C# collections. Traditionally, services needed external systems for reliable state management. With Reliable Collections, you can store your state next to your compute with the same high availability and reliability you've come to expect from highly available external stores, and with the additional latency improvements that co-locating the compute and state provide. (A minimal sketch of a service using a reliable collection follows this list.)
  • A simple model for running your own code that looks like programming models you are used to. Your code has a well-defined entry point and easily managed lifecycle.
  • A pluggable communication model. Use the transport of your choice, such as HTTP with Web API, WebSockets, custom TCP protocols, etc. Reliable Services provide some great out-of-the-box options you can use, or you can provide your own.
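
To illustrate the Reliable Collections point above, here is a minimal sketch of a stateful service that keeps a running count in a reliable dictionary (the service name, dictionary name and key are all made up):

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class CounterService : StatefulService
{
    public CounterService(StatefulServiceContext context)
        : base(context)
    { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // Reliable dictionary: state is replicated across the cluster, no external store needed.
        var counts = await this.StateManager.GetOrAddAsync<IReliableDictionary<string, long>>("counts");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = this.StateManager.CreateTransaction())
            {
                // Increment the counter; the change is committed and replicated atomically.
                await counts.AddOrUpdateAsync(tx, "heartbeat", 1, (key, value) => value + 1);
                await tx.CommitAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}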

What makes Reliable Services different?

Reliable Services in Service Fabric is different from services you may have written before. Service Fabric provides reliability, availability, consistency, and scalability.

  • Reliability–Your service will stay up even in unreliable environments where your machines may fail or hit network issues.
  • Availability–Your service will be reachable and responsive. (This doesn’t mean that you can’t have services that can’t be found or reached from outside.)
  • Scalability–Services are decoupled from specific hardware, and they can grow or shrink as necessary through the addition or removal of hardware or virtual resources. Services are easily partitioned (especially in the stateful case) to ensure that independent portions of the service can scale and respond to failures independently. Finally, Service Fabric encourages services to be lightweight by allowing thousands of services to be provisioned within a single process, rather than requiring or dedicating entire OS instances to a single instance of a particular workload.
  • Consistency–Any information stored in this service can be guaranteed to be consistent (this applies only to stateful services – more on this later)

Stateless Reliable Services

A stateless service is one where there is literally no state maintained within the service, or the state that is present is entirely disposable and doesn’t require synchronization, replication, persistence, or high availability.
For example, consider a calculator that has no memory and receives all terms and operations to perform at once.

Stateful Reliable Services

A stateful service is one that must have some portion of state kept consistent and present in order for the service to function. Consider a service that constantly computes a rolling average of some value based on updates it receives. To do this, it must have the current set of incoming requests it needs to process, as well as the current average. Any service that retrieves, processes, and stores information in an external store (such as an Azure blob or table store today) is stateful. It just keeps its state in the external state store.

When to use Reliable Services APIs

If any of the following characterize your application service needs, then you should consider Reliable Services APIs:

  • You need to provide application behaviour across multiple units of state (e.g., orders and order line items).
  • Your application’s state can be naturally modeled as Reliable Dictionaries and Queues.
  • Your state needs to be highly available with low latency access.
  • Your application needs to control the concurrency or granularity of transacted operations across one or more Reliable Collections.
  • You want to manage the communications or control the partitioning scheme for your service.
  • Your code needs a free-threaded runtime environment.
  • Your application needs to dynamically create or destroy Reliable Dictionaries or Queues at runtime.
  • You need to programmatically control Service Fabric-provided backup and restore features for your service’s state*.
  • Your application needs to maintain change history for its units of state*.
  • You want to develop or consume third-party-developed, custom state providers*.

Comparing the Reliable Actors API and the Reliable Services API

When to choose the Reliable Actors API:
  • Your problem space involves a large number (1000+) of small, independent units of state and logic.
  • You want to work with single-threaded objects that do not require significant external interaction.
  • You want the platform to manage communication for you.

When to choose the Reliable Services API:
  • You need to maintain logic across multiple components.
  • You want to use Reliable Collections (like .NET Reliable Dictionary and Reliable Queue) to store and manage your state.
  • You want to manage communication and control the partitioning scheme for your service.

Keep in mind that it is perfectly reasonable to use different frameworks for different services within your app. For instance, you might have a stateful service that aggregates data that is generated by a number of actors.

That’s all for a high level summary.

thanks

Russ

How to install TCP / Named Pipes and some IIS bindings on a server

Morning,

Thought I would share this as it is very useful for installing server features and setting up IIS for your binding needs.


Here is the PowerShell script:



Here are the server features before the script runs:



IIS Before:



The script running:




The server after:



IIS After:



Named Pipes and TCP Listeners installed and running:

 

Thanks for listening,

Russ



Generate Entity Framework update scripts from migrations

This is how you generate Entity Framework update scripts from migrations.

Note: this is a very simplified post that doesn’t generate a very complicated database script.

So you already have an initial database migration in your project. If you don't, go and Google how to get started.

I’ll start by generating an SQL script for my initial migration.

Here is part of my initial migration in C#:
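
It looked roughly like the following (a sketch only – the table and columns here are made up; yours will reflect your own model):

using System.Data.Entity.Migrations;

public partial class Initial : DbMigration
{
    public override void Up()
    {
        // Creates the initial table with an identity primary key.
        CreateTable(
            "dbo.Users",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(),
                })
            .PrimaryKey(t => t.Id);
    }

    public override void Down()
    {
        DropTable("dbo.Users");
    }
}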

I will now generate the script for this by running this command in the Package Manager Console:

Update-Database -Script -SourceMigration: $InitialDatabase -TargetMigration: Initial






Make sure you select the correct Default Project in the dropdown shown in the above picture.

Here is the SQL script:

Now I will update my model with a new property:




I then ran the following to create my new C# migration:

Add-Migration AddedAProperty -StartUpProjectName User.DbResourceAccess







Which created this new C# file:
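
It looks roughly like this (again a sketch – the table and property names are placeholders):

using System.Data.Entity.Migrations;

public partial class AddedAProperty : DbMigration
{
    public override void Up()
    {
        // Adds the new property as a nullable string column.
        AddColumn("dbo.Users", "NewProperty", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.Users", "NewProperty");
    }
}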

Next I will run this:

Update-Database -Script -SourceMigration: $InitialDatabase -TargetMigration: AddedAProperty

Which created the following script:

You could then apply this to a production database for example.
I’m not sure you would want to insert into a __MigrationHistory table on production though.

thanks
Russ