A former twenty something in technology

In my previous post I explained how to get started running a node.js application with Azure WebApps. In this post I will cover some more advanced features that I recommend you consider for your project.

  • Routing static files
  • Multiple Instances
  • Slow Initial Render After Deployment (or timeout)
  • Always On & Application Initialization
  • Kudu notifications & Git SHA

Routing Static Files

IIS handles the requests for your application first and then proxies them into node.exe through a named pipe. Now, while node.js is perfectly capable of serving up static resources like images, css, and html, why burden it?

Using a simple IIS rewrite rule in your web.config you can have IIS serve the file directly, bypassing node.

<rule name="StaticContent">
<action type="Rewrite" url="client{REQUEST_URI}"/>
</rule>
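
For context, here is roughly how that rule might sit inside the rewrite section of your web.config, paired with a fallback rule that hands everything else to your node entry point. This is only a sketch; the rule names and the index.js entry point are assumptions you should adjust to your project.

<rewrite>
  <rules>
    <!-- Serve anything under the client folder straight from disk -->
    <rule name="StaticContent">
      <action type="Rewrite" url="client{REQUEST_URI}"/>
    </rule>
    <!-- Everything else gets handed to the node entry point -->
    <rule name="DynamicContent">
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
      </conditions>
      <action type="Rewrite" url="index.js"/>
    </rule>
  </rules>
</rewrite>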

Multiple Instances

IISNode is capable of spinning up multiple node processes and assigning each a unique named pipe that it will round-robin requests to. While node is async by default and makes good use of the time spent waiting on IO, you may still have some CPU-intensive sections in your app, and node is basically single threaded.

Multiple instances of your application allow it to scale and to tolerate a few application faults without taking everyone down.

You can set the nodeProcessCountPerApplication value to achieve multiple node.js instances. These settings can be configured in the web.config, but I recommend setting them with the Azure WebApps Application Settings section.
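
If you do set it in the web.config, it is a single attribute on the iisnode element. A minimal sketch (as I understand the iisnode documentation, a value of 0 means one process per CPU; an explicit count also works):

<!-- 0 = one node.exe per CPU core; a number such as 4 pins an explicit count -->
<iisnode nodeProcessCountPerApplication="0" />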

Slow Initial Render After Deployment

Unfortunately, Azure WebApps disk IO performance is not wonderful, so for a real world application that contains thousands of node_modules it can take some time to read the entire hierarchy into memory before it can start serving HTTP requests. To keep your first request from timing out in these cases, you can extend the namedPipeConnectionRetryDelay timeout (in milliseconds) to something longer than it takes for your application to boot. This ensures the first request waits for node to come up and respond instead of timing out.
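
As a sketch, the web.config version of that setting looks like the following. The 30 second value is only an illustration; tune it to however long your application actually takes to boot.

<!-- Illustrative value: allow up to 30s between retries while node reads node_modules and boots -->
<iisnode namedPipeConnectionRetryDelay="30000" />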

Always On & Application Initialization

During a deployment all your node processes will shut down and won’t initialize until the next request comes in. Azure WebApps have a feature called Always On which, for an Asp.net application, will ensure the application pool is kept running and won’t spin down when idle (i.e. no requests coming in).

However, that is only part of the equation. For IISNode we also need to leverage Application Initialization in order to send a primer request that spins the node process back up.

Beware of SSL. It’s important to note that Application Initialization is unable to make HTTPS requests, so if you redirect all HTTP requests to HTTPS you will need to configure at least one route that is allowed to handle plain HTTP requests in order for Application Initialization to work.

<applicationInitialization skipManagedModules="true" doAppInitAfterRestart="true">
  <!-- Any route that is HTTP accessible -->
  <add initializationPage="/warmup" />
</applicationInitialization>
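
On the node side, the initializationPage just needs to be a route your app will answer over plain HTTP. Here is a minimal Express sketch; the /warmup path matches the snippet above, and the HTTPS-redirect exclusion plus the x-arr-ssl header check are assumptions about how your redirect might be written.

var express = require('express');
var app = express();

// Redirect everything to HTTPS except the warmup route,
// since Application Initialization can only speak plain HTTP.
app.use(function (req, res, next) {
  var isHttps = req.secure || req.headers['x-arr-ssl'];
  if (!isHttps && req.path !== '/warmup') {
    return res.redirect('https://' + req.headers.host + req.url);
  }
  next();
});

// The primer request from Application Initialization lands here
app.get('/warmup', function (req, res) {
  res.send('warmed up');
});

app.listen(process.env.PORT || 3000);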

Kudu notifications & Git SHA

If you decided to use continuous integration when you push to your repository, then you have probably experienced the mystery of whether your application deployed successfully or not. Using the Kudu interface you can go to Tools > Web hooks and, with a service like Zapier, configure an endpoint to receive a notification when a deployment finishes, whether it succeeded or failed. Then do what you want with that information; in my case I post it to a Slack channel.

With or without notifications it can often be useful to know the exact SHA your application is running. This information is available from the Azure WebApps Kudu debug console in the file D:\home\site\deployments\active.

A simple file read script will do the trick.

var fs = require('fs');

var filePath = 'D:\\home\\site\\deployments\\active';
var version_sha;

fs.readFile(filePath, { encoding: 'utf-8' }, function (err, data) {
  if (!err) {
    // The active file contains the id (git SHA) of the current deployment
    version_sha = data.toString().trim();
  } else {
    console.error(err);
  }
});

Running Node On Azure Web Apps

Categories: Azure, nodejs
Comments: No

If you don’t want to manage a virtual machine and just want to take advantage of Azure’s scalable web app infrastructure for running your node.js application, then this post is for you.

I’ve broken this guide into a few key areas of pain that I went through getting the configuration right and reliable.

  • Commit your node_modules (*Your choice…but..)
  • Express & IISNode
  • Continuous Integration, SSH & Private Repositories
  • Kudu is Kool
  • Node versions

Commit your node_modules

I’ve tried pretty much every option that involved not committing your node_modules. This is where I’ve landed, so let me explain the pros and cons.

I’ll get the con out of the way right away. Yes, you’re committing all your dependencies, code you didn’t write, which will increase the size of your repository.

Now on the flip side, let’s go over the pros:

  1. Deployments are faster. You only have to deploy your repository and you’re done. No concerns with running npm install during a deployment and getting an untested version of a dependency.
  2. No concerns with network timeouts or latency when downloading thousands of additional files.

Just because you didn’t write it doesn’t mean you’re not responsible for it. At the end of the day it’s your application and you need to take ownership of all facets of it.

Express & IISNode

Azure WebApps come pre-installed with IISNode, an open source IIS module allowing you to proxy connections from IIS to Node. Enabling this feature can be done by adding a web.config file to the root of your project with a couple lines of configuration.

If you’ve ever done Asp.net development then a web.config is pretty straightforward to you. But for the node.js folks, the web.config is basically a set of xml based instructions for IIS and .net. In this case it’s going to tell IIS what to do with requests and how to route them into node.

Beware of named pipes. It’s important to know that because IIS is handling the requests on port 80 your node app is actually given a named pipe string for the port instead of a number and that is what IIS will proxy requests through.
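
In practice that just means your node app should listen on whatever process.env.PORT contains rather than a hard coded number. A minimal Express sketch:

var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('hello from iisnode');
});

// On Azure, process.env.PORT holds the named pipe iisnode created;
// the numeric fallback is only for running locally.
app.listen(process.env.PORT || 3000);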

<handlers>
  <!-- Change path to the entry point of your application -->
  <add name="iisnode" path="index.js" verb="*" modules="iisnode" />
</handlers>
<!-- All of these properties can be overridden at the application level -->
<iisnode node_env="development" loggingEnabled="true" debuggingEnabled="true" logDirectory="..\LogFiles\nodejs" promoteServerVars="HTTPS" enableXFF="true" />

Hope that gives you a good idea of how to configure Azure WebApps to run node.js and some of the initial brain teaser issues to solve.

Continuous Integration, SSH & Private Repositories

If you’re using GitHub you may be interested in doing continuous integration.

Using the Azure setup wizard for continuous integration against your repository will generate an SSH key on that repository. This grants the server access to pull from your repository whether it’s public or private, which is great. It will also add a web hook pointing at Kudu to notify the server when a push to your desired branch happens.

We now have continuous integration and deployment complete right? Well maybe…

Re-assigning SSH Keys

A nice touch of Kudu is that it will by default do a git pull --recurse-submodules, allowing you to pull any associated submodules with your project as well. However, because the SSH key is only assigned to the primary project, you won’t be authorized to pull the submodules by default.

You can get around this limitation by logging into your GitHub project and removing the SSH key auto generated by Azure. Then through the Kudu web interface path out to the .ssh directory, D:\home\.ssh and grab the public SSH key. Then add this key to your GitHub account instead of to a specific repository.

Kudu is Kool

Kudu is an open source interface for Azure WebApps that handles deployments and management of your application. It’s really awesome actually. But the primary pain comes during the deployment process.

During a deployment Kudu will launch a deploy.cmd batch script that you can alter to do any unique operations for your project before and after deployment.

Two primary issues you will run into:

  • npm install – If you didn’t follow my advice about node_modules then Kudu will have to run npm install, causing it to make hundreds of requests to download your node_modules.
  • Kudu Sync – is part of the Kudu deployment process. Its job is to copy new or changed files. However, it can easily get out of sync and mistakenly think it has already copied a file, or that it is unchanged, and not replace it.

Maybe I’m just lucky at hitting fringe cases immediately, but with a real world project Kudu had my files out of sync after a couple of deployments. The good news is that because the Kudu deployment script is just a batch script, we can do anything you can do with batch.

Robocopy

I’ve been a huge fan of robocopy for years and it’s my go-to tool of choice when ensuring files and folders are in sync.

Replace:

rem call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 100 -f "%DEPLOYMENT_SOURCE%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
call :ExecuteRoboCopy robocopy "%DEPLOYMENT_SOURCE%" "%DEPLOYMENT_TARGET%" /MIR /MT /R:100 /NDL /NP /NFL /XD ".git" /log:"D:\home\LogFiles\deployment.log"

Then you will have robocopy in charge of keeping your repository in sync with your wwwroot deployments.

Node Versions

You can configure Azure WebApps to use a specific version of node by setting the WEBSITE_NODE_DEFAULT_VERSION application setting to the version you desire. Not all versions are available; I recommend using the Kudu interface to browse to D:\Program Files (x86)\nodejs to get a list of all the available versions.

Check out my next post on some advanced configurations and some additional recommendations.


Separation of concerns using Loopback

Categories: nodejs
Comments: No

A single code base application that contains your server side rendering, API (REST or otherwise), business logic, client side application and testing frameworks is nice when your application is small or you’re showing a demo at a conference. However, in the real world your application is likely much larger and more intricate than a demo made in 5 minutes.

Loopback

A little background. Loopback is a node based framework for building websites and applications created and maintained by Strongloop.

The Strongloop team has provided examples as well as a command line application scaffolding utility to get you up and running quickly. While this scaffolded application is full featured and great for getting started, you are still left with an application that was created in 5 minutes. It will need a little thought and work to turn it into a real world application.

Separating out the pieces

The key to making this work is identifying what is required in each project in order to operate and function together.

The primary points to focus on are:

  • Node modules
  • boot scripts
  • exporting module singleton objects (loopback & configuration)
  • middleware
  • model locations (specifically when utilizing loopback-component-passport)

Setup

Simply clone the API and SITE projects into their own directories. Then you can run node . in the api and site directories, or create a combination script to start up both:

var api = require('./api').loopback;
var site = require('./site');

api.start();
site.start();
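
For that combination script to work, each project has to export something with a start method. Here is a minimal sketch of what the api project’s index.js might look like; the file layout and names are assumptions based on what the Loopback scaffolder generates.

// api/index.js - minimal sketch (names assumed)
var loopback = require('loopback');
var boot = require('loopback-boot');

var app = loopback();

// Run this project's boot scripts, model definitions and middleware
boot(app, __dirname);

app.start = function () {
  return app.listen(function () {
    console.log('API listening at %s', app.get('url'));
  });
};

// Exposed so the combination script can call require('./api').loopback.start()
module.exports = { loopback: app };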

If you’re converting a legacy app or simply have a complicated validation scheme already implemented on the server that you aren’t ready to replicate on the client then whole form validation may be for you.

The goal is to take an entire form model and validate the whole object on the server and return back a collection of errors. This will give the user instant feedback that they have entered some data that is invalid.

Directive

By creating a custom directive, contact-validator, and applying it to the ng-form directive, I am able to set up an asyncValidator on the entire model.

app.directive('contactValidator', function($q, $http) {
	return {
		restrict: 'A',
		require: 'ngModel',
		link: function(scope, element, attrs, ngModel) {
			ngModel.$asyncValidators.contact = function(model) {
			...
			}
		}
	}
});
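
Here is a minimal sketch of what the validator body might look like. The /api/contacts/validate endpoint and its error payload are assumptions; the key idea is resolving the promise when the server says the model is valid and rejecting it otherwise.

ngModel.$asyncValidators.contact = function (modelValue) {
  // Send the whole form model to the server and let it run its validation rules
  return $http.post('/api/contacts/validate', modelValue)
    .then(function (response) {
      // Hypothetical payload: { errors: [{ field: 'email', message: '...' }] }
      if (response.data.errors && response.data.errors.length) {
        scope.serverErrors = response.data.errors;
        return $q.reject('invalid');
      }
      scope.serverErrors = [];
      return true;
    });
};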

In order to catch all the model updates, a $watch is applied with deep checking enabled.

scope.$watch(attrs.ngModel, function(model) {
    if (model != null) {
        ngModel.$validate();
    }
}, true);

You may want to review the ngModel directive documentation and the advanced features it contains.

Model Options

Model options are important because without them the entire model will be cleared if validation fails.

ng-model-options="{allowInvalid: true}"

Things to consider

When using this method, consider changing the validation event on each input field to blur instead of keypress to reduce the chattiness of the validation events.

ng-model-options="{updateOn:'blur'}"
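
Both options can be combined on the same input if you want the blur behaviour and still keep invalid values in the model:

ng-model-options="{allowInvalid: true, updateOn: 'blur'}"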

In low latency environments where the web server and you are in the same office this can provide a quick way to extend the server side validation directly to the browser.

Remember it will be more performant to have your client side code do as much of the validation as possible. But when you need it this method is available.


Reset your DNN Password

Categories: DotNetNuke
Comments: 3

I’ve previously shown how in DNN you could recover your DNN host password. DNN has since shifted its default password policy, as of DNN 7.1.0, to use a hashed password.

What this means is that you can no longer simply see what the current password is for any of your users. While this is great for security, it’s a problem if you’ve forgotten your password and SMTP is down.

Expanding upon my previous script I’ve created a single page utility application that will allow you to:

  • See top 25 super users.
  • Change any user’s password.
  • Create a new super user.
Unlock DNN – Download Now

Now just save this file to the root of your DNN application (beside the default.aspx) and happy resetting.

Something that I thought was missing from DNN’s DAL2 was a generic implementation of its repository base class that you could inherit from instead of repeating more boilerplate.

Ultimately why this is important should be apparent when you see the implementation below. But spoiler alert: It’s very little code!

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
using DotNetNuke.Collections;
using DotNetNuke.Data;

namespace Components.Data
{
    public abstract class RepositoryImpl<T> : IRepository<T> where T : class
    {

        public virtual void Delete(T item)
        {
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                repo.Delete(item);
            }
        }

        public virtual void Delete(string sqlCondition, params object[] args)
        {
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                repo.Delete(sqlCondition, args);
            }
        }

        public virtual IEnumerable<T> Find(string sqlCondition, params object[] args)
        {
            IEnumerable<T> list = default(IEnumerable<T>);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                list = repo.Find(sqlCondition, args);
            }
            return list;
        }

        public virtual IPagedList<T> Find(int pageIndex, int pageSize, string sqlCondition, params object[] args)
        {
            IPagedList<T> list = default(IPagedList<T>);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                list = repo.Find(pageIndex, pageSize, sqlCondition, args);
            }
            return list;
        }

        public virtual IEnumerable<T> Get()
        {
            IEnumerable<T> list = default(IEnumerable<T>);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                list = repo.Get();
            }
            return list;
        }

        public virtual IEnumerable<T> Get<TScopeType>(TScopeType scopeValue)
        {
            IEnumerable<T> list = default(IEnumerable<T>);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                list = repo.Get<TScopeType>(scopeValue);
            }
            return list;
        }

        public virtual T GetById<TProperty>(TProperty id)
        {
            T item = default(T);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                item = repo.GetById<TProperty>(id);
            }
            return item;
        }

        public virtual T GetById<TProperty, TScopeType>(TProperty id, TScopeType scopeValue)
        {
            T item = default(T);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                item = repo.GetById<TProperty, TScopeType>(id, scopeValue);
            }
            return item;
        }

        public virtual IPagedList<T> GetPage(int pageIndex, int pageSize)
        {
            IPagedList<T> list = default(IPagedList<T>);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                list = repo.GetPage(pageIndex, pageSize);
            }
            return list;
        }

        public virtual IPagedList<T> GetPage<TScopeType>(TScopeType scopeValue, int pageIndex, int pageSize)
        {
            IPagedList<T> list = default(IPagedList<T>);
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                list = repo.GetPage<TScopeType>(scopeValue, pageIndex, pageSize);
            }
            return list;
        }

        public virtual void Insert(T item)
        {
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                repo.Insert(item);
            }
        }

        public virtual void Update(T item)
        {
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                repo.Update(item);
            }
        }

        public virtual void Update(string sqlCondition, params object[] args)
        {
            using (IDataContext db = DataContext.Instance()) {
                dynamic repo = db.GetRepository<T>();
                repo.Update(sqlCondition, args);
            }
        }
    }
}

What this class will allow you to do is create a simple class that inherits RepositoryImpl and gain access to all the CRUD operations.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
using DotNetNuke.Data;

namespace Components.Data
{

    public interface IAnyEntityRepository : IRepository<AnyEntityName>
    {
    }

    public class AnyEntityRepository : RepositoryImpl<AnyEntityName>, IAnyEntityRepository
    {
    }
}

Additionally this structure will allow you to use dependency injection like Ninject to inject the concrete class using the IAnyEntityRepository interface.

Dependency injection (DI) has several advantages, and I myself haven’t fully wrapped my head around all the use cases. But this article isn’t going to focus on explaining why DI is important, but rather on how you can implement it in DNN.

Getting Started

Using nuget you can install ninject into your module.

Install-Package ninject

Searching the internet you may find a package called Ninject.Web and may find yourself thinking this is exactly what I need! Unfortunately, this extension assumes that you have a standard user control or direct access to the page, and does not allow you to inherit from PortalModuleBase.

The primary abstraction in Ninject.Web is the creation of a KernelContainer which is a static class to broker all your DI interactions. So you will need to create that class.

using System;
using Ninject;

sealed class KernelContainer
{

     private static IKernel _kernel;

     public static IKernel Kernel {
          get { return _kernel; }

          set {
               if (_kernel != null) {
                    throw new NotSupportedException("The static container already has a kernel associated with it!");
               }

               _kernel = value;
          }
     }

     public static void Inject(object instance)
     {
          if (_kernel == null) {
               throw new InvalidOperationException(String.Format("The type {0} requested an injection, but no kernel has been registered for the web application. Please ensure that your project defines a NinjectHttpApplication.", instance.GetType()));
          }

          _kernel.Inject(instance);
     }
}

Next we must new up an instance of the kernel by hooking into the application start pipeline. An easy way to do this is with the WebActivatorEx library that comes with Ninject. This will fire once at application start, create a static instance of your KernelContainer, and assign it a new instance of StandardKernel. Finally you wire up, or bind if you will, all of your interfaces to their concrete classes.

using Ninject;

[assembly: WebActivatorEx.PreApplicationStartMethod(typeof(Components.DI.Bindings), "RegisterServices")]
namespace Components.DI
{
	public static class Bindings
	{
		public static void RegisterServices()
		{
			KernelContainer.Kernel = new StandardKernel();
			KernelContainer.Kernel.Bind<IDataRepository>().To<DataRepository>().InSingletonScope();

		}
	}

}

After that we need to tell the KernelContainer that our module user control may have classes that need to be injected. This is done by adding an abstraction to the PortalModuleBase. I call mine CustomModuleBase but you can call yours whatever you like.

using System.Web.UI;
using DotNetNuke.Entities.Modules;

public class CustomModuleBase : PortalModuleBase
{
		public CustomModuleBase()
		{
			KernelContainer.Inject(this);
		}
}

Now you can utilize the Ninject Inject attribute on any constructor, method, or property in your class, and Ninject will inject the concrete class bound to that interface.

public class Main : CustomModuleBase
{
	[Inject()]
	public IDataRepository _repo { get; set; }
}

Windows Azure VM Subscription Transfer

Categories: Azure
Comments: No

I successfully managed to transfer my Windows Azure Virtual Machine to a different subscription without losing any files with minimal downtime.

How did I do it?

I followed the steps on the Windows Azure documentation: How to Capture an Image of a Virtual Machine Running Windows Server.

  1. Ran sysprep on my Windows VM (C:\Windows\System32\Sysprep\Sysprep.exe)
  2. Waited about 10 minutes for the operation to complete and the system to shut down.
  3. Clicked Capture

Then UH OH! You’re going to delete my virtual machine? But that means I will lose my IP address because there will be no VMs running under that domain. THERE HAS GOT TO BE A BETTER WAY.

In fact there is. So what I did was:

  1. Created a new extra small Ubuntu virtual machine and added it to the same cloud service as the existing virtual machine (the one I was about to capture).
  2. Quickly added Apache to host a web server letting people know the site was down. (sudo apt-get install apache2)
  3. Modified the default index.html (sudo vim /var/www/index.html) and created a nice maintenance message.
  4. Temporarily created a port 80 endpoint pointing to the Ubuntu virtual machine.
  5. Back on the primary machine, started the capture.
  6. When the capture was finished, I created a new virtual machine from the image I had just captured and set the subscription to the one I wanted to move to.
  7. The VM booted up and I had to re-add my attached disk.
  8. Removed the port 80 endpoint from the Ubuntu virtual machine and applied it to the newly created Windows machine.
  9. Shut down the Ubuntu VM (I will keep you around for future maintenance since you no longer cost me money except the small storage charge).

BAM! Site migrated to the new subscription and the website is up and running without any loss of data.


Actively Blogging At InspectorIT

Categories: Personal
Comments: No

The TwentyTech blog is now reserved for personal items, and standard technical blog entries have moved to my company blog at InspectorIT.com/blog.


I prefer to work through a problem backwards, so if you’re just here for the solution, start with the Resolution section below. If you want to see the process, continue reading.

Resolution

In the end, the problem was that my CPU was under-clocking itself to protect against overheating. The reason was a thick layer of dust between my CPU fan and heat-sink. After buying a can of compressed air and fully clearing the heat-sink and case of any dust and debris, my Maximum Frequency was back up to 100% (actually 104%).

The Story

Over the last few weeks I had been fighting a losing battle with my computer’s performance. Having recently upgraded my Windows 7 installation to Windows 8, I started noticing that flash based video playback was sluggish and would cause the rest of my tasks to feel delayed.

Jumping to the obvious conclusion that Windows 8 was the problem, I decided to re-install Windows 7 and get my PC back to where it was previously. Much to my annoyance and surprise, the problem persisted even on Windows 7.

My computer at this time is no slouch and contains the following specs:

  • Intel Core i7
  • 24 GB of ram – triple channeled
  • 2 video cards – Nvidia GeForce GTX 570 & GeForce 275
  • SSD Drive for C:

Identifying the problem

Trying to figure out the problem was difficult. I was pretty sure it was hardware related but couldn’t understand why the symptoms would only show up when playing video. Again jumping to conclusions, I assumed it was a video card or driver issue. Then one day I finally pushed my computer to the limit and it just shut off, started giving me BIOS error messages about overclocking, and would fail to reboot.

This all seemed strange because I don’t overclock my CPU, and it generally runs at around 12%-23% usage, so it was hardly being taxed. After staring at the Resource Monitor for a while I noticed that the Maximum Frequency of my CPU was at 60%.

In a nutshell, Maximum Frequency reflects the speed your CPU is actually running at relative to its full capability; mine was running at 50-60% of what it should be. Curious why this was, I started poking around my BIOS and found an option for Overclocking Protection, which was enabled. I decided to disable it, and about 10 seconds into booting my PC instantly shut off.

The Problem

My CPU was under-clocking itself in order to save itself from burning out due to the additional heat. Directed by the BIOS setting to do this, it opted to under-clock itself rather than simply shut down to keep cool. I decided to open up my case and take a look at my CPU, and found a thick layer of caked-on dust over the entire top of my heat-sink under the CPU fan.

I wish there were more obvious hardware monitoring tools that would identify these issues and take some of the guesswork out of diagnosing hardware problems.