Tuesday, October 27, 2015

Angular2: Fix Displaying data example in es5

Since I’m currently working on an Angular project, I got curious about the Angular 2 version. I started with the quick start demo and everything went very smoothly, so the next step was the Displaying Data example. That’s where things started to go wrong: I followed the walkthrough, but all I got was a blank page.

After some research I discovered that the Angular developers made a breaking change in the alpha 35 version: all the classes ending in Annotation(s) were renamed to Metadata. Another difference is that you need to use the ng object on the window instead of angular.

If you want to get the Displaying Data example to work in es5, you will need to do the following:


(function () {
    // ES5
    function DisplayComponent() {
        this.myName = "Alice";
    }

    DisplayComponent.annotations = [
        new ng.ComponentMetadata({
            selector: "display"
        }),
        new ng.ViewMetadata({
            template: '<p>My name: {{ myName }}</p>'
        })
    ];

    document.addEventListener('DOMContentLoaded', function () {
        ng.bootstrap(DisplayComponent);
    });
})();


    <script src="angular2.sfx.dev.js"></script>
    <script src="show-properties.js"></script>
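For completeness, a minimal sketch of the hosting page. The file name show-properties.js is taken from the snippet above, and the custom element matches the "display" selector of the component; adjust both to your own setup:

```html
<!-- index.html: a minimal host page for the ES5 example above -->
<html>
  <head>
    <title>Displaying Data</title>
    <script src="angular2.sfx.dev.js"></script>
    <script src="show-properties.js"></script>
  </head>
  <body>
    <!-- matches the "display" selector of DisplayComponent -->
    <display></display>
  </body>
</html>
```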

Wednesday, March 19, 2014

Pack nuget packages locally and with TFS Build

Currently I’m working on a project at a customer where we are integrating a new ASP.NET MVC application into an existing ASP.NET WebForms application. Due to the complexity of the legacy system (70+ projects) I decided to start a brand new solution for the new development and package it as a NuGet package. Every time a new version of the ASP.NET MVC application needs to be published, a new version of the NuGet package gets installed on the existing legacy system.

To automate this, I started by adding a post-build event to the ASP.NET MVC project that was triggered on every release build. So far, so good.

The next step was configuring the TFS build so I could push the package to a share. That way anyone on the team could install the package and I could use package restore, so I no longer needed to check in the packages folder.

Note: Don’t use the ‘Enable NuGet Package Restore’ function on the solution, but enable it under ‘Tools > Options > Package Manager > General’. On this tab page you have a group box called Package Restore; check both checkboxes. More information about this can be found in this blog post by Xavier Decoster.


This is where the hell began. Building the project succeeded, but the build failed on packaging the NuGet package. One of the issues was that NuGet was looking for the bin folder to copy the dll’s from, but a TFS build doesn’t place the dll files in the bin folder; it puts them in a Binaries folder at solution level. A possible fix was changing the output path for a release build to ‘..\..\Binaries’ in the properties of the project file (Build tab), but this is more a workaround than a good solution, so I kept looking for a better one.

Next I took a look at the NuGet.targets file that gets added when you use ‘Enable NuGet Package Restore’ on the solution. I know I just told you not to use this, and you shouldn’t, but this targets file also contains a task for packaging NuGet packages. So the next thing I did was copy the content of the .nuget folder to my own folder and modify the NuGet.targets file.

If you thought this would solve all my problems, think again. The problem of the packaging step looking for the bin folder of the project was solved by adding –OutputDirectory “$(OutDir) ” to the pack command. Note the space after $(OutDir): it must be present and is necessary to handle the double slashes that occur when packaging on TFS. This results in something like 'bin\ \package.nupkg'. Not nice, but it seems to work. The next problem was that I was using the “-IncludeReferencedProjects” option, which was still using the output path configured in the project instead of the Binaries folder.

After some googling I found a helpful comment on a NuGet issue (the last comment, by athinton). By adding -Properties OutputPath="$(OutDir) " to the pack statement, I solved the issue and packaging after a TFS build finally worked with the “-IncludeReferencedProjects” option.
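Put together, the pack command that ends up being executed on a TFS build agent looks roughly like this. The project name and build paths are made up for illustration; note the deliberate trailing space inside both quoted values:

```
nuget.exe pack "MyWebApp.csproj" -Properties "Configuration=Release;Platform=AnyCPU;OutputPath=C:\Builds\1\Binaries\ " -NonInteractive -OutputDirectory "C:\Builds\1\Binaries\ " -IncludeReferencedProjects
```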

But … this broke the packaging inside Visual Studio. To make it work in VS, the -Properties OutputPath="$(OutDir) " must be removed, which is why I’m using an MSBuild task to define all these rules.

The NuGetToolsPath must be changed to the path where the nuget.exe file is located (note that it appears two times).
Extra options for packaging the NuGet package can be added to the BuildCommand element.

Last but not least, don’t forget to import this file in your project file.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <SolutionDir Condition="$(SolutionDir) == '' Or $(SolutionDir) == '*Undefined*'">$(MSBuildProjectDirectory)\..\</SolutionDir>

    <!-- Property that enables building a package from a project -->
    <BuildPackage Condition=" '$(BuildPackage)' == '' ">true</BuildPackage>

    <!-- Download NuGet.exe if it does not already exist -->
    <DownloadNuGetExe Condition=" '$(DownloadNuGetExe)' == '' ">false</DownloadNuGetExe>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(OS)' == 'Windows_NT'">
    <!-- Windows specific commands -->
    <NuGetToolsPath>$([System.IO.Path]::Combine($(SolutionDir), "Nuget"))</NuGetToolsPath>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(OS)' != 'Windows_NT'">
    <!-- We need to launch nuget.exe with the mono command if we're not on windows -->
    <NuGetToolsPath>$(SolutionDir)Nuget</NuGetToolsPath>
  </PropertyGroup>

  <PropertyGroup>
    <!-- NuGet command -->
    <NuGetExePath Condition=" '$(NuGetExePath)' == '' ">$(NuGetToolsPath)\NuGet.exe</NuGetExePath>

    <NuGetCommand Condition=" '$(OS)' == 'Windows_NT'">"$(NuGetExePath)"</NuGetCommand>
    <NuGetCommand Condition=" '$(OS)' != 'Windows_NT' ">mono --runtime=v4.0.30319 $(NuGetExePath)</NuGetCommand>

    <PackageOutputDir Condition="$(PackageOutputDir) == ''">$(OutDir) </PackageOutputDir>

    <OutputPath Condition="'$(BuildingInsideVisualStudio)' == 'false' ">OutputPath="$(OutDir) "</OutputPath>
    <OutputPath Condition="'$(BuildingInsideVisualStudio)' == 'true' "></OutputPath>

    <NonInteractiveSwitch Condition=" '$(VisualStudioVersion)' != '' AND '$(OS)' == 'Windows_NT' ">-NonInteractive</NonInteractiveSwitch>

    <!-- Commands -->
    <BuildCommand>$(NuGetCommand) pack "$(ProjectPath)" -Properties "Configuration=$(Configuration);Platform=$(Platform);$(OutputPath)" $(NonInteractiveSwitch) -OutputDirectory "$(PackageOutputDir)" -IncludeReferencedProjects</BuildCommand>

    <!-- Make the build depend on building the package -->
    <BuildDependsOn Condition="$(BuildPackage) == 'true'">
      $(BuildDependsOn);
      BuildPackage;
    </BuildDependsOn>
  </PropertyGroup>

  <Target Name="BuildPackage">
    <Exec Command="$(BuildCommand)"
          Condition=" '$(OS)' != 'Windows_NT' " />

    <Exec Command="$(BuildCommand)"
          Condition=" '$(OS)' == 'Windows_NT' " />
  </Target>
</Project>

Friday, January 25, 2013

JavaScript: OO programming using the Revealing Prototype Pattern

I’m currently playing around with SPAs (Single Page Applications), which means writing a lot of JavaScript. One of the principles in OO programming is don’t repeat yourself, so I was looking for ways to accomplish this. In languages like C#, Java, VB, … we can make use of classes for this, but JavaScript doesn’t have a class keyword to define classes. (Except if you are using TypeScript, a language built on top of JavaScript that allows you to define classes like you do in C# or Java. You should really check it out.)


In JavaScript you only have functions, but all these functions have closures. A closure is a special kind of object that combines two things: a function, and the environment in which that function was created. The environment consists of any local variables that were in scope at the time the closure was created. In the example below, the “obj” variable holds a closure that had the “hello” variable and the innerFunction function in scope when it was created.

function closure(){
    var hello = "Hello";
    function innerFunction(suffix){
        alert(hello + " " + suffix);
    }
    return innerFunction;
}

var obj = closure(); // Creates a new closure
obj("world!"); // This will show a message box with "Hello world!"

The innerFunction also has its own closure, containing the suffix parameter. Because innerFunction is defined inside another scope, it can make use of that parent scope, as long as there are no duplicate names. When the inner and the outer scope both define a variable or function with the same name, the inner definition wins.

function a(){
    var c = "1";
    function b(c){
        alert(c);
    }
    return b;
}

var x = a();
x("5"); // The message box will show 5.


To create a new object in JavaScript, we need to do two things. First, make a class definition using a function. In the example below we define the class in the function “Class”. You can see we can have private fields and private methods inside it. Because neither of these two is returned at the end, they only exist inside the closure and are only accessible there. At the end of the class definition we return an object literal. This object literal contains all the public functions and fields the object has.

function Class(){
    var privateField = "private";
    function privateFunction(){
        return privateField;
    }

    return {
        publicField: "public",
        publicFunction: function(){
            return privateFunction();
        }
    };
}

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

There is also a second way to expose fields and functions publicly: by adding them to the new instance using the this keyword. When the function is called with new, this refers to the newly created object.

function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    this.publicField = "public";
    this.publicFunction = function () {
        return privateFunction();
    };
}

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

Prototype Pattern

In the objects chapter of this post I showed you two ways to define a class. The disadvantage of the above code is that everything inside the class definition gets created again for every new instance of the class. When creating a lot of instances, you can imagine that you will run into memory issues after a while. That is where the prototype pattern comes to the rescue.

The prototype pattern is a creational design pattern used in software development when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects (source: Wikipedia). In JavaScript this means that instead of creating the entire object every time, only the state (which makes each object instance unique) is created per instance, while everything else, like methods, is shared through the prototype. By doing this you can save a lot of memory allocation.

In the example below you can see the prototype pattern in action. As mentioned, only the state of the object is kept on the instance; everything that can be shared lives on the prototype and is reused when a new instance is created. This means the publicField will have the value “public” no matter how many instances you create, and if you change the value on the prototype, the change is visible to all existing and new instances (unless an instance shadows it). The same goes for publicFunction: the function body is written once, but inside it you can access and change the state of each individual instance.

function Class(p) {
    var privateField = p;
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype.publicField = "public";
Class.prototype.publicFunction = function () {
    return this._privateFunction();
};

var instance = new Class("private");
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

With the above example you can also create a second instance with “public” as argument; in that case publicFunction will return “public”.
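To make that concrete, here is a small sketch of the same class (using console.log instead of alert so it also runs outside the browser) with two instances created from different constructor arguments:

```javascript
function Class(p) {
    var privateField = p;
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype.publicField = "public";
Class.prototype.publicFunction = function () {
    return this._privateFunction();
};

var first = new Class("private");
var second = new Class("public");

// Each instance keeps its own state...
console.log(first.publicFunction());  // shows "private"
console.log(second.publicFunction()); // shows "public"

// ...while the prototype members are shared: changing the value on the
// prototype is visible through every instance that does not shadow it.
Class.prototype.publicField = "changed";
console.log(first.publicField);  // shows "changed"
console.log(second.publicField); // shows "changed"
```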

One of the disadvantages of this approach is that all the fields you want to access in the shared functions (defined on the prototype) need to be public on the instance, meaning they need to be returned or added via this. To mitigate this there is a convention that private fields that need to be accessible in the shared parts get a “_” prefix. This won’t make them inaccessible, but IntelliSense will ignore them, making them a bit harder to discover.

Revealing Prototype Pattern

As you can see, with the prototype pattern everything you define as shared is public. This is where the revealing prototype pattern goes further: it allows you to have private functions and variables inside the shared scope.

function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype = function () {
    var privateField = "Hello";
    var publicField = "public";
    function privateFunction() {
        return privateField + " " + this._privateFunction();
    }

    return {
        publicField: publicField,
        publicFunction: privateFunction
    };
}();

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "Hello private"


By using the revealing prototype pattern you have several advantages:

  • Less memory use: everything defined as shared is only created once.
  • You can encapsulate your private code and expose publicly only the functions and fields you want.
  • The classes are open for extension but closed for modification: you can easily add extra functionality without affecting the existing implementation.

Of course there are some downsides too:

  • The definition of the constructor and the class implementation (the prototype parts) are separated.
  • You can’t make your state fully private if you need it in the shared parts.

For me the advantages outweigh the disadvantages, and I’m using this pattern more and more. I’m also busy migrating my library to use it, so other people can easily extend my library and add their own functionality without changing the source file or the existing implementation.
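As a sketch of that open-for-extension point: a consumer can attach extra functions to the prototype from their own file, without touching the original source. The Logger class and its functions here are made up for this example:

```javascript
// library.js – the original class, written with the revealing prototype
// pattern (Logger is a hypothetical class for illustration)
function Logger() {
    var messages = [];
    this._store = function (m) { messages.push(m); };
    this._all = function () { return messages; };
}

Logger.prototype = function () {
    function log(message) {
        this._store(message);
        return message;
    }

    return {
        log: log
    };
}();

// extensions.js – a consumer adds functionality without changing library.js
Logger.prototype.count = function () {
    return this._all().length;
};

var logger = new Logger();
logger.log("first");
logger.log("second");
console.log(logger.count()); // shows 2
```

Because the shared scope only reveals what it returns, the extension can build on the public surface (and the “_”-prefixed instance members) while the original implementation stays untouched.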

Wednesday, December 12, 2012

Build automation: Merging and minifying JavaScript files with Ajax Minifier

My library has reached the point where it gets harder and harder to maintain. I’m currently over 3000 lines, so it was time to split the library into separate files and divide it into logical groups. After splitting everything up, I was looking for a way to merge all these files back into one file, with the option to minify it. This is also a step I wanted to take so my lib could become more mature. The last thing I wanted was for the tool to integrate with Visual Studio – the IDE I’m using to develop the library – and be portable. By that I mean I didn’t want to have to install something on my machine to do the magic.

After some searching I chose the Microsoft Ajax Minifier. It comes in a NuGet package, which makes it easy to install. Next to that, it integrates smoothly with Visual Studio using MSBuild tasks, meaning it will merge and minify the files every time I build the project. There is only one small disadvantage, which I will start with.

Set-up Ajax minifier

To start, the first thing we need to do after installing the NuGet package is add the following files to the project (I added a new Build folder to put them in):

  • AjaxMin.dll
  • AjaxMinTask.dll
  • AjaxMinTask.targets

All these files can be found in the tools/net40 folder of the Ajax Minifier’s NuGet package folder:

<folder where the solution is located>\packages\AjaxMin.<version>\tools\net40

The disadvantage of this way of working is that you need to copy the above files again when getting a newer version of the Ajax Minifier from NuGet.

The last thing you need to do is edit the .csproj file so it uses the added build task. You can do this by unloading the project (right click on the project –> Unload Project).


Once you have done this, you can right click on your project and choose Edit <project name>.csproj.


This will open an XML file containing information about the project. At the end of the file, just before the closing Project tag, add the following line:

<Import Project="$(MSBuildProjectDirectory)\Build\AjaxMinTask.targets" />

Save the file, right click the project again and choose Reload Project. This will reload the project with the modifications.

Ajax Minifier manifest file

Once all this is done, we can add an Ajax Minifier manifest file. In this file we declare which files must be merged and minified, and the location where the result must be saved. Optionally we can provide some arguments to override the default behavior. More information about the arguments can be found here; note that they are the same arguments you can provide when working with the command line tool.

The location of this manifest file doesn’t matter, but the extension needs to be “.ajaxmin”. For my library I put this file in the Build folder where all the Ajax Minifier files are located. You can also provide multiple output tags, so you can create for example a minified and a non-minified file.

Below you can find an example of an Ajax Minifier manifest file.

<?xml version="1.0" encoding="utf-8" ?>
<root>
  <output path="Scripts\output.js">
    <arguments>-clobber -pretty</arguments>
    <input path="file1.js"/>
    <input path="file2.js"/>
  </output>
</root>

The arguments I provide are “-clobber”, to overwrite files even when they are flagged read-only, and “-pretty”, which produces a nicely formatted, readable (non-minified) JavaScript file.
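Since the manifest can contain multiple output tags, you can produce a readable and a minified build from the same inputs in one go. A sketch (the output file names are made up for illustration):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<root>
  <!-- readable, non-minified build -->
  <output path="Scripts\mylib.debug.js">
    <arguments>-clobber -pretty</arguments>
    <input path="file1.js"/>
    <input path="file2.js"/>
  </output>
  <!-- minified build from the same inputs -->
  <output path="Scripts\mylib.min.js">
    <arguments>-clobber</arguments>
    <input path="file1.js"/>
    <input path="file2.js"/>
  </output>
</root>
```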

By default the output files are written to a Content folder in your project (which must exist, or you will get an error). If you want to determine the path in your manifest file, you need to change the AjaxMinTask.targets file we added in the first part. This is also an XML file, containing the build task definition.

In this file you will find a tag called “AjaxMinManifestTask”. That element has several attributes, and the one we need is OutputFolder. In my case its value is “$(ProjectDir)”. With this I can address the location in my manifest file starting from the root of my project. As said earlier, the only thing you need to keep in mind is that the folder structure must already exist.

When all this is done, the only thing left is to build the project, and the files will be created.


Working with the Ajax Minifier gives you a lot of options for minifying and merging your JavaScript files. Next to that, it can also minify CSS files. And as seen above, with a little work you can easily integrate it into the build of your project.