Wednesday, March 19, 2014

Pack NuGet packages locally and with TFS Build

Currently I’m working on a project at a customer where we are integrating a new ASP.NET MVC application into an existing ASP.NET WebForms application. Due to the complexity of the legacy system (70+ projects) I decided to start a brand new solution for the new development and package it as a NuGet package. Every time a new version of the ASP.NET MVC application needs to be published, a new version of the NuGet package gets installed on the existing legacy system.

To automate this, I started by adding a post-build event to the ASP.NET MVC Project that got triggered every time I was building a release build. So far, so good.

The next step was configuring the TFS Build so I could push the package to a share; this way anyone on the team could install the package and I could use package restore. This way I no longer needed to check in the packages folder.

Note: Don’t use the ‘Enable NuGet Package Restore’ function on the solution, but enable it under ‘Tools > Options > Package Manager > General’. On this tab page you have a group box called Package Restore; check both checkboxes. More information about this can be found in this blog by Xavier Decoster.


This is where the hell began. Building the project succeeded, but the build failed on packaging the NuGet package. One of the issues I had was that NuGet was looking for the bin folder to copy the dll’s from, but a TFS build doesn’t place the dll files in the bin folder after build; it places them in a Binaries folder at solution level. A possible fix was changing the output path for a release build to ‘..\..\Binaries’ in the properties of the project (Build tab). But this is more a workaround than a good solution, so I looked further for a better one.

Next I took a look at the NuGet.targets file that gets added when you use ‘Enable NuGet Package Restore’ on the solution. I know I just told you not to use this, and you shouldn’t, but this targets file also contains a task for packaging NuGet packages. So the next thing I did was copy the content of the .nuget folder to my own folder and modify the NuGet.targets file.

If you thought this would solve all my problems, think again. The problem of the packaging looking for the bin folder of the project was solved by adding -OutputDirectory "$(OutDir) " to the pack command. Note the space after $(OutDir): it must be present and is necessary to handle the double slashes that occur when packaging on TFS. This results in something like 'bin\ \package.nupkg'. Not nice, but it seems to work. The next problem was that I was using the -IncludeReferencedProjects option, which was still using the output path configured for the referenced projects instead of the Binaries folder.

After some googling I found a helpful comment on a NuGet issue (the last comment, by athinton). By adding -Properties OutputPath="$(OutDir) " to the pack statement, I solved the issue and packaging after a TFS build finally worked with the -IncludeReferencedProjects option.

But … this broke the packaging inside Visual Studio. To make it work in VS, the -Properties OutputPath="$(OutDir) " must be removed, and that is why I’m using an MSBuild targets file with conditions to define all these rules.

The NuGetToolsPath must be changed to the path where the nuget.exe file is located. (Note that it appears twice.)
Extra options for packaging the NuGet package can be added to the BuildCommand element.

Last but not least, don’t forget to import this file in your project file.
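For example, assuming the modified targets file is saved as NuGet.targets in a Nuget folder at solution level (the folder name and relative path are from my own set-up, so adjust them to your own location), the import near the end of the .csproj could look like this:

```xml
<Import Project="..\Nuget\NuGet.targets" />
```

Place it just before the closing Project tag.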

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <SolutionDir Condition="$(SolutionDir) == '' Or $(SolutionDir) == '*Undefined*'">$(MSBuildProjectDirectory)\..\</SolutionDir>

    <!-- Property that enables building a package from a project -->
    <BuildPackage Condition=" '$(BuildPackage)' == '' ">true</BuildPackage>

    <!-- Download NuGet.exe if it does not already exist -->
    <DownloadNuGetExe Condition=" '$(DownloadNuGetExe)' == '' ">false</DownloadNuGetExe>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(OS)' == 'Windows_NT'">
    <!-- Windows specific commands -->
    <NuGetToolsPath>$([System.IO.Path]::Combine($(SolutionDir), "Nuget"))</NuGetToolsPath>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(OS)' != 'Windows_NT'">
    <!-- We need to launch nuget.exe with the mono command if we're not on windows -->
    <NuGetToolsPath>$(SolutionDir)Nuget</NuGetToolsPath>
  </PropertyGroup>

  <PropertyGroup>
    <!-- NuGet command -->
    <NuGetExePath Condition=" '$(NuGetExePath)' == '' ">$(NuGetToolsPath)\NuGet.exe</NuGetExePath>

    <NuGetCommand Condition=" '$(OS)' == 'Windows_NT'">"$(NuGetExePath)"</NuGetCommand>
    <NuGetCommand Condition=" '$(OS)' != 'Windows_NT' ">mono --runtime=v4.0.30319 $(NuGetExePath)</NuGetCommand>

    <PackageOutputDir Condition="$(PackageOutputDir) == ''">$(OutDir) </PackageOutputDir>

    <!-- Only pass an explicit OutputPath when building outside Visual Studio (e.g. on TFS);
         note that TFS builds leave BuildingInsideVisualStudio undefined, so test against 'true' -->
    <OutputPath Condition="'$(BuildingInsideVisualStudio)' != 'true' ">OutputPath="$(OutDir) "</OutputPath>
    <OutputPath Condition="'$(BuildingInsideVisualStudio)' == 'true' "></OutputPath>

    <NonInteractiveSwitch Condition=" '$(VisualStudioVersion)' != '' AND '$(OS)' == 'Windows_NT' ">-NonInteractive</NonInteractiveSwitch>

    <!-- Commands -->
    <BuildCommand>$(NuGetCommand) pack "$(ProjectPath)" -Properties "Configuration=$(Configuration);Platform=$(Platform);$(OutputPath)" $(NonInteractiveSwitch) -OutputDirectory "$(PackageOutputDir)" -IncludeReferencedProjects</BuildCommand>

    <!-- Make the build depend on building the package -->
    <BuildDependsOn Condition="$(BuildPackage) == 'true'">
      $(BuildDependsOn);
      BuildPackage
    </BuildDependsOn>
  </PropertyGroup>

  <Target Name="BuildPackage">
    <Exec Command="$(BuildCommand)"
          Condition=" '$(OS)' != 'Windows_NT' " />

    <Exec Command="$(BuildCommand)"
          Condition=" '$(OS)' == 'Windows_NT' " />
  </Target>
</Project>

Friday, January 25, 2013

JavaScript: OO programming using the Revealing Prototype Pattern

I’m currently playing around with SPAs (Single Page Applications), which means a lot of JavaScript to write. One of the principles in OO programming is don’t repeat yourself. That is why I was looking for ways to accomplish this. In languages such as C#, Java, VB, … we can make use of classes for this, but in JavaScript we don’t have a class keyword to define classes. (Except if you are using TypeScript, a language built on top of JavaScript that allows you to define classes like you do in C# or Java. You should really check it out.)


In JavaScript you only have functions, but all these functions have closures. A closure is a special kind of object that combines two things: a function, and the environment in which that function was created. The environment consists of any local variables that were in scope at the time the closure was created. In the example below, the function stored in the “obj” variable is a closure which had the hello variable ("Hello") and the function innerFunction in scope when it was created.

function closure(){
    var hello = "Hello";
    function innerFunction(suffix){
        alert(hello + " " + suffix);
    }
    return innerFunction;
}

var obj = closure(); // Creates a new closure
obj("world!"); // This will show the message box with "Hello world!"

The innerFunction also has its own closure containing the suffix variable. Because innerFunction is defined inside another scope, it can make use of the parent scope, as long as it is created in that scope and there are no duplicate names. In case the inner and outer scope both define a variable or function with the same name, the inner definition wins: it shadows the one from the outer scope.

function a(){
    var c = "1";
    function b(c){
        alert(c);
    }
    return b;
}

var x = a();
x("5"); // The message box will show 5.


To create a new object in JavaScript, we need to do two things. First make a class definition using a function. In the example below we define the class in the function “Class”. You see we can have private fields and private methods inside. Because neither of these two is returned at the end, they will only exist inside the closure and only be accessible there. At the end of the class definition we return an object literal. This object literal contains all the public functions and fields the object has.

function Class(){
    var privateField = "private";
    function privateFunction(){
        return privateField;
    }

    return {
        publicField: "public",
        publicFunction: function(){
            return privateFunction();
        }
    };
}

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

There is also a second way to expose fields and functions publicly: you can add them to the object that is being constructed, by using the this keyword.

function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    this.publicField = "public";
    this.publicFunction = function () {
        return privateFunction();
    };
}

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

Prototype Pattern

In the objects chapter of this post I showed you two ways to define a class. The disadvantage of the above code is that everything inside the class definition gets created again for every new instance of the class. When creating a lot of instances, you can imagine that you will get memory issues after a while. That is where the prototype pattern comes to the rescue.

The prototype pattern is a creational design pattern used in software development when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects (source: Wikipedia). In JavaScript this means that instead of creating the entire object every time, only the state (which makes the object instances unique) is created per instance, while all the other parts, like methods, are shared through the prototype. By doing this you can save a lot of memory allocation.

In the example below you can see the prototype pattern in action. As mentioned, only the state of the object is kept in the instance; everything that can be shared lives on the prototype. This means the publicField will always have the value “public” no matter how many instances you create, and if you change its value on the prototype, it changes for all existing and new instances. The same goes for publicFunction and its body, but inside the function you are able to access the state of the individual instance. This way you write the function once, but access and change the state of each instance individually.

function Class(p) {
    var privateField = p;
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype.publicField = "public";
Class.prototype.publicFunction = function () {
    return this._privateFunction();
};

var instance = new Class("private");
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

With the above example you can also create a second instance with “public” as argument, and in that case the publicFunction will return “public”.
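A quick sketch (repeating the Class definition from above so it stands on its own) showing both the per-instance state and the sharing through the prototype:

```javascript
function Class(p) {
    var privateField = p;
    this._privateFunction = function () {
        return privateField;
    };
}

// Defined once, shared by every instance
Class.prototype.publicField = "public";
Class.prototype.publicFunction = function () {
    return this._privateFunction();
};

var first = new Class("private");
var second = new Class("public");

console.log(first.publicFunction());  // "private"
console.log(second.publicFunction()); // "public"

// The function object itself is shared through the prototype, not cloned per instance
console.log(first.publicFunction === second.publicFunction); // true
```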

One of the disadvantages of this approach is that all the fields you want to access in the shared functions (defined on the prototype) need to be public on the instance, meaning they need to be returned or added via this. To mitigate this there is a convention: private fields that need to be accessible in the shared parts get a “_”-prefix. This won’t make them inaccessible, but IntelliSense will ignore them, making them a bit harder to discover.
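The underscore is purely a convention; nothing is enforced. A small sketch:

```javascript
function Class(p) {
    var privateField = p;
    // "_" prefix: meant for internal use by the shared prototype functions
    this._privateFunction = function () {
        return privateField;
    };
}

var instance = new Class("secret");

// Nothing stops a caller from reaching the "private" member directly
console.log(instance._privateFunction()); // "secret"
```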

Revealing Prototype Pattern

As you see, with the prototype pattern everything you define as shared is public. This is where the revealing prototype pattern goes further: it allows you to have private functions and variables inside the shared scope.

function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype = function () {
    var privateField = "Hello";
    var publicField = "public";
    function privateFunction() {
        return privateField + " " + this._privateFunction();
    }

    return {
        publicField: publicField,
        publicFunction: privateFunction
    };
}();

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "Hello private"


By using the revealing prototype pattern you have several advantages:

  • Less memory use, everything defined as shared is only created once.
  • You have the advantage of encapsulating your private code and exposing publicly only the functions and fields you want.
  • The classes are open for extension but closed for modification: you can easily add extra functionality without affecting the existing implementation.

Of course there are some downsides too:

  • The definition of the constructor and the class implementation (for the prototype parts) are separate
  • You can’t make your state fully private if you need it in the shared parts.

For me the advantages outweigh the disadvantages, and I’m using this pattern more and more. I’m also busy migrating my library to this pattern, so other people can easily extend it and add their own functionality without changing the source file and the existing implementation.

Wednesday, December 12, 2012

Build automation: Merging and minifying JavaScript files with Ajax Minifier

My library has come to the point where it gets harder and harder to maintain. I’m currently over 3000 lines, so it was time to split the library into separate files and divide it into logical groups. After splitting everything up, I was looking for a way to merge all these files back into one file, with the option to minify it. This is also a step I wanted to make so my lib could become more and more mature. The last things I wanted were integration with Visual Studio – the IDE I’m using to develop the library – and portability; by that last one I mean I didn’t want to install anything on my machine to do the magic.

After some searching I chose the Microsoft Ajax Minifier. It comes in a NuGet package, which makes it easy to install. Next to that, it integrates smoothly with Visual Studio using MSBuild tasks. This means it will merge and minify the files every time I build the project. There is only one little disadvantage, which I will start with.

Set-up Ajax minifier

To start, the first thing we need to do after installing the NuGet package is add the following files to the project (I added a new folder called Build to put them in):

  • AjaxMin.dll
  • AjaxMinTask.dll
  • AjaxMinTask.targets

All these files can be found in the tools/net40 folder inside the NuGet package folder of the Ajax Minifier:

<folder where the solution is located>\packages\AjaxMin.<version>\tools\net40

The disadvantage of this way of working is that you need to copy the above files again every time you get a newer version of the Ajax Minifier from NuGet.

The last thing you need to do is edit the .csproj file so it will use the added build task. Start by unloading the project (right click on the project –> Unload Project).


Once you have done this, you can right click on the project again and choose ‘Edit <project name>.csproj’.


This will open an XML file containing information about the project. At the end of the file, just before the closing Project tag, you must add the following line.

<Import Project="$(MSBuildProjectDirectory)\Build\AjaxMinTask.targets" />

Save this file, right click on the project again and choose Reload Project. This loads the project again with the modifications.

Ajax minifier manifest File

Once all this is done, we can add an Ajax manifest file. In this file we declare which files must be merged and minified, and the location where the result must be saved. Optionally we can provide some arguments to override the default behavior. More information about the arguments can be found here. Note that these arguments are the same as the ones you can provide when working with the command-line tool.

The location of this manifest file doesn’t matter, but the extension needs to be “.ajaxmin”. For my library I have put this file in the Build folder where all the Ajax Minifier files are located. You can also provide multiple output tags, so you can create, for example, a minified and a non-minified file.

Below you can find an example of an Ajax manifest file.

<?xml version="1.0" encoding="utf-8" ?>
<root>
  <output path="Scripts\output.js">
    <arguments>-clobber -pretty</arguments>
    <input path="file1.js"/>
    <input path="file2.js"/>
  </output>
</root>

The arguments I am providing are “-clobber”, to overwrite files even when they are flagged read-only, and “-pretty”, which produces a nicely formatted, readable (non-minified) JavaScript file.
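As a sketch of the multiple-output option mentioned earlier, one manifest could produce both a readable and a minified build from the same inputs (the output file names here are just placeholders):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<root>
  <!-- readable build for debugging -->
  <output path="Scripts\mylib.debug.js">
    <arguments>-clobber -pretty</arguments>
    <input path="file1.js"/>
    <input path="file2.js"/>
  </output>
  <!-- minified build for release -->
  <output path="Scripts\mylib.min.js">
    <arguments>-clobber</arguments>
    <input path="file1.js"/>
    <input path="file2.js"/>
  </output>
</root>
```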

By default the output files will be added to a Content folder in your project (which must exist or you will get an error). If you want to determine the path in your manifest file, you need to change the AjaxMinTask.targets file we added in the first part. This is also an XML file, containing the build task definition.

In this file you will find a tag called “AjaxMinManifestTask”. On that element you have several attributes, and the one we need is OutputFolder. In my case its value is “$(ProjectDir)”. With this I can address the location in my manifest file starting from the root of my project. As said earlier, the only thing you need to keep in mind is that the folder structure must already be present.
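In my copy of the targets file the relevant part looks roughly like this (other attributes of the task are left out here; check your own copy of AjaxMinTask.targets for the full element):

```xml
<!-- inside AjaxMinTask.targets; other attributes omitted for brevity -->
<AjaxMinManifestTask OutputFolder="$(ProjectDir)" />
```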

When all this is done, the only thing you need to do is build the project, and the files will get created.


Working with the Ajax Minifier gives you a lot of options for minifying and merging your JavaScript files. Next to that, it can also minify CSS files. And as seen above, with some work you can easily integrate it into the build of your project.

Thursday, November 22, 2012

Programming in HTML5 with JavaScript and CSS3 Specialist

As of today, I am a Programming in HTML5 with JavaScript and CSS3 Specialist. I took the 70-480 exam and passed with a score of 760. Though I already have quite some experience with HTML5 and JavaScript, the exam was tougher than I expected. That is why I wanted to write this post, so everyone who is thinking about doing this exam gets a head start. Also, everyone who wants to take the exam can do so for free until 31/03/2013. More information about this can be found here.

To start, Microsoft provides a one-day online course to prepare for the 70-480. This course can be found on the Microsoft Virtual Academy. On that site you can find courses for several technologies. These courses are divided into several modules, and every module has a survey you can take when you have completed the provided resources. This is a good starting point if you want to study for the 70-480, but it is certainly not enough; the course is too brief.

If I look back at the exam, jQuery is very important. In a lot of questions jQuery was used in the code examples and solutions, so make sure you know how jQuery works and how to use it. A large part of the questions was also about AJAX requests. It is definitely worth inspecting the AJAX implementation of jQuery with all its options and events; I got several questions about it.

A second large part of the questions dealt with forms and validation. Make sure you know how regular expressions work; I had at least two questions about choosing the right regex. And to complete the HTML5 element part, I got some cases in which I had to indicate which HTML5 structural element to use.

The questions about CSS were mostly about positioning, the box model, display modes, media queries and selectors.

The last part was about JavaScript, with the focus on the object-oriented aspects and closures. It is also important to know how event handling works in JavaScript. Of the APIs, the most important ones are web workers and local/session storage. It is good to know how they work and how you can use them.

For everyone who takes the challenge: good luck, and I hope I provided some useful hints.

Wednesday, November 14, 2012

IndexedDBViewer: Take a look inside your indexedDB

Some days ago I released a new version of the IndexedDBViewer: 1.1.0. The IndexedDBViewer is intended for web developers who want to sneak into their indexedDB database. It allows you to inspect the structure of your database as well as the data stored inside that structure. The difference with the previous version is that it no longer needs the jQuery UI library; this way I eliminated at least one reference. The following references are still necessary:

If you are using NuGet, you can get all the resources by searching for IndexedDBViewer.

The second major change is that the viewer can easily be added to an existing page. The only thing you need to do is add a div with “indexedDBViewer” as id and a data-dbName attribute passing the name of the database you want to inspect. The rest is handled by the viewer’s script.

<div id="indexedDBViewer" data-dbName="database name"></div>

Once this is done and you navigate to the page containing the viewer, you will get the following result:


At the bottom you will see the viewer appear. In the left pane you get an overview of the database structure: a list with the name of the database on top. Under that you will find child nodes that represent the object stores present in the database. If we descend another level, we see the indexes present on each object store. By clicking the “+” or “-” next to the names, you can expand or hide the structure beneath.

If you click on the database name in the navigation pane, you will get information about the database and its structure.

  • In the general block you will see the name of the database and the version it is currently in.
  • The object stores block gives you an overview of all the object stores present and how they are configured.
  • The indexes block shows all the present indexes and how they are configured.


When you click on one of the object store names in the navigation pane, you will get all the data present in that object store. Because the data is saved as key/value pairs, you will see each key with its corresponding value. If the value is an object or contains objects, you can inspect them by clicking on the “+” to expand and “-” to hide the details.


If you click on one of the index names in the navigation pane, you will get – similar to the object stores – all the data present in the index. But in this case you will see a little more: besides the key of the index and the value, you will see the key the value has in the object store. This can be found in the “primary key” column.


Finally, there are some little extra features:

  • If you click on the top border of the viewer and drag it up or down, you can change the height of the viewer.
  • If you click on the “-” in the top right of the viewer, you can hide the viewer. If you want it to appear again, click on the “+” in the bottom right of the page.



With this Chrome-like IndexedDBViewer you can inspect the structure of your database, including all the data stored within it, with the advantage that it runs inside the browser, so you can use it cross-browser.

Friday, September 21, 2012

IndexedDB: MultiEntry explained

For a long time I was not sure what the purpose of the multiEntry attribute was, since none of the browsers supported it yet. But now that Firefox and even the latest builds of Chrome support it, it has all become clear to me. The multiEntry attribute enables you to filter on the individual values of an array. For this reason, the multiEntry attribute is only useful when the index is put on a property that contains an array as value.

When the multiEntry attribute is set to true, a record is added to the index for every value in the array. The key of such a record is the value from the array, and the value is the object containing the array. Because the values in the array are used as keys, they need to be valid keys themselves. This means they can only be of the following types:

  • Array
  • DOMString
  • float
  • Date

So much for the theory; an example will make everything clear.

In the example below I will use a Blog object. A blog consists of the following properties:

var blog = { Id: 1
           , Title: "Blog post"
           , content: "content"
           , tags: ["html5", "indexeddb", "linq2indexeddb"]};

In the indexedDB we have an object store called blog, which has an index on the tags property with the multiEntry attribute turned on. If we insert the object above, we see the following records in the index:

“"html5”{ Id:1, Title: “Blogpost”, content:”content”, tags: [“html5”, “indexeddb”, “linq2indexeddb”]}
“indexeddb”{ Id:1, Title: “Blogpost”, content:”content”, tags: [“html5”, “indexeddb”, “linq2indexeddb”]}
“linq2indexeddb”{ Id:1, Title: “Blogpost”, content:”content”, tags: [“html5”, “indexeddb”, “linq2indexeddb”]}

So for every value in the array of the tags property, a record is added to the index. This means that when you start filtering, the same object can end up in the result multiple times. For example, if you filter on all tags greater than "i", the result would contain the blog object from this example twice: once for "indexeddb" and once for "linq2indexeddb".
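To mirror what the index does internally, here is a plain-JavaScript sketch (my own illustration, not the actual IndexedDB implementation) that expands an object into the records a multiEntry index would hold, and then filters on a key range similar to IDBKeyRange.lowerBound("i", true):

```javascript
// Build the records a multiEntry index would create for one stored object;
// indexProp is the name of the indexed array property.
function multiEntryRecords(obj, indexProp) {
    return obj[indexProp].map(function (arrayValue) {
        return { key: arrayValue, value: obj };
    });
}

var blog = { Id: 1
           , Title: "Blog post"
           , content: "content"
           , tags: ["html5", "indexeddb", "linq2indexeddb"]};

var records = multiEntryRecords(blog, "tags");
// One record per tag, keyed "html5", "indexeddb" and "linq2indexeddb"

// Filter on keys greater than "i" and collect the stored values
var result = records.filter(function (record) {
    return record.key > "i";
}).map(function (record) {
    return record.value;
});
// result holds the same blog object twice:
// once for "indexeddb" and once for "linq2indexeddb"
```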