Sunday, May 1, 2016

Components in angular 1.5.x

In my previous article I described my view on a component-based architecture. In this post I want to focus on how I applied this in a real-life application with Angular. Since version 1.5 was released, components have become first-class citizens, just like in Angular 2, Ember 2, React, … On the other hand, components aren't that revolutionary: we have known them for a long time as directives, but we never used them like this. Generally, we would only use directives to manipulate the DOM directly or, in some cases, to build reusable parts.
The components in Angular 1.5 are a special kind of directive with a few more restrictions and a simpler configuration. The main differences between directives and components are:
  • Components are restricted to elements. You can’t write a component as an attribute.
  • Components always have an isolated scope. This enforces a clearer data flow: no more code that changes data on shared scopes.
  • No more link functions, but only a well-defined API on the controller.
  • A component is defined by an object and no longer by a function call.
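
To make the last point concrete, here is a minimal sketch of the same widget written as a directive and as a component (the module variable and the widget itself are illustrative):

// As a directive: defined by a factory function returning a definition object.
module.directive("myWidget", function () {
    return {
        restrict: "E",
        scope: {},
        template: "<div>widget</div>"
    };
});

// As a component: defined by a plain configuration object.
module.component("myWidget", {
    template: "<div>widget</div>"
});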

Well-defined Lifecycle

Components have a well-defined lifecycle:
  • $onInit: This callback is called on each controller after all controllers for the element have been constructed and their bindings initialized. In this callback you can also be sure that all the controllers you require are initialized and can be used. (since Angular 1.5.0)
  • $onChanges: This callback is called each time the one-way bindings are updated. The callback receives an object containing the changed bindings with their current and previous values. Initially this callback is called before $onInit with the original values of the bindings at initialization time. This makes it the ideal place to clone the objects passed through the bindings, ensuring modifications only affect the component's inner state.
    Please be aware that the changes callback for one-way bound objects is only triggered when the object reference changes. If a property inside the object changes, the changes callback won't be called. This way no watch is needed to monitor the changes made on the parent scope. (works correctly since Angular 1.5.5)
  • $postLink: This is called once all child elements are linked. It is similar to the (post) link function inside directives. Setting up DOM handlers or direct DOM manipulation can be done here. (since Angular 1.5.3)
  • $onDestroy: This is the equivalent of the $destroy event emitted by the scope of a directive/controller. The ideal place to clean up external references and avoid memory leaks. (since Angular 1.5.3)
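
A minimal sketch of a controller wiring up these hooks (the component name is illustrative):

module.component("lifecycleDemo", {
    bindings: { item: "<" },
    template: "<div>{{$ctrl.item}}</div>",
    controller: function () {
        var $ctrl = this;
        $ctrl.$onInit = function () {
            // All bindings and required controllers are initialized here.
        };
        $ctrl.$onChanges = function (changesObj) {
            // changesObj.item.currentValue / previousValue describe the update.
        };
        $ctrl.$postLink = function () {
            // Child elements are linked; DOM handlers can be set up.
        };
        $ctrl.$onDestroy = function () {
            // Release external references to avoid memory leaks.
        };
    }
});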

Well-defined structure

Components also have a clean structure. They consist of three parts:
  • A view: this can be an inline template or an external template file.
  • A controller that describes the behaviour of the component.
  • Bindings: the in- and outputs of the component. The inputs receive data from a parent component; through callbacks, the component informs the parent of any changes made.

Bindings

Components should only modify their internal state and should never directly modify data outside their own scope. We should therefore no longer use the commonly used two-way binding (bindings: { twoWay: "=" }). Instead, since Angular 1.5, we have a one-way binding for expressions (bindings: { oneWay: "<" }).
The main difference with the two-way binding is that the bound properties aren't watched inside the component. If you assign a new value to the property, it won't affect the object on the parent. But be careful: this doesn't apply to the fields of the property, which is why you should always clone the objects passed through the bindings if you want to change them. A good way to do this is to work with named bindings; this way you can reuse the name of the bound property inside the component without affecting the object in the parent scope.
module.component("component", {
    template: "<div>{{$ctrl.item}}</div>",
    bindings: {
        inputItem: "<item"
    },
    controller: function () {
        var $ctrl = this;
        $ctrl.$onChanges = function (changeObj) {
            if (changeObj.inputItem) {
                // Clone the bound object so local changes don't leak to the parent.
                $ctrl.item = angular.copy(changeObj.inputItem.currentValue);
            }
        };
    }
});
Another way to pass data is the "@" binding, which can be used when the value you are passing is a string. The last option is the "&" binding: a callback to retrieve data through a function call on the parent. This can be useful, for example, to provide an autocomplete component with its search results.
For exposing changes made inside the component we can only use the "&" binding. Calling a callback on the parent scope is the only way we can, and should, pass data to the parent scope/component.
Let’s rephrase a little:
  • “=” binding (two-way) should no longer be used to avoid unwanted changes on the parent’s scope.
  • “<” binding (one-way) should be used to retrieve data from the parent scope passed as an expression.
  • “@” binding (string) should be used to retrieve string values from the parent scope.
  • “&” binding (callback) can be used to either retrieve data from the parent scope or be used as the only way to pass data to the parent scope.
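
For example, a component combining the "@", "<" and "&" bindings could look like this (a sketch; the component and its fields are illustrative):

module.component("itemEditor", {
    template: '<button ng-click="$ctrl.save()">Save {{$ctrl.title}}</button>',
    bindings: {
        title: "@",   // string value from the parent
        item: "<",    // one-way input from the parent
        onSave: "&"   // callback to pass data back to the parent
    },
    controller: function () {
        var $ctrl = this;
        $ctrl.save = function () {
            // The parent receives the data through the named parameter "item".
            $ctrl.onSave({ item: $ctrl.item });
        };
    }
});

A parent would then use it as <item-editor title="Editor" item="$ctrl.selected" on-save="$ctrl.update(item)"></item-editor>.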

Summary

The component way of working in Angular 1.5 closes the gap for migrating your code to an Angular 2 application a bit more. By building your apps in the Angular 2 style you are already thinking on the same level, which eases the migration path by shifting away from the controller way of working.
Directives will still have a place beside components. There will still be cases where you want to use attributes instead of elements, or need a shared scope for UI state. But in most cases components will do the trick.

Saturday, April 23, 2016

Component based architecture

Once Angular 1.5 was released and components seemed to be the future of front-end development, I started looking into component-based architecture. A good, well-defined component is easy to reuse, and the flow of the data is very clear. That is why it is also easily testable. Next to these dumb/presentational components we also have smart components, which are aware of services/routing. These smart components control the dumb/presentational components: they are responsible for acting on the changes made in the dumb/presentational components and for delegating these changes back.
Further on in this post I will only speak of dumb components, but the same applies to presentational components.
The schema gives you an overview of how everything works together. The red lines show the flow of the data; the blue lines are the change events emitted by the dumb components. The smart component is responsible for retrieving and storing the data and keeping everything consistent. I found this schema in a very good blog post about refactoring Angular apps to components.
[Figure: data down, actions up]

Smart Components

We call smart components smart because they know where to retrieve and store the data. They are able to communicate with services and provide the dumb components with their data. They also handle the actions/events raised by the dumb components and act upon them. This can be storing data to a service or updating a local data context, but also providing data to dumb components, such as the items for an autocomplete field.
In most cases smart components won't have a lot of markup. Most of the markup will live in the dumb components, but it's not wrong to have some in the smart component. (In most cases this will be markup that structures the page, or parts that aren't or won't be reusable.)
By making your smart component the single source of truth you can manage the behavior of your dumb components more easily. I'll explain this with a small example:
We have a list of translations, and every translation has a language that has to be unique inside the list. So when adding a new translation, we want all the languages that are already present filtered out of the language choices. If we retrieved the list of languages inside the adding component, the adding component would need to know about either the list of translations or the component managing the translations list. We don't want either situation, because both would decrease the reusability of the adding component. By retrieving this data from the smart component, we can work around this: because the smart component knows the translations list, it can easily filter out the existing languages before passing them to the adding component.
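
A sketch of such a smart component (all names, including the translationService and the add-translation dumb component shown in the next section, are illustrative):

module.component("translationsManager", {
    template:
        '<add-translation languages="$ctrl.availableLanguages()" ' +
        'on-add="$ctrl.add(translation)"></add-translation>',
    controller: function (translationService) { // hypothetical service
        var $ctrl = this;
        $ctrl.translations = [];
        $ctrl.languages = ["en", "fr", "nl", "de"];

        // Filter out languages already used before passing them down.
        $ctrl.availableLanguages = function () {
            return $ctrl.languages.filter(function (language) {
                return !$ctrl.translations.some(function (translation) {
                    return translation.language === language;
                });
            });
        };

        // Act on the change event raised by the dumb component.
        $ctrl.add = function (translation) {
            $ctrl.translations.push(translation);
            translationService.save(translation);
        };
    }
});
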
The very fact that smart components are your single source of truth makes them hard, even impossible, to reuse. It is more likely that you will reuse your dumb components with other services than vice versa. This is also why you will have few smart components and many dumb components inside your application.

Dumb or Presentational Components

As you may have guessed, dumb components are the opposite of smart components. Instead of retrieving data, the data is passed to them, and if they make changes to the data, these changes are exposed through actions/events. These components have a well-defined interface with all their in- and outputs. This makes the purpose of each component very clear and helps you and your team understand your application better.
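
A sketch of the matching dumb component from the translations example (names are illustrative): data comes in through bindings, and changes go out through a callback.

module.component("addTranslation", {
    template:
        '<select ng-model="$ctrl.language" ng-options="l for l in $ctrl.languages"></select>' +
        '<input ng-model="$ctrl.text" />' +
        '<button ng-click="$ctrl.add()">Add</button>',
    bindings: {
        languages: "<", // data is passed in by the smart component
        onAdd: "&"      // changes are exposed as an action
    },
    controller: function () {
        var $ctrl = this;
        $ctrl.add = function () {
            $ctrl.onAdd({ translation: { language: $ctrl.language, text: $ctrl.text } });
        };
    }
});
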
The goal of a dumb component is that you can easily reuse it inside your application. That is why it's good to keep your dumb components small: it increases the chance that you will be able to reuse them. This should also be a trigger to help you decide whether you should split your code up into a component or not.
Another way to decide how to split code up into dumb components is by keeping the design principle "separation of concerns" in mind: every concern can be a potential component. My advice is to be pragmatic about this. If you're not planning on reusing some parts, don't split them up just because they would make great components. Keep them in a single component, and if the day comes that you can extract some reusable components, you can still refactor them out.
Because dumb components are small and well defined, they are also very easy to test. You don't need to mock anything, because you can just pass the data directly to your component.

Conclusion

By applying this way of working to my current project, I was able to extract a lot of reusable components that otherwise would have been copied with minor adjustments. Now they live as separate components that can be easily maintained.
It also made my code more readable. Instead of big chunks of code in a single controller, I now have well-defined components implementing their own concerns. By hiding the implementation details inside the components, my smart component stays very lean, yet clear and readable.
In my next post I will show how I implemented this approach inside an Angular 1.5 application.

Wednesday, April 13, 2016

Project setup in Angular 1.x

For six months now I have been working on an Angular 1.x project. At the start I struggled with the setup of the project, but I found a good starting point in the Angular 1 style guide by John Papa. Combined with a module loader, I optimized this setup.

Below you will find an example of how to structure your application. The examples use ES6, but the concept can easily be applied with module loaders like CommonJS and RequireJS; in those cases only the imports and the export of each file are handled a bit differently.

Structure

/app
   /feature
       /subFeature
            component.js
            directive.js
            constant.js
            index.js
            subFeature.module.js
       controller.js
       component.js
       service.js
       feature.module.js
       feature.routes.js
       index.js
   myApp.js
   myApp.module.js
   index.js

As you can see, every folder follows the same pattern. I have the following files:

  • index.js
    • The entry point of the folder
  • *.module.js
    • The definition of the angular module
  • *.routes.js
    • The routing definitions
  • other files
    • Controllers
    • Directives
    • Components
    • Services
    • Constants
    • …

*.module.js

The *.module.js file is used to define the Angular module. In here we import angular and all the modules the module should depend on. If we are adding internal dependencies, we can do this by importing the index file of that module. This index file will initialize the module and expose the name of the module. (example: feature.module.js)

The module itself is exported so we can use it when we want to add services, controllers, components, … to it.

subFeature.module.js

import angular from "angular";
// import external dependencies necessary in the module

export default angular.module("myApp.feature.subFeature", []);

feature.module.js

import angular from "angular";
import subFeatureModule from "./subFeature/index";
// import external dependencies necessary in the module.

export default angular.module("myApp.feature", [subFeatureModule]);

myApp.module.js

import angular from "angular";
import featureModule from "./feature/index";
import "angular-route";
// import external dependencies necessary in the module.

export default angular.module("myApp", [featureModule, "ngRoute"]);

*.routes.js

The *.routes.js file is used to define the routing for the feature in your app. If you are using the new component router, this isn't necessary because the routes will be defined inside the components.

import FeatureModule from "./feature.module";

FeatureModule.config(function ($routeProvider) {
    $routeProvider
        .when("/Feature", {
            templateUrl: "controller.html",
            controller: "Controller"
        });
});

index.js

The index.js file is the entry file for the folder. In here we import all the components, controllers, services, … we want to add to the module. We also import the *.module.js file so we can expose the module name. This way we can easily add the module as a dependency of another module.

The name of this file is irrelevant; you could also use something like init.js. The reason I use index.js is that it is the default entry point of a folder in webpack.

subfeature/index.js

import "./component";
import "./constant";
import "./directive";
import SubFeatureModule from "./subFeature.module";

export default SubFeatureModule.name;

feature/index.js

import "./component";
import "./controller";
import "./service";
import "./feature.routes";
import featureModule from "./feature.module";

export default featureModule.name;

app/index.js

import MyAppModule from "./myApp.module";

export default MyAppModule.name;
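
The structure above also lists a myApp.js entry file. A minimal sketch of what it could contain, assuming the application is bootstrapped manually, is shown below.

myApp.js

import angular from "angular";
import myAppModuleName from "./index";

// Bootstrap the application once the DOM is ready.
angular.element(document).ready(function () {
    angular.bootstrap(document, [myAppModuleName]);
});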

Other files

All the other files (services, controllers, directives, constants, …) import the *.module.js file. Everything else necessary to define the service, controller, directive, … is also present in this file. This way everything is in a single place.

subFeature/component.js

import SubFeatureModule from "./subFeature.module";
// other imports

ComponentController.$inject = [];

SubFeatureModule.component("component", {
    template: "<div></div>",
    controller: ComponentController
});

function ComponentController() {}

subFeature/constant.js

import SubFeatureModule from "./subFeature.module";
// other imports

SubFeatureModule.constant("constant", {
    key: "value"
});
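
Any other file in the same module can then resolve this constant through dependency injection. A small sketch (the service name is illustrative):

import SubFeatureModule from "./subFeature.module";

SubFeatureModule.service("someService", ["constant", function (constant) {
    // The constant object registered above is injected here.
    this.getKey = function () {
        return constant.key;
    };
}]);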

Conclusion

By defining the module in the *.module.js file we have a single place where we can find the definition of the module. By exporting the module we also create a single place where we keep the name of the module: you no longer need to type the name of the module when you want to add a service/controller/… to it. This eliminates typos in module names.

The index.js provides a single entry point for the module. All the services/controllers/… you want to add to the module should be added here. We also import the *.module.js file, which gives us the ability to expose the name of the module. This name can then be added as a dependency elsewhere without having to know its string value. Again, this eliminates typos in module names.

In all the other files, the definition and the resolution of the dependencies are kept together. This reduces the chance of errors and dependency mismatches.

Tuesday, October 27, 2015

Angular2: Fix Displaying data example in es5

Since I have recently been working on an Angular project, I got curious about Angular 2. I started with the quick start demo and everything went very smoothly, so the next step was the Displaying Data example. That is where things started to go wrong: I was following the walkthrough, but all I got was a blank page.

After some research I discovered that the Angular developers made a breaking change in the alpha 35 version: all the classes ending in Annotation(s) were renamed to Metadata. Another difference is that you need to use the ng object on the window instead of angular.

If you want to get the Displaying Data example to work in ES5, you need to do the following:

show-properties.js:

(function () {
    // ES5
    function DisplayComponent() {
        this.myName = "Alice";
    }

    DisplayComponent.annotations = [
        new ng.ComponentMetadata({
            selector: "display"
        }),
        new ng.ViewMetadata({
            template: '<p>My name: {{ myName }}</p>'
        })
    ];

    document.addEventListener('DOMContentLoaded', function () {
        ng.bootstrap(DisplayComponent);
    });
})();

 


index.html:

<html>
<head>
    <script src="angular2.sfx.dev.js"></script>
    <script src="show-properties.js"></script>
</head>
<body>
    <display></display>
</body>
</html>

Wednesday, March 19, 2014

Pack nuget packages locally and with TFS Build

Currently I'm working on a project at a customer where we are integrating a new ASP.NET MVC application into an existing ASP.NET WebForms application. Due to the complexity of the legacy system (70+ projects), I decided to start a brand new solution for the new development and publish it as a NuGet package. Every time a new version of the ASP.NET MVC application needs to be published, a new version of the NuGet package gets installed on the existing legacy system.

To automate this, I started by adding a post-build event to the ASP.NET MVC project that was triggered every time I made a release build. So far, so good.
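
Such a post-build event could look something like this (a sketch; the location of nuget.exe is an assumption on my part):

if "$(ConfigurationName)" == "Release" (
  "$(SolutionDir)Nuget\NuGet.exe" pack "$(ProjectPath)" -OutputDirectory "$(TargetDir)"
)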

The next step was configuring the TFS build so I could push the package to a share; this way anyone on the team could install the package and I could use package restore, so I no longer needed to check in the packages folder.

Note: Don't use the 'Enable NuGet Package Restore' function on the solution, but enable it under 'Tools > Options > Package Manager > General'. On this tab page you have a group box called Package Restore: check both checkboxes. More information about this can be found in this blog post by Xavier Decoster.


This is where the hell began. Building the project succeeded, but the build failed on packaging the NuGet package. One of the issues was that NuGet was looking for the bin folder to copy the DLLs, but a TFS build doesn't place the DLL files in the bin folder; it places them in a Binaries folder at solution level. A possible fix was changing the output path for a release build to '..\..\Binaries' in the properties of the project file (Build tab). But this is more of a workaround than a good solution, so I looked further.

Next I took a look at the NuGet.targets file that gets added when you use 'Enable NuGet Package Restore' on the solution. I know I just told you not to use this, and you shouldn't, but this targets file also contains a task for packaging NuGet packages. So the next thing I did was copy the content of the .nuget folder to my own folder and modify the NuGet.targets file.

If you thought this would solve all my problems, think again. The problem of the packaging looking for the bin folder of the project was solved by adding -OutputDirectory "$(OutDir) " to the pack command. Note the space after $(OutDir): it must be present and is necessary to handle the double slashes that occur when packaging on TFS. This results in something like 'bin\ \package.nupkg'. Not nice, but it seems to work. The next problem was that I was using the -IncludeReferencedProjects option, which was still using the output path configured for the project instead of the Binaries folder.

After some googling I found a helpful comment on a NuGet issue (the last comment, by athinton). By adding -Properties OutputPath="$(OutDir) " to the pack statement, I solved the issue, and packaging after a TFS build finally worked with the -IncludeReferencedProjects option.

But… this broke the packaging inside Visual Studio. To make it work in VS, the -Properties OutputPath="$(OutDir) " part must be removed, which is why I use MSBuild conditions to define all these rules.

The NuGetToolsPath must be changed to the path where the nuget.exe file is located. (Note that it appears twice.)
Extra options for packaging the NuGet package can be added to the BuildCommand element.

Last but not least, don’t forget to import this file in your project file.
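
The import itself is a single line at the bottom of the project file (the targets file name and its relative path are assumptions on my part); the complete targets file follows below.

<Import Project="..\Nuget\NuGet.custom.targets" />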

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <SolutionDir Condition="$(SolutionDir) == '' Or $(SolutionDir) == '*Undefined*'">$(MSBuildProjectDirectory)\..\</SolutionDir>

    <!-- Property that enables building a package from a project -->
    <BuildPackage Condition=" '$(BuildPackage)' == '' ">true</BuildPackage>

    <!-- Download NuGet.exe if it does not already exist -->
    <DownloadNuGetExe Condition=" '$(DownloadNuGetExe)' == '' ">false</DownloadNuGetExe>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(OS)' == 'Windows_NT'">
    <!-- Windows specific commands -->
    <NuGetToolsPath>$([System.IO.Path]::Combine($(SolutionDir), "Nuget"))</NuGetToolsPath>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(OS)' != 'Windows_NT'">
    <!-- We need to launch nuget.exe with the mono command if we're not on windows -->
    <NuGetToolsPath>$(SolutionDir)Nuget</NuGetToolsPath>
  </PropertyGroup>

  <PropertyGroup>
    <!-- NuGet command -->
    <NuGetExePath Condition=" '$(NuGetExePath)' == '' ">$(NuGetToolsPath)\NuGet.exe</NuGetExePath>

    <NuGetCommand Condition=" '$(OS)' == 'Windows_NT'">"$(NuGetExePath)"</NuGetCommand>
    <NuGetCommand Condition=" '$(OS)' != 'Windows_NT' ">mono --runtime=v4.0.30319 $(NuGetExePath)</NuGetCommand>

    <PackageOutputDir Condition="$(PackageOutputDir) == ''">$(OutDir) </PackageOutputDir>

    <OutputPath Condition="'$(BuildingInsideVisualStudio)' == 'false' ">OutputPath="$(OutDir) "</OutputPath>
    <OutputPath Condition="'$(BuildingInsideVisualStudio)' == 'true' "></OutputPath>

    <NonInteractiveSwitch Condition=" '$(VisualStudioVersion)' != '' AND '$(OS)' == 'Windows_NT' ">-NonInteractive</NonInteractiveSwitch>

    <!-- Commands -->
    <BuildCommand>$(NuGetCommand) pack "$(ProjectPath)" -Properties "Configuration=$(Configuration);Platform=$(Platform);$(OutputPath)" $(NonInteractiveSwitch) -OutputDirectory "$(PackageOutputDir)" -IncludeReferencedProjects</BuildCommand>

    <!-- Make the build depend on restore packages -->
    <BuildDependsOn Condition="$(BuildPackage) == 'true'">
      $(BuildDependsOn);
      BuildPackage;
    </BuildDependsOn>
  </PropertyGroup>

  <Target Name="BuildPackage">
    <Exec Command="$(BuildCommand)"
          Condition=" '$(OS)' != 'Windows_NT' " />

    <Exec Command="$(BuildCommand)"
          LogStandardErrorAsError="true"
          Condition=" '$(OS)' == 'Windows_NT' " />
  </Target>
</Project>




Friday, January 25, 2013

JavaScript: OO programming using the Revealing Prototype Pattern

I'm currently playing around with SPAs (single page applications), which means a lot of JavaScript to write. One of the principles of OO programming is don't repeat yourself. That is why I was looking for ways to accomplish this in JavaScript. In languages like C#, Java, VB, … we can use classes for this, but JavaScript has no class keyword to define classes. (Unless you are using TypeScript, a language built on top of JavaScript that allows you to define classes like you do in C# or Java. You should really check it out.)

Closures

In JavaScript you only have functions, but every function has a closure. A closure is a special kind of object that combines two things: a function, and the environment in which that function was created. The environment consists of any local variables that were in scope at the time the closure was created. In the example below, the "obj" variable holds a closure that had the "hello" variable and the innerFunction in scope when it was created.

function closure() {
    var hello = "Hello";
    function innerFunction(suffix) {
        alert(hello + " " + suffix);
    }
    return innerFunction;
}

var obj = closure(); // Creates a new closure
obj("world!"); // This will show the message box with "Hello world!"

The innerFunction also has its own closure, containing the suffix variable. Because the innerFunction is created inside another scope, it can make use of that parent scope, as long as there are no duplicate names. If the inner and the outer scope both define the same variable or function, the inner definition wins.



function a() {
    var c = "1";
    function b(c) {
        alert(c);
    }
    return b;
}

var x = a();
x("5"); // The message box will show 5.

Objects


To create a new object in JavaScript, we need to do two things: first define a class using a function, then create instances of it with the new keyword. In the example below we define the class in the function "Class". As you can see, we can have private fields and private methods inside. Because neither of the two is returned at the end, they only exist inside the closure and are only accessible there. At the end of the class definition we return an object literal containing all the public functions and fields of the object.



function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    return {
        publicField: "public",
        publicFunction: function () {
            return privateFunction();
        }
    };
}

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

There is also a second way to expose fields and functions publicly: by adding them to the new instance using the this keyword.



function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    this.publicField = "public";
    this.publicFunction = function () {
        return privateFunction();
    };
}

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

Prototype Pattern


In the objects chapter of this post I showed you two ways to define a class. The disadvantage of the code above is that everything inside the class definition gets created again for every new instance of the class. When creating a lot of instances, you can imagine that you will run into memory issues after a while. That is where the prototype pattern comes to the rescue.


The prototype pattern is a creational design pattern used in software development when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects (source: Wikipedia). In JavaScript this means that instead of creating the entire object every time, only the state (which makes each instance unique) is created per instance, while everything else, like the methods, is shared through the prototype. By doing this you can save a lot of memory allocation.


In the example below you can see the prototype pattern in action. As mentioned, only the state of the object is kept in the instance; everything that can be shared lives on the prototype. This means the publicField will always have the value "public" no matter how many instances you create, and if you change it on the prototype, it changes for all existing and new instances. The same goes for the publicFunction and its body, but inside the function you can access the state of the individual instance. This way you write the function once, but access and change the state of each instance individually.



function Class(p) {
    var privateField = p;
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype.publicField = "public";
Class.prototype.publicFunction = function () {
    return this._privateFunction();
};

var instance = new Class("private");
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "private"

With the above example you can also create a second instance with "public" as argument; in that case the publicFunction will return "public".
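
Concretely, continuing the example above:

var second = new Class("public");
alert(second.publicFunction()); // shows "public"
alert(instance.publicFunction()); // still shows "private"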


One of the disadvantages of this approach is that all the fields you want to access in the shared functions (defined on the prototype) need to be public on the instance, meaning they need to be returned or attached via this. To mitigate this a bit, there is a convention that private fields that need to be accessible in the shared parts get a "_" prefix. This doesn't make them inaccessible, but IntelliSense will ignore them, making them a little harder to discover.


Revealing Prototype Pattern


As you have seen, with the prototype pattern everything you define as shared is public. This is where the revealing prototype pattern goes further: it allows you to have private functions and variables inside the shared scope.



function Class() {
    var privateField = "private";
    function privateFunction() {
        return privateField;
    }

    this._privateFunction = privateFunction;
}

Class.prototype = function () {
    var privateField = "Hello";
    var publicField = "public";
    function privateFunction() {
        return privateField + " " + this._privateFunction();
    }

    return {
        publicField: publicField,
        publicFunction: privateFunction
    };
}();

var instance = new Class();
alert(instance.publicField); // shows "public"
alert(instance.publicFunction()); // shows "Hello private"

Conclusion


By using the revealing prototype pattern you have several advantages:



  • Less memory use: everything defined as shared is only created once.
  • You can encapsulate your private code and publicly expose only the functions and fields you want.
  • The classes are open for extension but closed for modification: you can easily add extra functionality without affecting the existing implementation.

Of course there are some downsides too:



  • The definition of the constructor and the class implementation (the prototype parts) are separated.
  • You can't make your state fully private if you need it in the shared parts.

For me the advantages outweigh the disadvantages, and I'm using this pattern more and more. I'm also busy migrating my library to use this pattern, so other people can easily extend it and add their own functionality without changing the source file or the existing implementation.