Showing posts with label Web development. Show all posts

Tuesday, August 23, 2016

Learning Angular 2: Upgrading to Angular 2 RC5

Version 0.0.3 of my GuildRunner sandbox Angular 2 application is now available.  All of the differences between this version and the previous version (minus the updates to the version number and the README file) are changes made to upgrade the application to use Angular 2 RC5 (release candidate 5).

While there were some changes to the router/routing syntax, the biggest change that comes with RC5 is the introduction of Angular modules and the @NgModule decorator.  There is a long documentation page about Angular modules in the official developer guide, but essentially Angular modules let you bundle sets of shared injectable dependencies into a single file that provides those dependencies "downstream".

For example, prior to the upgrade my MainNavigationComponent received the directives needed for routing (like RouterLink) via the "directives" metadata property (which also meant that ROUTER_DIRECTIVES had to be imported):


//main-navigation.component.ts (previous version)
import { ROUTER_DIRECTIVES } from '@angular/router';
...
@Component({
  ...
  directives: [ ROUTER_DIRECTIVES ]
})

As another example, the GuildsMasterComponent received the GuildService for its constructor method via the "providers" metadata property:


//guilds-master.component.ts (previous version)
import { GuildService } from '../guild.service';
...
@Component({
  ...
  providers: [ GuildService ]
})

Now both of those dependencies are declared in the new application-level Angular module - app.module.ts:


//app.module.ts
...
import { routing } from './app.routing';
import { GuildService } from './guild.service';
...
import { SandboxModule } from './sandbox/sandbox.module';
...
@NgModule({
  imports:      [
    BrowserModule,
    HttpModule,
    routing,  //Provides the routing directives as well as the route definitions
    SandboxModule
  ],

  declarations: [
    AppComponent,
    VersionComponent,
    MainNavigationComponent,
    HomeComponent,
    GuildsMasterComponent
  ],

  providers: [
    { provide: XHRBackend, useClass: InMemoryBackendService }, // in-mem server
    { provide: SEED_DATA,  useClass: InMemoryDataService },     // in-mem server data
    VersionService,
    GuildService //Provides the GuildService downstream
  ],

  bootstrap:    [
    AppComponent
  ],
})

Because the MainNavigationComponent and GuildsMasterComponent are included in the module via the "declarations" block, they are part of this module's feature bundle, and so they have access, via dependency injection, to the routing and GuildService dependencies without needing the "directives" or "providers" metadata properties on the @Component.

Note the four properties in this @NgModule decorator. The "declarations" property is where you list all of the components, directives, and custom pipes used in your module: anything your templates need in order to operate (for example, including the MainNavigationComponent in the declarations lets the AppComponent template understand how to render the "<app-main-navigation>" tag in the AppComponent HTML). The "providers" property is where you list your module services (things that would be injected into your components as constructor arguments).
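As a concrete illustration of that selector/declaration relationship, here is a sketch of what the MainNavigationComponent decorator presumably looks like (only the selector matters for this point; the rest of the metadata is assumed):

```typescript
// main-navigation.component.ts (sketch)
// Because MainNavigationComponent appears in the module's "declarations",
// any template in the module can render <app-main-navigation></app-main-navigation>.
import { Component } from '@angular/core';

@Component({
  moduleId: module.id,
  selector: 'app-main-navigation',
  templateUrl: 'main-navigation.component.html'
})
export class MainNavigationComponent {}
```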

The "imports" property is where you list other modules that provide functionality (services, directives, etc.) to your feature module. Some of the modules may be Angular library modules, such as the required BrowserModule or the HttpModule needed for performing HTTP operations. But it can also include other modules in your application. Note the inclusion of the "SandboxModule" in this example:


//app.module.ts
import { SandboxModule } from './sandbox/sandbox.module';
...
@NgModule({
  imports:      [
    ...
    SandboxModule
  ],

That is a separate Angular module file dedicated to the "sandbox" feature area of my application:


//sandbox/sandbox.module.ts
import { NgModule }       from '@angular/core';
import { sandboxRouting } from './sandbox.routing';
import { GuildListComponent } from './guild-list/guild-list.component';
import { SandboxService } from './sandbox.service';

@NgModule({
  imports: [
    sandboxRouting
  ],

  declarations: [
    GuildListComponent
  ],

  providers: [
    SandboxService
  ]
})

export class SandboxModule {}

This module encompasses the components, services, and the routing related to the sandbox feature area of the application, and it's integrated with the rest of the application simply by the fact that it's declared in the "imports" property of the main application Angular module. Note that it doesn't contain the fourth property seen in app.module.ts: the "bootstrap" property is mainly for declaring the top-level component of a given module, something the sandbox feature area doesn't have.
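The "bootstrap" property also ties into the other RC5 change to application startup: main.ts now bootstraps the root module rather than a component. A sketch of the RC5-era call (the file path is assumed):

```typescript
// main.ts (RC5-era sketch): bootstrap the root Angular module, not a component
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

platformBrowserDynamic().bootstrapModule( AppModule );
```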

So the introduction of Angular modules adds a new organizational construct to Angular 2 and cuts down on typing since there's no need to add "directives" and "providers" properties to your components in order to perform dependency injection.

However, there is one small caveat, best explained by example. Even though my app.module.ts file declares the GuildService in its array of providers, and I no longer need to use the "providers" metadata property on my GuildsMasterComponent, I still need to import the GuildService:


//guilds-master.component.ts (new version)
import { GuildService } from '../guild.service';
...
export class GuildsMasterComponent implements OnInit {

  guilds: Guild[] = [];

  constructor( private guildService: GuildService ) { }

This puzzled me, and in perusing some of the updated documentation and tutorial examples I couldn't find an explanation for why that import was still necessary if the GuildsMasterComponent was getting its instance of the GuildService from the application Angular module.

But then I looked at the JavaScript being generated from the guilds-master.component.ts file. Here are the relevant lines from that JavaScript file, with the significant lines followed by comments:


//guilds-master.component.js
...
var guild_service_1 = require('../guild.service');  //Significant line #1
...
GuildsMasterComponent = __decorate([
        core_1.Component({
            moduleId: module.id,
            selector: 'app-guilds-master',
            templateUrl: 'guilds-master.component.html',
            styleUrls: ['guilds-master.component.css']
        }), 
        __metadata('design:paramtypes', [guild_service_1.GuildService]) //Significant line #2
    ], GuildsMasterComponent);
    return GuildsMasterComponent;

Those two significant lines are generated by the TypeScript compiler based on the argument declaration of the GuildsMasterComponent constructor. If you type the "guildService" argument of the constructor as another data type (like "any"), the compiler won't know which object/export it's supposed to use. And you can't set the "guildService" argument to type GuildService without importing GuildService so that the TypeScript compiler can recognize the data type.

Two other tidbits:

  • When I initially created the separate sandbox Angular module (sandbox.module.ts), I did not create a separate routing file with a route configuration for the single sandbox route:  that route was still part of the application Angular module and the app-level routing.  But when I ran the application, that generated an error message that my single sandbox component was "part of the declaration of 2 modules."  Fortunately I found a Stack Overflow post that pointed out the need for separating routing configurations.

  • In an earlier blog post, I noted how my IntelliJ IDE would automatically add the "import {} from ..." statement for any component I added to my router configuration via auto-complete (where IntelliJ would provide me a list of options as I typed the component name, and I could select the one I wanted from the list using the Tab key).  I was happy to see that same convenience feature at work as I added components to the "declarations" list of my app module.

Wednesday, August 17, 2016

Learning Angular 2: Adding a Master Guild List Component To My Sandbox Application

I recently released version 0.0.2 of my GuildRunner sandbox Angular 2 application.  Changes made during this release include:

  • The addition of a Bootstrap-powered navigation bar (main-navigation.component.ts) which necessitated adding CDN calls for jQuery and Bootstrap JS (index.html).
  • The addition of Angular 2 routing (main.ts, app.routing.ts).
  • The creation of guild data for use by the in-memory web API (db/guilds.ts).
  • The creation of Guild, Address, and Member domain classes to be populated with the guild data (the address.ts, guild.ts, and member.ts files in the "domain" folder) via the GuildService (guild.service.ts).
  • The creation of a master view that displays data from a list of Guild objects within a table (guilds-master.component.ts).
  • The creation of a "sandbox" area of the application where I can keep experimental and diagnostic features (the "sandbox" folder). 

Some lessons learned during the coding of this release:

The in-memory web API has limitations

The in-memory web API is currently limited to mimicking a shallow data graph. It allows you to mock the following types of REST calls:

  • app/guilds (to retrieve all guilds)
  • app/guilds/1 (to retrieve the guild with an id of 1)
  • app/guilds/?name=Blacksmiths (to retrieve the guild or guilds based on the query string)

...but you cannot simulate deeper REST calls:

  • app/guilds/1/members/1 (retrieving the member with id of 1 from guild 1)

For this application at this particular point, it's not a big deal, but I will probably be looking at alternative methods for faking HTTP calls at some point down the line.
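In the meantime, one possible client-side workaround is to fetch the whole parent record and drill down to the nested item in code. A hypothetical sketch (the interfaces and function name here are illustrative, not from the actual GuildRunner code):

```typescript
// Hypothetical workaround: the in-memory web API can serve 'app/guilds/1',
// so fetch the guild and pick out the member on the client side.
interface Member { id: number; name: string; }
interface GuildRecord { id: number; members: Member[]; }

function getGuildMember( guild: GuildRecord, memberId: number ): Member | undefined {
  return guild.members.filter( m => m.id === memberId )[0];
}

const guild: GuildRecord = { id: 1, members: [ { id: 1, name: 'Ayla' }, { id: 2, name: 'Brom' } ] };
console.log( getGuildMember( guild, 2 ) );  // { id: 2, name: 'Brom' }
```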

Instantiating domain model classes

In the Tour of Heroes tutorial, the instructions have you create a Hero class with properties for the id and name of the hero. Later, that class is used to declare a component variable with a data type of an array of Heroes:


heroes: Hero[];

...which is then populated with the hero data, an array of object literals:


[
  {id: 11, name: 'Mr. Nice'},
  {id: 12, name: 'Narco'},
  ...
]

...like so:


this.heroService.getHeroes().then(heroes => this.heroes = heroes);

From a coding standpoint, declaring the "heroes" variable as an array of Hero objects ensures that another developer cannot write code that populates that variable with anything but Hero objects, but that declaration is enforced only at compile time and is meaningless at runtime. Doing the same thing with my guild data:


//Populates the "guilds" variable with the raw retrieved data (array of object literals)
export class GuildsMasterComponent implements OnInit {

  guilds: Guild[];
  constructor( private guildService: GuildService ) { }

  ngOnInit() {
    this.guildService.getGuilds().then( guilds => this.guilds = guilds )
  }
}

...results in the "guilds" variable being populated with the raw array of guild object literals, each with an address object literal and an array of member object literals. But that's not what I wanted: I wanted an array of Guild objects with included Address and Member objects.
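This runtime gap is easy to demonstrate in isolation (the trimmed Guild class and data here are illustrative):

```typescript
class Guild {
  constructor( public id: number, public name: string ) {}
}

// The annotation satisfies the TypeScript compiler, but the raw object
// literals coming back from the service are never converted into Guild
// instances at runtime.
const guilds: Guild[] = [ { id: 1, name: 'Blacksmiths' } ] as any;

console.log( guilds[0] instanceof Guild );  // false
```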

So I wrote the code to instantiate the desired objects, populating the property values via the constructor method:


//guilds-master.component.ts
...
import { Guild } from '../domain/guild';
...
export class GuildsMasterComponent implements OnInit {

  guilds: Guild[] = [];
  
  constructor( private guildService: GuildService ) { }

  ngOnInit() {
    this.guildService.getGuilds().then( guilds => {
      guilds.forEach( guild => {
        this.guilds.push( new Guild( guild ) )
      })
    } );
  }
}

//guild.ts
import { Address } from './address';
import { Member } from './member';

export class Guild {
  id: number;
  name: string;
  expenses: number;
  revenue: number;
  profit: number;

  address: Address;
  members: Member[] = [];

  constructor( guildData:any ) {
    this.id = guildData.id;
    this.name = guildData.name;
    this.expenses = guildData.expenses;
    this.revenue = guildData.revenue;

    this.profit = this.calculateProfit();

    this.address = new Address( guildData.address );

    guildData.members.forEach( member => {
      this.members.push( new Member( member ) );
    } );

  }

  calculateProfit() {
    return this.revenue - this.expenses;
  }

}

Providing my master view with an array of Guild objects allowed me to display the profit of each guild in addition to the raw guild data provided by the in-memory web API.

The currency pipe

This release marked my first time using one of the built-in Angular 2 pipes, though it was pretty similar to my experience using the built-in filters in Angular 1.

I was a tad surprised that the default CurrencyPipe settings would result in the number being prefixed with "USD" rather than a dollar sign. But a quick glance through the CurrencyPipe documentation gave me the settings I wanted and instructions on how to further control the output with the DecimalPipe:


<td align="right">{{guild.revenue | currency:'USD':true:'.2-2' }}</td>

In cases where you want your application to support internationalization, I imagine you could use a component variable to dynamically set the currency code:


<td align="right">{{guild.revenue | currency:currencyCode:true:'.2-2' }}</td>

 

Monday, August 8, 2016

Learning Angular 2: Creating My First Sandbox Web Application

While there is still a lot out there for me to read regarding Angular 2, I tend to learn by coding and solving problems.  Even though there are a few aspects of Angular 2 that are in flux at this time (like forms), I feel that I can start writing an application without much fear that I'd have to go back and redo things because the API has changed.

So I've created my first "sandbox" Angular 2 application where I can practice writing Angular code and figure out ways to accomplish specific application tasks with Angular 2.  I'm going to keep a copy of the code up on GitHub and release milestones in my development so that I have a historical picture of the development and so I can potentially backtrack and create different solutions to a given problem.  Plus, it will allow anyone to pull down a tagged version on their own machine to look at the code.

My first sandbox is an application called "GuildRunner".  My plan is that it will be an application for managing a fictional collection of trade guilds, so I can use it to explore dealing with common application issues like authentication and authorization, data relationships, and searching.  But I wanted to start by simply creating the foundation for the application structure and getting it up and running.

So I started by creating the repo in GitHub, and then cloned the repo into a new IntelliJ IDEA project via the IntelliJ option for creating projects from GitHub.  I then opened a command prompt in the project directory and invoked the Angular CLI command "ng init" to create the starting project files: I was pleased that the command let me decide whether or not to overwrite the README.md file cloned from GitHub.  Another nice thing about the CLI-generated files I hadn't noticed before:  the .gitignore file is configured to ignore the IntelliJ-specific files as well as the node_modules and typings folders during commits.

At that point, I had a basic, single-component app that I could run with the "ng serve" command of Angular CLI.  But I wanted to use the in-memory web API (at least initially) to provide the data for the application as if it was interacting with a server and a database, so I needed to reconfigure the application to utilize that feature.  The details of that reconfiguration ended up as a separate blog post: Adding the In-Memory Web API to a SystemJS-based Angular CLI Application.

In the Tour of Heroes example of using the in-memory web API, the mock data was defined within the InMemoryDataService createDb() method.  Since I plan on creating a fair amount of mock data, I created a "db" folder under the "app" folder to house all the modules that would export the data collections.  I then created my first bit of mock data: a single version record.


//src/app/db/version.ts

let version = [
  {id: 1, name: '0.0.1'}
];

export { version }

...and provided that to the InMemoryDataService via an import:


//src/in-memory-data.service.ts

import { version } from './db/version';

export class InMemoryDataService {
  createDb() {
    return { version };
  }
}

The purpose of the version record was to have some data to display on the main application page that would confirm that the in-memory web API was working (and also confirm the version of the sandbox application I was working with). So with the Tour of Heroes code as a guide, I created a VersionService to retrieve the version data and a VersionComponent to display it:


//src/app/version/version.service.ts

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';

import 'rxjs/add/operator/toPromise';

@Injectable()
export class VersionService {

  private versionUrl = 'app/version';

  constructor( private http: Http ) { }

  getVersion() {
    return this.http.get(this.versionUrl)
      .toPromise()
      .then(response => response.json().data )
      .catch(this.handleError);
  }

  private handleError(error: any) {
    console.error('An error occurred', error);
    return Promise.reject(error.message || error);
  }
}

//src/app/version/version.component.ts

import { Component, OnInit } from '@angular/core';
import { VersionService } from './version.service';

@Component({
  moduleId: module.id,
  selector: 'app-version',
  templateUrl: 'version.component.html',
  providers: [
    VersionService
  ]
})
export class VersionComponent implements OnInit {

  versionNumber = '-.*.-';

  constructor( private versionService: VersionService ) { }

  ngOnInit() {
    this.versionService.getVersion().then( versions => this.versionNumber = versions[0].name );
  }

}

<!-- src/app/version/version.component.html -->

<p><strong>Version:</strong> {{versionNumber}}</p>

Then (after adding Bootstrap to the project and adding some Bootstrap layout containers), I added the VersionComponent as a sub-component of the AppComponent:


//src/app/app.component.ts

...
@Component({
  ...
  directives: [
    VersionComponent
  ]
})

<!-- src/app/app.component.html -->
<div class="container">
  <div class="row">
    <div class="col-md-10">
      <h1>{{title}}</h1>
    </div>
    <div class="col-md-2 text-right">
      <app-version></app-version>
    </div>
  </div>
</div>

Once all of that was done, I could run my application using "ng serve" and a moment after the page loaded I could see the version number.

I finished up this version of the sandbox by updating the current set of unit and end-to-end (e2e) test files such that they would pass. Previous experience with e2e testing let me update the single e2e test pretty easily, but getting the minimalist unit tests to work took some trial-and-error, and I don't yet have a clear sense of how to set up the unit tests such that the component under test has all the dependencies (or mocks of the dependencies) it needs.

The release of the GuildRunner sandbox that contains the application foundation code and the changes described above can be found on the GuildRunner GitHub repo as release 0.0.1:

https://github.com/bcswartz/angular2-sandbox-guildrunner/releases/tag/0.0.1 

Instructions for running the sandbox on your own machine via Angular CLI can be found on the main page of the GitHub repo.

Monday, June 20, 2016

Learning Angular 2: The Official Quick Start guide

I'm starting my Angular 2 journey with the official quick start guide on the Angular 2 website, https://angular.io/.  I'm going to use the TypeScript version of the quick start guide, since I plan on coding Angular 2 with TypeScript.  

I already have Node installed on my Mac laptop via nvm (Node Version Manager), so that's taken care of.

I want to figure out any nuances involved in using IntelliJ as my IDE for coding Angular 2 (it SHOULD work similarly to how WebStorm works in that regard), so I want to create a new IntelliJ project to house the quick start files.  I did a quick search for any IntelliJ plugins that would semi-automate the setup of a new Angular 2 codebase in an IntelliJ project, but didn't find any.  Probably just as well:  I expect to start using the Angular CLI tool for that sort of thing as I get further along anyway.  So I created a new project in IntelliJ pointed to my "angular-io-quickstart" directory.  IntelliJ expects all project files to live in modules, so I created a static web module called "root" (a module choice that doesn't come with any preconfigured structure or code files) to hold all of the project files.  I then created all of the configuration files (package.json, tsconfig.json, etc.) in that root module.

I then ran "npm install" from the angular-io-quickstart/root directory.  A "typings" folder was created along with a "node_modules" folder, and there were "typings" modules in "node_modules", so I didn't need to execute the "npm run typings install" command cited in the guide.

The guide does a good job of explaining the different pieces of the app.component.ts file, but a few extra notes:

  • The import statement is a conventional TypeScript import statement (it's not something unique to Angular 2), so further information about its syntax can be found by searching for TypeScript articles on the web.

  • Decorators are also not exclusive to Angular:  they are also used in some server-side languages as a means of adding new properties and/or functionality to an object without using extension.

When I added the main.ts file as described, IntelliJ drew a red line under the AppComponent argument for the bootstrap function in the final line.  Hovering over the argument displayed the following error: "Argument AppComponent is not assignable to parameter type Type".  The error did not prevent me from running the quick start once I finished the rest of the instructions, however.

I did some research, and the solution is to go into the IntelliJ settings (File > Settings on Windows, IntelliJ IDEA > Preferences on OS X), open the Languages & Frameworks section, and in the TypeScript settings check the "Enable TypeScript Compiler" and "Use tsconfig.json" options.  Reference:  http://stackoverflow.com/questions/34366797/angular-2-bootstrap-function-gives-error-argument-type-appcomponent-is-not-assi

The IntelliJ TypeScript settings also include a "Track changes" option.  With that option turned on, I don't need the compiler watcher turned on as part of the "npm start" command in the quick start instructions:  I can simply use "npm run lite" to get just the lightweight web server running, and IntelliJ will make sure my changes are picked up.  Using both "Track changes" and "npm start" together doesn't produce any sort of conflict or error however.

In either case, whether invoking the stand-alone TypeScript compiler (tsc) via the "npm start" command or via the built-in TypeScript compiler in IntelliJ, you end up generating a ".js" and ".js.map" file for each ".ts" file in the project (though when IntelliJ creates those files, you can visually collapse the generated files under the TypeScript file that generated them).  I imagine that any build process that gathers the code needed to deploy a project to production would exclude the ".ts" files (I'm not sure whether the ".map" files would need to live on production as well).

The index.html file is pretty straightforward with the exception of the System.import statement.  If I understand things correctly, in the systemjs.config.js file:
  • The 'app' entry in the "map" object literal tells SystemJS what directory contains the 'app' modules (in this case a directory of the same name).

  • The 'app' entry in the "packages" object literal tells SystemJS that when System.import() is used with the argument 'app' (with no filename specified), it should execute the 'main.js' file...which SystemJS knows from the "map" lives in the "app" directory.

A brief consultation of the configuration API for SystemJS seems to confirm all that.
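Put together, a trimmed sketch of the two entries being described might look like this (paths per the quick start layout; abridged, not the full file):

```javascript
// systemjs.config.js (abridged sketch)
System.config({
  map: {
    // tells SystemJS that 'app' modules live in the 'app' directory
    app: 'app'
  },
  packages: {
    // tells SystemJS that System.import('app') should load app/main.js
    app: { main: './main.js', defaultExtension: 'js' }
  }
});
```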

Overall, the quick start guide was easy to follow and well-explained.  Then again, it IS essentially the "Hello World" example for Angular 2, so it isn't supposed to be hard.  

Next step:  starting to work through the "Tour of Heroes" tutorial.

Learning Angular 2: The Journey Begins

Over the past few years, my blog posts have been few and far between.  Partly because of the pressure I'd put on myself to write very polished, very "correct" posts, which for me takes a lot of effort and energy, and partly because I know that I don't have a consistent audience...which doesn't inspire a great deal of motivation to make the effort to create the posts I'm used to writing.

So I'm going to try something a bit different.  I'm going to try writing a series of blog posts documenting my experience learning Angular 2.  When I'm learning new technology or working through a coding project, I tend to take my own "stream of consciousness" notes as I go anyway, so the plan is to write blog posts instead of notes, writing them candidly and "in the moment" as much as possible so I don't get hung up self-editing as I go.  I will undoubtedly "mislearn" (and hence mis-post) things as I go, but I need to be okay with that:  mistakes are a part of learning.  And I think, mistakes or not, my posts will be of some value to other folks trying to learn Angular 2.

I come to Angular 2 with a mix of ignorance and experience.  I've worked with JavaScript for many years, initially coding with "raw" JavaScript, then with jQuery and jQuery UI, and eventually with Angular 1.  I'm experienced in making it do what I need it to do, but I wouldn't say I've ever had a deep understanding of its inner workings.  I'm comfortable coding in Angular 1, but rusty because I haven't had the opportunity to do any serious coding in it in a while.  What code I've been writing during my day job lately has been server-side code in Groovy or Java.

I'm a bit more ignorant in the Angular 2 "companion" technologies.  I've used Grunt in some personal projects, so I have some familiarity with using a task manager as a build tool.  I've run browserify once or twice, but I'm hardly comfortable with it; I've never run webpack.  I've played with Node, so I'm familiar with npm and the notion of pulling in modules with require statements.  I took the TypeScript course on Pluralsight, so I know what the deal is there even though I haven't used it yet.

My hope is that this mix of ignorance and experience translates to blog posts that provide some value to both readers who are very new to JavaScript/client-side programming in general as well as to those readers who are already experienced (if not yet expert) client-side developers.

Anyway, enough of this introductory talk:  time to conclude this post and start learning.

Monday, March 9, 2015

Using a Route Naming Convention to Control View Access in AngularJS

Suppose for a moment that you have an AngularJS single-page application, one with view routes managed with the ngRoute module, that is used by users with different roles.  A user in your company's Sales group has access to certain areas of the application, while a user in Accounting works in other parts of the application.  And there are also some areas of the application that are common to all users.

Now, you already have the navigation menu wired up so that users only see the navigation links appropriate to their user roles.  And even if a Sales user somehow ends up in a view meant for an Accounting user, the server answering the REST calls for the data powering that view is going to check the security token sent with the request and isn't going to honor that request.  But you'd still like to keep users out of UI views that aren't meant for them.

You could do a user access check at the start of each controller, or perhaps within the resolve property of each route, but that would be repetitive and it's something you could forget to do on occasion.

What's cool about the route URLs you can construct with the ngRoute $routeProvider is that they don't have any relation to the actual file structure in your application.  You could define a route like so:


$routeProvider
  .when( '/the/moon/is/full/and/the/night/is/foggy', {
    templateUrl: 'views/fullmoon.html',
    controller: 'moonController'
  });

...and it's perfectly okay as long as the templateUrl and controller are legit.

So let's use that fact about route URLs to our advantage: let the route URL itself determine user access to the view attached to the route.

Pretend you have a sessionService module in your Angular application that manages the current session and stores the current user's information in a user object, and that user object has a "roles" property that is an array of all of the security roles that user has. If you add the following code in your main application module (the module named in the ng-app attribute in your application)...


.constant( 'authRoles', [ 'sales', 'accounting' ] )

.run( [ '$rootScope', '$location', 'sessionService', 'authRoles', function( $rootScope, $location, sessionService, authRoles ) {

    $rootScope.$on( '$routeChangeStart', function ( event, next, current ) {
        if( sessionService.isActive() ) {
            var targetRoute = $location.path();
            if( targetRoute.split( '/' )[1] == 'auth' ) {
                var routeRole = targetRoute.split( '/' ).length > 2 ? targetRoute.split( '/' )[2] : '';
                if( sessionService.getUser() == undefined ) {
                    $location.path( '/login' );
                } else if ( routeRole && authRoles.indexOf( routeRole ) > -1 && sessionService.getUser().roles.indexOf( routeRole ) == -1 ) {
                    $location.path( '/unauthorized' );
                }
            }
        }
    });

 }])

...then when a route change is initiated, the requested route URL will be parsed to determine if the route is secured in any way. Any route URL beginning with "auth" is only accessible to authenticated users, and any URL where the location segment after "auth" matches one of the user roles defined in the "authRoles" constant (in this case, "sales" and "accounting") will only be accessible to a user having that role in their "roles" array.

So the given users attempting to navigate to the given routes will get the given results:

User | Route | Result
Unauthenticated user | /home | Routed to the "home" view
Unauthenticated user | /auth/viewAccount | Routed to "/login" (login page)
User with Sales role | /auth/viewAccount | Routed to the "viewAccount" view
User with Sales role | /auth/sales/customer/1 | Routed to the detail page for customer 1
User with Sales role | /auth/accounting/stats | Routed to "/unauthorized" ("you are not authorized" page)
User with Accounting role | /auth/viewAccount | Routed to the "viewAccount" view
User with Accounting role | /auth/accounting/stats | Routed to the "stats" view
User with Accounting role | /auth/sales/prospects | Routed to "/unauthorized" ("you are not authorized" page)
User with Accounting role | /auth/marketing/addAccount | Routed to the "addAccount" view, as "marketing" isn't a recognized role
User with Sales and Accounting roles | /auth/accounting/stats | Routed to the "stats" view

 

What if you had three roles (Sales, Accounting, HR) and you had a route that you wanted to only be accessible to the Sales and Accounting users?  In that scenario, you'd either have to create a new user role that Sales and Accounting folks would both have, or you'd only secure the route with "auth" and fall back to doing a manual role check in the route's controller or resolve property.  So it's not a technique that solves every scenario, but it's a good first step.
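If you did want to cover that scenario within the convention itself, the role segment could be extended to hold multiple roles. A hypothetical sketch, written as a standalone function for clarity (the "+" separator and the function name are my own invention, not part of the original code):

```typescript
// Hypothetical extension of the route convention: a segment like
// 'sales+accounting' grants access to users holding ANY of the listed roles.
function isRouteAuthorized( path: string, userRoles: string[], authRoles: string[] ): boolean {
  const segments = path.split( '/' ).filter( s => s.length > 0 );
  if ( segments[0] !== 'auth' ) return true;   // unsecured route
  if ( segments.length < 2 ) return true;      // '/auth' alone: any authenticated user
  // Keep only the route roles that are recognized application roles
  const routeRoles = segments[1].split( '+' ).filter( r => authRoles.indexOf( r ) > -1 );
  if ( routeRoles.length === 0 ) return true;  // segment isn't a recognized role
  return routeRoles.filter( r => userRoles.indexOf( r ) > -1 ).length > 0;
}

console.log( isRouteAuthorized( '/auth/sales+accounting/reports', [ 'sales' ], [ 'sales', 'accounting', 'hr' ] ) );  // true
console.log( isRouteAuthorized( '/auth/hr/records', [ 'sales' ], [ 'sales', 'accounting', 'hr' ] ) );  // false
```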

Monday, March 2, 2015

Introducing Sparker: A Codebase Library Management Tool Showcasing AngularJS, Protractor, and Grunt Techniques

Sometimes projects take on a life of their own, and you end up with something unexpected.

I set out to create a template for CRUD-focused single page AngularJS web applications, something I and perhaps my colleagues could use as a foundation for writing new applications.  But under the momentum of self-applied scope creep, what I ended up creating was a Grunt-powered codebase library management tool, with my original template concept as the first of potentially multiple foundational codebases.

After sitting back and looking at what I ended up with, I decided to name the project Sparker for two reasons:

  • I didn't want to use terms like "templates", "scaffolds", or "foundations" for the codebases housed in the project (partly because the project encourages creating demo codebases to accompany the "clean slate" template codebases).  So the codebases are organized into "sparks" and Sparker is the tool for managing them.

  • I wanted to focus on the inspirational aspect of the project:  that, at worst, the techniques in the sparks and in Sparker could "spark" ideas in other developers for how to approach certain problems.

At the end of the day, I see Sparker as a functional proof-of-concept, something individuals or particular teams can play around with or use in their shops to maintain and build off of their own spark libraries, but not something adopted for widespread use by the developer community.  It's not designed to compete with things like Yeoman or Express.js because, like I said, it didn't come out of a design.  But it was fun to develop, and I think there's value in sharing it.

Some of the things you'll find in Sparker today:

  • An example of how to organize Grunt tasks and task options into separate files, and how to execute two different sets of tasks from the same directory/Gruntfile.

  • A Grunt build task that determines which resource files to concatenate based on the <link> and <script> tags in your index.html file.

  • A technique for executing the login process and running certain sets of Protractor tests based on the user role being tested.

  • Convention-based Angular routes for restricting user access based on authentication state/user roles.

  • Example of mocking REST calls within a demo Angular application using the ngMockE2E module.

  • Examples of Angular unit tests and Protractor tests that utilize page objects.

Sparker is available on GitHub at https://github.com/bcswartz/Sparker.

Tuesday, February 24, 2015

Lightning Talk Presentations from the Recent AngularJS DC Meetup

Last week I participated in a series of lightning talks at the AngularJS DC Meetup, hosted by Difference Engine, and I thought I'd share the links to the slide decks and demos presented (unfortunately, the equipment recording the entire event failed, otherwise I would just share that).

Not all of the presenters have posted their material, but here's what's been shared so far:

Monday, February 2, 2015

Using Grunt to Concatenate Only the JavaScript/CSS Files Used in Index.html

One of the most common uses of the Grunt task runner is to build a deployment package out of your development code for your website or web application, and part of that build process is usually a task that concatenates the CSS and JavaScript files into singular (or at least fewer) files for optimal download.

The grunt-contrib-concat Grunt plugin allows you to configure a concatenation task to target individual files or entire directories, like so:

concat: {
    js: {
        src: [ 'dev/jquery/jquery.js', 'dev/angular/services/*.js', 'dev/angular/directives/*.js' ],
        dest: '../build/combined.js',
        options: {
            separator: ';'
        }
    }
}

The only drawback is that you have to update the task's "src" property as you add or remove CSS and JavaScript assets from your web application.

As I was playing around with Grunt on a personal project, I came to wonder: could I create a Grunt task or set of tasks that could figure out which files to concatenate based on the <link> and <script> tags in my code?  Here's what I came up with.

My project is a single page web application powered by AngularJS.  It has an index.html file that serves as the "single page" that displays the appropriate view fragment (HTML files stored in a "views" directory) for a given page/route in the application.  None of those view fragments require additional CSS or JavaScript resources, so my index.html page pulls down all of the necessary CSS and JavaScript files.

In my index.html file, I wrapped my blocks of <link> and <script> tags with special HTML "build" comments, a set for my CSS block and a set for my JavaScript block:


<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>My HTML Page</title>

    <!--build-css-start-->
    <link rel="stylesheet" media="screen" href="assets/app.css" />
    <link rel="stylesheet" media="screen" href="packages/bootstrap/dist/css/bootstrap.css" />
    <!--build-css-end-->

</head>
<body ng-app="demoApp">

    <!--Div where my Angular views will be displayed-->
    <div ng-view></div>

    <!--build-js-start-->
    <script type="text/javascript" src="packages/angular/angular.js" ></script>
    <script type="text/javascript" src="common/app.js" ></script>
    <script type="text/javascript" src="controllers/loginController.js" ></script>
    <script type="text/javascript" src="controllers/logoutController.js" ></script>
    <script type="text/javascript" src="services/authService.js" ></script>
    <script type="text/javascript" src="services/session.js" ></script>
    <!--build-js-end-->

</body>
</html>

I then created a Grunt task powered by the grunt-replace plugin that finds and parses those blocks via regex, extracting (and adjusting) the file path within each "src" and "href" attribute and appending the paths to arrays stored as Grunt configuration properties.  Each block is replaced by a single <link> or <script> tag pointing at the combined CSS or JavaScript file.  The overall build task then executes the concat task, which concatenates all of the files listed in those configuration arrays into the combined files.
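Stripped of the Grunt plumbing, the extraction step boils down to a regex run over each comment-delimited block. Here's a minimal, framework-free sketch of that logic (the function name is mine; in the actual build, grunt-replace's replacement callback does this work and writes the arrays into the Grunt config):

```javascript
// Sample input: a fragment of index.html using the build-comment convention.
var html = [
    '<!--build-js-start-->',
    '<script type="text/javascript" src="packages/angular/angular.js" ></script>',
    '<script type="text/javascript" src="common/app.js" ></script>',
    '<!--build-js-end-->'
].join('\n');

// Pull every attribute value (src or href) out of the block between the
// start and end build comments.
function extractBlockPaths(source, startComment, endComment, attr) {
    var block = source.split(startComment)[1].split(endComment)[0];
    var pattern = new RegExp(attr + '="([^"]+)"', 'g');
    var paths = [];
    var match;
    while ((match = pattern.exec(block)) !== null) {
        paths.push(match[1]);
    }
    return paths;
}

var jsFiles = extractBlockPaths(html, '<!--build-js-start-->', '<!--build-js-end-->', 'src');
// jsFiles -> ['packages/angular/angular.js', 'common/app.js'], ready to be
// assigned to a Grunt config property for the concat task to consume.
```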

So when I run the master build task against my original set of development files, I end up with a build directory containing the index.html file, the views folder with the view HTML files, and a single CSS file and a single JavaScript file.  The index.html file itself ends up looking like this:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>My HTML Page</title>

    <link rel="stylesheet" media="screen" href="combined.css"/>

</head>
<body ng-app="demoApp">

    <!--Div where my Angular views will be displayed-->
    <div ng-view></div>

    <script type="text/javascript" src="combined.js"></script>

</body>
</html>

Cool, right?

If you want to try it out for yourself, the entire Gruntfile used to power the build task is below, with comments:

Wednesday, January 15, 2014

Determining if Data Has Changed in Your AngularJS-Powered Form

So here's the scenario:  you've got a form enabled with AngularJS.  The form is populated with data from a data model object retrieved from a REST call.  You need to know at a certain point (perhaps at the end of every user action, or perhaps at the moment of submission) whether the form data is different from when it was originally retrieved.  How would you do that?

If you wrap your form elements within a set of HTML form tags and name the form, Angular automatically (via the ngFormController) monitors the overall state of the form and provides some status flags, one of which is the $dirty property.  Problem solved, right?  Well, not quite.

Take a look at this example form I've set up with AngularJS 1.2.6 and Bootstrap 3:

https://bcswartz.github.io/AngularJS-formDataChangeDetection-demo/index.html#/demo1/1

I threw a bit of a twist into my form.  The tasks at the end of the form (stored as an array of objects in the main project object model) can be reordered by dragging:  that functionality is accomplished with the addition of jQuery, jQuery UI, and the Angular UI Sortable project.  As you might guess, if you did nothing to the form but reorder the tasks, the $dirty status of the form (displayed in the diagnostic area beneath the form) would remain "false":  that status evaluates the data in the form fields but doesn't recognize other changes to the underlying data model.

But even if you didn't have a UI element like that in your form, $dirty isn't going to tell you what you need to know.  Go into the form and change "Form" in the Name field to "Forms".  Note the $dirty status changes to "true".  Now change it back to "Form."  $dirty still evaluates to "true".  That's because $dirty isn't evaluating differences between the original data and the new data, it's just tracking interactions.  Any form field the user interacts with is considered to be dirty, and that propagates up to the form.

So what you need then is a means of recording the original state of the data when the page/form is loaded and a means to compare that original data with the current model data.

Fortunately, Angular provides two convenience functions that can answer those needs.  The angular.copy() function can create a clone of an object, while angular.equals() will do a deep comparison between two objects (comparing all property names and values, rather than telling you if one object is a reference/pointer to the other).

Here's another version of that same form (open up the page but don't do anything with it yet):

https://bcswartz.github.io/AngularJS-formDataChangeDetection-demo/index.html#/demo1/2

With this version of the form, I added code to the controller function to create an "original" clone object of the starting "project" model object as soon as the data was made available, then used angular.equals() to verify that both objects were identical.  I also added a $watch function that would monitor changes in the "project" model object and re-compare the "original" and "project" objects.

projectResource.get().$promise.then(function(project) {
  $scope.project= project;
  $scope.original= angular.copy(project);
			
  $scope.initialComparison= angular.equals($scope.project,$scope.original);
  $scope.dataHasChanged= angular.copy($scope.initialComparison);
});

...

$scope.$watch('project',function(newValue, oldValue) {
  if(newValue != oldValue) {
    $scope.dataHasChanged= angular.equals($scope.project,$scope.original);
  }
},true);

Now go to the form and reorder the tasks, and you'll see that the first block of diagnostic data indicates that the data has changed, even though $dirty is still false.  Go into the form and change "Form" in the Name field to "Forms," and then back again, and you'll see that the change status updates accordingly.

There's still a catch, though.  Note the second section of diagnostic information, which (for both the "original" and "project" objects) shows the result of adding 1 to the estimated hours, and whether the Meeting Notes property is a null value or an empty string.  They start off the same, but you can probably guess where this is going.  Change the hours from 40 to 41, and then back to 40.  Type a character or two in the Meeting Notes field, and then delete them.

The form inputs set the corresponding model data as strings, so once you edit the data in the inputs, even if you put it back the way it was, anything that wasn't a string before (like a number or a null value) is now a string, and the "project" object no longer equals the "original" object.

The workaround is straightforward enough:  before you create the clone of the original data, parse it and "stringify" the appropriate properties.

Here's the final example:

https://bcswartz.github.io/AngularJS-formDataChangeDetection-demo/index.html#/demo1/3

This time, before using angular.copy(), I processed the "project" data model via a service function, which in turn executed a "stringification" function on each of the relevant properties:

Controller code:

$scope.project= service.stringifyProjectData(project);

 

Service code:

.factory("demo1_service", [function() {
  var stringifyProperty= function(property) {
    if(property== null) {
      return "";
    } else {
      return property.toString();
    }
  };
		
  return {
    stringifyProjectData: function(project) {
      var simpleProperties= ["projectName","description","hours","meetingNotes"];
      for(var p= 0; p < simpleProperties.length; p++) {
        project[simpleProperties[p]]= stringifyProperty(project[simpleProperties[p]]);
      }

      for(var t= 0; t < project.tasks.length; t++) {
        project.tasks[t].description= stringifyProperty(project.tasks[t].description);
      }

      return project;
    }
  };
		
}]);

 

...the code could probably be more elegant, but it gets the job done. So now if you change the Estimated Hours and Meeting Notes fields in the final example but then change them back, the code will correctly denote that "original" and "project" are back in sync after you remove your changes.

Since the demos consist solely of HTML, CSS, and JavaScript, the entirety of the code is there in the GitHub repo that powers the demo for you to look at if you wish.  I created separate controller functions for each demo to make clear what was happening in each one.

Wednesday, April 10, 2013

Overcoming the Cookie Size Limit When Using jQuery DataTables and Saved State

One of the features of the jQuery DataTables plugin is that you can use the configuration option "bStateSave" to store the current view of the table (the sort order, the number of rows displayed, the filter term, etc.) in a cookie, so if the user navigates away from the page then comes back, the table view is the same as how they left it.

However, if your website or web application stores the state of several DataTables, and a user hits all those tables faster than those cookies can expire (their default lifespan is 2 hours but can be customized with the "iCookieDuration" option), the user could hit the browser cookie size limit and start seeing errors on your site.  I ran into this problem today with an application I've been working on.

Fortunately, starting with version 1.9, DataTables provides functions that let developers intercept the process of saving the table state and reloading that state, and the plugin author provides a documentation page on how to use those functions to store the state data in localStorage, which in most browsers allows you to store a few MB of data per site (far more than the roughly 4KB you get per cookie):

http://datatables.net/blog/localStorage_for_state_saving

In my case, I decided that I would store the state data in sessionStorage rather than localStorage, but the principle was the same.
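For reference, here's roughly what the sessionStorage version looks like. I'm working from the legacy 1.9-era option names (fnStateSave/fnStateLoad) used in that blog post, so treat the option names and the storage key as assumptions to verify against your DataTables version:

```javascript
// Persist the table state in sessionStorage instead of a cookie, so it
// survives navigation within the browser session but not across sessions.
$('#myTable').dataTable({
    "bStateSave": true,
    "fnStateSave": function (oSettings, oData) {
        // Serialize the state object; key it per-table to avoid collisions.
        sessionStorage.setItem('DataTables_myTable', JSON.stringify(oData));
    },
    "fnStateLoad": function (oSettings) {
        var state = sessionStorage.getItem('DataTables_myTable');
        return state ? JSON.parse(state) : null;
    }
});
```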

Tuesday, March 5, 2013

Need A CF App for Tracking Errors From Multiple Applications? Check Out BugLogHQ

Last month, during a brief lull in my work projects, I decided to tackle an issue I'd been putting off for awhile:  finding a better way of managing errors caught and reported by my ColdFusion applications.  All of my apps are designed to email me a full report of any errors that reach the global error handler, and while that means I'm immediately notified if there's a problem I need to address, the error message folder in my email account ends up serving as a de-facto error archive.  And now that we're (slowly) moving towards building applications as a team, it makes sense to have the errors stored in a centralized location.

After doing a bit of research, I ended up at the home page for the open-source bug logger BugLogHQ, created by Oscar Arevalo.  Based on the project changelog, it sounded like it might have the features I wanted, but there wasn't a lot of explanatory documentation on the site, and the screenshot images were thumbnails that couldn't be enlarged.  It also didn't support Oracle, our primary database engine at work (a perusal of the Google Groups forum for BugLogHQ confirmed that).

Despite those factors, I decided it was worth trying to add Oracle support to the codebase so I could run it and try it out.  I'm glad I did:  BugLogHQ is a well-designed, robust application for managing errors in a way that will still let me keep track of what's happening with my apps.  Now that I've seen how it operates and what it can do, I'm kind of surprised that I had to do research to come across it:  either it completely slipped under my radar or it needs more publicity/exposure in the community.

Several things I like about BugLogHQ:

  • I like the fact that it's easy to get up and running.  Download the files, put them in a "buglog" directory under the webroot (or use a mapping), run the SQL install script for your database of choice (MySQL, PostgreSQL, MS SQL 2000/2005, MS Access, and now Oracle), update the config file with your datasource name and database engine, and simply point your web browser to that "buglog" directory.  After logging in (which forces you to change the default password for the built-in admin user account), you can verify everything's working by going into the settings area of the app and running the tests, which all mimic the methods your other applications (referred to in BugLog parlance as "client" apps) can use to transmit errors to BugLogHQ.  Simple and straightforward.

  • Just like most email clients have rules for filtering and acting on incoming email messages, BugLogHQ provides some rules for managing the errors that come in.  You can set up multiple "mailAlert" rules that will forward on particular errors (filtered by error type/severity, the source app's hostname or application name, or keywords in the configured error message text) to an email address of your choosing, or you can create a mailRelay rule that relays every error received to an email address.  And the emails sent out by BugLogHQ aren't just notifications about the errors:  they provide the full error details so you don't have to log into BugLogHQ to get more information.  There are also rules for discarding certain errors or for monitoring "heartbeat" transmissions from an app, so if the app is offline for a certain length of time BugLogHQ can report the problem.

  • The methodology BugLog uses to report errors is very well-designed.  If your client app is also a CFML app, you simply instantiate a single CFC (with settings for interacting with the BugLogHQ web service you want to utilize) in either the application scope or a bean factory, then call it with its notification function whenever you want to send information to the BugLogHQ app.  If the client app is reporting an error, you can send BugLogHQ a line of message text, the error struct (the raw "cfcatch" struct), the type/severity of the error, and a struct of any additional information you want as part of the report (like the affected user's ID or name). Here's how I called it from the main.error() controller function in the FW/1 application I used during my testing (where I've instantiated the CFC as a service object):

    public void function error(rc) {
        var extraInfo= {username= session.user.getUsername()};
        variables.bugLogService.notifyService("Error report from TestApp1",request.exception,extraInfo,"INFO");
    }
    

    All of those parameters are optional, so you don't need an error struct if you're reporting something that's a status or informational message rather than an error.  If there's a problem with sending the report to BugLogHQ, it'll fallback to sending an email containing all that information to whatever address you specify so the error doesn't get lost.

  • I also appreciate how BugLogHQ "eats its own dogfood".  If an error occurs in BugLogHQ itself, it'll record the error just like it would for any client app, and if the error occurs during the recording process it'll report the error via email.

  • BugLogHQ utilizes a scheduled task (with a default interval of 2 minutes), referred to in the app as the "BugLog Listener Service", to pick up and process incoming error messages.  When the task fires, the error messages that have been received since the last run of the task are processed and then written to the error database.  If there's a problem with writing to the database (which I understandably encountered as I was tweaking the code to support Oracle), it'll keep the errors in the queue until the next iteration (and you can view the contents of the queue within the BugLogHQ dashboard).  Sometimes you need to restart that service in order for certain changes to take effect; if you somehow forget to reactivate the service, it'll be restarted when it receives a new error report from a client app.

  • If one or more of your client apps are NOT CFML-based web applications, there are library files provided that let you submit errors from PHP or Python apps, or straight from JavaScript.  I haven't tried out those files yet, but we do have a few PHP apps in our shop now that could make use of the PHP option.

  • There are configuration options for tying BugLogHQ to a JIRA instance, providing error summaries via RSS or a digest email, and setting an API key to ensure that only your apps can submit data to BugLogHQ.

Bottom line:  if you don't yet have an error-logging application or other mechanism for storing and managing errors from your application(s), you should take a look at BugLogHQ.  Oscar and his contributors have done a great job with this application.

Wednesday, January 16, 2013

dirtyFields jQuery Plugin Updated with New Options/Functions

I've made some updates to my dirtyFields jQuery plugin.  Here's the rundown:

  • Added two new public functions:
    • getDirtyFieldNames() returns an array of the names of all currently dirty fields.
    • updateFormState() checks the clean/dirty state of all form fields in the specified container.
  • Made two changes to how the CSS class denoting a dirty/changed form is applied:
    • Added a new configuration option ("self") to apply the class to the actual form element.
    • Split the single option for applying a style to a changed text input and select drop-down into two separate options for granular control (if you used the textboxSelectContext option with a previous version of the plugin, you will need to update your code).
  • Added three new configuration options to control plugin behavior:
    • The denoteDirtyFields option controls whether or not the dirty CSS class is applied to the form elements that are dirty/changed.
    • The exclusionClass option specifies the name of the CSS class that, when applied to a form field, will exclude that field from being processed by the plugin.
    • The ignoreCaseClass option specifies the name of the CSS class that, when applied to a form field, will instruct the plugin to do a case-insensitive evaluation of the current and original states of the field.

All of these changes were implemented in response to code suggestions made by a team of developers (listed in the GitHub readme.txt file) who modified the plugin to meet their specific needs on one of their intranet sites.

Friday, October 12, 2012

Getting Out the Vote, ColdFusion-Style

Normally I don't blog about the actual ColdFusion web applications I create at the University of Maryland College Park (UMCP).  Most of them are extranet applications inaccessible to the public and computerize business processes that would take too long to explain.  But my most recent application doesn't fit the normal mold and it's received some attention from the local media because of what it does.

The application is an online application form for registering to vote in the State of Maryland in November designed specifically for our students at the university.  It was commissioned by our undergraduate Student Government Association (SGA) as part of a larger, multi-faceted effort to get out the vote this year, and supported by the university administration and the State Board of Elections (who provided the guidelines for how the data would be collected and sent to them).

The application runs on Adobe ColdFusion 9 and uses FW/1 as its application framework.  The model is comprised of record and service objects, all written in script syntax, with the service objects instantiated via ColdSpring.  Most of the client and server-side validation is handled by ValidateThis, and the application is styled with Bootstrap as well as some custom CSS and a bit of jQuery.

After reading the introduction page, students log into the application using our university's single-sign on solution.  As soon as they're logged in, the system pulls information about the student from our LDAP system.  That information is used to pre-populate certain form fields (name, date of birth, etc.) and it also affects the behavior of the application.  For example, if the LDAP information indicates that the student has a home ("permanent") address in Maryland that is different from their current on-campus / near-campus address, it will show both addresses to the student and let them choose which one they want to use as their residential/voting address. 

But if the student only has one Maryland address on file, they're only presented with one set of address form fields.  And in both cases, the addresses pulled from LDAP are formatted to conform to the way the state recognizes voter street addresses, making it more likely that the submitted address exactly matches an address on file with the state (that part took awhile: the state parses street addresses in a rather unique manner).

In short:  if the LDAP information is accurate and up-to-date, most students can complete the online registration without ever typing a key on the keyboard.

Perhaps the coolest part of the application is the last step prior to the review page.  In that step, the student has to acknowledge that they understand the legal requirements to vote.  They acknowledge this by clicking on a checkbox next to an image of their electronic signature.  When students enroll at UMCP and get their student photo ID, they write their signature on an electronic pad, and the signature is stored in binary format in a database table.  It was the fact that we had these signatures on file that made this project viable, because the new Maryland law that permits online voter registration made the submission of a signature a mandatory requirement.  And thanks to ColdFusion's image manipulation functions, rendering that binary data as an image takes but one line of code.

Admittedly, we did run into one issue with the signatures:  some of the older ones were stored in a proprietary format that couldn't be read outside of the ID card system.  But we found an option in that software that let us export large batches of those signatures as JPEG files, and from there I used ColdFusion's image functions to reduce each image to a reasonable size and write them to a new database table in binary format.

Once the student reaches the review page and submits their voter registration application, the form data is written to a table record and their electronic signature is output as a file to a directory outside the web root.  And every night a scheduled task runs that writes out all the new registration records to a data file in the format specified by the state and uploads that file and the related signatures files to a server run by the state.

And, like most of the applications I build, there is also an administrative interface that certain individuals can log into to view some anonymous registration statistics and to update the list of political parties currently recognized by the state (as they change from year to year).

The system went live October 1st, and as of right now over 1650 students have used it to submit voter registration applications.  The SGA is promoting the application at every opportunity and they're hoping to get to 2,000 registrations before the October 16th deadline.  While that might not seem like a huge number, I believe it would be the highest number of on-campus registrations since the SGA started doing voter registration drives several years ago.

Once we hit the October 16th deadline, we'll shut down the application until after the election has passed, then reopen it.  The goal is to let the system accept registrations year-round, and I already have a few small improvements planned based on user and stakeholder feedback.  I also need to add a tool or two to the administrative interface so the stakeholders can run the system without my help.

But otherwise, I'm pretty happy with how it turned out, and so are the SGA and the other stakeholders.

Sunday, July 29, 2012

Technique For Adding Basic CSRF Protection in a FW/1 Application

Just over two years ago, I wrote a blog post about how I protected my web applications from cross-site request forgery (CSRF) attacks using Model-Glue's event types feature.  Recently, I've started building ColdFusion applications using the Framework One (FW/1) MVC framework, so I needed to come up with a new approach to try and block CSRF attacks. Ideally, I wanted to do it in such a way that once I had my CSRF protection set up, I didn't have to think about it anymore, that I wouldn't have to remember to add one or more lines of code to each action I wanted to safeguard (and risk overlooking an action or two).

First, a refresher on what CSRF is: it's an attack where the hacker is counting on the user still being logged in/authenticated to the web application he's targeting.  The hacker creates a web page that posts data to the target application using a known URL or form submission destination and lures the user to that page.  When the user triggers that page, whatever data that page sends to the target application is accepted because the target application sees it as a valid request from a valid user.  It's a serious vulnerability, which is why ColdFusion 10 comes with new functions designed specifically to help block CSRF and why those functions were included in the CFBackport project (an open-source project that makes some ColdFusion 10 functions available to ColdFusion 8 and 9).

The basic way to prevent CSRF attacks (or at least make them far less effective) is to generate a unique value that is stored in the user's current session, include that value in the URLs or form submissions you want to protect, and then verify that the value submitted with each request matches the value stored in the session before allowing the action to proceed.  So unless the hacker can figure out what that unique value currently is for a particular user, they cannot mimic it in their fake web page, and data from that page won't be accepted.

So first I added code to my controller function in charge of validating user authorization that would create a token variable in the session:

...
session.user= arguments.rc.user;
session.token= CreateUUID();
...


At this point, I could have manually started adding the token variable to URL strings or placing it in hidden form fields in my forms, but again I wanted my CSRF protection to be as "automatic" as possible.

In FW/1, the best practice for creating URLs for hyperlinks and form submissions within the application is to use the buildUrl() function built into the framework.  With buildUrl(), the following line of code:

<form name="addItem" method="post" action="#buildUrl(action='item.add',queryString='foo=2')#" >

 

...gets rendered as (assuming you've stuck with the FW/1 defaults)...

<form name="addItem" method="post" action="index.cfm?action=item.add&foo=2" >


The buildUrl() function lives inside the framework.cfc file that is the heart of FW/1.  Implementing the FW/1 framework in your application is as simple as changing your Application.cfc to extend that framework.cfc file.

I wanted buildUrl() to automatically incorporate my session.token (if it existed) in the resulting URL. So to do that, I created a subclass of framework.cfc called frameworkExt.cfc with the following code:

component extends="framework" {
    public string function buildURL( string action = '', string path = variables.magicBaseURL, any queryString = '', string tokenKey = 'token' ) {

        // Check for the presence of the token key in the session
        if ( StructKeyExists( session, arguments.tokenKey ) ) {
            if ( isStruct( arguments.queryString ) ) {
                arguments.queryString[ arguments.tokenKey ] = session[ arguments.tokenKey ];
            } else if ( arguments.queryString == "" ) {
                // If the action has the url query elements hard-coded (which means queryString should be empty), append to that
                if ( ListLen( arguments.action, "?" ) GT 1 ) {
                    arguments.action &= "&#arguments.tokenKey#=#session[ arguments.tokenKey ]#";
                } else {
                    arguments.queryString = "#arguments.tokenKey#=#session[ arguments.tokenKey ]#";
                }
            } else if ( ListLen( arguments.queryString, "?" ) EQ 1 ) {
                arguments.queryString &= "?#arguments.tokenKey#=#session[ arguments.tokenKey ]#";
            } else {
                arguments.queryString &= "&#arguments.tokenKey#=#session[ arguments.tokenKey ]#";
            }
        }

        return super.buildURL( arguments.action, arguments.path, arguments.queryString );
    }
}


Basically, I created a new version of buildURL() that acts as a kind of pre-processor to the original buildURL() function. If session.token exists, it will get incorporated into the URL just as if I had explicitly submitted it to buildURL() in either the action or queryString parameter, then transformed into the final URL by the original function in the super class.
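The wrap-and-delegate idea isn't CFML-specific; a stripped-down JavaScript sketch of the same shape (both function names here are illustrative, not FW/1 APIs) makes the flow easier to see:

```javascript
// Stand-in for the framework's original URL builder.
function baseBuildUrl(action, queryString) {
  return "index.cfm?action=" + action + (queryString ? "&" + queryString : "");
}

// Wrapper that injects the session token before delegating to the original.
function buildUrlWithToken(session, action, queryString = "", tokenKey = "token") {
  if (typeof session[tokenKey] === "string") {
    const pair = tokenKey + "=" + session[tokenKey];
    queryString = queryString === "" ? pair : queryString + "&" + pair;
  }
  return baseBuildUrl(action, queryString);
}

const session = { token: "abc123" };
console.log(buildUrlWithToken(session, "item.add", "foo=2"));
// "index.cfm?action=item.add&foo=2&token=abc123"
```

The wrapper never builds the final URL itself; it only amends the query string and hands off, which is what keeps it resilient to the details of the underlying builder.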

(Some caveats here...I don't know for certain if this technique would work for every use case of buildURL(). It's a complex function capable of creating URLs in a number of different formats in order to produce SES-style links and point to subsystems. But I would think that since I'm simply adding in another URL variable that it ought to work in every case. Also, any time you write a function that modifies a framework function (even via extension) you run the risk of that framework function changing in a future version, potentially breaking your modification).

Once I changed my Application.cfc to extend my frameworkExt.cfc instead of framework.cfc and reloaded my application, the session token became part of all of the URLs I visited or invoked as an authenticated user.

The final step was to add code that would check the token value submitted in the URL or form against the value of session.token and, if they didn't match, redirect the request to an error or login page.  In Model-Glue, I could add that check as part of a message broadcast in an event type, and then apply that event type to any request where I wanted to enforce CSRF protection.  FW/1 doesn't have event types, but it has something similar:  in an FW/1 controller, you can create before() and after() functions that execute code (you guessed it) before and after the processing of each function in that controller.  So if I wanted to check the submitted token against the session token before every page request that invoked a function in my "widgets" controller, I could put that code in the before() function.

That would have been the conventional way to go about it, but I decided to do it a bit differently in my application.  Since I wanted to add CSRF security to any action undertaken by an authenticated user, and all of my unauthenticated requests were handled by a small set of controllers, I decided to do a conditional check on the session token in the setupRequest() function in Application.cfc (setupRequest() being a function inherited from framework.cfc that runs at the start of each request):

function setupRequest() {

    if(!ListFind("main,auth",getSection())) {
        controller("auth.checkSessionSecurity");
    }

    if(StructKeyExists(session,"user")) {

        if(!StructKeyExists(rc,"token") OR rc.token NEQ session.token) {
            controller("main.notAuthorized");
        }
        rc.user= session.user;
    }
    ...
}


The FW/1 getSection() function returns the section value of the request (in FW/1, the section maps to a controller CFC of the same name), so the first if statement basically states that unless the request is invoking the main or auth controller, the request should be redirected to the checkSessionSecurity() function of my auth.cfc controller, which in this application makes sure the user has authenticated and a user object exists in session.

If a user object does exist in session, that means session.token should exist (since it's added to the session at the same time the user object is). So the nested if statement verifies that a token variable was submitted via URL (and hence appears in FW/1's rc struct) and that its value equals that of session.token, and if either of those conditions fails then the user is kicked out to an error page and prevented from executing the original request.

So with this setup, I get protection from CSRF attacks simply by using buildURL() as normal and putting all of my sensitive controller functions in controllers other than main or auth.

Some Additional Notes

  • The drawback to having URLs in your application that change with every user session is that it can interfere with behavior testing using automation tools like Selenium IDE, because any token value captured during the creation of your testing scripts will be invalid the next time you run them.  One way you can get around this is to add conditional logic to the code that determines if the application is running in your local development environment (your own machine or a test server) and, if so, sets the session token to the same value each and every time (regardless of user session), giving you an unchanging URL in your test environment that you can record in automation scripts.

  • If your application has access to very sensitive personal or financial information, you may want to take your CSRF security a step further.  Some applications rewrite the session token value every few minutes instead of leaving it the same during the user's authenticated session.  You should be able to modify the technique described in this post to do that if you feel it's warranted.

Thursday, May 3, 2012

Simple Technique For Creating Multiple Twitter Bootstrap Popovers

One of the JavaScript/jQuery plugins that comes with Twitter Bootstrap is the Popover plugin, which lets you create windows of content that appear when you hover over or focus on a DOM element.  They behave like tool tips but are larger and better suited to support styled content.

As noted in the documentation, the title and content of an individual popover can be coded in one of three ways:  via HTML attributes within the DOM element that triggers the popover, by assigning text values to the title and content properties when applying popover functionality to the DOM element, or by assigning functions that return markup to those title and content properties.  Here's an example of the last of those options:

$("#pop1").popover(
    {
        title: getPopTitle(),
        content: getPopContent()
    }
);

function getPopTitle() {
    return "Title 1";
}

function getPopContent() {
    return "Content 1";
}

The page I was working on needed to have several popovers, and the popover content included styled text: paragraphs, bolded text, etc. I didn't want to have to put that content into HTML attributes or long string values, nor did I want to code several different invocations of the popover function.

So what I decided to do was to create hidden blocks of HTML to hold the popover content and associate each set of "content" (title and content) with a unique id:

<style>
    .popSourceBlock {
        display:none;
    }
</style>

<div id="pop1_content" class="popSourceBlock">
    <div class="popTitle">
        Title 1
    </div>
    <div class="popContent">
        <p>This is the content for the <strong>first</strong> popover.</p>
    </div>
</div>

<div id="pop2_content" class="popSourceBlock">
    <div class="popTitle">
        Title 2
    </div>
    <div class="popContent">
        <p>This is the content for the <strong>second</strong> popover.</p>
    </div>
</div>


Then I assigned ids to the DOM elements that would trigger the popovers when hovered over and gave them a CSS class of "pop". I opted to use one of the icons provided by Bootstrap as the trigger element:

<i id="pop1" class="icon-question-sign pop"></i>
...
<i id="pop2" class="icon-question-sign pop"></i>


Finally, I used jQuery's each() function to loop through all the DOM elements with a class of "pop", grab the id value and use that to locate the matching content to associate with each popover:

$(".pop").each(function() {
    var $pElem= $(this);
    $pElem.popover(
        {
          title: getPopTitle($pElem.attr("id")),
          content: getPopContent($pElem.attr("id"))
        }
    );
});
				
function getPopTitle(target) {
    return $("#" + target + "_content > div.popTitle").html();
}

function getPopContent(target) {
    return $("#" + target + "_content > div.popContent").html();
}

With this code in place, I can add new popovers by creating additional hidden HTML blocks and DOM elements with the "pop" CSS class and an id that follows the pattern I've established.

Another nice aspect to this is that, down the road, I could put the page with the HTML blocks in a directory where a web designer or even a client with some basic HTML skills could edit the content, and I could call that file into my page programmatically. So long as they didn't add new content blocks, they could tweak the wording of the content without having to call me to do it (assuming I can trust them not to mess up the HTML formatting).

Wednesday, April 4, 2012

Letters In Place of Numbers in Phone Number Links on Mobile Phones

Part of an upcoming project involves displaying office phone numbers on a web page such that, if the page is being viewed on a smartphone, tapping on the number pulls up the phone's dialing program with the number filled in and the user just needs to start the call.

Some of the phone numbers in question use letters in place of numbers (like "1-800-555-CFML" as an example) and I wondered if such numbers would work.

The answer is "probably not," as it's not something supported in the spec (the spec being RFC 3966).  I tried it on my Galaxy Nexus anyway: no dice.  So I'll have to convert any such letters to their numerical equivalents.
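The conversion itself is mechanical, since each letter maps to the digit that carries it on a standard telephone keypad. A small JavaScript sketch (the function name is mine):

```javascript
// Digit-to-letters layout of a standard ITU E.161 telephone keypad.
const KEYPAD = {
  "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
  "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"
};

function lettersToDigits(phoneNumber) {
  return phoneNumber.replace(/[A-Za-z]/g, function (ch) {
    const upper = ch.toUpperCase();
    for (const digit in KEYPAD) {
      if (KEYPAD[digit].indexOf(upper) !== -1) {
        return digit;
      }
    }
    return ch; // leave anything unexpected untouched
  });
}

console.log(lettersToDigits("1-800-555-CFML")); // "1-800-555-2365"
```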

Some other things I learned today:

  • Per the W3C recommendation regarding "click-to-call" links, you should always include the "+" sign in front of the number to support international callers ("+1-800-555-CFML").
  • You can also use "." as a separator instead of a "-", and you can get away with using both types of separators in the same number.

Friday, November 25, 2011

dirtyFields jQuery Plugin Updated To Fix jQuery 1.6 Compatibility Issue

A few days ago, I received an e-mail informing me that there was a bug in my dirtyFields jQuery plugin having to do with the plugin's evaluation of checkbox and radio button state (the plugin lets you track the state of form elements and highlight changes).

At first, I didn't see the problem and didn't understand the need for the code suggestion he provided because the demo file included in the plugin download worked just fine.  But then I realized that the demo was coded to use an older version of jQuery (the plugin is over 2 years old now), and once I updated the demo to use jQuery 1.7 I could see the problem.

I had coded the plugin to use the jQuery attr() function to check the selection state of checkboxes, radio buttons, and the option elements in <select> elements.  Prior to jQuery 1.6, code like myCheckbox.attr("checked") would return a Boolean value based on whether the checkbox was currently checked or not.  But that changed in jQuery 1.6: now (in 1.6 and higher), myCheckbox.attr("checked") only returns the initial value of the "checked" attribute of the element ("checked" rather than "true"), and returns undefined if the checkbox/radio button element does not possess the checked attribute at all.  The same holds true for the "selected" attribute of option elements.  In jQuery 1.6 and higher, the new prop() function is designed to return the current state of these properties (this change to jQuery is in fact described on the API page for prop()).

So I updated the plugin to use the is() function in conjunction with the ":checked" and ":selected" selectors to evaluate checkbox, radio button, and option state, since that method works with both older and newer versions of jQuery.  The new version of the plugin is available via the GitHub link from its dedicated GitHub Pages site.

Monday, October 3, 2011

Two Thoughts On Today's Adobe MAX (2011) Day One Keynote

I'm not going to try and summarize the keynote:  you can either watch the replay at http://www.max.adobe.com/online/ or read a brief synopsis at http://max.adobe.com/news/2011/keynote_day1.html.

My first thought is actually about some news that wasn't part of the keynote, but was announced on the Internet during the keynote:  Adobe's acquisition of PhoneGap.  PhoneGap is an application framework that allows developers to build mobile apps using web technologies (HTML, CSS, and JavaScript) and the PhoneGap API (to access particular device functions like geolocation, the camera, internal storage, etc.), and then package and deploy those apps such that they work on most mobile operating systems (Android, iOS, BlackBerry, and a few others).

What struck me about that news was that it means Adobe now provides two ways for app developers to develop cross-platform apps.  If a developer is already familiar with Flash and Flex development, they can build mobile apps using Flash Builder and Adobe AIR.  If the developer is more skilled with web design and programming, they can build apps using PhoneGap.  In both cases, the end result is an app that runs natively on your device of choice (iOS, Android, whatever).

First of all, it's yet another example of Adobe's willingness to support both Flash and HTML5 as development platforms going forward.  A number of tech analysts contend that Adobe's development of new tools that play nice with HTML5 is a capitulation of some kind to the way the industry is moving.  I don't see it that way: Adobe may be synonymous with Flash, but it's never been all about Flash with them.  They have always supplied the tools that the digital and web creation markets have demanded, just as any sane company would, regardless of what their internal preferences might be.

The other thing about the PhoneGap acquisition is that it will expand the number of developers using Adobe tools and technologies to create mobile apps for both Android and iOS.  And suddenly Adobe is potentially a big player in the smartphone and tablet space, not by releasing its own mobile OS or mobile hardware, but by providing the tools to any developer who wants to write for any and all of these platforms.  To be clear, at the moment there are certain kinds of apps or app functions that a developer would want or need to develop in native OS code, but as the Adobe tools provide ways to overcome those limitations (such as the ability to include native code extensions in Flash/AIR apps), the reasons for writing an app in native OS code (and hence writing a second version for deployment on another platform) are going to become fewer and fewer.

My second observation regarding the keynote concerns the suite of touch-optimized mobile apps - the Adobe Touch Apps - coming in November to Android devices and later to iOS devices.  While some of the apps are geared more towards visual designers, the Proto wireframing app is something I could use in the design phase of my web and mobile application work, and, well, who wouldn't want a touch-optimized version of Photoshop to use to mess around with their photos?

And that's when the thought occurred to me:  here were some apps that I could use to be productive with a tablet.  I'm an application developer who writes code.  I have access to two tablets and I don't use either of them to write code or generate digital content of any kind.  For me, tablets have always been digital consumption devices (something Amazon seems to recognize with their new Kindle Fire tablet) for viewing video, looking at pictures, or reading web pages or books, and kinda superfluous given I can do all that and much more with a laptop (to be clear, I do love my Nook Color as an e-reader, but I rarely use it to surf the web or anything else).

But the apps Adobe showed off today made me feel like I would in fact want to use them on a tablet, because they were designed specifically with tablets and touch-interaction in mind, not as desktop apps shrunk into a mobile form factor.  That to me is a step in the right direction, and it's apps like those that are going to grow the tablet market by making the tablet experience more compelling.