Making of


If you haven’t tested DrawPad yet, you can try it here.

DrawPad started as an examination assignment in the course Mobile Applications during my education, Graphical Design and Web Development. It was actually two assignments: one mobile web site and one native Android app. Since the web is closer to my interests than native application development, I chose to put a lot more energy into the mobile web site. The requirements for approval were basically to use the viewport meta tag, use a few different HTML tags and CSS declarations, handle a touch event and animate something in response to the touch, and finally, store something using local storage. To get a higher grade there were also requirements to use an AJAX call to fetch new content and to use all three touch events (touchstart, touchmove and touchend). I have a passion for web coding, so of course I went for the higher grades.

The Base of the Project

I wanted to make something that used some of the new HTML5 techniques, and with the requirements listed in the assignment I started creating a simple drawing app using HTML5 canvas, with just a blank canvas and a simple black brush. It got bigger and bigger, with a color picker, toolbars and other features. I had decided from the beginning not to use jQuery (don't get me wrong, I love jQuery), because of its file size (26 kB minified and gzipped, compared to 32 kB for all the JavaScript in the final app) and all the unnecessary code for IE fixes and such that I don't need.

I ended up coding my own little library with only the stuff I needed. It uses a syntax very close to jQuery's. Since HTML5 canvas isn't supported in IE yet, I could use a lot of new stuff from HTML5 and CSS3. I also didn't need Sizzle, the selector engine included in jQuery, because the browsers I was aiming to support all implement querySelector() and querySelectorAll(). My library has an animation method with a syntax very similar to jQuery's, but it uses CSS transitions when they are available. That works well in this case, but as John Resig points out, there are drawbacks to using this approach in some situations. My small library consists of basic functionality like event binding and style related stuff. Other, more specific things like the color picker and sliders were added as plugins to this library.
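The choice between CSS transitions and a JavaScript fallback comes down to a simple feature test. Here is a minimal sketch of how such a test can look; this is my illustration, not the library's actual code, and the property list is an assumption:

```javascript
// Sketch of a CSS transition feature test (not the library's actual code).
// If none of these properties exist on the style object, the animate
// method would fall back to a JavaScript-driven animation.
function supportsTransition( style ) {
    return "transition" in style ||
           "WebkitTransition" in style ||
           "MozTransition" in style ||
           "OTransition" in style;
}

// In the browser you would call it with an element's style object:
// supportsTransition( document.body.style )
```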


When I started the project I just began coding and trying out different things, and it grew larger and larger. I am a big fan of structure, so I separated the project into different files with a nice folder structure. I had wanted to try Git for a long time, but never had the time to dig into it. So once I had hit the deadline and could work on it more for my own sake than as a school project, I got even more structured by learning Git. Git is a distributed version control system which allows for a very good workflow. I can now work in different branches to try out features, develop more efficiently and keep track of different release versions. It'll be hard to start a new project without Git now that I've tried it.

I develop everything on my 2009 MacBook Pro 15″ 2.8 GHz, using Coda as my editor and Firefox as my dev browser, since I can't live without Firebug. I'm now also using Gity to get a nicer GUI than just typing commands in the terminal. I still do some things from the terminal, and having used Linux before I like working there, but for things I do all the time, like Git, a nice GUI feels better.

For the phone specific things I used my iPhone 3GS and the iOS Simulator. Being able to test on a real device was really important; just testing in the simulator doesn't give you the right feel for the flow. I have also tried it on an iPhone 4, but because of the higher resolution and pixel density there is more to improve before it looks good there. This will be the difficult part, because the simulator does not behave the same way the iPhone 4 does. I tried it on an iPad once, and it seems to work just fine there. As I will explain further down, Android has some difficulty with multitouch. This, combined with the fact that I don't own an Android device, made me focus less on how it performs on Android and more on iOS devices and desktop browsers. Future releases will probably have better support for Android.

Dealing with Touch

The app was supposed to be made specifically for modern mobile devices, so touch was something I had to deal with. I also wanted it to work well in a desktop browser, so for each event I wanted to handle, I had to bind both the mouse event and the touch event. For this I borrowed the syntax used in jQuery, where you can bind to multiple events like this:

$("#element").bind( "touchstart mousedown", function(){
    // Code
});

This was the easy part. A trickier part was the fact that mouse events are emulated on touch devices, so all mouse events had to be cancelled there. That made me create a simple method for checking if it is a touch device:

isTouchDevice: function( e ) {
    if( this.touch_enabled === undefined || this.touch_enabled === false )
        this.touch_enabled = !!e.touches;
    return this.touch_enabled;
},

Then, in each event handler, you check the return value of isTouchDevice(), and if it's true and the current event is a mouse event, you cancel the event handler.
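That check can be sketched like this. The surrounding object and the shouldCancel helper are my own illustration, not the app's actual code; only the isTouchDevice method comes from the post:

```javascript
// Sketch: how handlers can use isTouchDevice() to cancel emulated mouse
// events on touch devices (the shouldCancel helper is hypothetical).
var device = {
    touch_enabled: undefined,
    isTouchDevice: function( e ) {
        if( this.touch_enabled === undefined || this.touch_enabled === false )
            this.touch_enabled = !!e.touches;
        return this.touch_enabled;
    },
    // True when a handler should bail out early
    shouldCancel: function( e ) {
        return this.isTouchDevice( e ) && e.type.indexOf( "mouse" ) === 0;
    }
};

// A touch event marks the device as touch-enabled,
// so the emulated mouse event that follows is cancelled:
device.shouldCancel( { type: "touchstart", touches: [] } ); // false
device.shouldCancel( { type: "mousedown" } );               // true
```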

Click events were another problem. Click is also a mouse event and is emulated on touch devices, but it is really slow and triggers a while after the actual tap. To solve this, you have to use touch events. If you use only touchstart, the event handler triggers before you want it to. Touchend would result in bugs where the user could start dragging from another element, release the tap on the current element, and trigger the event handler even though they didn't intend to tap it. So to really emulate the click event in a fast way, I had to create a custom event that I called touchclick, which adds handlers for touchstart/touchend and mousedown/mouseup and also checks that the pointer position is still inside the element when released. This helped me a lot when creating all the buttons in the app. With just the following, I could target both taps and clicks and get good responsiveness for both:

$("#element").bind( "touchclick", function(){
    // Code
});
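The core of the touchclick idea can be sketched with plain functions. The names and shapes below are my own for illustration, not the app's actual implementation: remember that the interaction started on the element, and only fire the callback if the pointer is also released inside it.

```javascript
// Minimal sketch of the touchclick logic (hypothetical names):
// fire only when press and release both happen inside the element.
function makeTouchclick( rect, callback ) {
    var active = false;
    return {
        // Bound to touchstart/mousedown on the element
        start: function() { active = true; },
        // Bound to touchend/mouseup, with the release position
        end: function( x, y ) {
            var inside = x >= rect.left && x < rect.left + rect.width &&
                         y >= rect.top  && y < rect.top + rect.height;
            if( active && inside ) callback();
            active = false;
        }
    };
}

// Usage: a 100x50 pixel button at position (10, 10)
var taps = 0,
    button = makeTouchclick( { left: 10, top: 10, width: 100, height: 50 },
        function(){ taps++; } );
button.start();
button.end( 50, 30 );  // released inside the button – fires
button.start();
button.end( 300, 30 ); // dragged outside before release – ignored
```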

Drawing Process

I thought a lot about how to actually do the drawing, testing some things myself and googling around for examples from other apps. I found a simple demo by Orion Elenzil that inspired me. I did my own implementation of it and it turned out to be a really good solution. The app draws a circle for each position the pointer triggers an event at. These events don't fire at even intervals, and the distance between two positions can sometimes be quite large, which would normally render circles with gaps in between. To prevent this I calculate the distance in pixels and run a loop that many times, drawing a circle at each step along the way. This way, all the gaps are filled in and everything looks good. There is also a density setting (ranging from 0 to 1), which determines how close together the circles are drawn; if the density is set to 1, one new circle is drawn for each pixel. The code below is taken from my app.

In the beginning I had a big issue with the opacity slider. Since I draw a lot of circles on top of each other, just setting the opacity for each circle turned out to be a problem. I was on a deadline because it was a school project, so this wasn't top priority to fix; I had lots of other functionality left to implement. But now this is all fixed and works really well. I ended up introducing a second canvas element to hold the image. The user draws on the first canvas (which is on top, with a transparent background), and when the pointer is released, the new image data is copied to the second canvas containing the whole image. Finally, the top canvas is cleared.

// Calculate positions
var pos = $.getPos( e, i, this.canvas ),
	last = pointer.last,
	dist = {
		x: pos.x - last.x,
		y: pos.y - last.y
	},
	x = last.x,
	y = last.y,
	steps, step;
dist.d = Math.sqrt(dist.x*dist.x + dist.y*dist.y);
steps = dist.d*this.settings.density;
step = {
	x: dist.x * ( 1 / steps ),
	y: dist.y * ( 1 / steps )
};

// Draw several times to fill in gaps between event triggerings
for(var n = 0; n < steps; n++){
	this.context.fillStyle = this.settings.fillStyle;
	// Draw a circle at the current position
	// (the brush radius property name is assumed here)
	this.context.beginPath();
	this.context.arc( x, y, this.settings.radius, 0, Math.PI * 2, true );
	this.context.fill();
	// Increment the x and y position for the next iteration
	x += step.x;
	y += step.y;
}
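The two-canvas merge described above can be sketched as a small function. The function and argument names here are my own, not the app's actual code:

```javascript
// Sketch of the two-canvas idea (hypothetical names): on pointer release,
// copy the stroke from the transparent top canvas onto the permanent
// canvas, then clear the top canvas for the next stroke.
function mergeStroke( mainContext, topContext, topCanvas ) {
    // Copy the finished stroke onto the canvas holding the whole image
    mainContext.drawImage( topCanvas, 0, 0 );
    // Clear the top canvas so it's ready for the next stroke
    topContext.clearRect( 0, 0, topCanvas.width, topCanvas.height );
}
```

In the browser this would be wired to the touchend/mouseup handler.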

Making a Simple User Interface

I had lots of things I wanted to add to the interface, but making room for everything was not easy on the pretty small screens of mobile devices. I needed buttons for going to the app's home screen, undo/redo, saving to both local storage and cloud storage, sliders for brush properties and ways to change the color. All of this, while not covering too much of the drawing surface.

I had a look at some other native apps, like Adobe Ideas and Whiteboard. While developing the interface I only looked at one screenshot of the draw mode in Ideas; I didn't download it and look at the whole app. So I was pretty surprised when I actually downloaded Ideas and saw that I had designed my home screen very much like theirs, even though I had never seen it before. Guess that makes it a good design then? Anyway. I liked that you could paint while the toolbars were visible in Ideas, but I thought it would be better to cut off space on the longest side instead of the shortest, as Ideas does. I also liked the idea of tapping with two fingers to hide the toolbars, as in Whiteboard. But I also wanted to support multitouch, to be able to draw with multiple fingers at the same time. This turned out to be difficult, because the two-finger tap is sometimes recognized as two separate drawing pointers. Also, Android doesn't yet support multitouch in the browser, which makes it impossible to hide the toolbars on Android.

The interface also had to flow well. Going to the home screen without having saved the image should pop up a question asking if you want to save it; if you do, the save menu with login form and registration should pop up, and when that is done, the image should be saved and the user taken back to the home screen. By creating functions for the different tasks, like doLogin() and saveToCloud(), those could be accessed easily from all the scenarios, creating a fluid interface.
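The flow can be sketched by chaining such task functions with callbacks. These are stubs for illustration only; the real doLogin() and saveToCloud() talk to the server:

```javascript
// Hypothetical stubs showing how the tasks chain together
// (the real functions do server communication via AJAX).
function doLogin( app, done ) { app.loggedIn = true; done(); }
function saveToCloud( app, done ) { app.saved = true; done(); }
function goHome( app ) { app.screen = "home"; }

// The "save before leaving" scenario reuses the same functions:
var app = { loggedIn: false, saved: false, screen: "draw" };
doLogin( app, function(){
    saveToCloud( app, function(){
        goHome( app );
    });
});
```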

Even though I'm pretty happy with the interface right now, I'm going to improve it a lot in the next release: making it easier to understand and more responsive, and making it work better on Android.

Creating a Color Picker

I had no idea how color pickers worked before I made this, but I found a great article written by Mark Kahn that explains how they work and how to create one. I used techniques from it and made my own color picker. It is basically two images that the user sees. The first one is a square with a white to transparent horizontal gradient and a transparent to black vertical gradient. The second image is a bar of the color spectrum, which is easy to create with the gradient tool in a graphics app like Photoshop. (You could actually create both with CSS gradients too.) Then you read the position of the pointer inside these areas and calculate the color values. Here is a code example that calculates the color value of a position inside the spectrum:

// parameter x is the x position in the spectrum
var getColorspectrumValue = function( x ) {
		// Section width (300px is the spectrum width)
	var sW = 300 / 6,
		// Section number, 1-6
		sN = Math.ceil( x / sW ),
		// Position in current section
		sP = x % sW,
		// Color value used when value should increase
		vI = ( 255 / sW ) * sP,
		// Color value used when value should decrease
		vD = ( 255 - vI ),
		// Red channel
		r = Math.round(
			x < sW ? 255 : // First section - full red
				x < sW * 2 ? vD : // Second section - decreasing red
					x < sW * 4 ? 0 : // Third and fourth section - no red
						x < sW * 5 ? vI : // Fifth section - increasing red
							255 // Sixth section - full red
		),
		// Green channel
		g = Math.round(
			x < sW ? vI : // First section - increasing green
				x < sW * 3 ? 255 : // Second and third section - full green
					x < sW * 4 ? vD : // Fourth section - decreasing green
						0 // Fifth and sixth section - no green
		),
		// Blue channel
		b = Math.round(
			x < sW * 2 ? 0 : // First and second section - no blue
				x < sW * 3 ? vI : // Third section - increasing blue
					x < sW * 5 ? 255 : // Fourth and fifth section - full blue
						vD // Sixth section - decreasing blue
		);
	return { red: r, green: g, blue: b };
};

The following code is an example of how to get the color value of a specific position inside the color space:

// x and y are pointer positions with origin at top left corner of the color space
// baseColor is an object containing the current color chosen from the spectrum
var getColorspaceValue = function( x, y, baseColor ){

    // Convert positions from the colorspace size (300px wide) to the color value size
    x = x / 300 * 255;
    y = y / 300 * 255;

        // White (horizontal, saturation) / black (vertical, lightness)
    var white = x / 255,
        black = 255 - y,
        // Percentages of base color
        red_percent = 1 - baseColor.red / 255,
        green_percent = 1 - baseColor.green / 255,
        blue_percent = 1 - baseColor.blue / 255,
        // Calculate new values
        r = Math.round( ( 1 - red_percent * white ) * black ),
        g = Math.round( ( 1 - green_percent * white ) * black ),
        b = Math.round( ( 1 - blue_percent * white ) * black );

    return { red: r, green: g, blue: b };
};

Then you bind event handlers to the mousemove/touchmove events of the color space and spectrum, and inside them call these methods with the current pointer position to get the color value back.
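To actually paint with the result, the { red, green, blue } object has to become a CSS color string for the canvas fillStyle. A small hypothetical helper (not from the app's code):

```javascript
// Hypothetical helper: convert the { red, green, blue } object returned
// by the picker functions into a CSS color string for the canvas fillStyle.
function toCSSColor( color ) {
    return "rgb(" + color.red + "," + color.green + "," + color.blue + ")";
}

// e.g. the far left of the spectrum (x = 0) is pure red:
// toCSSColor( getColorspectrumValue( 0 ) ) would give "rgb(255,0,0)"
```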

The Storage Choices

The first kind of storage is the history states used for undo/redo. These are currently saved in a regular array as the raw image data of the canvas. For better performance (less memory usage), the next version will store the base64 encoded data url instead. The undo/redo buttons simply take the stored image data from the new position in the history states and put that data on the canvas. This storage is of course only temporary, until the next reload of the page.
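The history logic itself is just an array plus a position index. Here is a sketch with my own names, not the app's actual code; the stored states stand in for the canvas image data:

```javascript
// Sketch of undo/redo history (hypothetical names): states live in an
// array, and undo/redo only move a position index back and forth.
function History() {
    this.states = [];
    this.position = -1;
}
History.prototype.add = function( state ) {
    // Adding a new state drops any redo states beyond the current position
    this.states.splice( this.position + 1 );
    this.states.push( state );
    this.position++;
};
History.prototype.undo = function() {
    if( this.position > 0 ) this.position--;
    return this.states[ this.position ];
};
History.prototype.redo = function() {
    if( this.position < this.states.length - 1 ) this.position++;
    return this.states[ this.position ];
};

// Usage: each entry would be the canvas image data
var h = new History();
h.add( "blank" );
h.add( "one stroke" );
h.undo(); // returns "blank"
h.redo(); // returns "one stroke"
```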

The second kind of storage is local storage in the browser, made possible by window.localStorage. This is the easiest way to save the image between visits, but it is also only available in the current browser on the current device, and any clearing of the cache will delete the images, which makes this storage a bit fragile. When I implemented this I went one step further, as I should have done with the history states too, and saved the base64 encoded data url.
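The save itself is small: serialize the canvas to a data url and store it under a key. The function names and the storage key below are my own assumptions, not the app's code:

```javascript
// Sketch of the local storage save (hypothetical names and key):
// the canvas is serialized to a base64 encoded data url and stored.
function saveImageLocally( storage, canvas ) {
    storage.setItem( "drawpad-image", canvas.toDataURL( "image/png" ) );
}
function loadImageLocally( storage ) {
    // Returns null when nothing has been saved yet
    return storage.getItem( "drawpad-image" );
}

// In the browser: saveImageLocally( window.localStorage, myCanvas )
```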

The third kind of storage is cloud storage on the app server. This is a bit more work for the end user, since registration is required, but in return the image is always available through a login on any browser or device, and it doesn't disappear when the cache is cleared. The registration process is very simple, only email and password are required, but it's psychologically more work and might scare the user off from saving this way. All the server interaction is made through AJAX with a PHP backend, which makes it feel a lot like a native app since no page reloads are required.
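A cloud save along these lines could look like the sketch below. The endpoint name, the parameter name and the function names are all assumptions for illustration; the post only says the backend is PHP reached via AJAX:

```javascript
// Hypothetical sketch of the cloud save: POST the base64 data url to a
// PHP backend with XMLHttpRequest (endpoint and parameter names assumed).
function encodeImageBody( dataUrl ) {
    return "image=" + encodeURIComponent( dataUrl );
}

function saveImageToCloud( dataUrl, callback ) {
    var xhr = new XMLHttpRequest();
    xhr.open( "POST", "save.php", true );
    xhr.setRequestHeader( "Content-Type", "application/x-www-form-urlencoded" );
    xhr.onreadystatechange = function(){
        // Report success once the request has completed
        if( xhr.readyState === 4 ) callback( xhr.status === 200 );
    };
    xhr.send( encodeImageBody( dataUrl ) );
}
```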

Offline Support

Even though this is a web app, nothing really forces it to have an active Internet connection, except for the cloud storage feature. If you are somewhere without an Internet connection and still want to draw something, you can draw it, save it to the browser storage and later, when you have access to the Internet again, load the image and save it to the cloud.

By default, the browser tries to download the page from the server every time it is accessed. With the HTML5 Offline Application Cache, you can create a cache manifest where you specify which files the browser should cache for offline usage, and which ones need a network connection. The file can be named anything, but it is usually called cache.manifest and looks like this:

CACHE MANIFEST
# rev 1

CACHE:
# (example file names)
index.html
css/style.css
js/drawpad.js
img/logo.png

NETWORK:
save.php
The first line must be CACHE MANIFEST. In this example there are two sections, CACHE and NETWORK. If you don't have any network requests, you can skip the headings and just list the files to be cached right below CACHE MANIFEST. The manifest is then referenced from the page itself through the manifest attribute on the html element. Usually there is also a comment like in this example, specifying a version of the manifest. This can be anything: a version number, a date or something else. It is used to make the browser aware of changes. If you were to change the logo but just replace the file and keep the same filename, nothing in the cache manifest has changed, so the browser doesn't update the cached logo.

When the browser loads the page, it goes through a number of different states. In one of those states it checks if the manifest has been updated. If it has, the browser downloads all of the resources again and caches them. If it hasn't, the browser uses the cached resources. So whenever a file is changed, you need to make a change to the manifest. An easy way of doing this is to use a comment with a version number that you increment for each change.


Creating this app was definitely a challenge, but a really fun one. I learned a lot, both about how to create different modules and about how to write better JavaScript code. The first release is out and I will continue development to make it even better. All of the code is available on the project page on GitHub.

Please try the app and leave a response below if you want.
