Using Git to Deploy Your Site

If you’re using git for your source control but still using FTP to publish updates to your website, you’re doing it wrong!

TL;DR

You can set up git so that you can use a single command to deploy your code to live. This post takes you through the typical steps to set this up.

About git

Git is a distributed version control system. It’s the “distributed” part that makes it useful in this situation. Git allows you to specify multiple remote repositories at addressable locations that it can sync with. To be able to use git to deploy your site, one of those remote repositories must be on the live server.

Some assumptions

In order for this to work, you will need:

  • Code for a website under version control in git
  • An addressable web server to which you want to deploy the website
  • SSH access to the web server
  • Git available on the web server

I’ll go through some other steps that require you to be working on a *nix computer, but they are not essential and are really for convenience. You should be able to do this from a Windows machine without a problem.

Making life easier

This section is optional, but it reduces hassle and increases security so why not?

Use an SSH config file. You can set up aliases, default parameters and security options in ~/.ssh/config. It’s a real time saver - it basically means you don’t have to provide all the options and switches every time you sign in via SSH to your server. My host provides a jailed shell on port 2222 and for security I like to use a public key for authentication so my config file looks a little like this:

Host mydomain.com
  Port 2222
  PreferredAuthentications publickey

Set up public key authentication. This increases security on your server by disallowing password access, and installing a key on your local computer means you never have to authenticate manually either (unless you choose to password-protect your key). I won’t go through it here but if you’re interested you can learn how to do it.

Server-side

Sign in via SSH to your server. You’ll start in your home directory.

Create a folder and initialise a new git repo inside it. You can call it what you like; I gave mine the same name as the domain.

$ mkdir mydomain.com
$ cd mydomain.com/
$ git init
Initialized empty Git repository in /home/username/mydomain.com/

A git repository and the filesystem containing the checked-out files from that repository are two different things in different locations on disk. When you push from your computer to a remote repository you will only alter what’s in the remote repository; the checked out files on the server remain unchanged. This is a problem since you need the checked-out files to be updated as well so they can be read by the web server software and served to people visiting your site. This is where git hooks come in handy.

Git hooks are scripts that are automatically run when certain events occur in git. In this case we want a script that checks out the latest version to run after the repo receives an update. The hook’s name is post-receive.

$ cd .git/hooks/
$ vim post-receive

post-receive does not exist by default, so vim should open with a blank file. For the benefit of those not fluent in vim-ese, I’ll give blow-by-blow instructions.

  • Type ‘i’ to enter Insert Mode
  • Paste in the following code:
#!/bin/sh
GIT_WORK_TREE=../ git checkout -f
  • Press Escape to exit Insert Mode
  • Type ‘:’ followed by ‘w’, ‘q’, and Enter, which should (w)rite the file and (q)uit.

The script we just created simply performs a forced checkout, which ensures the state of the checked-out files matches the latest version of the repository (i.e. the version just received).

Now it needs to be made executable:

$ chmod a+x post-receive

Finally, we need to configure the repository to accept incoming updates without complaining. By default it’s set up to grumble because without the git hook we just set up, the working tree (checked-out files) and the repository would be out of sync. With the git hook in place this becomes a non-issue.

$ git config receive.denyCurrentBranch ignore

Connecting the dots

Now that the server is ready to receive updates, we need to tell the local repository about it.

On your local computer:

$ git remote add live username@mydomain.com:mydomain.com

This registers the repository on your server as a ‘remote’. You can now refer to that repository as live.

Doing it live

All that remains now is to push your site live. From your local computer:

$ git push live master

That’s it. Whenever you want to publish, from now on that’s all you need to type.

One more thing

You may notice that your site isn’t showing up just yet. Log back on to your server. If you check the mydomain.com folder you’ll see your latest changes are there, but we need to point the folder that currently serves your website at the mydomain.com folder instead.

There will be a folder in your home directory called htdocs, www or public_html containing all the files that currently form your website. I’ll assume it’s called public_html. Make sure there’s nothing in that folder you want to keep that isn’t backed up - it’s about to be deleted!

$ rm -rf public_html/
$ ln -s mydomain.com public_html

This creates a symbolic link from public_html to your repository. Reload the page in your browser and you should see the version you just pushed from git.

Make a small change and try git push live master again. As soon as it has finished you should see the change on your site. Deployments, no matter how large or small, will never be a hassle again.

A note on security

If you navigate to http://mydomain.com/.git you may notice that the contents of your git repository are available for the world to see. Some reasons why you might not want this:

  • Your email address is probably associated with all your commits
  • Anyone could download your repository and all of its history
  • Your code will be public and it would be easier to find security holes in it

I get around this by keeping all the code to be served in a subfolder of the repository, htdocs, and linking to that instead of the main repository folder. Alternatively you could use .htaccess (if you’re running Apache) to block access. Just copy these lines into your .htaccess file:

RewriteEngine On
RewriteRule ^(.*/)?\.git - [F,L]

In Defence of the BBC and Its Clock

The BBC has said that due to a single person complaining, they will be removing the clock from the home page. The news has been reported fairly widely, particularly through technology channels, and people have been very vocal about their feelings on one particular point:

  • How could it possibly take 100 days to fix the problem?

Having worked for the BBC for 4 years I would agree with that estimate, and I’ll explain why. First, I need to explain a little about the BBC and some of the history surrounding the clock.

History

The BBC clock has been around for a long time. It has had many incarnations and is an integral part of the BBC’s history. It first appeared on BBC1 as an actual clock with a camera pointing at it, and then digital representations in the same style appeared. During a major redesign of the BBC home page it was lovingly recreated in Flash as an homage to this iconic timepiece.

A subsequent redesign removed the clock from the home page but due to an outcry from regular visitors to the site it was reinstated, and rewritten to use the modern canvas HTML element with Flash as a fallback to give the best available experience.

The Trust

The BBC Trust is an independent body that governs the BBC and is intended to act in the best interests of the licence fee payers (for the benefit of international readers, everyone in the UK who owns a TV must pay a yearly fee which mainly funds the output of the BBC, which is broadcast ad-free). This basically means that the Trust have the final say in matters such as this. A single complaint such as this can outweigh the opinions of many if the Trust feel that the BBC is not upholding its core values.

One of the core values of the BBC is to never knowingly broadcast inaccurate information. As the clock is on the BBC home page, it looks like that information comes from the BBC, and therefore should be accurate. As the time displayed actually comes from the user’s computer it has the potential to be inaccurate and therefore make the BBC home page inaccurate, and that is unacceptable.

Breaking down the problem

As the clock has been criticised for its accuracy, a fix that makes it anything less than perfect for every single user would be unacceptable. Here are the facts:

  1. As already established, you can’t trust the user’s clock.
  2. Since the site is not restricted to people within a single time zone, the clock will have to display the correct time according to the time zone the user is in.
  3. As we can’t trust any time information on the user’s computer (point 1), we can’t rely on their computer to provide a time zone automatically.
  4. Getting a user’s location by IP address is never 100% accurate, particularly when using mobile broadband, so we can’t use that.
  5. This leaves us with only one option (since we can’t rely on every device having GPS built in) - the user must manually provide their time zone, which can be stored in a cookie or a similar client-side storage mechanism.
  6. To make the clock accurate if they move time zones, or if they have not yet provided a time zone, the time zone must be displayed alongside the clock.

Now that the facts are clearer the problem can be broken down.

User Experience & Design:

  • How should the time zone be displayed next to the clock?
  • How does the user know they can change it?
  • How does the interaction to change the time zone work / look?
  • Does this all fit within the BBC’s UX guidelines?

These are all decisions that will need consideration and questions that will need answering before at least some of the development starts. Design documents will be created in order to accurately pass on the outcome of these decisions to the developers.

UX&D will also perform some user testing to make sure the solution is easy to use and understandable. There may also be accessibility testing to ensure disabled users can still get the most out of the site, including use of the clock.

Development:

  • How do we build this so that the home page retains its current high level of cacheability, which is necessary for the load the BBC experiences?
  • How do we deliver the time accurately to the end-user?
  • How do we minimise load and rendering times on the home page?

When you have a site with as much traffic as the BBC there is a strong need for a good caching strategy, both to avoid overloading the servers and to mitigate downtime. When it comes to delivering the time, one option would be to embed it in the body of the page, but this would mean that the cache would have to be regenerated every second, which is inefficient. There is the possibility of using post-cache/edge includes to insert the time with placeholders, but this would run on every request and be even more inefficient.

Logically then, you would need a separate AJAX call to get the current time. This could result in a short period before the time had loaded when the clock displayed no time at all, which would look bad. This would have to be referred back to UX&D for approval.
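As a rough illustration of what such an AJAX approach involves, the page could time the request itself and assume the response spent about half the round trip in transit. This is only a sketch of the idea; the /time endpoint, the field name and the function names are all hypothetical:

```javascript
// Hypothetical latency compensation: the server reports its time in epoch
// milliseconds, and we assume the response travelled for roughly half of
// the measured round trip.
function adjustedServerTime(serverTime, roundTrip) {
    return serverTime + roundTrip / 2;
}

// In the page it might be used something like this (endpoint is made up):
// var t0 = Date.now();
// fetch('/time').then(function (r) { return r.json(); }).then(function (data) {
//     var now = adjustedServerTime(data.epochMillis, Date.now() - t0);
//     // ...render `now` into the clock...
// });
```

Even this simple version shows why the problem is hard: a single slow request skews the clock, which is exactly the inaccuracy the Trust objected to.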

To serve the accurate time the BBC would need to set up one or more time servers capable of handling the constant load of every single person who visited the home page. Not only would the servers need to maintain perfectly accurate time but they would need to deliver it to the user in a fraction of a second wherever they were in the world.

Because any request that took longer than 1 second would make the clock inaccurate there would have to be a series of servers at various locations around the world capable between them of delivering a latency lower than 1 second to any country. This would take some setting up, particularly if you want to defend against DDoS attacks.

100 days?

Given the complexity of the task, the number of different teams and disciplines involved, the infrastructure and the attention to detail required, perhaps 100 days doesn’t seem that mad after all. I certainly don’t think so.

Update 07/06/2013:

I’ve been made aware of NTP, which provides a mechanism for transmitting an accurate time across a network. However, [according to the FAQ it can take up to 5 minutes to determine the time accurately](http://www.ntp.org/ntpfaq/NTP-s-algo.htm). This is because it has to make multiple requests to sanity-check the result and ensure latency isn’t affecting the time.

I doubt many people will remain on the home page for 5 minutes and since the page would not have access to set the time of the user’s computer clock the process would have to start from scratch each time the page loaded.
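For reference, the calculation NTP performs on each request boils down to a simple offset formula over four timestamps. This sketch is illustrative only (the variable names are mine, and real NTP repeats this many times to filter out latency spikes):

```javascript
// NTP clock-offset estimate from one request/response exchange.
// t0: client transmit time, t1: server receive time,
// t2: server transmit time, t3: client receive time.
// Assumes the outbound and return trips took equal time.
function ntpOffset(t0, t1, t2, t3) {
    return ((t1 - t0) + (t2 - t3)) / 2;
}
```

For example, if the client's clock is 50 units behind the server and each network leg takes 10 units, an exchange might produce t0=0, t1=60, t2=61, t3=21, and the formula recovers an offset of 50.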

Javascript Named Arguments

What’s In A Name?

Javascript is an extremely versatile language; however, it is missing various useful features that other languages have. In many cases these features can be added with a shim or a polyfill, if required. However, there is a set of missing features that are low-level enough that a shim isn’t possible. They all surround function arguments:

  • Named arguments
  • Default argument values
  • Argument type enforcement

Default argument values are actually a feature proposed in ES6, but as we aren’t quite there yet with ES6 I decided to patch the functionality myself in the best way I could. I’ve made a little library and it’s called Names.js. Please fork it and help me improve it!

Names.js

As it is impossible to alter the fundamental way that functions work without making changes to the Javascript engine, Names.js is written as an augmentation of the Function object, adding a couple of instance methods to the prototype and a static method to the main object. The static method is this:

Javascript
var myFunction = Function.createNamed({
    args: [
        ['arg1', 'string', 'defaultValue'],
        ['arg2', MyClass, someVar]
    ],
    method: function(arg1, arg2) {
        // ...
    }
});

It’s a bit more verbose than simply

Javascript
var myFunction = function(arg1, arg2) {
    // ...
}

however it offers the following benefits:

  • You don’t need to provide an argument each time if you’re happy with the default value:
Javascript
myFunction.applyNamed(null, { arg1: 'Hello world' });
  • You know what each argument means (this advantage is a lot clearer when the arguments are named well and the values you are assigning them are all booleans and integers).
  • You can provide the arguments in whichever order you like:
Javascript
myFunction.applyNamed(null, {
    arg2: fooBar,
    arg1: 'Hello world'
});
  • The function will throw an error if you pass it an incorrectly typed value:
Javascript
myFunction.applyNamed(null, { arg1: 1234 }); // throws, as 1234 is not a 'string'

applyNamed works just like Javascript’s native apply method, taking the scope as the first argument and the arguments to apply as the second.
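Conceptually, a method like applyNamed only needs to map the named-arguments object onto the declared positional order before delegating to the native apply. Here is a minimal sketch of that idea - not the actual Names.js source, and the names applyNamedSketch and greet are mine:

```javascript
// Map a named-arguments object onto a declared positional order,
// then delegate to the native Function.prototype.apply.
function applyNamedSketch(fn, argNames, scope, named) {
    var positional = argNames.map(function (name) {
        return named[name]; // missing names become undefined (default handling omitted)
    });
    return fn.apply(scope, positional);
}

var greet = function (greeting, name) {
    return greeting + ', ' + name + '!';
};

applyNamedSketch(greet, ['greeting', 'name'], null, { name: 'world', greeting: 'Hello' });
// → 'Hello, world!'
```

The real library layers defaults and type checks on top of this mapping step, but the order-independence of the named call falls out of the map alone.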

Adding validation

In many languages, type checking is mandatory but at the end of the day it’s just a form of validation. In an effort to reduce the usual argument-checking clutter at the beginning of a function I thought it was worth extending the validation functionality to allow for custom validations.

You can either use a regex or a function:

Javascript
myFunction.addValidation({
    arg1: {
        test: /-a-diddly$/ // Check for Flanders
    },
    arg2: {
        test: function(myClass) {
            return myClass.isInitialised;
        },
        required: true
    }
});

The required option on arg2’s validation object means that arg2 is required. Ordinarily if the argument is omitted when calling applyNamed then that validation will not be run. If the argument is omitted but the validation for that argument states that it is required then it will fail instantly.
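The rule described above - skip validation for a missing optional argument, fail instantly for a missing required one - can be sketched as a tiny helper. This is illustrative only, not the Names.js internals:

```javascript
// Apply one validation rule (a regex or a predicate function) to a value.
// A missing value passes unless the rule is marked required.
function runValidation(rule, value) {
    if (value === undefined) {
        return !rule.required; // absent optional args pass; absent required args fail
    }
    if (rule.test instanceof RegExp) {
        return rule.test.test(value);
    }
    return rule.test(value);
}
```

With the Flanders regex from above, runValidation({ test: /-a-diddly$/ }, undefined) passes, while adding required: true makes the same missing argument fail.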

Dog’s dinner

Names.js eats its own dogfood wherever possible and practical. There are only 3 public functions:

  • applyNamed - it wouldn’t make sense to call applyNamed on itself.
  • createNamed would be an even more verbose way of creating a function if you used it with applyNamed. However it does use an applyNamed-style syntax and goes on to use applyNamed internally to create the new function.
  • addValidation is not only created with createNamed but also its own validation rules are added using addValidation. Talk about meta!

And…

…that’s pretty much it really. I also wrote a release script for the library that I’m pretty pleased with but maybe that’s for another blog post :)

Interviewing for CSS

One day in my previous job I was kind of dropped in it when my boss asked me at about 5pm to interview a potential new recruit at 11am the next day. I’d never been on that side of the table in an interview before. Considering how little time I had to prepare, I was understandably a little nervous - and for good reason. When you’re interviewing, not only do you have to display an excellent knowledge of the topics you are discussing but you also have to assess whether the person you are talking to possesses that same knowledge. If you do not do this well you are either rejecting a good candidate (which is bad) or hiring a bad one (which is worse).

I was told that the role was for a short contract and the successful candidate would be showing a little love to our badly neglected CSS, both tidying it up while also making sure the styles were rendering accurately according to the designs (plus a few extra tweaks here and there). So basically they had to be fluent in CSS-ese.

Twitter saves the day again

When faced with a daunting task such as this, I did what any other self-respecting geek would do and turned to the all-knowing masses on Twitter.

Some brilliant people came to my rescue almost immediately and here are their responses:

I like this one because it’s really getting back to basics. Not only do you have to know that specificity exists but you have to understand what it does and how it does it. The spelling part is a bit of fun - hopefully the candidate will understand that that’s not the important part! Here’s an article on CSS specificity if you need a refresher.

This is a nice one as well. It checks that they have some understanding about how the document flow works. You could follow this question up by asking if they know any other ways of hiding content and why they might want to do it that way instead.

The answer, of course, is that display: none removes the element from the document flow while visibility: hidden does not. Another way of hiding content (which is good for accessibility as it doesn’t hide it from screen readers) is by using a negative margin. There are various drawbacks and other ways of achieving this which might also be a good thing to discuss.

It is static. But it’s not the most well-known value, as normally we are changing the position from static to something else, like relative or fixed.

inline-block has only really been useful since IE started supporting it properly (around v8) and so while it’s very handy, not everyone has used it.

An element with display: inline-block flows in the document like an inline element but allows you to apply styles to it that normally only apply to block-level elements such as margin, width and height.

I expect most candidates would ask for clarification or look confused at the first part of that question. That’s fine, seeing their reaction allows you to gauge the way they might react to a poorly defined spec, for example. It’s kind of open ended so they might simply say “It’s the latest version of CSS”. Others may list a few new features. If they’re really smart they’ll tell you that CSS3 hasn’t been formally specified because it’s still in development.

Drawing an arrowhead with borders requires the knowledge that when fat borders meet on adjacent edges they form a diagonal. Check out this article for more details. Alternatively you can ask them to create a close button with pure CSS if you’re feeling mean!

Yep :)

Other ideas

Vendor prefixes

I quite like an open ended question, particularly if it calls for some opinion. A lot of this industry is based on facts and rigid concepts that are either right or wrong so hearing someone’s opinion and their ability to express it well could be the difference between a hire and a no-hire.

Simply ask them to explain vendor prefixes, what they’re for and, importantly, whether they think they are a good idea, making sure to push them for both positive and negative points.

Box model

Get them to explain the box model to you. You’re looking for a good understanding of the different parts that come into play, particularly how they can affect how wide an element will be (and how it won’t always be the same width as its width value). This conversation can then lead onto the box-sizing property introduced in CSS3. See if they know the default value for all elements (content-box) and for bonus points see if they know which mode IE uses in Quirks Mode (border-box).
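To make the box-model arithmetic concrete, here is a small worked example (sketched in JavaScript rather than CSS, and the function name is mine) of how the rendered width differs between the two box-sizing modes:

```javascript
// Rendered width of an element given its declared width, per-side padding
// and border widths, and box-sizing mode (margins are outside the box in
// both modes, so they are omitted here).
function renderedWidth(width, padding, border, boxSizing) {
    if (boxSizing === 'border-box') {
        return width; // padding and border are carved out of the declared width
    }
    return width + 2 * padding + 2 * border; // content-box: they are added on
}

renderedWidth(200, 10, 2, 'content-box'); // → 224
renderedWidth(200, 10, 2, 'border-box');  // → 200
```

A candidate who can explain why width: 200px can render 224px wide has understood the model.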

SCSS

Of course these days CSS isn’t quite as static as it used to be, with code generators like Sass and Less becoming more and more popular. You can ask if they know about these technologies (and what they think of them). If you’re using one in your business, you should probably have a few specific questions about it as well ;)

In favour of CSS generation, it is nice to be able to use a DRY methodology through mixins and inheritance, and it’s even better to be able to define things once and use them many times throughout the document (like colours). Against it, the code generated can be quite verbose and it’s typically hard to debug as looking at the compiled code gives you no indication where a rule exists (or was generated from) in the source.

Implement {something} with no classes or ids

Finally, get them to code up some HTML and CSS for a small interface element (e.g. a tweet, including all the metadata around it) but do it without using classes or ids. Ask them to use HTML5 and semantically relevant tags.

This task can take a while but it shows a lot about their thinking process.

  • It shows they have a knowledge of HTML5 tags and semantic markup
  • It forces them to improvise with the use of CSS selectors

There are many under-used selectors out there which would be perfect for this sort of task. Just look at them! Do you know all of them? I know I don’t, certainly not without looking. If you really wanted to test just their CSS skills then you could write the HTML for them in a way that would force them to use specific selectors. The sky really is the limit here… get creative!

Any more?

If you’ve got any favourites (and I’m sure you have, whether they’re questions you use or have had used on you), please stick them in the comments. I’d be really interested to hear them!

Zero Image Close Buttons

While I was shaving yaks recently, I came across an interface element in our product: a tiny button that removed an item from a list. The button was just a simple circle with an X through it.

Originally it had been created by hiding the text of a link, using the :before pseudo-element and the content property to add an X and then styling it. It wasn’t ideal because it never really lined up properly, especially when the font size or font family changed. I wanted to find a better way.

Using a gradient as a gradient is so overrated

You can create a much better X with a few gradients! In fact, gradient backgrounds, multiple background images and rounded corners give you pretty much all the tools you need to create the best X button ever. You just have to realise that just because it’s called a gradient it doesn’t have to look like one.

There are two ways of doing this, but I’ll start with the simplest. Let’s deconstruct our button:

As a single linear gradient can only make lines parallel to each other, we need to make two lines by using two gradients.

By placing two colour stops of differing colours right next to each other we create a solid transition between colours instead of a gradient.

CSS
a.close{
    background-image: linear-gradient(135deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 41.5%,rgba(0,0,0,1) 41.5%,rgba(0,0,0,1) 58.5%,rgba(0,0,0,0) 58.5%,rgba(0,0,0,0) 100%),
                      linear-gradient(45deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 41.5%,rgba(0,0,0,1) 41.5%,rgba(0,0,0,1) 58.5%,rgba(0,0,0,0) 58.5%,rgba(0,0,0,0) 100%);
}

You can see it in action and also check the source for the full gamut of vendor extensions.

Adding pizzazz

We can make our close button look a lot like the X-Men logo by adding a couple of lines of extra CSS. I added a hover state as well just for kicks:

CSS
a.close{
    background-image: linear-gradient(135deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 41.5%,rgba(0,0,0,1) 41.5%,rgba(0,0,0,1) 58.5%,rgba(0,0,0,0) 58.5%,rgba(0,0,0,0) 100%),
                      linear-gradient(45deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 41.5%,rgba(0,0,0,1) 41.5%,rgba(0,0,0,1) 58.5%,rgba(0,0,0,0) 58.5%,rgba(0,0,0,0) 100%);

    border: 3px solid black;
    border-radius: 5px;
}
a.close:hover{
    background-color: rgba(0,0,0,0.1);
}

See it in action here.

The next level

The above approach is a bit limited. A couple of problems:

  • It’s easy to change the background or to leave it transparent, but what if we want the background to have a solid colour and the X transparent, letting the colour or image behind show through? It doesn’t work.
  • With one line of the X overlaid on the other it would be impossible to give the lines of the X a gradient with any kind of rotational symmetry.

Both these problems can be solved by deconstructing the X a little further:

This means as well as creating four gradients we must also give them each different positions within the element they apply to.

CSS
a.close{
    background-image: linear-gradient(135deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 33%,rgba(0,0,0,1) 33%,rgba(0,0,0,1) 67%,rgba(0,0,0,0) 67%,rgba(0,0,0,0) 100%),
                      linear-gradient(45deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 33%,rgba(0,0,0,1) 33%,rgba(0,0,0,1) 67%,rgba(0,0,0,0) 67%,rgba(0,0,0,0) 100%),
                      linear-gradient(135deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 33%,rgba(0,0,0,1) 33%,rgba(0,0,0,1) 67%,rgba(0,0,0,0) 67%,rgba(0,0,0,0) 100%),
                      linear-gradient(45deg, rgba(0,0,0,0) 0%,rgba(0,0,0,0) 33%,rgba(0,0,0,1) 33%,rgba(0,0,0,1) 67%,rgba(0,0,0,0) 67%,rgba(0,0,0,0) 100%);

    background-size: 50%;
    background-repeat: no-repeat;
    background-position: right top, left top, left bottom, right bottom;
}

See it! (Yes, it looks the same as the first one).

That’s a bit fancy

Now that each ‘spoke’ of the X is created by a separate gradient, we can get super-fancy and create effects like this, this and this.

Getting SASSy

It only seems right that all this goodness should be rolled into a simple-to-use SCSS mixin. Copy and paste this code and create close buttons as easily as typing @include XBackground($foreground: white, $background: gray, $width: 34%) (adding your own border and border-radius as required).

SCSS
@mixin XBackground($foreground, $background, $width) {
    $stop1: 50% - ($width / 2);
    $stop2: 50% + ($width / 2);

    background-image: -moz-linear-gradient(-45deg, $background 0%, $background $stop1, $foreground $stop1, $foreground $stop2, $background $stop2, $background 100%), -moz-linear-gradient(45deg, $background 0%, $background $stop1, $foreground $stop1, $foreground $stop2, $background $stop2, $background 100%),-moz-linear-gradient(-45deg, $background 0%, $background $stop1, $foreground $stop1, $foreground $stop2, $background $stop2, $background 100%), -moz-linear-gradient(45deg, $background 0%, $background $stop1, $foreground $stop1, $foreground $stop2, $background $stop2, $background 100%);
	background-image: -webkit-gradient(linear, left top, right bottom, color-stop(0%,$background), color-stop($stop1,$background), color-stop($stop1,$foreground), color-stop($stop2,$foreground), color-stop($stop2,$background), color-stop(100%,$background)), -webkit-gradient(linear, right top, left bottom, color-stop(0%,$background), color-stop($stop1,$background), color-stop($stop1,$foreground), color-stop($stop2,$foreground), color-stop($stop2,$background), color-stop(100%,$background)),-webkit-gradient(linear, left top, right bottom, color-stop(0%,$background), color-stop($stop1,$background), color-stop($stop1,$foreground), color-stop($stop2,$foreground), color-stop($stop2,$background), color-stop(100%,$background)), -webkit-gradient(linear, right top, left bottom, color-stop(0%,$background), color-stop($stop1,$background), color-stop($stop1,$foreground), color-stop($stop2,$foreground), color-stop($stop2,$background), color-stop(100%,$background));
	background-image: -webkit-linear-gradient(-45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), -webkit-linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%),-webkit-linear-gradient(-45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), -webkit-linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%);
	background-image: -o-linear-gradient(-45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), -o-linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%),-o-linear-gradient(-45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), -o-linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%);
	background-image: -ms-linear-gradient(-45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), -ms-linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%),-ms-linear-gradient(-45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), -ms-linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%);
	background-image: linear-gradient(135deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%),linear-gradient(135deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%), linear-gradient(45deg, $background 0%,$background $stop1,$foreground $stop1,$foreground $stop2,$background $stop2,$background 100%);

	background-size: 50%;
	background-repeat: no-repeat;
	background-position: right top, left top, left bottom, right bottom;
}

Vendor schmendor

Looking again at the CSS of the more advanced solution, it turns out that there’s quite a lot of it. This is thanks to all the variations and vendor prefixes we have to use (THANKS FOR THAT VENDORS).

Of course another way we could do this is just to use an image. Old school. We can base64 encode it and include it directly in the CSS. While doing that makes it a little larger than a separate file, it cuts out the overhead of an extra request so it should be quicker overall.

Here’s the base64 version in action. View source and you’ll see that it’s a lot smaller (about half the size) than the gradient-based solution. The gradient version will scale to any size, but if you use it at a fixed size, is it worth the extra bandwidth?

Fortunately you don’t need to make that choice! Here’s a handy little chart explaining why:

              Chars   Uncompressed   Gzipped
Gradients     3969    3.84KB         313 bytes
Base64 image  1961    1.91KB         1.33KB

And you are gzipping all your stuff as you serve it, right? I thought so.

Moving From Wordpress to Octopress

My WordPress install kept disabling all the plugins. No idea why. Because I don’t browse my own blog very regularly (especially at the rate I’ve been posting recently), the only indication I had was when I started getting lots of spam comments thanks to the lack of Akismet.

When a colleague mentioned WordPress recently, in particular how he would never use it in a million years, I asked what he would use instead, expecting something like Tumblr, Blogger or Posterous. Instead, he suggested something decidedly more… fishy.

Octopress

Octopress is a very neat little blogging environment. You write your posts in markdown and then run a little rake script to generate your blog. The files generated are static HTML files which is great for performance and caching.

The blog’s structure is generated from templates and the CSS is generated using Sass, so you get all the benefits of a dynamically generated site with none of the overheads. As if that wasn’t awesome enough, it integrates almost seamlessly with GitHub and GitHub Pages: you write locally, generate, then deploy with another rake script.

Migrating from Wordpress

To get all my stuff out of WordPress I used its own export function to generate an XML document of all the posts and metadata. Then I used exitwp to convert the XML to markdown, which worked OK. That converted everything except comments. Since the generated site is static, native comments aren’t really possible, but Octopress has built-in support for Disqus, which works seamlessly. This does mean the old comments are lost, but you know what they say: you can’t make a seafood paella without chopping up a few octopi.

It wasn’t all plain sailing though.

This almost seems like work

I noticed, as I clicked through some of the generated markdown files, that some scamp had been at my articles and inserted spam links directly into them! Honestly, I was getting gladder to be leaving WordPress by the minute. It did mean I had to read each article through to make sure there wasn’t anything in there I didn’t want.

Reading through your old blog posts is hard. There’s stuff that doesn’t matter any more, stuff that’s just plain inaccurate, and stuff that sounds really goofy when you read it back three years later. I tried my hardest not to change the content, and for the most part I managed to preserve it for historical reasons.

As I went through each article I adjusted the markdown formatting: making sure the heading levels were correct, the line spacing was right, the code was properly marked up, and all the other OCD stuff that needs doing once you know it’s there.

I had been using a plugin to handle media in WordPress, so images were kind of messed up and I had to copy all the images across and reformat all the image tags by hand. I was so glad I hadn’t written all that many articles! Keeping it really simple in markdown seems so sensible I can’t believe we didn’t always blog like this.

The last thing that needed doing was to give it more of the look and feel of the old site. The default theme, classic, is very nice but I’m rather fond of that little pixel dude in the header. Plus I got to try out a friend’s clever CSS Patterns Workbench to create the background in pure CSS instead of images.

CNAME. CNAME RUN.

One last awesome thing: GitHub Pages supports custom domains. By pointing a CNAME record at the page (or an A record if it’s the whole domain, not just a subdomain) you can point your site at GitHub Pages and have the whole thing hosted for free, no bandwidth charges or anything. I love you, GitHub.
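In DNS terms that setup looks something like the following zone-file lines. These are illustrative only: `username.github.com` stands in for your own Pages address, and the apex A record’s IP comes from GitHub’s Pages documentation, not from me.

```text
; Illustrative zone-file lines (hypothetical names; check GitHub's
; Pages docs for the real target and IP)

; Subdomain: a CNAME record pointing at your GitHub Pages address
blog.example.com.   IN  CNAME  username.github.com.

; Whole (apex) domain: apex names can't be CNAMEs, so use an A record
; pointing at the IP address GitHub publishes for Pages
example.com.        IN  A      <github-pages-ip>
```

In either case the repository also needs a file named CNAME at its root containing the domain you’re pointing at it, so GitHub knows which site to serve.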

Verdict

I’m so glad I’m not using WordPress any more. WordPress blogs are such a target for hackers and spammers, and I was hit several times over. I tended to stay fairly on top of the updates as well, so you can imagine how bad it might be for people who don’t bother. If anyone wants to hack this blog now, they’ll have to hack my GitHub account. Of course, having the whole blog on GitHub has its advantages too… if you like, you can correct any mistakes or typos by submitting a pull request!

3DS Browser Revisited

It’s been a few months since I wrote the post speculating about the 3DS’ browser capabilities. Since the browser is now available, and that post is easily the one that has drawn the most attention to my humble blog, I thought it would be worth writing a quick follow-up.

Not letting you down easily

There’s no way of saying this easily: the browser kind of sucks.

I’ve never found browsing on a console a particularly satisfying experience and I didn’t expect this to be any better. What excited me was the potential for extending the web into a new dimension. The browser doesn’t even do that. It’s nowhere near as awesome as I’d hoped. No 3D, no custom CSS extensions. It ‘does’ 3D… sort of. You can link to .mpo 3D image files; it’ll display them in 3D and allow you to save them to the SD card. It does not display the images in 3D inline in the page, though, and the 3D capabilities don’t extend beyond image files to other page elements.

Viewing 3D photos on the web is pretty cool though, I’ll admit. If you want to try out viewing 3D pictures on the web for yourself, head over to 3D Porch where they have stacks of them waiting to be seen.

Interestingly, because .mpo and .jpg are both encoded using the standard jpeg format it seems that you can use them both in the same way. If you visit this little demo page I put together you can see that most browsers are happy displaying .mpo files natively as well as .mpo files renamed as .jpgs. The 3DS browser will display both images on the page without a problem. Sadly they are in 2D, although if you click on them (they are linked to their original files) then the image is shown in 3D. This works for both the .mpo file and the .mpo renamed as a .jpg.
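Under the hood an .mpo file is more or less several complete JPEGs concatenated, which is why the rename trick works. As a rough illustration (my own sketch, not a real MPO parser), you can approximate the frame count by counting JPEG Start-of-Image markers, since 0xFF bytes inside compressed JPEG data are escaped and shouldn’t form a stray FF D8 pair:

```javascript
// Rough sketch: count JPEG Start-of-Image markers (FF D8) in a buffer.
// An .mpo is (roughly) several complete JPEG images stored back to back,
// so each frame begins with its own SOI marker. Embedded EXIF thumbnails
// would also match, so treat this as an approximation, not a parser.
function countJpegFrames(buf) {
  var count = 0;
  for (var i = 0; i < buf.length - 1; i++) {
    if (buf[i] === 0xFF && buf[i + 1] === 0xD8) count++;
  }
  return count;
}

// Two minimal fake "frames": SOI ... EOI, then SOI ... EOI again
var fakeMpo = Buffer.from([0xFF, 0xD8, 0x01, 0x02, 0xFF, 0xD9,
                           0xFF, 0xD8, 0x03, 0x04, 0xFF, 0xD9]);
console.log(countJpegFrames(fakeMpo)); // 2
```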

The browser doesn’t have Flash. That’s no great loss, though, and given the 3DS’ weak battery life you can’t blame Nintendo for leaving out something that would drain it even more quickly.

Getting technical

For some fairly technical details about the browser, 3DS Buzz wrote it up at the bottom of this article. I’ll attempt to filter some useful information out of that…

It is claimed that the browser supports HTML5 and CSS3 (partially in both cases). As such it isn’t much of a surprise that it doesn’t pass the Acid 3 test. It is more of a surprise, however, that the 3DS does not even pass the Acid 2 test. Don’t expect your pages to render perfectly (I’ve already had reports of pages not looking as they should).

The JavaScript engine is said to be “high speed”, but it takes its sweet time to finish failing the Acid 3 test.

Here is the 3DS’ User Agent string:

Mozilla/5.0 (Nintendo 3DS; U; ; en) Version/1.7412.EU

This is interesting for several reasons.

  1. It doesn’t list the rendering engine or make any mention of NetFront
  2. It DOES list the device name (not really that interesting, it’s just cool to see it)
  3. It’s fairly concise (it doesn’t come loaded with all the cruft you normally find in a UA string)
  4. It has the version number, but only for what appears to be the browser and not the system software

Compare that UA string with the one of the version of Chrome I’m using right now:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30

Chrome’s is much longer and more verbose. It contains not only the hardware but the system software version; it has the rendering engine and its version, the browser and its version, and, for the life of me I can’t think why, a reference to Safari.

What’s odd to me about the 3DS’ UA string is the .EU tacked onto the end of the version number. Does this mean there’s an EU version of the browser that’s different from the one they get in Japan and America? Why would Nintendo want to maintain multiple versions of the software? Will they use that user agent information to serve different content on Nintendo sites based on region? Probably not (but they could…).
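Given how concise the string is, sniffing for the 3DS server-side would be straightforward. A sketch (the regex is my own guess based on the single UA string above, so treat it accordingly):

```javascript
// Sketch: detect a Nintendo 3DS from its User-Agent string and pull out
// the browser version and the trailing region code. The pattern is my
// own guess built from the one UA string above, not an official spec.
function parse3dsUA(ua) {
  var m = /Nintendo 3DS.*Version\/([\d.]+)\.([A-Z]+)$/.exec(ua);
  if (!m) return null;
  return { version: m[1], region: m[2] };
}

var ua = 'Mozilla/5.0 (Nintendo 3DS; U; ; en) Version/1.7412.EU';
console.log(parse3dsUA(ua)); // { version: '1.7412', region: 'EU' }
```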

Utilitarian

While I’m sure it’ll be fine for jumping onto GameFAQs to figure out how to beat that boss you’re stuck on, the 3DS browser is not going to set the world on fire like I hoped it might. I hope that not too many people got their hopes up about it, but if you did (like me) and are now feeling a little disappointed I feel your pain. At least it’s never going to gain a significant enough market share that we’ll ever have to worry about fixing broken sites in it… (famous last words).

Webkit Doesn’t Fire the Load Event on Images

Well that’s not strictly true. The full headline reads something like this:

Webkit doesn’t fire the load event on images when you change the src attribute and the new src is the same as the old

That seems reasonable

That seems like reasonable behaviour. I mean, the image is already loaded. Changing the src attribute to its current value isn’t really changing it at all. If the src is the same and the image is already loaded, why fire the load event? You’d only want that if the image were reloaded, and reloading would be pointless: it’s already there, and fetching it again would waste bandwidth and make the experience feel slower, which is not what browser manufacturers are aiming for.

So what’s the big deal?

Inherently lazy

Developers like me are inherently lazy. I don’t mean we’re workshy, but rather that we always look for the easiest, cleanest solution to problems. This behaviour in WebKit fails us on both counts.

  1. It’s inconsistent with other browsers. I have to work around it, potentially adding browser-specific code. That’s not good.
  2. It forces me to add extra code to cope with its specific requirements. Let me explain:

If I were writing for a JS-guaranteed environment this wouldn’t be such a problem, but I’m a conscientious sort of guy and realise that not everyone has the benefit of a modern browser with all the options set to ‘awesome’. I want to cater for the JS-disadvantaged as well.

Let’s assume I’m writing a carousel for a photo slideshow that shows 4 pictures at a time. I want to show the first 4 pictures by default so that at least some content appears even for the non-JS users. Then, using non-intrusive JS I augment the slideshow to add next / previous buttons and the ability to click the image to enlarge it in a lightbox.

To avoid repeating a lot of code in a setup function that would also be present in the next/previous function I can write a single function to set the page of the carousel, setting up the images and their click events.

Carousel setup
var picturesPerPage = 4,
    pictures = $('#pictures img');

var loadGalleryCarouselPage = function(pagenumber){
    var imageStart = pagenumber*picturesPerPage;
    pictures.each(function(i){
        var picture = $(pictures[i]),
            pictureContainer = picture.parent();
        picture.hide();
        if(carouseldata.images[imageStart+i]){
            picture.show();
            picture.bind('load',function(){
                pictureContainer.removeClass('loading');
                picture.unbind('load');
            });
            pictureContainer.addClass('loading');
            picture.attr('src',carouseldata.images[imageStart+i].thumbnailurl);

            picture.unbind('click');
            picture.bind('click',function(e){
                e.preventDefault();
                $.fancybox({
                    "href": carouseldata.images[imageStart+i].imageurl
                });
            });
        }
    });
};

loadGalleryCarouselPage(0);

I’m using jQuery and Fancybox for this example.

So what we have there is a function that loops over the four img tags, pulls information out of an array (carouseldata) based on the page offset passed as an argument, sets up click and load listeners and changes the image’s src attribute. This will work for any page at any time. In theory we could add a ‘jump to page’ option where the user could choose the page number to skip to. But we won’t.

This is especially handy as we can simply call loadGalleryCarouselPage(0); to set up the event listeners when the page first loads without having to duplicate most of the lines elsewhere. We even get a natty little loading spinner if we take advantage of the loading class that is set.

Making things difficult

When the page loads it’s a bit of a race: the result of this function varies between refreshes for me. If the image has not yet loaded when the JS runs, it works fine. If the image has already loaded, however, here’s what happens:

  1. A load event listener is set
  2. The loading class is applied which shows a spinner and hides the image
  3. The src of the img is set
  4. The load event DOES NOT FIRE in WebKit because the image is already loaded
  5. The picture remains hidden and the spinner keeps spinning even though the image is loaded

And that is frustrating.

It’s an intermittent problem, though, happening only when the loading race goes the wrong way. Here’s another situation where it happens every time.

The dead cert.

The total number of images in the carousel doesn’t divide perfectly by four, so on the final page there are only two images showing. The final two of the four img elements are hidden from view. They are hidden rather than removed because:

  1. They act as spacers so that other elements flow around them correctly
  2. The img tag needs to stay so that we can easily change the src attribute if the user navigates back to a page with 4 images on it.

So say we’re on page 9 of 10 and click ‘Next’. Images 1 & 2 are updated to show the final two pictures and images 3 & 4 are hidden. Importantly: the src attributes of images 3 & 4 don’t change. When we click ‘Previous’, images 1 & 2 are changed back but 3 & 4 are stuck with the loading spinner. That’s because, like before, the src was already set and it was equal to the new value.
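The paging arithmetic behind that scenario can be sketched as a pure function (the names are illustrative, not from the carousel code itself): with 38 pictures at 4 per page, the final page only has indices for two of the four slots, so the other two imgs keep their old src.

```javascript
// Sketch of the paging arithmetic: which picture indices belong on a
// given carousel page. Slots beyond the end of the collection get no
// index, which is exactly when an img's src goes unchanged.
function picturesOnPage(totalPictures, perPage, pageNumber) {
  var start = pageNumber * perPage;
  var indices = [];
  for (var i = start; i < Math.min(start + perPage, totalPictures); i++) {
    indices.push(i);
  }
  return indices;
}

// 38 pictures, 4 per page: page 9 (the tenth) shows only two,
// so two img slots keep their old src -- the case that trips WebKit.
console.log(picturesOnPage(38, 4, 9)); // [ 36, 37 ]
```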

Working around it

We could set the hidden images to a transparent .gif or .png instead of hiding them which would solve the second problem but because we want the images showing for non-JS users when the page loads we can’t use that technique to fix this. Also, downloading that extra image means extra bandwidth and latency times that we’d rather not have to deal with.

It turns out that setting the src to '' (an empty string) immediately before setting the image URL fixes the problem. But! It causes the images (and consequently their container) to collapse to zero width and height in Firefox while the new images load, which looks really bad if you’re trying to navigate a slideshow.

Here’s my solution:

Carousel setup improved
var picturesPerPage = 4,
    pictures = $('#pictures img');

var loadGalleryCarouselPage = function(pagenumber){
    var imageStart = pagenumber*picturesPerPage;
    pictures.each(function(i){
        var picture = $(pictures[i]),
            pictureContainer = picture.parent();
        picture.hide();
        if(carouseldata.images[imageStart+i]){
            picture.show();
            picture.bind('load',function(){
                pictureContainer.removeClass('loading');
                picture.unbind('load');
            });
            pictureContainer.addClass('loading');
            picture.attr('src',carouseldata.images[imageStart+i].thumbnailurl);

            picture.unbind('click');
            picture.bind('click',function(e){
                e.preventDefault();
                $.fancybox({
                    "href": carouseldata.images[imageStart+i].imageurl
                });
            });
        }
        else{
            picture.attr('src','');
        }
    });
};

if($.browser.webkit){
    $('#pictures img').each(function(i){
        $(this).attr('src','');
    });
}
loadGalleryCarouselPage(0);

I added an else so that if there aren’t enough pictures to fill all the img tags the src of the unused images is set to an empty string. There will always be at least one image on each page so there will always be an image at full height to prop up the carousel container while those hidden img tags are primed to receive more content.

I also added a little if block directly before initialising the carousel, at the bottom. If the browser is webkit-powered then it’ll loop over the img tags and prime them (set their src to empty) before initialisation. Because this is done using JS, non-JS users will still see the images.

Grumpy

I’m grumpy about having to put in that extra, browser-specific code. Setting the src to an empty string seems hacky. But it works, and the logic stays clean and minimal. So it’ll do.

I hope that helps anyone having image loading javascript issues. And as usual I’d be interested to hear if you have any alternative / better solutions!

Footnotes

Check out the carousel in action here.

CSS3 Gradients, Multiple Backgrounds and IE7

You know how, according to the W3C, CSS declarations that are not understood should be ignored without error? IE7 doesn’t do that 100% of the time.

How dare it

That’s right. Just when you thought you had a nice system in place IE comes along and stomps all over it. I’m sure more and more people will come up against this as they start using CSS gradients in earnest. I can see it being quite a common situation, too. I have two background images: one, a CSS generated gradient and two, an image to be laid over the top of it. A nice shiny button with an icon, for example.

We know that certain browsers can’t render gradients, so we define the background initially as just a solid colour plus the icon image (users of older browsers will never miss what they didn’t know was there). Then we go on to define the styles for modern browsers. These use the same property (background-image), so they override the initial declaration; but according to the rules, browsers that don’t understand the gradient instructions should ignore the whole declaration, leaving us with just the initial icon for the background.

As we know, multiple backgrounds are layered top-down in the order you specify them, so we specify the icon first and then the gradient; otherwise the gradient would obscure the icon.

We also define background-position twice. This is so we can position the gradient-plus-icon backgrounds differently from the icon on its own. Browsers that don’t support multiple backgrounds should not see this syntax as valid and should ignore it.

Here’s the code:

HTML
<a href="#" class="mybutton">Register now</a>
CSS
.mybutton{
	background-image: url(icon.png);
	background-image: url(icon.png),
		-webkit-gradient(
			linear,
			left bottom,
			left top,
			color-stop(0, rgb(233, 233, 233)),
			color-stop(0.5, rgb(249, 249, 249)),
			color-stop(0.5, rgb(255, 255, 255)),
			color-stop(1, rgb(255, 255, 255))
		);
	background-image: url(icon.png),
		-moz-linear-gradient(
			center bottom,
			rgb(233, 233, 233) 0%,
			rgb(249, 249, 249) 50%,
			rgb(255,255,255) 50%,
			rgb(255,255,255) 100%
		);

	background-position: 5px center;
	background-position: 5px center, left top;

	background-repeat: no-repeat;

	padding-left: 30px;
}

Here it is in Firefox:

And in IE7:

Or you can see it for yourself in your browser.

It seems that IE is not behaving as we might expect. It’s not showing the gradient (expected), but it’s not falling back to just showing the icon either. A quick look at the IE developer toolbar (IE9 in IE7 mode; the IE7 dev toolbar would leave you stumped) shows us why:

It’s picked up the background-image declaration that includes a gradient. In this case it’s the Mozilla-specific gradient, and the reason it’s that one and not the WebKit one is that the Mozilla one is defined last. If they were defined the other way around, it would have picked up the WebKit one instead.

Why? Oh why??

I’m no expert on how IE parses CSS, but I presume it recognises the url() part just fine, and when it reaches the closing parenthesis it figures that’s it and all’s well. Maybe not. Whatever the reason, it decides it’s a valid declaration, scoops up the whole lot (gradient and all) and tries to render it. And fails.

That’s annoying

Yes it is.

The fix

Importantly, IE correctly recognises the background-image declaration as invalid (for itself) if it starts with a gradient, even if it contains a regular image later on. So we just start the declaration with a gradient. The trouble is, we don’t want to put the gradient first as it’ll obscure the icon, so we define another gradient that is OK to put on top of the icon. That would be an empty or transparent gradient.

We will use the minimum amount of code that is necessary to trigger a gradient in the rendering engine. For Webkit, it is -webkit-gradient(linear, left bottom, left top). No color-stops required. This is good because no colour means no visible gradient. For Mozilla, it requires some colour information, so we just give it completely transparent colours using rgba: -moz-linear-gradient(center bottom, rgba(0,0,0,0) 0%, rgba(0,0,0,0) 0%).

Putting these gradients first means that IE7 won’t incorrectly think it can render the declaration, so it’ll stick with just the icon.

Important: Because we now have 3 background images, we also need to declare a third value for background-position.

CSS
.mybutton{
	background-image: url(icon.png);
	background-image: -webkit-gradient(linear, left bottom, left top),
		url(icon.png),
		-webkit-gradient(
			linear,
			left bottom,
			left top,
			color-stop(0, rgb(233, 233, 233)),
			color-stop(0.5, rgb(249, 249, 249)),
			color-stop(0.5, rgb(255, 255, 255)),
			color-stop(1, rgb(255, 255, 255))
		);
	background-image: -moz-linear-gradient(center bottom, rgba(0,0,0,0) 0%, rgba(0,0,0,0) 0%),
		url(icon.png),
		-moz-linear-gradient(
			center bottom,
			rgb(233, 233, 233) 0%,
			rgb(249, 249, 249) 50%,
			rgb(255,255,255) 50%,
			rgb(255,255,255) 100%
		);

	background-repeat: no-repeat;
	background-position: 5px center;
	background-position: left top, 5px center, left top;

	padding-left: 30px;
}

Here it is in Firefox:

And in IE7:

Or you can see it for yourself in your browser.

But wait, there’s more!

You thought this was over? Of course it’s not! IE9 is a heck of a lot better than previous versions, but it’s still not perfect. For example, it supports multiple backgrounds but not CSS gradients. This means it ignores the gradient declarations but does use the multiple-value background-position declaration we made, resulting in the icon being positioned left top as opposed to 5px center.

I tried getting around this by inserting another background-image defining three images (two of them transparent spacers) directly after the first background-image and before the first gradient:

CSS
.mybutton{
	background-image: url(icon.png);
	background-image: url(spacer.gif),
		url(icon.png),
		url(spacer.gif);
	background-image: -webkit-gradient(linear, left bottom, left top),
		url(icon.png),
		-webkit-gradient(
			linear,
			left bottom,
			left top,
			color-stop(0, rgb(233, 233, 233)),
			color-stop(0.5, rgb(249, 249, 249)),
			color-stop(0.5, rgb(255, 255, 255)),
			color-stop(1, rgb(255, 255, 255))
		);
	background-image: -moz-linear-gradient(center bottom, rgba(0,0,0,0) 0%, rgba(0,0,0,0) 0%),
		url(icon.png),
		-moz-linear-gradient(
			center bottom,
			rgb(233, 233, 233) 0%,
			rgb(249, 249, 249) 50%,
			rgb(255,255,255) 50%,
			rgb(255,255,255) 100%
		);

	background-repeat: no-repeat;
	background-position: 5px center;
	background-position: left top, 5px center, left top;

	padding-left: 30px;
}

But that didn’t work as IE7 still parsed it (incorrectly) just as it did in the first instance and therefore didn’t show the icon at all. Back to square one.

At this point I’m sure most people are thinking

“Oh come on, why not just use Modernizr and only apply the gradients to browsers that can use them?”

That’s one way of doing it, although I would rather not use JavaScript if possible. This leaves one option… go back to the original CSS and add this conditional comment in the <head>:

Conditional Comment
<!--[if IE]>
    <style type="text/css" media="screen">
        .mybutton{
            background-image: url(icon.png);
            background-position: 5px center;
        }
    </style>
<![endif]-->

As no version of IE yet supports gradients, we just reset the background to the plain ol’ icon. Problem solved.

Here it is in Firefox:

And in Webkit (Chrome):

And in Opera:

And in IE6:

And in IE7:

And in IE8:

And in IE9:

Or you can see it for yourself in your browser.

A side serving of gradient

You may have noticed that two of the color-stops have the same percentage value. This effectively gives us more than one gradient on the same element: a gradient from the top to the middle, a sudden stop and change of colour, and another gradient from the middle to the bottom. It’s useful to be able to change sharply from one colour to another as well as smoothly!
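If it helps to see the trick without the vendor-prefix noise, here it is written in the unprefixed linear-gradient() syntax (my own translation of the button gradient used in this post, not part of the original demo):

```css
/* Two colour stops at the same position (50%) create a hard edge:
   a smooth gradient up the bottom half, then a clean jump to white. */
.mybutton {
	background-image: linear-gradient(
		to top,
		rgb(233, 233, 233) 0%,
		rgb(249, 249, 249) 50%,
		rgb(255, 255, 255) 50%,
		rgb(255, 255, 255) 100%
	);
}
```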

Footnotes

  • If anyone has a better solution, please get in touch in the comments or on Twitter.
  • The images I’ve used come directly from the project I’m working on in my day job. If my employer has any objection to their use here I will of course replace them with something else. But I’m sure they won’t.

The Web in 3D - the Nintendo 3DS Web Browser

Last Sunday my wife and I went and had a sneaky preview of the new games console from Nintendo: the 3DS.

Let’s not beat around the bush: this is a very impressive device. It’s tricked out with all the latest technologies (or the latest applications of ‘old’ technologies, wherever you choose to draw the line). The thing people are really talking about, of course, is the 3D aspect of it. I’m sure you have read about it - the top screen is a 3D display which importantly doesn’t require glasses. I can’t stress enough how good the 3D effect looked. It felt completely natural and I didn’t find myself getting any kind of a headache or nausea like some people are worried about.

There were demos available of most of the functionality: Lots of games that’ll be available on launch or shortly thereafter, 3D photography, augmented reality (including the ‘reality’ part shown in 3D due to the 3D cameras on the lid - the most impressive thing for me) and street pass (Nintendo’s social discovery system). But the thing that actually holds the most interest for me wasn’t shown and indeed is barely talked about. I’m hoping that will change.

A complex web

I’m talking, of course, about the web browser which will come built in to the device as part of the extensive suite of software bundled on board.

*YAWN*

A web browser? What’s so great about that?

I don’t know yet because no-one is talking about it, but I’m hoping it will inspire (even more) innovation and creativity on the web. I’m hoping it will have some semblance of 3D integration and capability. And if not, why not? Surely this is the way the web is going. More and more devices will be 3D enabled in the near future and you can bet that if the 3DS doesn’t kick-start the 3D web some other device will. You can buy a 3D TV to put in your living room for crying out loud - this is 2011! They reckon you’ll be able to buy a HOLOGRAPHIC 3D TV in 2012. I’m all over that. And I want the web to make sure it isn’t left behind. After all, a lot of modern TVs have integrated browsers. It’s the next logical step.

Least they could do

The least I could hope for is support for 3D images displayed in web pages. The LEAST. The standard open format is .mpo, which is fortunately the same format in which the 3DS saves its 3D photos.

That’s not to say you’ll be able to simply embed 3D photos in your site and have them work in the 3DS’ web browser, though. Think how that would look in a desktop browser… it would probably either not show up at all or show a broken image.

No, no, no, don’t even think about making a separate site for 3D devices. I thought we were past all that. What next, yet another separate site for 3D-plus-mobile? We want to serve one page that works on all devices.

The trouble is, without images having a failover pattern similar to the one HTML5 gives video and audio, you simply couldn’t use the image inline in your page, as non-3D-enabled browsers wouldn’t recognise the format. New image formats are always emerging, and it’s easy to assume they’re universally supported (if you forget about IE6 and .pngs), so why should we assume the markup is ready for them?

This has been discussed by Bruce Lawson and makes sense (no matter how frustrating it is). So until all browsers support the display of 3D images on 2D screens we will have to find another way.
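Purely for the sake of speculation, here’s what such a failover could look like if images borrowed the source-selection pattern from HTML5 video and audio. Everything here is made up: the element, the MIME type, and the behaviour.

```html
<!-- Hypothetical markup: an image fallback chain in the style of
     HTML5 <video>/<audio>. No browser understands this. -->
<picture>
    <source src="forest.mpo" type="image/mpo">
    <source src="forest.jpg" type="image/jpeg">
    <!-- Final fallback for browsers that ignore the imagined element -->
    <img src="forest.jpg" alt="Pretty forest scene">
</picture>
```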

The other way to include images in the page is, of course, CSS background images. This one has legs. The 3DS browser could easily respond to an @media query, something like @media screen and (-3ds-min-device-spatial-dimensions: 3) { ... }. Then you could alter how the page looks on a device that has 3D capabilities. Once you have the 3D background image in place you can mark it up to include a 2D version for the rest of the world:

HTML
<div class="forest-picture">
    <img src="/static/img/forest-2d.jpg" alt="Pretty forest scene" height="250" width="400">
</div>
CSS
@media screen and (-3ds-min-device-spatial-dimensions: 3) {
    .forest-picture{
        background: transparent url(../img/forest-3d.mpo) no-repeat 0 0;
        width: 400px;
        height: 250px;
    }
    .forest-picture img{
        display: none;
    }
}

The best of both worlds! We can dream…

Reality check

Before we go on, I just need to make it abundantly clear (if it isn’t already) that this article is pure speculation. I don’t know if the 3DS browser supports any of this kind of stuff, but imagining the possibilities and how they might work is an interesting exercise. Oh wait, it looks like Google has already looked into 3D browsing. My mistake ;)

Let’s explore further down the rabbit hole…

Going the extra dimension

What if we wanted to move beyond just sticking 3D images in our pages? As awesome as a 3D gallery might be, there are so many more possibilities. Imagine if the whole page could be rendered in 3D; if each element on the page had its own depth setting. The most obvious thing to do would be to push the background actually into the background, giving the site content more prominence, and once you start down that road you can just let your imagination carry you forwards.

I know what you’re thinking, and it’s what I thought at first too… why not use z-index for that? The reason is that z-index controls the stacking order of elements on a single plane. If you changed the function of z-index to control depth on 3D devices, how would you re-order a group of elements sharing the same depth on a 3D page? It’s clear that we’d need a separate property. I’m going to be bold and use depth in examples, for want of a better property name.

So where are we? We’ve got 3D images and the ability to assign depth to elements. That’s a good start, but it seems a little restricted, doesn’t it? A bunch of flat panels sitting at different depths in a 3D space. We’re not really making the most of the technology. We need to add a little style in there… style that can bridge the gap between depth-levels. Fortunately, Webkit is one step ahead of this game with its CSS 3D transforms. These could easily be adapted to show in real 3D instead of 3D rendered in 2D.
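For reference, this is what Webkit’s (real, prefixed) 3D transform syntax looks like today; whether a 3D-capable browser would ever render it with genuine stereoscopic depth rather than a flattened 2D projection is, of course, the speculative part:

```css
/* Real Webkit syntax; stereoscopic rendering is the speculation. */
.gallery {
    -webkit-perspective: 800px;    /* gives child elements a 3D context */
}
.gallery .panel {
    -webkit-transform: rotateY(30deg) translateZ(40px);
}
```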

Curves would be nice

Yes they would, and so would a mansion on the beach in Barbados. We don’t even have the ability to define curves in 2D CSS yet. But then in 2D we might not have wanted to do crazy things like making a callout or title bow inwards or outwards, which would work pretty well in 3D. But maybe just one step at a time…

What is 3D anyway

To develop in 3D you need to understand how it really works. Fortunately understanding it is a lot simpler than getting your head around designing and developing in it:

3D works by each of your eyes seeing a slightly different image.

Simple enough, and in real life this works pretty well. But when generating your own 3D content you have to be ever-mindful of it.

Mind the gaps

Imagine a blank page. You make the background a fetching pinkish sort of red colour and set the depth to be way back in the distance.

You then have a look at it and wonder why it doesn’t look like it’s way off in the distance. You check to see that your 3D depth slider is turned all the way up and when you find that it is you’re left feeling a little confused.

The reason why this doesn’t appear to be in the background is because your eyes are seeing the exact same image. There needs to be some more detail in there before your eyes can be tricked into thinking that it’s way off in the distance. Here are some suggestions:

  1. You could give it a border that makes it look like you’re looking into a box. Of course the edges of the border would need to be firmly in the foreground for it to work.

  2. You could give it a pattern or image. Beware with repeating patterns though: looking at 3D images forces your eyes to cross slightly and a repeating pattern could cause you to think it’s not at the depth you intended.

  3. Lay something else on top of it with a higher depth. For demonstration purposes I’m going to go with this one.
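Sticking with the third suggestion, the set-up might look something like this, again using the invented depth property from earlier (the values are illustrative too):

```css
/* "depth" is still an invented property; values are illustrative. */
body {
    background-color: #cc3344;   /* that fetching pinkish red */
    depth: -20;                  /* hypothetically way off in the distance */
}
.floating-box {
    background-color: #fff;
    depth: 10;                   /* floats in front of the page */
}
```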

But even laying something on top like this isn’t too easy for our brains to process. Have a look at what each eye would be seeing.

There’s not a great deal to differentiate these two images and while your brain knows it’s seeing different things from each eye it is struggling because there are things missing that it’s used to. Usually when you see an object in front of another object it casts a shadow somewhere. Because they are in different locations your eyes will each see that shadow slightly differently. Also the way the object is lit and how it reflects the light could be different in each eye. To make sure we don’t give people headaches we’ll have to sort this out.

CSS
.floating-box{
    box-shadow: 5px 5px 5px #ccc;
}

Now the panel has a nice drop shadow which should make it easier on the eyes and easier to see the 3D effect.

But how does it get rendered so that each eye sees the shadow differently?

Seeing the light

The way I see it there are two options:

  1. The browser provides a default (override-able) light source:

CSS
body{
    light-source: 25% 25% fixed;
}

fixed would position the light source relative to the browser viewport, and as an alternative absolute would position it relative to the document.

  2. You, the developer, get granular control over what each eye sees:

If you had control over each eye the possibilities would be endless. You could vary the box-shadow offset between eyes, or show each eye a different background image to achieve a rippling effect. You would OWN all the dimensions.

CSS
@media screen and (-3ds-min-device-spatial-dimensions: 3) and (-3ds-perspective: left-eye) {
    .floating-box{
        box-shadow: 3px 5px 5px #ccc;
    }
}
@media screen and (-3ds-min-device-spatial-dimensions: 3) and (-3ds-perspective: right-eye) {
    .floating-box{
        box-shadow: 7px 5px 5px #ccc;
    }
}

I think a combination of both would probably be in the interests of developer and user alike.
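A combined approach might look something like this: a page-level default light source, with a per-eye override only where the default can’t produce the effect you want. Both light-source and the -3ds-perspective media feature are invented for illustration:

```css
/* All invented syntax: "light-source" and "-3ds-perspective"
   are speculation, not real CSS. */
body {
    light-source: 25% 25% fixed;   /* sensible default for the whole page */
}
/* Per-eye override for an effect the default light source can't produce */
@media screen and (-3ds-perspective: left-eye) {
    .rippling-panel { background-image: url(../img/ripple-left.png); }
}
@media screen and (-3ds-perspective: right-eye) {
    .rippling-panel { background-image: url(../img/ripple-right.png); }
}
```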

It’s not all giant blue humanoids and bio-luminescent flowers

This technology has its disadvantages, and you can be sure that there will be some nasty surprises out there when it comes along. As with most visual effects, subtlety is king. Of course there will always be developers who are irresponsible with this great power and make some eye-bleeding creations, but that’s just inevitable. No, what I’m really worried about can be summed up in two words: Internet. Advertising. If you thought pop-over ads were intrusive now, you ain’t seen nothing yet.

The waiting game

Who knows what you’ll be able to do with the browser? Nintendo, maybe? Or if it’s Opera providing the software again, as they did for the Wii and the original DS/DSi, then I expect they will know. (Please do get in touch if you have insider knowledge!) But until that information is made available, or the 3DS is in our hands, we won’t know for sure. I hope it’s got at least a few fun 3D features to play with; I’m sure the full set will develop over time.

Update: Now that the browser is available, I had a little play with it and wrote down a few of my thoughts.