Tag Archives: World Wide Web

Popular Misconceptions about Designing for the Web

I’ve been working on web projects for a few years now. Most of the things I’ve contributed to have been based, at least in part, around the principle of ‘one-URL-per-thing’. This, as should be clear from pretty much everything else on my blog, is something I consider to be a good thing. With a URL for every ‘thing’ that a user might be interested in (within reason, and within the scope of your content and data), you greatly increase the findability, sociability and utility of your content. But, time and again, the same criticism comes back – “ah, but not everyone is a geeky wanderer of websites, not everyone will want to use this; this won’t appeal to the mainstream, passive audience; it’s too much like a reference work; a boring user experience; certainly not delightful.” I’m afraid I have to disagree with pretty much each and every one of those criticisms – and now seems as good a time to do so as any. So here goes.

The false dichotomy

Firstly, and most importantly, there’s a huge misunderstanding going on here. It is not a straight choice between building something based around ‘one-URL-per-thing’ and building a ‘delightful experience for the majority of (apparently passive) users’. The two aims are not, and should not be presented as being, in competition with one another. In fact, I’d argue that the ‘one-URL-per-thing’ approach is a massively helpful thing to build as a stepping stone to creating the ‘delightful experience’.

Sure, it will depend on the amount of resource and content you have – sometimes, it just won’t be practical to build a Web presence for every single piece of content (and yes, you can easily tie yourself up in knots trying to work out how granular you want to be with addressability) – but I’m sure it’s much easier, and much less expensive, than you’d think. Remember, in general, the more things you do make addressable, the more discoverable your content is, so it’s quite possible that the investment will be worth it.

It’s also a mistake to think that you have to make every single piece of content, at your ideal granularity, available at launch. No, you only need the fundamental shape of your content to be addressable, and in the future you can go in and make things more addressable and more granular. I’d recommend that on the back-end, you perhaps make things a little more granular than you do in your front-end, so that it’s easier to add new features and content quickly, but that’s a hunch rather than a time-honoured, proven strategy.
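To make that a little more concrete, here’s a very rough sketch (in Python, with entirely made-up names and data – this isn’t any real system) of what ‘one-URL-per-thing’ can look like in practice: a routing table in which the back-end holds slightly more granular things than the front-end may ever link to, so new features can be added later without re-plumbing everything.

```python
# A minimal, framework-free sketch of 'one-URL-per-thing': a routing
# table mapping a stable path to each 'thing'. All paths and data here
# are hypothetical -- the point is only that every thing gets its own URL.

THINGS = {
    "/characters/alice": {"type": "character", "name": "Alice"},
    "/episodes/1": {"type": "episode", "title": "Episode One"},
    # The back-end is a little more granular than the front-end shows yet:
    "/episodes/1/scenes/3": {"type": "scene", "episode": "/episodes/1"},
}

def resolve(path):
    """Return the thing at this URL, or None (a 404 in a real app)."""
    return THINGS.get(path)
```

The scene URL above isn’t linked from anywhere on launch day – it simply exists, so that when you do want to surface scenes, the handles are already there.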

Now, whilst the ‘delightful experience’ can be built on top of ‘one-URL-per-thing’, it’s much, much harder to do it the other way around. Indeed, most of the pain of innovation I’ve encountered has come from trying to make websites more granular after the fact, when none of the systems or user experience were designed to accommodate this – even though it would give tremendous benefit to the user, oh, and to the business, too.

Silos are senseless

Because here’s the second major flaw in the criticism. If there’s one thing we learned from the first decade of the twenty-first century in Web development and product management, it’s that it’s expensive to build good experiences. Now, of course the tools may have got better since the early years, but the point still stands, especially for any company that intends to be alive on the Web for longer than the initial buzz around a release of content. Building flashy silos of content with exceptional user experiences just doesn’t work as a mid-to-long-term business strategy.

Never mind the fact that your ‘exceptional’ user experiences are almost certainly tied to a short-lived technology, are designed based on some, let’s face it, educated guesswork about your possible users, and probably don’t work for any user not on the latest equipment, or with any kind of visual or motor impairment. It just doesn’t make sense as a business model, unless you truly are a one-shot kind of company – which is fine, but, really, is that true of most companies? I’m not sure. This is because you’re almost certainly going to want to redesign the content you’ve tied up in your flashy experience, or incorporate it into a different experience, in the future. And not just in the future – what happens if there’s a piece of content that you want to design multiple experiences around?

Simply put, it makes business sense to try and achieve economies of scale and of scope. This is done by abstracting common content wherever you can. Again, abstraction can certainly go too far, but to not try at all is a big mistake. Give the common, reusable bits of your content their own handles on the Web, then string together experiences on top of them. This way, you don’t have to re-invent the content every single time, and you almost certainly save yourself a lot of time and effort in doing so. This is a careful process, admittedly – you need to think hard about which things are likely to stay the same over time, and where the change will happen, but again, the return on investment will be worth it. Once you’ve done the hard work this time, the next time, you can concentrate much more on the ‘delightful experience’ – and people who are still engrossed in the previous experience can much more easily be upsold to the newer one.

Good architecture does not impede creativity or experience

Finally, it’s important to note that the design of ‘one-URL-per-thing’ does not have to restrict the experience in any way. Yes, it can produce a ‘boring, reference-work’ style experience – if that’s all you let it be. Indeed, the work of producing the basic site is not meant to represent the ideal experience anyway. It’s certainly not meant to be at odds, or in opposition to, your preferred user experience. Instead, it’s laying the groundwork for your future masterpieces. It’s about the long-term effectiveness of your core content, about ensuring that users can find the content as much as possible, and that they can share it and so on.

You can still go on to design the experience that you think is suitable for ‘Mabel, the woman who’s worked in a dry cleaners all her life, and just wants a passive experience’ (a statement which, on the surface and in our weaker moments, might seem true, but which suggests a slight disdain for your audience – even a passive experience needs to engage the brain in some kind of inquisitive mode in order to be successful). I certainly don’t want to restrict your creativity, and force you to only produce experiences that geeky information-hunter-gatherers would enjoy. I want us to produce things that as many people as possible will want to invest in, and enjoy, in as many ways as possible – not just for a one-time experience, but something that can be returned to, built upon, and perhaps even turned from a ‘passive’ into an ‘active’ experience.

But if you ignore the ‘one-URL-per-thing’ approach, not only do you miss out on the business and audience benefits I’ve described – I’d argue you’re misunderstanding the medium of the Web itself. The Internet is the two-way channel that brings us great things, and great experiences. The Web, as I’ve said previously, opens us up to building longer-term, layered, generative experiences that are both delightful and truly creative, in ways we haven’t even imagined yet. Far from being a boring user experience, I think that’s actually a much more exciting future to aim for.

The Medium is the Message on the Web

Disclaimer: This post arose as a result of some half-baked thoughts & tweets on the train journey in to work today, partly inspired by a lot of thinking about social networking, and also by Michael Smethurst’s excellent blog post, which is basically the post to read for any self-respecting developer or designer on the Web.

Two of the great success stories of recent years have been Facebook and Twitter. Now, no matter what you think of them now, whether you agree or disagree with where they are headed (I for one think that they’re both showing various signs of getting slightly too big for their boots, Twitter much less so than Facebook), it’s extremely interesting and important to investigate why they’ve been so successful. The simplest analysis to make is that they’ve succeeded because of some ill-defined ‘power of social networking’ or ‘social media’ or ‘user-generated content’. Whilst this is true, it’s a very cursory explanation, and often leads to the conclusion – well, we better put ‘social media’ features on our company’s website if we want to succeed/compete with the likes of Facebook & Twitter.

This isn’t a criticism of social networking in general, either. Rather, it’s a call for us to examine and understand why social networking has taken off so well on the Web. Obviously, an important part of people’s affection or dislike of Facebook & Twitter is the content. Meaningless, useful, lighthearted, dangerous, and so on. I’d like to take the time to examine this phenomenon from the point of view of the theories of Marshall McLuhan.

I’ve gone on about his theories a fair amount before, but I do think there’s something useful to be gained by testing out his theories on contemporary culture. The theories may not be perfect, but they provide an interesting perspective on the success or otherwise of media, and, I’d argue, can inform the design and strategy of new things that can utilise the properties and effects of a medium to its full advantage. To put it simply, his theory that ‘the medium is the message’ means that what’s important when studying the effect of a particular medium is not the content, but rather the form and characteristics of the medium itself. From what I’ve read, McLuhan had a pretty hard-line stance on the issue of content – that, essentially, any debate over whether a medium was ‘good or bad’ that focused solely on the content was ultimately useless. I’d temper this by saying that the content is important to some extent, but I’d certainly agree that of much more importance is an examination of the medium itself.

McLuhan was writing in the 1960s, and so was able to examine various forms of media, from the formation of language and written communication down to television, radio and cinema. I’ve been gathering his quotes as I’ve been reading his seminal work, Understanding Media, and what’s interesting is that much of what he says can be applied to the Web, providing an interesting angle for discussion and debate – something he was unable to take part in, in terms of the Web.

I won’t go into massive detail here, but I’d argue that when examining the Web as a medium in and of itself, we need to ignore the content and indeed, to some extent, the tech stack. What’s more important for me is the general conceptual form of the Web, by which I mean that it is a web. Dots and lines connected to each other. The dots can represent anything. The lines link dots together, but they also describe how and why the dots are linked.
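As an illustration of that conceptual form (a sketch only – the names and relationships here are invented, not taken from any real dataset), the dots and lines can be captured as typed links, where each line carries not just the connection but the how and why of it:

```python
# Dots and lines: the dots can represent anything, and each line is
# labelled with how and why the two dots are linked. All names invented.

links = [
    ("alice", "knows", "bob"),
    ("alice", "wrote", "post-1"),
    ("post-1", "mentions", "bob"),
]

def outgoing(dot):
    """All lines leaving a dot, with the relationship that labels each."""
    return [(rel, target) for source, rel, target in links if source == dot]
```

The key property is that the label travels with the line: ‘alice knows bob’ and ‘alice wrote post-1’ are different kinds of connection, not just two anonymous edges.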

So, if it’s important to study the medium rather than the message, and Facebook & Twitter are the two services we’re going to use as case studies, why have they been so successful? Acknowledging but ignoring the actual content for now, let’s take a look at each service in turn. What we can see is that they both have taken advantage of characteristics of the Web in slightly different ways.

Facebook – Yes, they’re not great. They’re obtuse, follow the ‘walled garden’ pattern and are showing rampant signs of the misguided mentality of a ‘big’ successful company, namely, that they know best and everything should go through them. But think about it. Part of the big struggle the Linked Data and Semantic Web communities, myself included, have had to face in the past year or so is the obsession with documents and pages. All too often when talking about the Web or designing new ‘products’ and services, we fall back into the metaphor of pages. “What does the page look like?”, “We need to promote this page” etc. But notice that we scoff at people who talk of ‘my Facebook page’ or indeed ‘my Twitter page’. That’s because Facebook have succeeded to some extent where we’re still struggling. The user mental model when engaging with Facebook, primarily through networking with friends, is not through linking and visiting pages. You don’t become a friend of a page, in the user’s mind. You’re not making links between pages in their minds. You’re connecting between people. You’re using the Web to represent (for the most part) the connections you make between people in the real world – linking things, not pages. They’ve seamlessly converted users to thinking in terms of a Web of things, even if the technological background isn’t quite there. And thus, social networks are so successful precisely because they’re networks of things. And that’s what the Web allows us to do – which we should celebrate and make more of. There’s so much creativity we can unleash if we don’t limit ourselves to the restraints of pages, and think in terms of things.

Twitter – Now, obviously Twitter has similar characteristics to Facebook, at a glance, even if the ‘friendship’ model is rather different – again, they’re making you think in things, not pages. But there’s a couple of other things they do which use the Web as a medium. Firstly, as Michael puts it – “every nugget of content [is] addressable at a persistent URI…Every tweet, no matter how mindless or empty of content and meaning has it’s own URI.” In other words, in this case the message hasn’t shaped the platform – the medium has. Of course, with domain-driven design, the content should also shape the platform to some extent – design with the ‘world’ of content you’re aiming the platform to hold in mind – but with something like Twitter, the whole point is that you can talk about anything. Secondly, although this seems to be changing more and more (perhaps for the good, perhaps for the worse), Twitter has kept things simple. It hasn’t tried to do everything. Instead, it has a clear, simple structure, which, importantly, is open and addressable. There are no fancy widgets or complicated APIs. There’s just simplicity, so that others can build on top of this. Who would those others be? Anyone, even Twitter themselves. But the important thing is that they not only use the medium in terms of URIs for each tweet, but they don’t try to ‘own’ everything – others can build stuff on top of their data, and if it’s successful, everyone wins.

To be honest, these aren’t earth-shattering revelations, and indeed I’m sure I’m not an expert in any way on either of the services above – but just thinking about the Web in these terms is what I’m trying to encourage.

In conclusion, if you want to think about why Facebook & Twitter have been so successful, and you want to achieve such heady heights, you should at least consider the above factors. And remember, it’s not as simple as adding a Twitter stream to your company’s page, or having an account on there. Nor is it about Facebook widgets or a ‘fan page’. Instead, look at what made them successful: they used the Web. Really used it. They concentrated on things, not pages – they had a domain model and an idea of their users’ mental model. They didn’t try to do everything themselves (at first, at least), and (at least one of them) kept things simple and open, so others could build upon it, and make things better for everyone. Most importantly, they took the time to think about how their service would work when stitched into the Web, and moreover, how it would work as a web itself. So think about how you can do the same too. Oh, and again, seriously, if you want to do any kind of Web design/development, read this first.

On Avatar, 3D, Augmented Reality and Truly Interactive Television

Happy New Year! Firstly, a little apology – I put a note at the top of my last blog post saying that I’d explain my use of the terms ‘Internet’ and ‘Web’ soon – I did in fact write a post – but somewhere along the line it never made it out into the wild world. So for that, sorry – but the gist of it was that perhaps I should have used the term ‘Internet’ to refer to the underlying infrastructure network, and the ‘Web’ to refer to the network of information that can be built on top of this.

And now on to the main topic for today. I’ve talked previously about how we could/should be using the Web to provide representations of the narratives we currently tell via radio and television. I said that whilst on-demand services such as iPlayer have had great success, and have certainly improved the consumption of media, they’re not really game-changers, in that they are an attempt to replicate the form of a linear medium within a non-linear medium. As such, although they benefit from the latent abilities of the Internet (speed, distribution, on-demand), they do not take full advantage of the Web. These are still TV or radio ‘adaptations’ of stories, being distributed by the Internet. What we need is the ‘Web’ adaptation of the same story.

I’ve been working over the past month or so on a prototype that will explore these possibilities. At first glance, it seems to be very similar to Wikipedia, in that there are pages for characters, places, events, and links between them. The audience can undertake similar journeys to those of a visitor to Wikipedia – i.e. non-linear, explorative journeys – things which people are already doing (for instance when they say they got ‘lost’ on Wikipedia – in a positive sense!). However, what is different is that these URIs, and the HTML representations of their subjects, are connected directly in the same way as the story itself is being told. Thus, a collection of these URIs, joined together through hyperlinks, can be seen as a small web, or constellation, representing the story itself – a Web adaptation – which allows the audience to explore the story from all angles, and gain new perspectives.
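As a toy illustration of that constellation idea – with a completely invented story, and no claim to match the actual prototype – each URI holds hyperlinks to related URIs, and an explorative journey is simply a matter of following those links outward:

```python
# A toy 'constellation': each URI links to related URIs, so the story
# can be explored from any starting point. The story itself is made up.

story = {
    "/characters/detective": {"links": ["/events/the-theft", "/places/the-station"]},
    "/events/the-theft":     {"links": ["/places/the-station", "/characters/detective"]},
    "/places/the-station":   {"links": ["/characters/detective"]},
}

def explore(start, hops):
    """A non-linear journey: follow hyperlinks outward for a few hops,
    returning every URI reached -- getting 'lost' in the good sense."""
    seen, frontier = {start}, [start]
    for _ in range(hops):
        frontier = [t for uri in frontier for t in story[uri]["links"] if t not in seen]
        seen.update(frontier)
    return seen
```

There’s no fixed entry point and no fixed order – start at a character, an event or a place, and the rest of the story’s web is reachable by hyperlink alone.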

For a while now, I’ve been thinking of how the mainstream user will benefit from all this. I think the effect will be fairly subtle at first, and I was imagining two ways of experiencing the story – firstly by hopping from URI to URI and being immersed in the ‘world’ of that thing, and secondly, by taking a step back and examining the web of connections between things, and travelling through this web along a particular path – the act of telling the story. The latter, I imagined, would be through the form of some fairly standard ‘dots and lines’ visualisation, but at the back of my mind, I wasn’t satisfied with this. Coincidentally, I then saw Avatar in 3D at the cinema. Personally, I felt pretty let down by the paucity of imagination shown in the storytelling, but I had to admit that the 3D effect was intriguing. Perhaps, rather than visualising the links through a limited, 2D ‘dots and lines’ diagram, the audience could gain a greater understanding by viewing the story’s Web in 3D, allowing them to see all sides of it.

This is still a possibility, though again I’m very aware of the lack of availability of devices and technologies in the consumer market which support 3D. That, of course, may change, but I wondered whether there were other ways of improving the experience. I was worried that without this, it would just seem, to the general audience, like a replication of Wikipedia (albeit containing information that neither Wikipedia nor fan-wikis hold in such a structured, clickable manner).

And then I considered the ideas of convergence and Augmented Reality – essentially reminding myself that the Internet and the Web, and our interaction with it, need not be restricted to the browser. The Web is, at its heart, merely the highly structured data store – on top of which we can build user interfaces across virtually any connected platform. So, I started to think about mobile and TV viewing. When I’m watching drama, or sport, or the news, I often want to know more – why something is important, what someone is referring to, more about a player, what’s the bigger picture etc. At present, the content is communicated to me via the screen, I interpret it, and then have to go off on my own search to find out more. When doing so, I have to begin again from scratch, communicating the same content (or a near approximation of it) with a computer connected to the Web. What if the content presented to me on screen also had the underlying semantic structures that meant it could do the communication with the Web?

The simplest form of this would be on a mobile device where, whilst watching the programme (and indeed at any other time), you could navigate to a portal which guides you to the correct URI contained within the narrative structure – this could take the form of a search engine, or a listing. That way, I could search for ‘Jack Bauer’ and be taken straight into the ‘world’ of 24 – or, more powerfully, if I witnessed an important event happening on screen, I could click the relevant link in the portal and see the other events that led up to it, more information, and so on.
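The portal idea above could be sketched like this (the index and the URIs are purely hypothetical – nothing here corresponds to a real service):

```python
# A sketch of the 'portal' lookup: a trivial search over the narrative's
# URIs, so that typing a name lands you on the right thing in the story's
# web. Index entries and URL paths are invented for illustration.

index = {
    "jack bauer": "/24/characters/jack-bauer",
    "ctu": "/24/places/ctu",
}

def portal_search(query):
    """Map a viewer's search straight to a thing's URI, or None."""
    return index.get(query.strip().lower())
```

In a real system the index would of course be richer than an exact-match dictionary, but the shape is the point: search terms resolve to things, not to pages of results.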

But there’s an even more advanced version of this, which I strongly believe could be prototyped and developed pretty quickly. There are technologies available which can take a drama script, and output RDF triples, creating Web structures which represent every element of the narrative, down to the words. These can also be enhanced by matching the triples to timing information within a media representation – so, for instance, identifying that an event happens at 20 minutes into this particular version of the episode, but 15 minutes into another version.
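I won’t reproduce real RDF here, but the shape of that timing data can be sketched with simple mappings – RDF-like in spirit, plain Python in practice, and with every identifier invented:

```python
# The same narrative event, anchored at different offsets in different
# versions of the episode. All event and version identifiers are made up.

# Which versions carry which narrative events:
occurs_in = {
    "event:the-reveal": ["version:broadcast", "version:directors-cut"],
}

# When the event happens in each version (minutes into playback):
offset_minutes = {
    ("event:the-reveal", "version:broadcast"): 20,
    ("event:the-reveal", "version:directors-cut"): 15,
}

def when(event, version):
    """Look up where in this particular cut the event occurs."""
    return offset_minutes.get((event, version))
```

The event itself is one thing with one identity; only its anchoring into each media representation differs, which is exactly what lets the same narrative structure serve every cut of the programme.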

Couple this with the growing links between the consumption of media and the Internet – TV over IP, such as BT Vision, or even on-demand services such as iPlayer. The media is being streamed to the audience – but this is potentially a two-way channel – and if we have all the information about the narrative structure and timings for the programme available on the Web, then the user can access that wealth of information whilst they are watching – either directly on the screen, or on a supplementary mobile device.

Just think of what this means for drama, for starters. The ‘flashback’ device in storytelling, essentially used to give the audience a reminder of previous events so that they can more fully enjoy the current story, no longer needs to be incorporated into the linear representation of the story – because as the story is produced, it is connected on the Web to all previous parts of the story. Thus, if the audience wishes to learn more about something, or get a reminder of previous events, they can access them. If you were watching a programme, you could pause it, or activate your mobile device – the playback device would know the timing information of the audience’s action, and could query the Web to find the relevant URIs of information, and present the knowledge found there back to the user. For instance, if a reference was made on-screen to a past event, rather than the production team having to add in a flashback sequence, the audience could activate the communication at the point of reference, and be presented with the original clip of that event happening.

Taken further, this then starts to really break down the linearly-imposed walls between ‘episodes’ of programmes – which are, of course, relics of the original linear nature of television – and instead presents the audience with something much more suited to their own mental models of the narrative they are consuming. In the end, it wouldn’t really matter what episode you were watching – you could be freely exploring the whole universe of narrative, surfing from clip to clip, consuming the story in the order you prefer. Obviously it’s not something you’d want to be doing constantly, but it brings the freedom of the Web to the self-imposed closed structure of the television – and opens up whole new ways of experiencing the stories we tell.
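To give a flavour of that pause-and-query interaction (again with invented data, and glossing over all the hard parts of real playback devices): the device knows which version is playing and when the viewer paused, and asks the narrative data for everything referenced up to that point.

```python
# A sketch of pause-and-ask: given the version being watched and the
# pause timestamp, return the URIs of events the viewer has already
# seen. The timeline data is invented for illustration.

timeline = {
    "version:broadcast": [
        (5,  "event:the-meeting"),
        (20, "event:the-reveal"),
        (35, "event:the-chase"),
    ],
}

def context_at(version, paused_at_minutes):
    """URIs of events that have occurred so far in this version."""
    return [uri for t, uri in timeline[version] if t <= paused_at_minutes]
```

Each returned URI is a door into the story’s web – a reminder clip, related events, characters – which is what replaces the hand-built flashback sequence.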

As I mentioned, these are only fresh ideas being formed as we speak, so I’m sure the solution isn’t completely straightforward, but it really does seem that all the various puzzle pieces exist, they just need to be brought together – and the potential could be huge.