Latest Posts (17 found)
Lea Verou 1 week ago

Web dependencies are broken. Can we fix them?

Abstraction is the cornerstone of modern software engineering. Reusing logic and building higher-level solutions from lower-level building blocks is what makes all the technological wonders around us possible. Imagine if every time anyone wrote a calculator they also had to reinvent floating-point arithmetic and string encoding! And yet, the web platform has outsourced this fundamental functionality to third-party tooling. As a result, code reuse has become a balancing act of tradeoffs that should not have existed in the first place. In NodeJS, you just npm install and reference specifiers straight away in your code. Same in Python, with pip install. Same in Rust, with cargo add. In healthy ecosystems you don’t ponder how or whether to use dependencies. The ecosystem assumes dependencies are normal, cheap, and first-class. You just install them, use them, and move on. “Dependency-free” is not a badge of honor. Instead, dependency management in the web platform consists of bits and bobs of scattered primitives, with no coherent end-to-end solution. Naturally, bundlers such as Webpack, rollup, and esbuild have picked up the slack, with browserify being the one that started it all, in 2012. There is nothing wrong with bundlers when used as a performance optimization to minimize waterfall effects and overhead from too many HTTP requests. You know, what a bundler is supposed to do. It is okay to require advanced tools for advanced needs, and performance optimization is generally an advanced use case. Same for most other things bundlers and build tools are used for, such as strong typing, linting, or transpiling. All of these are needs that come much later than dependency management, both in a programmer’s learning journey and in a project’s development lifecycle. Dependency management is such a basic and ubiquitous need that it should be part of the platform, decoupled from bundling. Requiring advanced tools for basic needs is a textbook usability cliff. In other ecosystems, optimizations happen (and are learned) after dependency resolution. On the web, optimization is the price of admission! This is not normal. Bundlers have become so ubiquitous that most JS developers cannot even imagine deploying code without them. READMEs are written assuming a bundler, without even mentioning the assumption. It’s just how JS is consumed. My heart breaks for the newbie trying to use a drag and drop library, only to get mysterious errors about specifiers that failed to resolve. However, bundling is not technically a necessary step of dependency management. Importing files through URLs is natively supported in every browser, via ESM imports. HTTP/2 makes importing multiple small files far more reasonable than it used to be — at least from a connection overhead perspective. You can totally get by without bundlers in a project that doesn’t use any libraries. But the moment you add that first dependency, everything changes. You are suddenly faced with a huge usability cliff: which bundler to use, how to configure it, how to deploy with it, a mountain of decisions standing between you and your goal of using that one dependency. That one drag and drop library. For newcomers, this often comes very early in their introduction to the web platform, and it can be downright overwhelming. It is technically possible to use dependencies without bundlers, today.
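To make the contrast concrete, here is a minimal sketch of the workflow other runtimes take for granted, versus what happens when you try the exact same import in a browser with no additional setup (the package name is purely illustrative):

```js
// In Node (after running `npm install some-package`), a bare specifier just works:
import { something } from "some-package";
something();

// In a browser <script type="module">, the very same import line fails with
// an error along the lines of:
//   Uncaught TypeError: Failed to resolve module specifier "some-package".
//   Relative references must start with "/", "./", or "../".
// Bare specifiers simply do not resolve without a bundler or an import map.
```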
There are a few different approaches, and — I will not sugarcoat it — they all suck. There are three questions here: do we use specifiers or URLs? How do we resolve specifiers to URLs? And which URL do my dependencies live at? There is currently no good answer to any of them, only fragile workarounds held together by duct tape. Using a dependency should not need any additional song and dance besides “install this package” + “now import it here”. That’s it. That’s the minimum necessary to declare intent. And that’s precisely how it works in NodeJS and other JS runtimes. Anything beyond that is reducing signal-to-noise ratio, especially if it needs to be done separately for every project or, worse, for every dependency. You may need to have something to bite hard on while reading the next few sections. It’s going to be bad. Typically, package managers like npm take care of deduplicating compatible package versions and may use a directory like node_modules to install packages. In theory, one could deploy node_modules as part of their website and directly reference files in client-side JS. For example, to use Vue, you would import it straight from a path inside node_modules (there is a sketch at the end of this section). It works out of the box, and is a very natural thing to try the first time you install a package and notice the node_modules directory. Great, right? No. Not great. First, deploying your entire node_modules directory is both wasteful and a security risk. In fact, most serverless hosts (e.g. Netlify or Vercel) automatically remove it from the publicly deployed files after the build is finished. Additionally, it violates encapsulation: paths within a package are generally seen as an implementation detail of the package itself, and packages expose specifier exports that they map to internal paths. If you decide to circumvent this and link to files directly, you now need to update your import paths whenever you update the package. It is also fragile, as not every module is installed directly in node_modules — though those explicitly marked as app dependencies are. Another common path is importing from CDNs like Unpkg and JSDelivr. For Vue, it would look similar, just with a CDN URL instead of a local path (again, see the sketch at the end of this section). It’s quick and easy. Nothing to install or configure! Great, right? No. Not great. It is always a bad idea to introduce a dependency on a whole other domain you do not control, and an even worse one when linking to executable code. First, there is the obvious security risk. Unless you link to a specific version, down to the patch number, and/or use SRI, the resource could turn malicious overnight under your nose if the package is compromised. And even if you link to a specific version, there is always the risk that the CDN itself could get compromised. Who remembers polyfill.io? But even supply-chain attacks aside, any third-party domain is an unnecessary additional point of failure. I still remember scrambling to change JSDelivr URLs to Unpkg during an outage right before one of my talks, or having to hunt down all my repos that used RawGit URLs when it was sunset, including many libraries. The DX is also suboptimal. You lose the immediacy and resilience of local, relative paths. Without additional tooling (Requestly, file edits, etc.), you now need to wait for CDN roundtrips even during local development. Wanted to code on a flight? Good luck. Needed to show a live demo during a talk, over clogged conference wifi? Maybe sacrifice a goat to the gods first. And while CDNs maintain encapsulation slightly better than raw file imports, as they let you reference a package by its name for its default export, additional specifiers typically still require importing by file path.
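Here is roughly what the two approaches above look like for Vue. The dist file name is an internal implementation detail of the vue package (which is precisely part of the problem), and the pinned version is just an example:

```js
// Approach 1: deploy node_modules and reference a file inside it directly
import { createApp } from "/node_modules/vue/dist/vue.esm-browser.js";

// Approach 2: skip local files entirely and import from a third-party CDN
// import { createApp } from "https://unpkg.com/vue@3.4.0/dist/vue.esm-browser.js";

createApp({ data: () => ({ count: 0 }) }).mount("#app");
```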
“But with public CDNs, I benefit from the resource having already been cached by another website the user visited!” Oh my sweet summer child. I hate to be the one to break it to you, but no, you don’t, and that has been the case since about 2020. Double-keyed caching obliterated this advantage. In case you were not aware, yes, your browser will redownload every single resource anew for every single website (origin) that requests it. Yes, even if it’s exactly the same. This changed to prevent cross-site leaks: malicious websites could exfiltrate information about your past network activity by measuring how long a resource took to download, and thus infer whether it was cached. Those who have looked into this problem claim that there is no other way to prevent these timing attacks other than to actually redownload the resource. No way for the browser to even fake a download by simply delaying the response. Even requiring resources to opt in (e.g. via CORS) was ruled out, the concern being that websites could then use it as a third-party tracking mechanism. I personally have trouble accepting that such wasteful bandwidth usage was the best balance of tradeoffs for all Web users, including those in emerging economies and different locales [1]. It’s not that I don’t see the risks — it’s that I am acutely aware of the cost, a cost that is disproportionately borne by those not in the Wealthy Western Web. How likely is it that a Web user in Zimbabwe, where 1 GB of bandwidth costs 17% of the median monthly income, would choose to download React or nine weights of Roboto thousands of times to avoid seeing personalized ads? And how patronizing is it for people in California to be making this decision for them? A quick and dirty way to get local URLs for local development and CDN URLs for the remote site is to link to relative URLs, and add a URL rewrite to a CDN if the local file is not found. E.g. with Netlify rewrites, this is a short rule that proxies the local dependency path to a CDN URL whenever the file does not exist locally. Since node_modules is not deployed, the rewrite will always kick in on the remote site, while still allowing for local URLs during development. Great, right? No. Not great. Like the mythical hydra, it solves one problem and creates two new ones. First, it still carries many of the same issues of the approaches it combines: linking to CDNs is inherently insecure, and it breaks encapsulation of the dependencies. Additionally, it introduces a new problem: the local file and the CDN file need to match, but the naïve approach above would always just link to the latest version. Sure, one could alleviate this by generating the rewrites with tooling, linking to specific versions read from the lockfile. But the point is not that it’s insurmountable, but that it should not be this hard. Another solution is a lightweight build script that copies either entire packages or specific exports into a directory that will actually get deployed. When dependencies are few, this can be as simple as an npm script that copies the files over after install. So now we have our own nice subset of node_modules and we don’t depend on any third-party domains. Great, right? No. Not great. Just like most of the other solutions, this still breaks encapsulation, forcing us to maintain a separate, ad-hoc index of specifiers to file paths. Additionally, it has no awareness of the dependency graph. Dependencies of dependencies need to be copied separately. But wait a second. Did I say dependencies of dependencies? How would that even work? In addition to their individual flaws, all of the solutions above share a major flaw: they can only handle importing dependency-free packages. But what happens if the package you’re importing also uses dependencies?
It gets unimaginably worse, my friend, that’s what happens. There is no reasonable way for a library author to link to dependencies without excluding certain consumer workflows. There is no local URL a library author can use to reliably link to dependencies, and CDN URLs are highly problematic. Specifiers are the only way here. So the moment you include a dependency that uses dependencies, you’re forced into specifier-based dependency management workflows, whether these are bundlers, or import map flavored JSON vomit in every single HTML page (discussed later). As a fig leaf, libraries will often provide a “browser” bundle that consumers can import instead of their normal entry point, which does not use specifiers. This combines all their dependencies into a single dependency-free file that you can import from a browser. This means they can use whatever dependencies they want, and you can still import that bundle using regular ESM imports in a browser, sans bundler. Great, right? No. Not great. It’s called a bundle for a reason. It bundles all their dependencies too, and now they cannot be shared with any other dependency in your tree, even if it’s exactly the same version of exactly the same package. You’re not avoiding bundling, you’re outsourcing it, and multiplying the size of your JS code in the process. And if the library author has not done that, you’re stuck with little to do, besides a CDN that rewrites specifiers on the fly like esm.sh, with all the CDN downsides described above. As someone who regularly releases open source packages (some with billions of npm installs), I find this incredibly frustrating. I want to write packages that can be consumed by people using or not using bundlers, without penalizing either group, but the only way to do that today is to basically not use any dependencies. I cannot even modularize my own packages without running into this! This doesn’t scale. Browsers can import specifiers, as long as the mapping to a URL is explicitly provided through an import map. Import maps look like the sketch at the end of this section. Did you notice something about it? Yes, it is an HTML block. No, I cannot link to an import map that lives in a separate file. Instead, I have to include the darn thing in. Every. Single. Page. The moment you decide to use JS dependencies, you now need an HTML templating tool as well. 🙃 “💡 Oh I know, I’ll generate this from my library via DOM methods!” I hear you say. No, my sweet summer child. It needs to be present at parse time. So unless you’re willing to document.write() it (please don’t), the answer is a big flat NOPE. “💡 Ok, at least I’ll keep it short by routing everything through a CDN or the same local folder” No, my sweet summer child. Go to sleep and dream of globs and URLPatterns. Then wake up and get to work, because you actually need to specify. Every. Single. Mapping. Yes, transitive dependencies too. You wanted to use dependencies? You will pay with your blood, sweat, and tears. Or, well, another build tool. So now I need a build tool to manage the import map, like JSPM. It also needs to talk to my HTML templating tool, which I now had to add so it can spit out these import maps on. Every. Single. HTML. Page. There are three invariants that import maps violate: locality, composability, and scalability (expanded in the list at the end of this post). Plus, you still have all of the issues discussed above, because you still need URLs to link to. By trying to solve your problem with import maps, you now have multiple problems.
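For reference, since the inline example did not survive into this feed, an import map looks roughly like this (the package names and URLs are illustrative):

```html
<script type="importmap">
{
	"imports": {
		"vue": "/js/vendor/vue.esm-browser.js",
		"lodash-es": "https://unpkg.com/lodash-es@4.17.21/lodash.js"
	}
}
</script>
<script type="module">
	// Bare specifiers now resolve in the browser, but only because this
	// mapping is inlined into every single HTML page that needs it.
	import { createApp } from "vue";
</script>
```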
To sum up, in their current form, import maps don’t eliminate bundlers — they recreate them in JSON form, while adding an HTML dependency and worse latency. Given the current state of the ecosystem, not using bundlers in any nontrivial application does seem like an exercise in masochism. Indeed, per State of JS 2024 , bundlers were extremely popular, with Webpack having been used by 9 in 10 developers and having close to 100% awareness! But sorting by sentiment paints a different picture, with satisfaction, interest, and positivity dropping year after year. Even those who never question the status quo can feel it in their gut that this is not okay. This is not a reasonable way to manage dependencies. This is not a healthy ecosystem. Out of curiosity, I also ran two polls on my own social media. Obviously, this suffers from selection bias , due to the snowball sampling nature of social media, but I was still surprised to see such a high percentage of bundle-less JS workflows: I’m very curious how these folks manage the problems discussed here. Oftentimes when discussing these issues, I get the question “but other languages are completely compiled, why is it a problem here?”. Yes, but their compiler is official and always there. You literally can’t use the language without it. The problem is not compilation, it’s fragmentation. It’s the experience of linking to a package via a browser import only to see errors about specifiers. It’s adding mountains of config and complexity to use a utility function. It’s having no clear path to write a package that uses another package, even if both are yours. Abstraction itself is not something to outsource to third-party tools. This is the programming equivalent of privatizing fundamental infrastructure — roads, law enforcement, healthcare — systems that work precisely because everyone can rely on them being there. Like boiling frogs , JS developers have resigned themselves to immense levels of complexity and gruntwork as simply how things are . The rise of AI introduced swaths of less technical folks to web development and their overwhelm and confusion is forcing us to take a long hard look at the current shape of the ecosystem — and it’s not pretty. Few things must always be part of a language’s standard library, but dependency management is absolutely one of them. Any cognitive overhead should be going into deciding which library to use, not whether to include it and how . This is also actively harming web platform architecture . Because bundlers are so ubiquitous, we have ended up designing the platform around them, when it should be the opposite. For example, because is unreliable when bundlers are used, components have no robust way to link to other resources (styles, images, icons, etc.) relative to themselves, unless these resources can be part of the module tree. So now we are adding features to the web platform that break any reasonable assumption about what HTML, CSS, and JS are, like JS imports for CSS and HTML, which could have been a simple if web platform features could be relied on. And because using dependencies is nontrivial, we are adding features to the standard library that could have been userland or even browser-provided dependencies. To reiterate, the problem isn’t that bundlers exist — it’s that they are the only viable way to get first-class dependency management on the web. JS developers deserve better. The web platform deserves better. 
As a web standards person, my first thought when spotting such a lacking is “how can the web platform improve?”. And after four years in the TAG , I cannot shake the holistic architectural perspective of “which part of the Web stack is best suited for this?” Before we can fix this, we need to understand why it is the way it is. What is the fundamental reason the JS ecosystem overwhelmingly prefers specifiers over URLs? On the surface, people often quote syntax, but that seems to be a red herring. There is little DX advantage of (a specifier) over (a URL), or even (which can be configured to have a JS MIME type). Another oft-cited reason is immutability: Remote URLs can change, whereas specifiers cannot. This also appears to be a red herring: local URLs can be just as immutable as specifiers. Digging deeper, it seems that the more fundamental reason has to do with purview . A URL is largely the same everywhere, whereas can resolve to different things depending on context. A specifier is app-controlled whereas a URL is not. There needs to be a standard location for a dependency to be located and referenced from, and that needs to be app-controlled. Additionally, specifiers are universal . Once a package is installed, it can be imported from anywhere, without having to work out paths. The closest HTTP URLs can get to this is root-relative URLs, and that’s still not quite the same. Specifiers are clearly the path of least resistance here, so the low hanging fruit would be to make it easier to map specifiers to URLs, starting by improving import maps. An area with huge room for improvement here is import maps . Both making it easier to generate and include import maps, and making the import maps themselves smaller, leaner, and easier to maintain. The biggest need here is external import maps , even if it’s only via . This would eliminate the dependency on HTML templating and opens the way for generating them with a simple build tool. This was actually part of the original import map work , and was removed from the spec due to lack of implementer interest, despite overwhelming demand. In 2022, external import maps were prototyped in WebKit (Safari), which prompted a new WHATWG issue . Unfortunately, it appears that progress has since stalled once more. External import maps do alleviate some of the core pain points, but are still globally managed in HTML, which hinders composability and requires heavier tooling. What if import maps could be imported into JS code? If JS could import import maps, (e.g. via ), this would eliminate the dependency on HTML altogether, allowing for scripts to localize their own import info, and for the graph to be progressively composed instead of globally managed. Going further, import maps via an HTTP header (e.g. ) would even allow webhosts to generate them for you and send them down the wire completely transparently. This could be the final missing piece for making dependencies truly first-class. Imagine a future where you just install packages and use specifiers without setting anything up, without compiling any files into other files, with the server transparently handling the mapping ! However, import maps need URLs to map specifiers to, so we also need some way to deploy the relevant subset of to public-facing URLs, as deploying the entire directory is not a viable option. One solution might be a way to explicitly mark dependencies as client side , possibly even specific exports. 
This would decouple detection from processing app files: in complex apps it can be managed via tooling, and in simple apps it could even be authored manually, since it would only include top-level dependencies. Even if we had better ways to mark which dependencies are client-side and map specifiers to URLs, these are still pieces of the puzzle, not the entire puzzle. Without a way to figure out what depends on what, transitive dependencies will still need to be managed globally at the top level, defeating any hope of a tooling-light workflow. The current system relies on reading and parsing thousands of files to build the dependency graph. This is reasonable for a JS runtime where the cost of file reads is negligible, but not for a browser where HTTP roundtrips are costly. And even if it were, this does not account for any tree-shaking. Think of how this works when using URLs: modules simply link to other URLs and the graph is progressively composed through these requests. What if specifiers could work the same way? What if we could look up and route specifiers when they are actually imported? Here’s a radical idea: What if specifiers were just another type of URL , and specifier resolution could be handled by the server in the same way a URL is resolved when it is requested? They could use a protocol, that can be omitted in certain contexts, such as ESM imports. How would these URLs be different than regular local URLs? Architecturally, this has several advantages: Obviously, this is just a loose strawman at this point, and would need a lot of work to turn into an actual proposal (which I’d be happy to help out with, with funding ), but I suspect we need some way to bridge the gap between these two fundamentally different ways to import modules. Too radical? Quite likely. But abstraction is foundational, and you often need radical solutions to fix foundational problems. Even if this is not the right path, I doubt incremental improvements can get us out of this mess for good. But in the end, this is about the problem . I’m much more confident that the problem needs solving, than I am of any particular solution. Hopefully, after reading this, so are you. So this is a call to action for the community. To browser vendors, to standards groups, to individual developers. Let’s fix this! 💪🏼 Thanks to Jordan Harband , Wes Todd , and Anne van Kesteren for reviewing earlier versions of this draft. In fact, when I was in the TAG, Sangwhan Moon and I drafted a Finding on the topic, but the TAG never reached consensus on it. ↩︎ Use specifiers or URLs? How to resolve specifiers to URLs? Which URL do my dependencies live at? Linking to CDNs is inherently insecure It breaks encapsulation of the dependencies Locality : Dependency declarations live in HTML, not JS. Libraries cannot declare their own dependencies. Composability : Import maps do not compose across dependencies and require global coordination Scalability : Mapping every transitive dependency is not viable without tooling Twitter/X poll : 17.6% of respondents Mastodon poll : 40% (!) of respondents Their protocol would be implied in certain contexts — that would be how we can import bare specifiers in ESM Their resolution would be customizable (e.g. through import maps, or even regular URL rewrites) Despite looking like absolute URLs, their resolution would depend on the request’s header (thus allowing different modules to use different versions of the same dependency). A request to a URL without an header would fail. 
HTTP caching would work differently; basically in a way that emulates the current behavior of the JS module cache. It bridges the gap between specifiers and URLs. Rather than having two entirely separate primitives for linking to a resource, it makes specifiers a high-level primitive and URLs the low-level primitive that explains it. It allows retrofitting specifiers into parts of the platform that were not designed for them, such as CSS. This is not theoretical: I was at a session at TPAC where bringing specifiers to CSS was discussed. With this, every part of the platform that takes URLs can now utilize specifiers; it would just need to specify the protocol explicitly.
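Purely as an illustration of the strawman above, and not an actual proposal, such specifier URLs might look something like this (the specifier: protocol name is hypothetical, and the package names are placeholders):

```js
// In ESM imports, the protocol would be implied, so nothing changes here:
import { html } from "some-templating-lib";

// Anywhere else that takes a URL could opt in by spelling the protocol out,
// e.g. a stylesheet importing another package's CSS:
//   @import url("specifier:some-design-system/tokens.css");
// Resolution could then be customized via import maps or plain server-side
// URL rewrites, as described above.
```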

Lea Verou 3 months ago

In the economy of user effort, be a bargain, not a scam

Alan Kay [source] One of my favorite product design principles is Alan Kay’s “Simple things should be simple, complex things should be possible” . [1] I had been saying it almost verbatim long before I encountered Kay’s quote. Kay’s maxim is deceptively simple, but its implications run deep. It isn’t just a design ideal — it’s a call to continually balance friction, scope, and tradeoffs in service of the people using our products. This philosophy played a big part in Prism’s success back in 2012, helping it become the web’s de facto syntax highlighter for years, with over 2 billion npm downloads. Highlighting code on a page took including two files. No markup changes. Styling used readable CSS class names. Even adding new languages — the most common “complex” use case — required far less knowledge and effort than alternatives. At the same time, Prism exposed a deep extensibility model so plugin authors could patch internals and dramatically alter behavior. These choices are rarely free. The friendly styling API increased clash risk, and deep extensibility reduced encapsulation. These were conscious tradeoffs, and they weren’t easy. Simple refers to use cases that are simple from the user’s perspective , i.e. the most common use cases. They may be hard to implement, and interface simplicity is often inversely correlated with implementation simplicity. And which things are complex , depends on product scope . Instagram’s complex cases are vastly different than Photoshop’s complex cases, but as long as there is a range, Kay’s principle still applies. Since Alan Kay was a computer scientist, his quote is typically framed as a PL or API design principle, but that sells it short. It applies to a much, much broader class of interfaces. This class hinges on the distribution of use cases . Products often cut scope by identifying the ~20% of use cases that drive ~80% of usage — aka the Pareto Principle . Some products, however, have such diverse use cases that Pareto doesn’t meaningfully apply to the product as a whole. There are common use cases and niche use cases, but no clean 20-80 split. The long tail of niche use cases is so numerous, it becomes significant in aggregate . For lack of a better term, I’ll call these long‑tail UIs . Nearly all creative tools are long-tail UIs. That’s why it works so well for programming languages and APIs — both are types of creative interfaces. But so are graphics editors, word processors, spreadsheets, and countless other interfaces that help humans create artifacts — even some you would never describe as creative. Yes, programming languages and APIs are user interfaces . If this surprises you, watch my DotJS 2024 talk titled “API Design is UI Design” . It’s only 20 minutes, but covers a lot of ground, including some of the ideas in this post. I include both code and GUI examples to underscore this point; if the API examples aren’t your thing, skip them and the post will still make sense. You wouldn’t describe Google Calendar as a creative tool, but it is a tool that helps humans create artifacts (calendar events). It is also a long-tail product: there is a set of common, conceptually simple cases (one-off events at a specific time and date), and a long tail of complex use cases (recurring events, guests, multiple calendars, timezones, etc.). Indeed, Kay’s maxim has clearly been used in its design. The simple case has been so optimized that you can literally add a one hour calendar event with a single click (using a placeholder title). 
A different duration can be set after that first click through dragging [2] . But almost every edge case is also catered to — with additional user effort. Google Calendar is also an example of an interface that digitally encodes real-life, demonstrating that complex use cases are not always power user use cases . Often, the complexity is driven by life events. E.g. your taxes may be complex without you being a power user of tax software, and your family situation may be unusual without you being a power user of every form that asks about it. The Pareto Principle is still useful for individual features , as they tend to be more narrowly defined. E.g. there is a set of spreadsheet formulas (actually much smaller than 20%) that drives >80% of formula usage. While creative tools are the poster child of long-tail UIs, there are long-tail components in many transactional interfaces such as e-commerce or meal delivery (e.g. result filtering & sorting, product personalization interfaces, etc.). Filtering UIs are another big category of long-tail UIs, and they involve so many tradeoffs and tough design decisions you could literally write a book about just them. Airbnb’s filtering UI here is definitely making an effort to make simple things easy with (personalized! 😍) shortcuts and complex things possible via more granular controls. Picture a plane with two axes: the horizontal axis being the complexity of the desired task (again from the user’s perspective, nothing to do with implementation complexity), and the vertical axis the cognitive and/or physical effort users need to expend to accomplish their task using a given interface. Following Kay’s maxim guarantees these two points: But even if we get these two points — what about all the points in between? There are a ton of different ways to connect them, and they produce vastly different overall user experiences. How does your interface fare when a use case is only slightly more complex? Are users yeeted into the deep end of interface complexity (bad), or do they only need to invest a proportional, incremental amount of effort to achieve their goal (good)? Meet the complexity-to-effort curve , the most important usability metric you’ve never heard of. For delightful user experiences, making simple things easy and complex things possible is not enough — the transition between the two should also be smooth. You see, simple use cases are the spherical cows in space of product design . They work great for prototypes to convince stakeholders, or in marketing demos, but the real world is messy . Most artifacts that users need to create to achieve their real-life goals rarely fit into your “simple” flows completely, no matter how well you’ve done your homework. They are mostly simple — with a liiiiitle wart here and there. For a long-tail interface to serve user needs well in practice , we also need to design the curve, not just its endpoints . A model with surprising predictive power is to treat user effort as a currency that users are spending to buy solutions to their problems. Nobody likes paying it; in an ideal world software would read our mind and execute perfectly with zero user effort. Since we don’t live in such a world, users are typically willing to pay more in effort when they feel their use case warrants it. Just like regular pricing, actual user experience often depends more on the relationship between cost and expectation (budget) than on the absolute cost itself. If you pay more than you expected, you feel ripped off. 
You may still pay it because you need the product in the moment, but you’ll be looking for a better deal in the future. And if you pay less than you expected, you feel like you got a bargain, with all the delight and loyalty that entails. Incremental user effort cost should be proportional to incremental value gained. Suppose you’re ordering pizza. You want a simple cheese pizza with ham and mushrooms. You use the online ordering system, and you notice that adding ham to your pizza triples its price. We’re not talking some kind of fancy ham where the pigs were fed on caviar and bathed in champagne, just a regular run-of-the-mill pizza topping. You may still order it if you’re starving and no other options are available, but how does it make you feel? It’s not that different when the currency is user effort. The all too familiar “But I just wanted to _________, why is it so hard?”. When a slight increase in complexity results in a significant increase in user effort cost, we have a usability cliff. Usability cliffs make users feel resentful, just like the customers of our fictitious pizza shop. A usability cliff is when a small increase in use case complexity requires a large increase in user effort. Usability cliffs are very common in products that make simple things easy and complex things possible through entirely separate flows with no integration between them: a super high-level one that caters to the most common use case with little or no flexibility, and a very low-level one that is an escape hatch: it lets users do whatever, but they have to recreate the solution to the simple use case from scratch before they can tweak it. Simple things are certainly easy: all we need to get a video with a nice sleek set of controls that work well on every device is a single attribute: controls. We just slap it on our <video> element and we’re done with a single line of HTML. Now suppose use case complexity increases just a little. Maybe I want to add buttons to jump 10 seconds back or forwards. Or a language picker for subtitles. Or just to hide the volume control on a video that has no audio track. None of these are particularly niche, but the default controls are all-or-nothing: the only way to change them is to reimplement the whole toolbar from scratch, which takes hundreds of lines of code to do well. Simple things are easy and complex things are possible. But once use case complexity crosses a certain (low) threshold, user effort abruptly shoots up. That’s a usability cliff. For Instagram’s photo editor, the simple use case is canned filters, whereas the complex ones are those requiring tweaking through individual low-level controls. However, they are implemented as separate flows: you can tweak the filter’s intensity, but you can’t see or adjust the primitives it’s built from. You can layer both types of edits on the same image, but they are additive, which doesn’t work well. Ideally, the two panels would be integrated, so that selecting a filter would adjust the low-level controls accordingly, which would facilitate incremental tweaking AND would serve as a teaching aid for how filters work. My favorite end-user facing product that gets this right is Coda, a cross between a document editor, a spreadsheet, and a database. All over its UI, it supports entering formulas instead of raw values, which makes complex things possible. To make simple things easy, it also provides the GUI you’d expect even without a formula language.
But here’s the twist: these presets generate formulas behind the scenes that users can tweak ! Whenever users need to go a little beyond what the UI provides, they can switch to the formula editor and adjust what was generated — far easier than writing it from scratch. Another nice touch: “And” is not just communicating how multiple filters are combined, but is also a control that lets users edit the logic. Defining high-level abstractions in terms of low-level primitives is a great way to achieve a smooth complexity-to-effort curve, as it allows you to expose tweaking at various intermediate levels and scopes. The downside is that it can sometimes constrain the types of high-level solutions that can be implemented. Whether the tradeoff is worth it depends on the product and use cases. If you like eating out, this may be a familiar scenario: — I would like the rib-eye please, medium-rare. — Thank you sir. How would you like your steak cooked? Keep user effort close to the minimum necessary to declare intent Annoying, right? And yet, this is how many user interfaces work; expecting users to communicate the same intent multiple times in slightly different ways. If incremental value should require incremental user effort , an obvious corollary is that things that produce no value should not require user effort . Using the currency model makes this obvious: who likes paying without getting anything in return? Respect user effort. Treat it as a scarce resource — just like regular currency — and keep it close to the minimum necessary to declare intent . Do not require users to do work that confers them no benefit, and could have been handled by the UI. If it can be derived from other input, it should be derived from other input. Source: NNGroup (adapted). A once ubiquitous example that is thankfully going away, is the credit card form which asks for the type of credit card in a separate dropdown. Credit card numbers are designed so that the type of credit card can be determined from the first four digits. There is zero reason to ask for it separately. Beyond wasting user effort, duplicating input that can be derived introduces an unnecessary error condition that you now need to handle: what happens when the entered type is not consistent with the entered number? User actions that meaningfully communicate intent to the interface are signal . Any other step users need to take to accomplish their goal, is noise . This includes communicating the same input more than once, providing input separately that could be derived from other input with complete or high certainty, transforming input from their mental model to the interface’s mental model, and any other demand for user effort that does not serve to communicate new information about the user’s goal. Some noise is unavoidable. The only way to have 100% signal-to-noise ratio would be if the interface could mind read. But too much noise increases friction and obfuscates signal. A short yet demonstrative example is the web platform’s methods for programmatically removing an element from the page. To signal intent in this case, the user needs to communicate two things: (a) what they want to do (remove an element), and (b) which element to remove. Anything beyond that is noise. The modern DOM method has an extremely high signal-to-noise ratio. It’s hard to imagine a more concise way to signal intent. However, the older method that it replaced had much worse ergonomics. It required two parameters: the element to remove, and its parent. 
But the parent is not a separate source of truth — it would always be the child node’s parent! As a result, its actual usage involved boilerplate, where developers had to write the much noisier element.parentNode.removeChild(element) [3]. Boilerplate is repetitive code that users need to include without thought, because it does not actually communicate intent. It’s the software version of red tape: hoops you need to jump through to accomplish your goal, that serve no obvious purpose in furthering said goal except for the fact that they are required. In this case, the amount of boilerplate may seem small, but when viewed as a percentage of the total amount of code, the difference is staggering. The exact ratio (81% vs 20% here) varies based on specifics such as variable names, but when the difference is meaningful, it transcends these types of low-level details. Of course, it was usually encapsulated in utility functions, which provided a similar signal-to-noise ratio as the modern method. However, user-defined abstractions don’t come for free; there is an effort (and learnability) tax there, too. Improving signal-to-noise ratio is also why the front-end web industry gravitated towards component architectures: they increase signal-to-noise ratio by encapsulating boilerplate. As an exercise for the reader, try to calculate the signal-to-noise ratio of a Bootstrap accordion (or any other complex Bootstrap component). When pointing out friction issues in design reviews, I have sometimes heard “users have not complained about this”. This reveals a fundamental misunderstanding about the psychology of user feedback. Users are much more vocal about things not being possible, than about things being hard. The reason becomes clear if we look at the neuroscience of each. Friction is transient in working memory (prefrontal cortex). After completing a task, details fade. The negative emotion persists and accumulates, but filing a complaint requires prefrontal engagement that is brief or absent. Users often can’t articulate why the software feels unpleasant: the specifics vanish; the feeling remains. Hard limitations, on the other hand, persist as conscious appraisals. The trigger doesn’t go away, since there is no workaround, so it’s far more likely to surface in explicit user feedback. Both types of pain points cause negative emotions, but friction is primarily processed by the limbic system (emotion), whereas hard limitations remain in the prefrontal cortex (reasoning). This also means that when users finally do reach the breaking point and complain about friction, you better listen. Second, user complaints are filed when there is a mismatch in expectations. Things are not possible but the user feels they should be, or interactions cost more user effort than the user had budgeted, e.g. because they know that a competing product offers the same feature for less (work). Often, users have been conditioned to expect poor user experiences, either because all options in the category are high friction, or because the user is too novice to know better [4]. So they begrudgingly pay the price, and don’t think they have the right to complain, because it’s just how things are.
You might ask, “If all competitors are equally high-friction, how does this hurt us?” An unmet need is a standing invitation to disruption that a competitor can exploit at any time. Because you’re not only competing within a category; you’re competing with all alternatives — including nonconsumption (see Jobs‑to‑be‑Done ). Even for retention, users can defect to a different category altogether (e.g., building native apps instead of web apps). Historical examples abound. When it comes to actual currency, a familiar example is Airbnb : Until it came along, nobody would complain that a hotel of average price is expensive — it was just the price of hotels. If you couldn’t afford it, you just couldn’t afford to travel, period. But once Airbnb showed there is a cheaper alternative for hotel prices as a whole , tons of people jumped ship. It’s no different when the currency is user effort. Stripe took the payment API market by storm when it demonstrated that payment APIs did not have to be so high friction. iPhone disrupted the smartphone market when it demonstrated that no, you did not have to be highly technical to use a smartphone. The list goes on. Unfortunately, friction is hard to instrument. With good telemetry you can detect specific issues (e.g., dead clicks), but there is no KPI to measure friction as a whole. And no, NPS isn’t it — and you’re probably using it wrong anyway . Instead, the emotional residue from friction quietly drags many metrics down (churn, conversion, task completion), sending teams in circles like blind men touching an elephant . That’s why dashboards must be paired with product vision and proactive, first‑principles product leadership . Steve Jobs exemplified this posture: proactively, aggressively eliminating friction presented as “inevitable.” He challenged unnecessary choices, delays, and jargon, without waiting for KPIs to grant permission. Do mice really need multiple buttons? Does installing software really need multiple steps? Do smartphones really need a stylus? Of course, this worked because he had the authority to protect the vision; most orgs need explicit trust to avoid diluting it. So, if there is no metric for friction, how do you identify it? Reducing friction rarely comes for free, just because someone had a good idea. These cases do exist, and they are great, but it usually takes sacrifices. And without it being an organizational priority, it’s very hard to steer these tradeoffs in that direction. The most common tradeoff is implementation complexity. Simplifying user experience is usually a process of driving complexity inwards and encapsulating it in the implementation. Explicit, low-level interfaces are far easier to implement, which is why there are so many of them. Especially as deadlines loom, engineers will often push towards externalizing complexity into the user interface, so that they can ship faster. And if Product leans more data-driven than data-informed, it’s easy to look at customer feedback and conclude that what users need is more features ( it’s not ) . The first faucet is a thin abstraction : it exposes the underlying implementation directly, passing the complexity on to users, who now need to do their own translation of temperature and pressure into amounts of hot and cold water. It prioritizes implementation simplicity at the expense of wasting user effort. The second design prioritizes user needs and abstracts the underlying implementation to support the user’s mental model. 
It provides controls to adjust the water temperature and pressure independently, and internally translates them to the amounts of hot and cold water. This interface sacrifices some implementation simplicity to minimize user effort. This is why I’m skeptical of blanket calls for “simplicity.”: they are platitudes. Everyone agrees that, all else equal, simpler is better. It’s the tradeoffs between different types of simplicity that are tough. In some cases, reducing friction even carries tangible financial risks, which makes leadership buy-in crucial. This kind of tradeoff cannot be made by individual designers — it requires usability as a priority to trickle down from the top of the org chart. The Oslo airport train ticket machine is the epitome of a high signal-to-noise interface. You simply swipe your credit card to enter and you swipe your card again as you leave the station at your destination. That’s it. No choices to make. No buttons to press. No ticket. You just swipe your card and you get on the train. Today this may not seem radical, but back in 2003, it was groundbreaking . To be able to provide such a frictionless user experience, they had to make a financial tradeoff: it does not ask for a PIN code, which means the company would need to simply absorb the financial losses from fraudulent charges (stolen credit cards, etc.). When user needs are prioritized at the top, it helps to cement that priority as an organizational design principle to point to when these tradeoffs come along in the day-to-day. Having a design principle in place will not instantly resolve all conflict, but it helps turn conflict about priorities into conflict about whether an exception is warranted, or whether the principle is applied correctly, both of which are generally easier to resolve. Of course, for that to work everyone needs to be on board with the principle. But here’s the thing with design principles (and most principles in general): they often seem obvious in the abstract, so it’s easy to get alignment in the abstract. It’s when the abstract becomes concrete that it gets tough. The Web Platform has its own version of this principle, which is called Priority of Constituencies : “User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.” This highlights another key distinction. It’s more nuanced than users over developers; a better framing is consumers over producers . Developers are just one type of producer. The web platform has multiple tiers of producers: Even within the same tier there are producer vs consumer dynamics. When it comes to web development libraries, the web developers who write them are producers and the web developers who use them are consumers. This distinction also comes up in extensible software, where plugin authors are still consumers when it comes to the software itself, but producers when it comes to their own plugins. It also comes up in dual sided marketplace products (e.g. Airbnb, Uber, etc.), where buyer needs are generally higher priority than seller needs. In the economy of user effort, the antithesis of overpriced interfaces that make users feel ripped off are those where every bit of user effort required feels meaningful and produces tangible value to them. The interface is on the user’s side, gently helping them along with every step, instead of treating their time and energy as disposable. 
The user feels like they’re getting a bargain: they get to spend less than they had budgeted for! And we all know how motivating a good bargain is. User effort bargains don’t have to be radical innovations; don’t underestimate the power of small touches. A zip code input that auto-fills city and state, a web component that automatically adapts to its context without additional configuration, a pasted link that automatically defaults to the website title (or the selected text, if any), a freeform date that is correctly parsed into structured data, a login UI that remembers whether you have an account and which service you’ve used to log in before, an authentication flow that takes you back to the page you were on before. Sometimes many small things can collectively make a big difference. In some ways, it’s the polar opposite of death by a thousand paper cuts: life by a thousand sprinkles of delight! 😀 In the end, “simple things simple, complex things possible” is table stakes. The key differentiator is the shape of the curve between those points. Products win when user effort scales smoothly with use case complexity, cliffs are engineered out, and every interaction declares a meaningful piece of user intent. That doesn’t just happen by itself. It involves hard tradeoffs, saying no a lot, and prioritizing user needs at the organizational level. Treating user effort like real money forces you to design with restraint. A rule of thumb is to place the pain where it’s best absorbed, by prioritizing consumers over producers. Do this consistently, and the interface feels delightful in a way that sticks. Delight turns into trust. Trust into loyalty. Loyalty into product-market fit. Kay himself replied on Quora and provided background on this quote. Don’t you just love the internet? ↩︎ Yes, typing can be faster than dragging, but minimizing homing between input devices improves efficiency more; see KLM ↩︎ Yes, today it would have been element.parentNode?.removeChild(element), which is a little less noisy, but this was before the optional chaining operator. ↩︎ When I was running user studies at MIT, I often had users exclaim “I can’t believe it! I tried to do the obvious simple thing and it actually worked!” ↩︎

Lea Verou 16 years ago

CSS3 colors, today (MediaCampAthens session)

Yesterday, I had a session at MediaCampAthens (a BarCamp-style event), regarding CSS3 colors. If you’ve followed my earlier posts tagged with “colors”, my presentation was mostly a sum-up of these. It was my first presentation ever, actually the first time I talked to an audience for more than 1 minute :P. This caused some goofs. Also, I had prepared some screenshots (you’ll see them in the ppt) and the projector completely screwed them up, as it showed any dark color as black. Apart from those, I think it went very well: I received lots of positive feedback about it and the audience was paying attention, so I guess they found it interesting (something that I didn’t expect :P). Here is the presentation, hosted on Slideshare. Please note that Slideshare messed up slide #8 and the background seems semi-transparent grey instead of semi-transparent white. By the way, I also thought afterwards that I had made a mistake: -ms-filter is not required if we combine the gradient filter with Data URIs, since IE8 supports Data URIs (for images at least). Oops, I hate making mistakes that I can’t correct. Here are some photos from my session. If I did it correctly, every Facebook user can see them. If I messed things up, tell me :P

Lea Verou 16 years ago

CMYK colors in CSS: Useful or useless?

As someone who dealt a bit with print design in the past, I consider CMYK colors the easiest color system for humans to understand and manipulate. It’s very similar to what we used as children, when mixing watercolors for our drawings. It makes perfect sense, more than HSL and definitely more than RGB. I understand that most of us are so accustomed to using RGB that we can’t realise that, but try to think for a moment: Which color system would make more sense to you if you had no idea and no experience at all with any of them? Personally, even though I have lots more experience with RGB, given the fact that most of my work will be displayed on screen and not printed on paper, when I think of a color I want, I can instantly find out the percentages of Cyan, Magenta, Yellow and blacK needed to create it. I can’t do that with HSL or RGB, I’d have to play a little bit with the color picker’s sliders. I sometimes start by specifying a color in CMYK and then tweaking it via RGB or HSL to achieve the exact color I need (since the CMYK gamut is smaller than the RGB gamut) and I find that much faster than starting with RGB or HSL right away. Also, when you don’t have a color picker, it’s much easier to create beautiful colors with CMYK than it is with RGB. For example, the CMYK magenta (0% Cyan, 100% Magenta, 0% Yellow, 0% blacK) is a much better color than the RGB magenta (255 Red, 0 Green, 255 Blue). Given the above, I’ve always thought how much I wanted to be able to specify CMYK colors in my CSS. I agree that sometimes this would result in crippling myself, since as I said above the CMYK gamut is smaller, but it has other significant advantages that I think would make it a useful option for some people. There are algorithms available for CMYK to RGB conversion, and the browser could use those to display the specified color on the screen. Then, if the user decided to print the page, the CMYK colors could be used as-is for the printer. That is another advantage in itself, as none of the current CSS color formats allow us to control that. People who don’t find the CMYK color system easier to understand could still use it for their print stylesheets. Also, graphic designers who decided to switch to web design would find it much easier to specify color values in a format they are already comfortable with. To sum it up, I think this option would provide us with all of the advantages outlined above, and the format is very easy to imagine (there is a syntax sketch at the end of this post). So, what do you think? Useful or useless? Edit: As it turns out, I’m not crazy! The W3C already considers this for CSS3, using a format with components from 0 to 1! However, no browser supports it yet, not even Webkit nightlies… :(
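The syntax sketch from the original post did not survive into this feed, but it is easy to imagine something along these lines. The cmyk() function below is hypothetical; device-cmyk() is the form that later appeared in CSS Color specification drafts, still with essentially no browser support:

```css
/* Hypothetical percentage-based syntax, mirroring rgb() and hsl() */
h1 { color: cmyk(0%, 100%, 0%, 0%); }  /* the CMYK magenta mentioned above */

/* The 0–1 form that later appeared in CSS Color drafts */
h1 { color: device-cmyk(0 1 0 0); }
```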

Lea Verou 16 years ago

On native, single-input, multiple file uploads

If you are following the current news on web development, you have probably heard that the new Safari 4 has a great feature: it natively allows the user to select multiple files via a single input control, if you specify a value for the multiple attribute (or multiple="multiple" in XHTML). You might not know that Opera has supported multiple file uploads for a while now, based on the earlier Web Forms 2.0 standard, in a slightly different (and more flexible) format that uses min and max attributes. Can we use these today? Sure we can, but we should provide fallbacks for the other browsers. Using these features will put pressure on the other browser vendors to implement them as well and generally, native is always better. Opera supports accessing those min and max attributes as properties of the input element. So, it’s quite trivial to check whether Opera-style multiple inputs are supported. In Safari 4 the check would be equally simple, if it supported accessing the multiple attribute as a property: then we could easily check whether it’s a boolean and conclude that Safari-style multiple inputs are supported. However, that’s currently not the case. The good news is that I reported this as a bug today, and the Webkit team fixed it, so it will be possible in the next Webkit nightly! You can easily combine these two checks together with the workaround you prefer (there is a sketch of all of this at the end of this post). Ok, we all know that IE will probably take years to implement similar functionality. But usually, the Mozilla team implements new and exciting stuff quite fast. As it turns out, there is a relevant ticket sitting in their Bugzilla for a while now. If you want them to implement it, vote for it so that its priority increases. If they do implement it in the way suggested, the detection code will work for that too, without any changes - the advantages of feature detection baby! ;)
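The original code samples are missing from this feed, so here is a rough reconstruction of the markup and the feature detection described above. The exact checks are my best guess at the approach, not the original code:

```html
<!-- Safari 4 style (HTML5): a boolean multiple attribute -->
<input type="file" multiple>
<!-- ...or, in XHTML -->
<input type="file" multiple="multiple" />

<!-- Opera / Web Forms 2.0 style: min and max attributes -->
<input type="file" min="1" max="9999">

<script type="text/javascript">
	var input = document.createElement("input");
	input.type = "file";

	// Opera-style support: min/max are exposed as properties of the element
	var operaStyle = typeof input.min != "undefined" && typeof input.max != "undefined";

	// Safari-style support: multiple is exposed as a boolean property
	// (possible once the WebKit bug mentioned above is fixed)
	var safariStyle = typeof input.multiple == "boolean";

	if (!operaStyle && !safariStyle) {
		// fall back to whatever multi-upload workaround you prefer
	}
</script>
```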

0 views
Lea Verou 16 years ago

Check whether the browser supports RGBA (and other CSS3 values)

When using CSS, we can just include both declarations, one using rgba and one without it, as mentioned in my post on cross-browser RGBA backgrounds . When writing JavaScript however, it’s a waste of resources to do that (and requires more verbose code), since we can easily check whether the browser is RGBA-capable, almost as easily as we can check whether it supports a given property . We can even follow the same technique to detect the support of other CSS3 values (for instance, multiple backgrounds support, HSLA support, etc). The technique I’m going to present is based on the fact that when we assign a non-supported CSS value to any supported CSS property, the browser either throws an error AND ignores it (IE-style), or simply ignores it (Firefox-style). Consequently, to check whether RGBA is supported, the algorithm would be: and it would result in the following code: The code above works, but it wastes resources for no reason. Every time the function is called, it tests RGBA support again, even though the result will never change. So, we need a way to cache the result, and return the cached result after the first time the function is called. This can be achieved in many ways. My personal preference is to store the result as a property of the function itself. There is a rare case where the script element might already have the test rgba value set as its color value (don’t ask me why someone would want to do that :P ), in which case our function will return false even if the browser actually supports RGBA. To prevent this, you might want to check whether the property is already set to that value and return true if it is (because if the browser doesn’t support rgba, it will be blank).
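Since the original snippets aren’t included above, here is a sketch along the lines described (the test value and the name of the caching property are my own picks; the extra check for the rare pre-set value mentioned at the end is left out):

```js
function supportsRGBA() {
  // Return the cached result after the first run
  if ('result' in supportsRGBA) {
    return supportsRGBA.result;
  }

  var scriptElement = document.getElementsByTagName('script')[0],
      previousColor = scriptElement.style.color;

  try {
    // IE-style browsers throw on values they don't understand
    scriptElement.style.color = 'rgba(0, 0, 0, 0.5)';
  } catch (e) {}

  // If the assignment "stuck", the value changed and rgba() is supported
  supportsRGBA.result = scriptElement.style.color !== previousColor;
  scriptElement.style.color = previousColor; // restore the original value

  return supportsRGBA.result;
}
```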

0 views
Lea Verou 16 years ago

"Appearances can be deceiving Mr. Anderson" - a.k.a. short code is not always fast code

I used to take pride in my short, bulletproof and elegant String and Number type checks. I always thought that, apart from being short and elegant, they should be fast too. However, some quick tests gave me a cold slap in the face and proved my assertion to be entirely false. When comparing the following 4 methods for string and number type checking, it turned out that the Object.prototype.toString method was 50% faster than my method, and both the typeof and constructor methods were a whopping 150% faster than my method! No wonder jQuery uses the typeof method for their String/Number tests . Now that I think about it, it does actually make sense - my method converts to a String or Number, then concatenates/adds it with another String/Number, then compares value and type. Too much stuff going on there to be fast. But I guess I was too innocent and subconsciously thought that it wouldn’t be fair if elegant, short code wasn’t fast too. Of course the overall time needed for any of these tests was negligible, but it’s a good example of how deceiving appearances can be - even in programming! ;) The moral: never assume, always test. The typeof method and my method fail for non-primitive String/Number objects, as you can easily observe if you try something like new String('foo') in the console. This can easily be solved if you also check the type via instanceof (the decrease in speed is negligible). Don’t use instanceof alone, since it fails for String and Number primitives. The instanceof method also fails for Strings and Numbers created in another window, since their constructor there is different. The same happens with the constructor method mentioned above. It seems that if you need a bulletproof check, the only method you can use is the Object.prototype.toString method and luckily, it’s one of the fastest (not the fastest one though), so I guess we can safely elect it as the ideal method for String and Number checks (and not only for arrays, as it was first made popular for). PS: For anyone wondering where the quote in the title comes from, it’s from The Matrix Revolutions.
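For reference, here is a sketch of the four kinds of checks being compared, plus the instanceof refinement (my reconstruction; the original snippets may have differed slightly):

```js
var s = 'foo';

// 1. The short "convert and compare" check (the approach described above)
s === s + '';

// 2. typeof
typeof s === 'string';

// 3. constructor
s.constructor === String;

// 4. Object.prototype.toString (also works for object wrappers)
Object.prototype.toString.call(s) === '[object String]';
Object.prototype.toString.call(new String('foo')) === '[object String]';

// Combining typeof with instanceof to also catch object wrappers:
function isString(value) {
  return typeof value === 'string' || value instanceof String;
}
```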

0 views
Lea Verou 16 years ago

Quick & dirty way to run snippets of JavaScript anywhere

Ever wanted to run a snippet of JavaScript on a browser that doesn’t have a console, in order to debug something? (for instance, IE6, Opera etc.) You probably know about Firebug Lite , but this either requires you to already have the bookmarklet, or to include the script in the page. Although Firebug Lite is a great tool for more in-depth debugging, it can be tedious for simple tasks (eg. “What’s the value of that property?” ). Fortunately, there is a simpler way. Do you remember the 2000 era and javascript: URIs? Did you know that they also work from the address bar of any javascript-capable browser? For instance, to find out the value of a global variable, you just type the appropriate javascript: URI in the address bar. You can write any code you wish after the javascript: part, as long as you write it properly to fit in one line. Of course these URIs are a no-no for websites, but they can be handy for simple debugging in browsers that don’t support a console. ;)
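For example, typed into the address bar as a single line (someGlobalVariable is just a placeholder for whatever you want to inspect):

```js
javascript:alert(someGlobalVariable)

// or something slightly longer, still written to fit on one line:
javascript:alert(document.title + '\n' + document.styleSheets.length + ' stylesheets')
```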

0 views
Lea Verou 16 years ago

20 things you should know when not using a JS library

You might just dislike JavaScript libraries and the trend around them, or the project you’re currently working on might be too small for a JavaScript library. In both cases, I understand, and after all, who am I to judge you? I don’t use a library myself either (at least not one that you could’ve heard about  ;) ), even though I admire the ingenuity and code quality of some. However, when you take such a brave decision, it’s up to you to take care of the problems that JavaScript libraries carefully hide from your way. A JavaScript library’s purpose isn’t only to provide shortcuts to tedious tasks and allow you to easily add cool animations and Ajax functionality, as many people (even library users) seem to think. Of course these are things they are bound to offer if they want to succeed, but not the only ones. JavaScript libraries also have to work around browser differences and bugs, and this is the toughest part, since they have to constantly keep up with browser releases and their respective bugs, and judge which ones are common enough to deserve a workaround and which ones are so rare that handling them would bloat the library without being worth it. Sometimes I think that nowadays, how good a JavaScript developer you are doesn’t really depend on how well you know the language, but rather on how many browser bugs you’ve heard/read/know/found out about. :P The purpose of this post is to let you know about the browser bugs and incompatibilities that you are most likely to face when deciding against the use of a JavaScript library. Knowledge is power, and only if you know about them beforehand can you work around them without spending countless debugging hours wondering “WHAT THE…”. And even if you do use a JavaScript library, you will learn to appreciate the hard work that has been put into it even more. Some of the things mentioned below might seem elementary to many of you. However, I wanted this article to be fairly complete and contain as many common problems as possible, without making assumptions about the knowledge of my readers (as someone said, “assumption is the mother of all fuck-ups” :P ). After all, it does no harm if you read something that you already know, but it does if you remain ignorant about something you ought to know. I hope that even the most experienced among you will find at least one thing they didn’t know very well or had misunderstood (unless I’m honoured to have library authors reading this blog, in which case you probably know all the facts mentioned below :P ). If you think that something is missing from the list, feel free to suggest it in the comments, but keep in mind that I consciously omitted many things because I didn’t consider them common enough. John Resig (of jQuery fame) recently posted a great presentation , which summarized some browser bugs related to DOM functions. A few of the bugs/inconsistencies mentioned above are derived from that presentation. The typeof operator is almost useless: use Object.prototype.toString instead . Never, EVER use a browser detect to solve the problems mentioned above. They can all be solved with feature/object detection, simple one-time tests or defensive coding. I have done it myself (and so did most libraries nowadays, I think) so I know it’s possible. I will not post all of these solutions to avoid bloating this post even more. You can ask me about particular ones in the comments, or read the uncompressed source code of any library that advertises itself as “not using browser detects”.
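As a small, generic illustration of that feature-detection advice (this example is mine, not one of the specific workarounds the post refers to):

```js
// Browser sniffing (fragile; breaks whenever UA strings change):
// if (navigator.userAgent.indexOf('MSIE') > -1) { ... }

// Object detection: test for the capability itself instead.
function addClick(element, handler) {
  if (element.addEventListener) {
    element.addEventListener('click', handler, false); // W3C event model
  } else if (element.attachEvent) {
    element.attachEvent('onclick', handler);           // old IE event model
  }
}
```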
JavaScript Libraries are a much more interesting read than literature anyway. :P I’m not really sure to be honest, it depends on how you count them. I thought that if I put a nice round number in the title, it would be more catchy :P

0 views
Lea Verou 16 years ago

Silent, automatic updates are the way to go

Recently, PPK stated that he hates Google Chrome’s automatic updates . I disagree. In fact, I think that all browser vendors should enforce automatic updates as violently as Google Chrome does. There should be no option to disable them. For anybody. This might sound a bit fascist at first, but imagine a world where all browsers got automatically updated, without the possibility of an opt-out. If you went online, you would be bound to have the very latest version, regardless of how computer-(i)literate you were (many, if not most, home users that don’t upgrade are like that because they think it’s too difficult for their level of computer expertise). Sure, if you were a developer you wouldn’t be able to test a website in older browser versions. But why would you need to do so? If everybody had the latest browser version, you would only develop for the latest version and perhaps for the next one (via nightlies and betas, which could still be separate in that ideal world). Imagine a world where your job wouldn’t have to involve tedious IE6 (and in a few weeks, IE7), Firefox 2, Opera 9.5 and Safari 3.1 testing. A world where you would spend your work hours on more creative stuff, where you wouldn’t want to bang your head on the wall because you know you did nothing wrong but the ancient browser you are currently testing in is just incompetent and YOU have to fix its sh*t. A world where your JavaScript code (and the JS libraries’ code) would be half its current size and constantly shrinking as new browser versions come out. A world where you would only need 1 CSS file for most websites you develop. A world where you wouldn’t feel so bad that IE8 doesn’t support opacity, border-radius or SVG, because you would know that in 1-2 years everyone would have IE9 and it will probably support them. A world where designing a website would be as much fun as designing your personal blog. Doesn’t such a world sound like a dream? Would it harm anyone? Users would browse a much lighter and more beautiful web, with a more feature-rich and secure browser. Developers would work half as much to produce better results and they would enjoy their work more. Oh come on, that isn’t a good enough reason not to make that dream come true! Companies and individuals could be allowed to have an older version of the browser installed as well . They still wouldn’t be able to opt out of the automatic upgrade, but they could somehow apply to have an older version of the browser on the same system as well, similarly to what happens now with browser betas. People would use the older version to access corporate intranet applications and obsolete sites, and the latest version to surf the web. I may be overly optimistic, but I think that if a user had both versions of a browser installed, (s)he would prefer the latest wherever (s)he can. Perhaps another step towards enforcing that would be if the OS prevented an older browser version from being set as the default browser, but I guess that would be too hard to do, especially if the browser in question is not the OS default one. What’s your opinion?

0 views
Lea Verou 16 years ago

Bulletproof, cross-browser RGBA backgrounds, today

UPDATE: New version First of all, happy Valentine’s day for yesterday. :) This is the second part of my “ Using CSS3 today ” series. This article discusses current RGBA browser support and ways to use RGBA backgrounds in non-supporting browsers. Bonus gift: a PHP script of mine that creates fallback 1-pixel images on the fly, letting you easily utilize RGBA backgrounds in any browser that supports png transparency. In addition, the images created are forced to be cached by the client and are saved on the server’s hard drive for better performance. In RGBA-capable browsers you can write CSS declarations like: background: rgba(255,200,35,0.5) url(somebackground.png) repeat-x 0 50%; border: 1px solid rgba(0,0,0,0.3); color: rgba(255,255,255,0.8); And they will work flawlessly. Surprisingly, it seems that Internet Explorer supported RGBA backgrounds long before the others . Of course, with its very own proprietary syntax , as usual: filter: progid:DXImageTransform.Microsoft.gradient(startColorstr=#550000FF, endColorstr=#550000FF); And since nothing is ever simple with IE, IE8 requires a special syntax which has to be put before the first one to work properly in IE8 beta 1: -ms-filter: “progid:DXImageTransform.Microsoft.gradient(startColorstr=#550000FF, endColorstr=#550000FF)”; The code above actually draws a gradient from #550000FF to #550000FF, using a Microsoft-proprietary “extended” hex format that places the Alpha parameter first (instead of last) and in the range of 00-FF (instead of 0-1). The rest is a usual hex color, in this case #0000FF. Caution: The “gradients” created via the gradient filter are placed on top of any backgrounds currently in effect. So, if you want to have a background image as well, the result may not be what you expected. If you provide a solid color as a background, it will also not work as expected (no alpha transparency), since the gradients created are not exactly backgrounds, they are just layers on top of backgrounds. So, personally, I only use that approach sparingly; in particular, only when “no/minimum external files” is a big requirement. My favored approach is to use rgba() for all RGBA-capable browsers and fallback pngs for the ones that don’t support RGBA. However, creating the pngs in Photoshop or a similar program and then uploading them is too much of a fuss for me to bear (I get bored easily :P ). So, I created a small PHP script that does it for me. Here it is: rgba.php You use it like this: background: url(rgba.php?r=255&g=100&b=0&a=50) repeat; background: rgba(255,100,0,0.5); or, for named colors: background: url(rgba.php?name=white&a=50) repeat; background: rgba(255,255,255,0.5); Browsers that are RGBA-aware will follow the second background declaration and will not even try to fetch the png. Browsers that are RGBA-incapable will ignore the second declaration, since they don’t understand it, and stick with the first one. Don’t change the order of the declarations: the png one goes first, the rgba() one goes second. If you put the png one second, it will always be applied, even if the browser does support rgba. Before you use it, open it with an editor to specify the directory you want it to use to store the created pngs, and add any color names you want to be able to easily address (the defaults are white and black). If the directory you specify does not exist or isn’t writeable, you’ll get an error. Caution: You have to enter the alpha value on a scale of 0 to 100, and not from 0 to 1 as in the CSS.
This is because you have to urlencode dots to transfer them via a URI, and that would complicate things for anyone who used this. Edit: It seems that IE8 sometimes doesn’t cache the image produced. I should investigate this further. IMPORTANT: If your PHP version is below 5.1.2, perform this change in the PHP file or it won’t work. Of course, you could combine the IE gradient filter, rgba() and data URIs for a cross-browser solution that does not depend on external files . However, this approach has some disadvantages: and some advantages: Choose the method that fits your needs better. :) By the way, rgba() isn’t only for backgrounds; it works for every CSS property that accepts color values. However, backgrounds are in most cases the easiest to work around. As for borders, if you want solid ones, you can sometimes simulate them by wrapping a padded container with an RGBA background around your actual one and giving it as much padding as your desired border-width. For text color, you can sometimes fake it with opacity. However, these “solutions” are definitely incomplete, so you’d probably have to wait for full RGBA support and provide solid color fallbacks for those (unless someone comes up with an ingenious solution in the comments; it’s common these days :P ).

0 views
Lea Verou 16 years ago

CSS3 border-radius, today

This is the first of a series of articles I’m going to write about using CSS3 properties or values today . I’ll cover everything I have found out while using them, including various browser quirks and bugs I know of or have personally filed regarding them. In this part I’ll discuss ways to create rounded corners without images and, if possible, without JavaScript, in the most cross-browser fashion. I will not cover irregular curves in this article, since I’ve yet to find any person who actually needed them, even once, including myself, and browser support for them is far worse. Caution: The contents of a container with border-radius set are NOT clipped according to the border radius in any implementation/workaround mentioned below, and no, setting overflow to hidden won’t help (and even if it did, you’d risk missing text). You should specify a proper border-radius and/or padding on them if you want them to  follow their container’s curves properly. This could allow for some nice effects, but most of the time it’s just a pain in the a$$. Firefox has supported rounded corners since version 2. However, incomplete support in version 2 made designers sceptical to use them. The problem was that the rounded corners created were aliased back then, and also did not crop the background image, so if you had one, no rounded corners for you. This was fixed in FF3, so now more and more designers are starting to use them. The syntax is -moz-border-radius, which is effectively a shorthand for the four per-corner properties. You don’t need to specify all of these though, even if you want different measures per corner, as -moz-border-radius functions as a regular CSS shorthand, allowing us to specify all 4 corners at once. It can be used in the following ways: A good mnemonic rule for the order of the values is that they are arranged clockwise, starting from top left. Safari also implements CSS3 border-radius, but in a quite different way. If you want to set all four corners to the same border-radius, the process is almost identical: the only thing needed is -webkit-border-radius. However, things start to get tricky when you want to specify different radiuses per corner. Webkit does not support a shorthand syntax, since it chose to implement the spec closely, sacrificing clarity but allowing for more flexibility. To cut a long story short, Webkit supports irregular curves instead of just circle quarters on each corner , so if you try to add 2 values, the result will be  horrendous . So, you have to specify all four properties (or fewer if you want some of them to be square). To make matters even worse, the way the names of the properties are structured is different: there is one more dash, and the position of the corner styled by each property is not at the end but before -radius . Caution: If the dimensions of your element are not enough to accommodate the rounded corners, they will be square in Webkit-based browsers. Specify a width/height or enough padding to avoid this. Since Google Chrome is based on Webkit, its border-radius support is like Safari’s. However, it’s haunted by an ugly bug: it renders the rounded corners aliased . :( The bad news is that Opera does not implement CSS3 border-radius yet (it will in the future, confirmed ). The good news is that it allows SVG backgrounds since version 9.5. The even better news is that it supports data URIs, so you can embed the SVG in your CSS, without resorting to external files, as someone recently pointed out to me .
Alexis Deveria was clever enough to even create a generator for them , so that you can easily specify the background, border width and border color and get the data URI instantly. This is a quite useful tool, but lacks some features (for instance, you might want the background to be semi-transparent, like the one used in this blog). It’s ok for most cases though. While Opera’s current lack of border-radius support is disappointing, you can work around it pretty well with this method, and if you know SVG well enough you can create stunning effects. There’s no need to tell you that IE doesn’t support border-radius or SVG backgrounds, even in its latest version, right? You probably guessed already. There is some hope here though: a clever guy named Drew Diller carefully researched the MS-proprietary VML language and came up with a script that utilizes it to create rounded corners in IE . The bad news is that, when releasing IE8, MS fixed some things and messed up others, so the script barely works on it. It also has some other shortcomings , but for most cases it can be a great tool (for IE7 and below, unless MS surprises us and fixes the VML regressions in IE8 before the stable release). Also, if rounded corners are not crucial to your design and you don’t get too much traffic from IE users, you might consider ignoring IE altogether and having square corners in it. This way you’re also serving the greater good, since when IE users see your site in a supporting browser, they’ll conclude that “Oh, this browser shows the web nicer!”, and the site will still be just as usable (in most cases rounded corners are not that crucial for usability, although they enhance it a bit ). I hope this article helped you learn something new. If you found any mistakes or inaccuracies, don’t hesitate to leave a comment; I don’t know everything and I’m not god. :) One thing I have in mind is creating a PHP script that takes care of all these incompatibilities for you and caches the result. I don’t know if I’ll ever find the time to write it though, especially before someone else does :P

0 views
Lea Verou 16 years ago

Find the vendor prefix of the current browser

As you probably know already, when browsers implement an experimental or proprietary CSS property, they prefix it with their “vendor prefix”, so that 1) it doesn’t collide with other properties and 2) you can choose whether to use it or not in that particular browser, since its support might be wrong or incomplete. When writing CSS you probably just include all the prefixed versions and rest easy, since browsers ignore properties they don’t know. However, when changing a style via JavaScript, it’s quite a waste to do that. Instead of iterating over all possible vendor prefixes every time, to test whether a prefixed version of a specific property is supported, we can create a function that returns the current browser’s prefix and caches the result, so that no redundant iterations are performed afterwards. How can we create such a function though? Caution: Don’t try to use someScript.style.hasOwnProperty(prop). It’s missing on purpose, since if these properties aren’t set on the particular element, hasOwnProperty will return false and the property will not be checked. In a perfect world we would be done by now. However, if you try running it in Webkit-based browsers, you will notice that the empty string is returned. This is because, for some reason, Webkit does not enumerate over empty CSS properties. To solve this, we’d have to check for the support of a property that exists in all Webkit-based browsers. This property should be one of the oldest -webkit-something properties implemented in the browser, so that our function returns correct results for as old browser versions as possible. One of those seems like a good candidate, but I’d appreciate any better or more well-documented picks. We’d also have to test for -khtml-, as it seems that Safari had the -khtml- prefix before the -webkit- prefix . So the updated code would be: By the way, if Webkit ever fixes that bug, the result will be returned straight from the loop, since we have added the Webkit prefix in the regexp as well. There is no need for all this code to run every time the function is called. The vendor prefix does not change, especially during the session :P Consequently, we can cache the result after the first time, and return the cached value afterwards.
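A simplified sketch of the final idea (it glosses over the Webkit enumeration quirk and the -khtml- special case discussed above, so the original code certainly differed in those details):

```js
function getVendorPrefix() {
  // Return the cached result after the first call
  if ('prefix' in getVendorPrefix) {
    return getVendorPrefix.prefix;
  }

  var prefixRegex = /^(-moz-|-webkit-|-khtml-|-o-|-ms-)/,
      styles = window.getComputedStyle(document.documentElement, null),
      prefix = '';

  // Look through the enumerated property names for a known prefix
  for (var i = 0; i < styles.length; i++) {
    var match = styles[i].match(prefixRegex);
    if (match) {
      prefix = match[0];
      break;
    }
  }

  return getVendorPrefix.prefix = prefix;
}
```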

0 views
Lea Verou 16 years ago

Extend Math.round, Math.ceil and Math.floor to allow for precision

Math.round, Math.ceil and Math.floor are very useful functions. However, when using them, I often find myself needing to specify a precision level: you don’t always want to round to an integer, you often just want to strip away some of the decimals. We probably all know that if we have a function that rounds to integers, we can round to X decimals by multiplying by 10^X, rounding, and then dividing by 10^X again. Typing that out every time can get tedious, so usually you roll your own function to do it. However, why not just add that extra functionality to the functions that already exist and you’re accustomed to? Let’s start with Math.round; it’s the most needed one anyway. Firstly, we’ll have to store the native function somewhere, since we’re going to replace it. Then we (sigh) replace the native function with our own. And guess what? It still works the old way too, so your old scripts won’t break. So now, let’s move on to Math.ceil and Math.floor. If you notice, the only thing that changes is the function name; everything else is the same. So, even though we could copy-paste the code above and change the names, we would end up with triple the code that we need and we would have also violated the DRY principle. So we could put the names of the functions in an array, and loop over it instead. Why the closure? To allow us to define our variables freely without polluting the global namespace. If Array.prototype.forEach were cross-browser, or if you have mutated the prototype to add it for non-supporting browsers, you could easily use that instead: no closures and much easier to read code. However, nothing comes without a cost. In this case, the cost is performance: in my tests, the new function takes about twice the time of the native one. Adding a conditional to check whether the precision is falsy and use the native function directly if so doesn’t improve the results much, and it would slow the function down for precision values > 0. Of course, the speed would be just as much if the function were a normal one and not a replacement for Math[something]; that doesn’t have anything to do with it.
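Putting it all together, a sketch of the looped version (names and structure are my reconstruction, not the original code):

```js
(function () {
  var names = ['round', 'ceil', 'floor'];

  for (var i = 0; i < names.length; i++) {
    (function (name) {
      var native = Math[name]; // keep a reference to the native function

      Math[name] = function (number, precision) {
        precision = Math.abs(parseInt(precision, 10)) || 0;
        var multiplier = Math.pow(10, precision);
        return native(number * multiplier) / multiplier;
      };
    })(names[i]);
  }
})();

Math.round(3.14159);    // 3     - still works the old way
Math.round(3.14159, 3); // 3.142 - and now with a precision argument
```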

0 views
Lea Verou 16 years ago

JS library detector

Just drag it to your bookmarks toolbar and it’s ready. And here is the human-readable code: Am I nuts? Certainly. Has it been useful to me? Absolutely.
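The bookmarklet itself isn’t reproduced above; a rough sketch of what such a detector could look like (the version properties are from memory and may vary between library releases):

```js
javascript:(function () {
  var found = [];
  if (window.jQuery)    found.push('jQuery ' + jQuery.fn.jquery);
  if (window.Prototype) found.push('Prototype ' + Prototype.Version);
  if (window.MooTools)  found.push('MooTools ' + MooTools.version);
  if (window.YAHOO)     found.push('YUI (YAHOO object present)');
  if (window.dojo)      found.push('Dojo ' + dojo.version);
  alert(found.length ? found.join('\n') : 'No known JavaScript library detected');
})();
```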

0 views
Lea Verou 16 years ago

Check whether a CSS property is supported

Sometimes when using JavaScript, you need to determine whether a certain CSS property is supported by the current browser or not. For instance, when setting opacity on an element, you need to find out whether the property the browser supports is opacity, -moz-opacity (MozOpacity), -khtml-opacity (KhtmlOpacity) or the IE-proprietary filter. Instead of performing a forwards-incompatible browser detect, you can easily check which property is supported with a simple conditional. The only thing you’ll need is a DOM element that exists for sure. A DOM element that exists in every page and is also easily accessible via JS (no need for getElementsByTagName) is the body element, but you could use the head or even a script tag (since there is a script running in the page, a script tag surely exists). In this article we’ll use document.body, but it’s advised that you use the head or script elements, since document.body may not exist at the time your script runs. So, now that we have an element to test on, the required test is simply whether the property exists on the element’s style object. Of course you’d replace document.body with a reference to the element you’d like to test on (in case it’s not the body tag), and the property name with the name of the actual property you want to test. You can even wrap it up in a function to use whenever you want to check for the support of a certain property. The only thing you should pay attention to is using the JavaScript version of the CSS property name (for example backgroundColor instead of background-color). Wasn’t it easy?
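A sketch of such a function, following the approach described above (the element used and the example properties are illustrative):

```js
function isPropertySupported(property) {
  // Use an element that is guaranteed to exist when the script runs
  var element = document.getElementsByTagName('script')[0];
  return typeof element.style[property] !== 'undefined';
}

isPropertySupported('opacity');      // true where opacity is supported
isPropertySupported('borderRadius'); // note the camelCased JavaScript name
```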

0 views