Update: Unfortunately the Web Type East event has been cancelled. Anyone who purchased a ticket should contact TypeCamp for a full refund.
Although I missed the Web Type West event earlier this year, I’m making up for it at the Web Type East event at Concordia University in Montreal, Quebec, on November 6th. The conference features a pretty stellar lineup of speakers, including Grant Hutchinson, Brian Warren, Xerxes Irani, Paul Hunt, Stephen Coles, and more.
The titles and descriptions of the talks tell me there’s going to be information that’s fresh and interesting, even for me. My own talk will be a considerably updated and expanded version of the one I originally gave at TypeCon this past August.
Montreal is a fabulous city and I’d love to see you there.
The last few years have found me writing less and less code as part of my day to day activities, but that doesn’t mean I’m not keeping tabs on the latest CSS3 and HTML5 features and techniques for writing efficient and maintainable code. And so it was with great pleasure that I dove into an early peek at Sass for Web Designers from Dan Cederholm a couple days ago.
Like every book from A Book Apart, Sass for Web Designers gets to the point, fast. At a trim 98 pages, this is a brisk and easily digestible read. It’s not comprehensive by any means, and certainly not a retelling of the Sass documentation. We should all be thankful for that.
Aside from some introductory housekeeping, the book focuses on a few key concepts such as formatting styles, variables, and mixins before diving into real-world use cases such as easier-to-maintain media queries and dealing with high-DPI images.
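If you haven’t tried Sass yet, here’s a tiny generic illustration (my own, not an excerpt from the book) of the sort of thing those chapters cover: a variable and a mixin wrapping a media query.
// Store a colour once and reuse it everywhere
$brand-colour: #c0392b;
// Wrap a breakpoint in a mixin so it's easy to reuse and maintain
@mixin wide-screen {
  @media screen and (min-width: 60em) {
    @content;
  }
}
.masthead {
  background: $brand-colour;
  @include wide-screen {
    background: lighten($brand-colour, 10%);
  }
}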
Ultimately, Dan’s clear and succinct voice and little touches of humour are what make the book a pleasure to read. I’ve used Sass a bit myself and appreciate Dan’s initial hesitation to adopt it, because it echoes my own experience. If your CSS ain’t broke… But by focusing on how much can be gained from small changes using some of the most impactful aspects of Sass, the book makes why Sass makes sense just as clear as how to actually use it.
If you’re already a Sass power user, this might not be the book for you, but if you’re slow on the uptake of CSS pre-processors like Sass or Less, or just want to get a peek into Dan’s own CSS workflows, then this is a fabulous primer and easily the best intro to using Sass that I’ve seen.
Sass for Web Designers will be available in November.
Towards the end of October, the opportunity presented itself (thanks to the handsome and charming Brian Warren) to contribute to the upcoming 7th edition of Peachpit’s seminal HTML and CSS Visual QuickStart Guide, which will be released on December 27th, 2011. Although the majority of the heavy lifting of updating the book was handled by Bruce Hyslop, Brian and I each contributed wholly new chapters.
In those new chapters, Brian provides an introduction to the use of the CSS @font-face syntax, and I cover a handful of the new(-ish) CSS properties such as border-radius, box-shadow, text-shadow, multiple backgrounds, and background gradients.
Because this book is aimed at newbies, it was an interesting challenge in restraint, and in my ability to distill some complicated properties, along with the use of vendor prefixes, down to something a mere mortal can comprehend. If you’ve ever spent any time with the background gradient syntax, for example, it’s… um, complicated. That I managed to write something which makes learning the basics of CSS3 gradients simple is, I think, a win.
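To give a sense of what I mean (this is a generic illustration, not an excerpt from the chapter), a single gradient at the time needed a solid fallback plus a pile of vendor-prefixed declarations:
.button {
  background-color: #3b7bbf; /* solid fallback for older browsers */
  background-image: -webkit-gradient(linear, left top, left bottom, from(#5a9bdc), to(#3b7bbf)); /* old WebKit */
  background-image: -webkit-linear-gradient(top, #5a9bdc, #3b7bbf);
  background-image: -moz-linear-gradient(top, #5a9bdc, #3b7bbf);
  background-image: -o-linear-gradient(top, #5a9bdc, #3b7bbf);
  background-image: linear-gradient(to bottom, #5a9bdc, #3b7bbf); /* the standard syntax */
}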
Aside from some minor aches and pains writing and editing in Word, the process was both a great learning experience and fun. And I, of course, would be remiss to not mention the expert editorial guidance provided by Bruce and our editors Cliff and Robyn.
The morning of October 18th (that’s today) brings not just one, but two new titles from the good people at A Book Apart — Designing for Emotion by Aarron Walter, and Mobile First by Luke Wroblewski. While both books are important in their own right, along with the previously released (and reviewed) Responsive Web Design by Ethan Marcotte, they close the loop on a larger story about transforming the thinking behind how web, interactive media, and mobile apps are designed and created.
The funny thing about the opportunity to review these books in advance is that as much as I might have a lot to say about them, my inclination is to let them speak for themselves. A lengthy review feels contrary to the spirit of the books themselves.
Instead, I’d like to make or reinforce a few observations about the series and its overarching relevance to designers, developers, content strategists, project managers, business executives, and everyone in between.
Because I was already familiar with many of the ideas expressed throughout both books, what became evident was that I wasn’t the primary audience. Ultimately, the real readership is not the early adopters. Those people — myself included — don’t need convincing. Early adopters have already read the articles and blog posts, or heard Aarron and Luke speak on their respective topics. Nevertheless, I found myself nodding in agreement pretty much the entire way through both.
Newness of the content to early adopters aside, it’s the relevancy, timeliness, length, and quality of these books, and the time required to comfortably read them that positions them to hold the attention of clients, managers, executives, and other decision makers (and yes, your common design nerd); to convince those people to explore a new approach, to make the web more expressive, more beautiful, and more future friendly.
Should you pick up copies of one or both of these books? Yes. Should you pick up copies to share with a manager, client, or co-worker who’s less enlightened than you? Yes… yes you should.
The other day I was invited to take a peek behind the curtain at a new web font related technology that’s nearly ready to hit the streets from the fine folks at Extensis. Needless to say I was very interested and excited by what they’ve been up to.
But first a bit of context…
How (Most) Web Designers Work Today
During TypeCon in New Orleans this past July, one of the things Brian, Luke and I covered during our talk on web fonts was process — exactly how (most) web designers work, and what happens to the particular artifacts we produce as a result of that work. In particular, design mockups, and most importantly though, how those relate to a web designer’s somewhat contentious relationship with fonts.
Web designers have wanted the same control over typography print designers have taken for granted for decades, including being able to use the same variety of typefaces. Hacks such as sIFR and Cufon aside, it’s really only during the last two years, thanks to the encouraging work of type designers, foundries and browser makers, that the tide has really turned and we’re inching closer to that reality.
Unlike print though, where designers create final artwork files that are the final output of the design phase of a project (a newspaper advertisement, a book layout, product packaging), the large majority of web designers create mockups, a transitional artifact created for the benefit of clients and others involved in producing the actual end product — a functioning website.
Mockups are not the end result, and so purchasing desktop font licenses for what is effectively a throwaway product is counter-intuitive. Web fonts are part of the real end product of a web designer’s work, not their desktop equivalents. But that’s not the way we’ve had to work.
And it’s certainly not that web designers don’t want to pay for fonts — quite the opposite in fact. Web designers have flocked to web font services such as FontDeck, Typekit, and WebINK, and more will come as these services are more readily adopted by those beyond the early adopters.
During our talk at TypeCon, we further explored a suggestion which originated from Elliot Jay Stocks illustrating how web fonts might be integrated into a desktop design application such as Photoshop. In July no such thing existed; it was just an idea. And while the software is not available quite yet, I can happily say that it does now thanks to the team behind WebINK, Extensis’ web font service.
Introducing the Web Font Plugin for Photoshop
To address this disconnect in how web designers work, Extensis has created a piece of software that bridges their WebINK web font service and Photoshop, thus allowing web designers to use web fonts as though they were traditional desktop fonts in the popular design tool.
The web font plugin for Photoshop will be included with Suitcase Fusion 3 and available in beta in the coming weeks. Most importantly, it will continue to function beyond the software’s 30 day trial. There’s no requirement to purchase or use Suitcase — it’s simply the delivery mechanism for the plugin itself and assists in integrating the plugin with their WebINK web font service.
At the moment the functionality is simple and straightforward. Once the software is installed, open the Panel in Photoshop, sign in to your account and start working with their library of web fonts.
Transferring PSD files to others is seamless too, provided they have a WebINK account and the plugin installed. Designers will also be free to create JPEG, PNG and PDF files without watermarks or licensing restrictions beyond anything they’re already used to. Of course, there are still a few unanswered questions such as what happens without a network connection, but it’s a very promising start and raises the bar for competing web font services. Nudge, nudge Typekit and FontDeck.
Update (September 12, 2011)
Extensis has soft-launched the software’s microsite and you can download a 30 day free trial of Suitcase Fusion 3 and the Web Font plugin for Photoshop at webfontplugin.com. Go. Download. Create.
This past Saturday, after several weeks of email, IM, and conference calls, my Butter Label cohorts Luke Dorny, Brian Warren and I, otherwise dubbed “three guys with hats,” gave a brief talk at TypeCon in New Orleans on what web fonts means to designers.
As we discovered, the narrative on the topic of web fonts wove its way into even more presentations at TypeCon than in years past. This meant editing and rehearsing up until the last minute to ensure our spin on the topic was sufficiently unique. And while we somewhat ended up winging it, all three of us came away feeling good and have had great discussions with other speakers and attendees since.
The premise of the talk revolved around the idea that web designers have all along wanted the same typographic control as print has historically enjoyed. In that same vein, now that fine-grained control over type using CSS is becoming a reality, there’s a greater need to educate web designers on how to sensibly select and pair type, evaluate web fonts, and to know when to use advanced typographic features such as those found in newer OpenType fonts.
During the talk we also briefly covered the history of workarounds and hacks that have been invented to bridge the gap between what’s available and what’s really possible.
Additionally, we’ve made available the complete anonymous source data from the unscientific, yet (we think) still relevant and interesting survey we ran not long ago to help prepare for the talk. The best way to explore the data is to put it through the lens of early adopters; it’s reasonably safe to assume that’s who the majority of the respondents were.
From Brian, Luke and myself — a big thank you to the TypeCon and SOTA board, staff and volunteers on hand during the conference — especially Michelle, JP, and Grant who helped get us there and made presenting painless. And of course everyone in the audience too.
The best part of Ethan Marcotte’s new book, Responsive Web Design (available from the fine people at A Book Apart on June 7th), is that it’s brimming with his thoughtful ideas and unique approach. Actually, the best part of the book is the immediate and concise way he ties together everything you need to know to start practicing “responsive” design yourself. On the other hand, the best part is his hilarious self-deprecating humour that makes it almost impossible to read without hearing his voice narrating it in your head. That’s just me? Oh.
The prescience and immediate relevancy of this book cannot and should not be overstated, as the world of web design is further inundated by new devices and greater uncertainty, demanding an increased need for flexibility to understand and manage it all.
And while the concept of responsive web design might not be a silver bullet (it never claimed to be), Ethan’s book does a brilliant job of wrapping what you need to know into a straightforward and accessible package — covering both the lenses through which to approach deciding whether it’s an appropriate choice for a given project, and how to go about making it happen if it is.
Responsive Web Design is 155 pages of compact insight and unquestionably one of the most important books you’ll read many, many times this year.
Yesterday I finished up some work on a little pet project I started via the day job back in December ‘10, just before the holidays. Apple had recently released iAd Producer, and after spending a bit of time tinkering with it, I thought it would be a fun little project (read: distraction), and potentially useful down the line for myself or others: a set of wireframe objects based on the iAd platform and some of the default widgets and templates included in the iAd Producer software, to make designing iAds (from an experience point of view, at least) just a little bit easier.
The stencils/templates currently come in two flavours, OmniGraffle and Adobe Illustrator, for your wireframing and experience-planning pleasure.
If you find them useful, I’d love to know. Same goes for any improvement suggestions, additional elements worth including, etc. For example, is it worth creating a complementary iPad-sized version of these now that iAds have started to be opened up on that platform as well?
We didn’t exactly plan it this way, but Zeldman declaring this past Tuesday, November 16th World Type Day was fortuitous. Perhaps serendipitous even.
Luke and I, along with the incomparable assistance of Carolyn Wood, originally planned to launch our new little experimental venture, Ligature, Loop & Stem, the previous week, but enough pieces weren’t quite ready for prime time that we pushed it back a week.
Based on the immensely positive responses we received throughout the week, it seems we did something right, and we are sincerely humbled, excited and, frankly, a bit overwhelmed. Selling out the initial collection less than 72 hours after launching the site was… at least a little unexpected (by me anyway).
Of course, there are still some lovely (and free) ampersand wallpapers available for your iPhone or iPod touch to tide you over until the next limited edition pieces are ready to go — which we expect will be sooner rather than later.
Setting LL&S
In a lot of ways, the idea for LL&S came out of nowhere. At the same time, it’s at the core of what I’ve felt has been missing from my work over the last couple of years; the genesis of it has been biding its time on pages in one of my Moleskines in some form for nearly as long.
When I mentioned my initial ideas behind LL&S to Luke, I knew he’d be on board. The same with Carolyn, who I’ve been looking for a good opportunity to work with for as long as I can remember and who put in 150% the whole way through. Luke and I had been talking for a little while about teaming up in some fashion and this became the perfect vehicle to get the ball rolling.
LL&S mixes Luke’s and my design sensibilities, love of the web, typography and design history while allowing us to explore ideas that don’t fit the constraints of typical client projects, such as non-traditional navigation, interactions that mirror the real world, and hiding little inside jokes in and around the site — you did find all of them, right?
Letterpress-printed ampersand glyphs
Unfortunately, the web isn’t widely recognized for stellar typographic design. Advances in CSS, services like Typekit, and some inventive web designers experimenting with type to more closely connect it to the message of a site (as print designers are more apt to do) will slowly change that perception.
We wanted something that could bridge the gap between the possibilities of print and the web, with a little industrial design thrown in for good measure — to do our bit in changing perceptions. And that essentially gave us complete creative freedom.
Perhaps the larger vision behind LL&S is that we wanted to experiment with making stuff we’d want for ourselves just as much as we hoped others would too — ampersands seemed as good a place to start as any. That said, we’re not restricting ourselves to just producing print pieces. The sky’s the limit. Exactly how some of the ideas we’re already exploring materialize is anyone’s guess.
We think we’ve got some interesting stuff in the works. If we can continue to surprise and delight then in my books, we’ve accomplished what we set out to do.
Credit
Luke and I would be remiss to not explicitly thank our good friend and walking encyclopaedia of all things typographic, Grant Hutchinson who I asked to help curate the Ampersand print with me. Also, writer, editor, idea generator and all-around whip cracker Carolyn Wood, without whom we might still be waiting at the gate because the copy on the web site would have been, well… nowhere near as good as we think it is now, which is pretty damn awesome.
For everyone else, close to home and around the world (the internet sure makes the world a small place) — thank you as well. Thank you for the kind words, retweets, links and for simply making the launch a resounding success by buying up everything so quickly!
Next Up
Part of the point of LL&S is just us following our instincts. We know there’s room to improve the site, particularly around navigation and little bits of the overall user experience. Thankfully we’ve got some ideas that don’t compromise our original vision and should improve the situation.
Even before we get to that though, we need to get the first collection of products in the wind and on their way while pushing ahead with the next collection (which we promise will not feature ampersands).
For the last few weeks or so I’ve had the opportunity to tinker with the technology preview (beta) of Typekit. It’s been quietly in use on this site since the end of August.
Designers such as myself have wanted the ability to use real fonts on the web for years without the hair-pulling, potential accessibility and licensing issues of image replacement, sIFR, Cufon or other “hacks” (however clever, they’re still hacks — deal with it). We’ve also wanted to ensure type designers and distributors get paid appropriately so they can keep creating and making available great typefaces.
Typekit, like other upcoming services such as Kernest and those from Ascender and Typotheque, makes this possible now by essentially levelling the playing field across browsers, providing pain-free implementation mechanisms and protecting designers from the messy business of licensing issues and the ethical ramifications of distributing raw font files to browsers.
A preview of the Typekit kit editor
So, what’s so good about Typekit? Why should designers care?
The Good
More fonts: Specify fonts in CSS font stacks beyond the most commonly available fonts. Yay!
Creating Kits is easy: Creating a “kit” (a selection of fonts) is simple and for the most part feels familiar; not unlike using a desktop font manager.
Easy to implement: If you’ve used sIFR or had to deal with image-replacement techniques, you know how frustrating they can be. With Typekit, just add the Javascript code provided as part of a kit to your pages. The rest is just a matter of specifying fonts in your CSS as you would normally (see the sketch after this list).
Browser support: Through a little bit of magic, Typekit works across platforms and browsers — even IE6. Personally I would be totally OK if Typekit didn’t support IE6 (or even IE7), but it does, so they get bonus points from me for being comprehensive.
Is a good Javascript citizen: The Javascript used by Typekit, at least in my experience so far, behaves well and doesn’t inadvertently stomp on other Javascript events.
Reliable: The service itself seems like it was designed to scale from the start. By using a Content Delivery Network (CDN) instead of a centralized server, the service should be able to withstand very high loads, provide low latency and maintain excellent uptime, which is appropriate for such a service.
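As a rough sketch of that workflow (the family name below is made up, not a real Typekit value): once a kit’s Javascript snippet is in the page’s head, the fonts it serves are referenced in CSS like anything else.
/* After the kit's script tag is added to the <head>,
   the kit's fonts can appear in ordinary font stacks */
h1, h2, h3 {
  font-family: "hypothetical-typekit-family", Georgia, serif;
}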
The Less Good
Even though Typekit is a great service that will only get better with time, and although my experience using it has been flawless, there’s still room for improvements. The following would be on my list.
Even more fonts: This is a no-brainer, obviously. There’s a huge minefield of licensing and IP issues to sort out and, understandably, that takes time. The biggest issue with the fonts available now (which are largely from smaller foundries and independent type designers) is probably that most designers don’t already have their own personal licences to use in comps or outside a browser.
Browsing is awkward: Finding the right font to add to a kit can be tedious. Right now the only options for locating fonts are browsing the paginated listings or using the classification/tag filters. Adding the ability to browse alphabetical pages, additional categorizations or a more traditional search interface might help.
Weights and styles: It’s not obvious what weights and styles are available for a given font unless you view the detail page for the font or add it to a kit and look at the Weights & Styles tab. Indicating the number of weights and styles in the listings would be a good place to start.
New additions: Right now if new fonts are added to Typekit, there’s no obvious way for users to find them other than by browsing through the listings or perhaps by a mention in the Typekit newsletter. Without any inside knowledge it’s hard to speculate how often new fonts will be added to the service, but I think it’s safe to assume they will be added with some degree of regularity.
Requires Javascript: This really isn’t a big issue in my opinion, because anyone who’s disabled Javascript in their browser likely wouldn’t know what they’re missing anyway.
My gut feeling is that Typekit will ultimately be a stop-gap solution, but one that will keep up with the current momentum of browser vendors, distributors, and type designers who are ready to start licensing fonts to be used on the web so long as everything is licensed properly and intellectual property rights are protected. If they can make it easy, affordable and reliable, I have no doubt it’ll do well and be around for a long time.
Additional Reading/Listening
For some background and detailed context on the concerns from both sides of the fence (type users, type designers/distributors), I highly recommend checking out the recording of the Web Fonts Panel from the ‘09 TypeCon conference.
Get Your ‘Kit On
I’ve got 5 beta invitations for Typekit and if you’d like to get your hot little hands on one, send an email to typekit@ this domain and I’ll hook you up.
For as long as I’ve been using Mac OS X I’ve found myself exploring the Unix underbelly of the operating system and hand-rolling my own web development environment using various open source web projects such as MySQL, Ruby, Rails, Python, Django, etc. The popular stuff at least.
As such I’ve tinkered away at automating the process, because, well, installing all that software is time-consuming, tedious, and really — who doesn’t have better things to do?
So after much tinkering, tweaking and head-scratching, I built a little project that I open-sourced and dubiously called …And the Kitchen Sink, because that’s what it felt like. Everything… and the kitchen sink.
Recently, Kenny Meyers goaded me into moving the project to Github and I’ve been maintaining both the original on Google Code and the Github version. That does sound like fun, doesn’t it?
And now that the next big cat, Snow Leopard, will officially be out of the bag tomorrow (it’s helpful to have access to pre-release builds via the Apple Developer Connection, FYI), one would think the logical next step would be to test things to make sure they still work, especially given that Snow Leopard is 64-bit through and through. I did. They didn’t.
After several long nights and more head-scratching, now …And the Kitchen Sink is too.
Everything… And The New Hawtness
Aside from ensuring the script built everything as 64-bit binaries (just like Snow Leopard itself), I actually went quite a bit further and radically changed the way the script works, splitting several core tasks into smaller individual ones that can be run in sequence rather than one big-ass monolithic process.
Par exemple — now you can: download the various included packages, compile everything, and then finally set up MySQL (unless you screw things up real bad, this only needs to be done once). If you’ve tried the old version, trust me, this is a hands to the sky kind of improvement.
What You Can Do
There are still other changes and improvements coming — my plan (in as much as I have one) is to either split things into “bundles” (e.g. a Ruby bundle, a Django bundle, etc.) or allow some sort of flexible configuration to decide what actually gets installed. I’m not quite there yet.
In the meantime the thing desperately needs some “in the wild” testing. That’s where you come in. So go, download it, read the “README” file (for reals), try it out, report bugs and make suggestions. The best ones get a kewpie doll. If it changes your life, I happily accept donations.
One last note — keep in mind that I’m not a developer, ok? I just play one in my spare time.
When I moved the Notebook site from the wishingline.com domain over to this one, one of the things I wanted to do was rebuild the contact form from scratch and integrate it into the base Movable Type install that manages things behind the scenes.
That was a fairly simple process overall and using a bit of PHP, jQuery and Ajax magic, I built the form so that it works whether Javascript is enabled in the browser or not. Unobtrusive progressive enhancement — it’s good. You should try it.
Where I ran into a problem, though, was that all of a sudden bots were going to town on the form and I was getting all kinds of spam through it, despite the work put into preventing that at the start — e.g. ensuring the form would only accept local requests from the same domain, using secret server-level key validation, etc.
Ultimately, what proved to cure the problem was giving the fields unusual names. If you have a field that collects a person’s name, don’t name it “name”, or an email address, “email”. Bots look for those and can easily exploit them.
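A hypothetical before-and-after (not the actual markup from my form) to show what I mean:
<!-- Easy pickings for a bot -->
<input type="text" name="name">
<input type="text" name="email">
<!-- Harder to guess; your server-side script maps these back to the real fields -->
<input type="text" name="wl_field_a">
<input type="text" name="wl_field_b">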
Truth be told: I knew this. Maybe you already do too, but an occasional reminder never hurts.
It takes all the running you can do to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that.
In a roundabout way I think that passage perfectly sums up the state of the web industry for me in 2009 and is a perfect lead-in to mention issue number 284 of A List Apart which features an article on the topic of Burnout by yours truly.
It was a challenging article to write simply because it was so deeply rooted in my own personal experiences and I hope readers take note and are interested in continuing the discussion further because, obvious or not, the web and design industries are intrinsically ripe for extreme cases of burnout.
My thanks to Carolyn Wood, Krista Stevens, Erin Kissane, Zeldman et al.
Last year, when I originally moved the Wishingline site and a handful of others over to a shiny new slice at Slicehost, one of the issues I ran into was handling outgoing mail from contact forms, Movable Type, etc. I’m no server admin, and despite knowing enough to be dangerous, setting up a secure mail server that can handle multiple domains was definitely outside my comfort zone.
Thanks to Ethan, I discovered a gem of an open source project called MSMTP which was just what I needed; the exception being that I couldn’t figure out how to use it with multiple domains. Until last week that is.
Of course it’s really easy.
Installing and Configuring for Multiple Domains
MSMTP provides two ways to configure the software, both using a simple and well-documented configuration file format. It’s all plain text, so it’s easy to create, edit and back up.
Installing the Software
Installing MSMTP requires the following packages, which can be installed using the aptitude tool on Ubuntu. Installation on other *nixes may vary.
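The package list itself didn’t survive in this copy of the post; at a minimum you’ll want msmtp itself plus ca-certificates, which provides the trust file referenced in the configuration below:
sudo aptitude install msmtp ca-certificates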
Once you have everything installed, you need to create a configuration file either in /etc/msmtprc or by creating a user-specific one in your user’s home directory. If you need mail services for more than one domain, I suggest using the global configuration option.
I’m going to assume you’re reasonably comfortable working in a Unix environment from here on out, though if you know what you’re doing, you can do all of this just as easily using ExpanDrive and TextMate without having to touch the Terminal.
$ sudo nano /etc/msmtprc
Once the nano editor has opened a new blank file for you, enter the following and replace the example configuration as needed. I’m including examples for two domains so you get the idea.
# Account: domain1.com
account domain1
host smtp.gmail.com
port 587
auto_from off
auth on
user hello@domain1.com
password PASSWORD
tls on
tls_starttls on
from robot@domain1.com
maildomain domain1.com
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile
syslog LOG_MAIL
# Set a default account to use
account default : domain1
# Account: domain2.com
account domain2
host smtp.gmail.com
port 587
auto_from off
auth on
user hello@domain2.com
password PASSWORD
tls on
tls_starttls on
from robot@domain2.com
maildomain domain2.com
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile
syslog LOG_MAIL
Repeat as necessary to add more domains. Save your changes by typing Control-O and pressing Enter. Then type Control-X to exit the editor.
Virtual Host Configuration
Assuming you’re using PHP with Apache as your web server, you can add the last two lines in the example below to each virtual host to specify which configuration account you’d like to use to send mail.
<VirtualHost *:80>
ServerAdmin webmaster@domain1.com
ServerName domain1.com
DocumentRoot /home/user/sites/domain1/
DirectoryIndex index.html index.php
# MSMTP configuration for this domain
php_admin_value sendmail_path "/usr/bin/msmtp -a domain1 -t"
</VirtualHost>
Replace domain1 with the correct domain obviously. This should correspond to the account names specified in the /etc/msmtprc file.
Alternatively you need to instruct your middleware or framework to use MSMTP instead of Sendmail/Postfix to send mail and pass the same account parameter whenever called. Most have some form of configuration option to allow this.
Since June of last year (following attending WWDC in San Francisco) I’ve had an item on my To Do list — “experiment with improving the overall performance of the Wishingline Notebook site.” In other words, do some under the hood optimizations.
Yahoo! has a terrific set of guidelines that can be used to squeeze the most performance out of your site. This, along with a talk from WWDC last year piqued my interest in learning more — especially any simple things I could put into practice in my client work as well.
First on my list was experimenting with minifying and concatenating Javascript and CSS files. This is easily done with the YUI Compressor or other similar utilities. The trick is automating everything when changes are made. For example, generating minified versions automatically when deploying files to your site.
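By way of example (the version number and filenames are placeholders), the YUI Compressor is run from the command line against each file:
# Minify a stylesheet and a script; output goes to the .min versions
java -jar yuicompressor-2.4.2.jar screen.css -o screen.min.css
java -jar yuicompressor-2.4.2.jar site.js -o site.min.js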
Second on the list was moving the site’s CSS, Javascript and images to their own subdomains, allowing browsers to download assets faster since most are limited to two connections per server. By splitting things up browsers can use more than just two concurrent connections, thereby loading the site faster.
1 domain = 2 concurrent connections. Ok but not great.
4 domains = 8 concurrent connections. Zippy!
Unfortunately it wasn’t until this past week that I had a chance to look into this one.
I started with two simple goals:
Provide separate URLs for Javascript, CSS and images.
Make the code change across the site templates simple.
Setting up subdomains in Apache is simple and adding the necessary DNS entries and virtual subdomains to accomplish the first goal took less than 10 minutes.
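For the curious, each asset subdomain is just another lightweight virtual host pointed at the relevant directory; a rough sketch with made-up names:
<VirtualHost *:80>
  ServerName css.example.com
  DocumentRoot /home/user/sites/example/css/
  # Static files only; no application code is served from this host
</VirtualHost>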
On the other hand, being smart about how to handle putting those subdomains into the template code elegantly was a bit trickier. Sure I could just hard-code the URLs into the site templates — but that kind of sucks. I had a better idea.
The Notebook site has been running on Movable Type since 2003 and a better way to accomplish this would be by using template tags. And so I did, though it meant creating a special plugin for Movable Type to do so.
Enter MediaURLs for Movable Type 4.x, a new plugin that allows users to specify a series of URLs for serving CSS, Javascript and image assets each from their own domain or subdomain while providing a set of corresponding template tags to make applying those changes DRY, easy, and fast.
MediaURLs plugin configuration screen
Each of the four settings provided by the plugin (shown above) is optional and should be fairly self-explanatory. A simple example of how to use them is included in the documentation.
The general “media” option was added at the last minute for the sake of simplicity — to allow the use of a single generic domain/subdomain to serve any type of asset — for example, serve all CSS, JS and images from a single secondary domain.
The MediaURLs source is being hosted at Github and will be regularly maintained. This is really only a first release and I’m open to suggestions for further improvements. Enjoy and happy optimizing!
For the past few weeks we’ve been working on designing and building out a site for a client and since selecting Movable Type 4 as the CMS, we thought it would be worth giving the relatively new Virtual MT a try as part of our development process. Although our overall experience using VMT so far has been great, we ran into one small nit: the default site isn’t served from the root URL of the server and instead uses a subdirectory path. This (probably) should be a user-defined option, but isn’t currently, so we set out to resolve this for ourselves.
Let’s be honest — Movable Type has always been a bit of a pain to run on Mac OS X unless you happen to be or know someone well-versed in the black art of the command line and Perl. A black art if you’re a designer at least. This is exactly why VMT is great, particularly if you’re already used to running Windows in Parallels or VMWare for browser testing and debugging.
Virtual MT comes pre-packaged as part of a lightweight Ubuntu Linux OS. Downloading and running an instance (or multiple instances) of VMT is simple and we’ll cover the process using Parallels 4 before walking through the configuration change to allow the default site to be served from the root path of the included Apache web server.
Downloading and Running VMT
Get started by downloading a copy of Virtual MT which comes in both Open Source and Commercial (Pro) flavours. Unzip the downloaded archive and read the included Read Me file. No really, read it.
Parallels Virtual Machine list
Next, locate the VM Image file for Parallels (or your preferred Virtual Machine software) in the unarchived folder in your Downloads directory. In the case of Parallels, this file should end with a .pvs extension. Double-click the file to add it to the Parallels Virtual Machine library. Parallels 4 will request the VM image be converted to the newer bundle format.
The Virtual Machine book screen in Parallels
Click on the Virtual Machine and start it. In a web browser, go to the Configuration Page URL displayed in the running Virtual Machine window.
The running Virtual Machine window in Parallels
Complete the base configuration to enable access to the VM and Movable Type itself.
The configuration screen for the Movable Type JumpBox
Once the base configuration is complete, go back to the main Configuration Page and click on the SSH/SFTP icon. Check the checkbox to enable SSH/SFTP access and then save the change.
The Movable Type configuration home screen
At this point you should have a fully functional, ready to customize virtualized install of Movable Type. No mucking about in the command line or Perl module installation required. Next — improving the configuration.
VMT Configuration Changes
In order to “correct” the configuration of VMT, provide access to the VMT install at the root of the included Apache web server, and make accessing the MT install and any published templates easy, you may want to install either MacFUSE or ExpanDrive, which let you access the virtual OS filesystem just like any other shared disk. Alternatively, Transmit or any other software that supports SFTP connections will also work, though direct access in the Finder is considerably more user-friendly.
And now the nerdy part. To make the necessary Virtual Machine configuration changes, run the following two commands in a new Terminal window. Replace VIRTUAL_IMAGE_IP_ADDRESS with the one provided in the VM window on your computer.
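The commands themselves are missing from this copy of the post; they amount to connecting to the VM over SSH and opening the jumpbox-app Apache configuration in pico. Assuming the stock JumpBox layout (the username and file path may differ on your image), something like:
ssh admin@VIRTUAL_IMAGE_IP_ADDRESS
sudo pico /etc/apache2/sites-available/jumpbox-app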
After entering your admin password when prompted, in the pico text editor, change the contents of the jumpbox-app file to match the following:
# Alias /movabletype/blogs /var/data/mt-blogs
Alias /movabletype /var/data/movabletype
<Directory /var/data/movabletype>
AddHandler cgi-script .cgi
Options +ExecCGI
# Uncomment the following lines to enable FastCGI
# <FilesMatch "^mt(?:-(?:comments|search|tb))?\.cgi$">
# SetHandler fastcgi-script
# </FilesMatch>
</Directory>
<Directory /var/data/mt-blogs/*>
AllowOverride All
</Directory>
# Uncomment the following lines to enable FastCGI
# FastCgiServer /var/data/movabletype/mt.cgi
# FastCgiServer /var/data/movabletype/mt-search.cgi
# FastCgiServer /var/data/movabletype/mt-tb.cgi
# FastCgiServer /var/data/movabletype/mt-comments.cgi
RewriteEngine on
# RewriteCond /jumpbox/var/widget-on !-f
# RewriteRule ^(/?|/index.(html|php|htm))$ /movabletype/blogs/my_blog [R]
# RewriteCond /jumpbox/var/widget-on !-f
# RewriteRule ^/jblogin.(html|php)$ /movabletype/mt.cgi [R]
# DocumentRoot /var/www
DocumentRoot /var/data/mt-blogs
Save your changes by typing Control-O and then Control-X. To then restart Apache so it will reload the newly updated configuration, type the following in the Terminal.
sudo /etc/init.d/apache2 restart
Updating Movable Type’s Publishing Paths
The last thing that needs to be done is to update the Publishing Path values for each blog instance in Movable Type so content will be published to /var/data/mt-blogs instead of the default location. This is done from the Preferences > Publishing screen in the Movable Type admin interface.
Set the value of Site URL to the IP address of the VM and set the Site Root to /var/data/mt-blogs. If running more than one blog instance, change these values appropriately. Save the changes and re-publish. And that, as they say, is that. Enjoy!
Note: The current (as of this writing) release of Virtual MT is slightly out of date with the recent 4.23 release of Movable Type though it’s simple enough to update your own base install in VMT.
One of the things that annoyed me with the process of setting up a Subversion server with SSH access, aside from the sheer complexity, was the number of steps required just to create a new project. Once was bad enough, but repeating those steps each time to create a project just didn’t scale…
So, a bit of Bash scripting later and everything is much, much easier.
Assumptions
The instructions and script that follow assume you completed the earlier tutorial carefully when setting up your own Subversion server. It may not be appropriate or work as expected otherwise. As always, YMMV.
Creating, Configuring and Using the Script
Somewhere in your $PATH on the system acting as your Subversion server (I suggest /usr/local/bin), create a new file named svnproj, set the file as executable and then finally open the file for editing.
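Assuming /usr/local/bin as the location, that works out to something like:
sudo touch /usr/local/bin/svnproj
sudo chmod 755 /usr/local/bin/svnproj
sudo pico /usr/local/bin/svnproj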
#!/bin/sh
REPOSITORY="/svn" # Set to your repository path
USER="admin_user" # Set to your system admin user
# ====================================================
# Do not change anything below the line above
PROJECT_NAME="$1"
if [ $# -eq 0 ] ; then
echo "Usage: newproj PROJECT_NAME"
exit
fi
echo "------------------------------------------------"
cd ${REPOSITORY}
svnadmin create ${PROJECT_NAME}
echo "Created project: '$PROJECT_NAME'"
echo "Configuring svnserver.conf for restricted access"
cp ${REPOSITORY}/${PROJECT_NAME}/conf/svnserve.conf \
${REPOSITORY}/${PROJECT_NAME}/conf/svnserve.conf.default
cat > ${REPOSITORY}/$PROJECT_NAME/conf/svnserve.conf << "EOF"
[general]
anon-access = read
auth-access = write
[realm]
realm = Projects
EOF
echo "Successfully set svnserve.conf"
chown -R ${USER} ${REPOSITORY}/$PROJECT_NAME
chmod -R 770 ${REPOSITORY}/$PROJECT_NAME
chmod g+t ${REPOSITORY}/$PROJECT_NAME/db
echo "------------------------------------------------"
echo "Done"
The script requires you to set two internal variables in order for it to actually work; one which sets the location of your repository, and a second which sets the admin username on your system which will be the default owner of files and folders in the repository. You can find these at the top of the script, named REPOSITORY and USER respectively.
Running the script is as simple as:
sudo svnproj PROJECT_NAME
If you happen to run the script without the PROJECT_NAME parameter, it will simply output the usage note and exit gracefully. Whether you need to run the script via sudo ultimately depends on where your repository is located on your server.
Our particular version of this script does one additional thing — it creates a post-commit hook script and automatically inserts the necessary code to output commit messages as an RSS feed per these instructions.
As Wishingline has slowly grown beyond just one person, the need to change workflows and improve our ability to communicate and collaborate with clients, peers and partners has prompted us to do things a bit differently than in the past. One of these things has been to set up our own internal Subversion server. Yeah — we know git is the new hawtness, but the tools available for integrating git are few, and honestly, our own experience with it has not left us particularly enamoured.
In setting up a new Subversion server for us to use internally, secured on our network, but also accessible remotely, we started off with our own tutorial from back in 2007, a bit of help from the official Subversion book, and our old friend Google. We ran into a few problems along the way, and so in the hopes of saving others from running into the same issues, this entry will hopefully serve as a straightforward and complete guide to setting up a Subversion server using svn+ssh authentication on Mac OS X (Client and/or Server).
Prerequisites
In order to complete everything below on your own systems, you will need:
At least two Mac systems: one which will act as a central Subversion repository (server) another as a development workstation.
Mac OS X: Leopard 10.5.x (ideally 10.5.5) Client or Server. There’s a good chance that you’ll be able to follow this guide on Tiger as well, but YMMV.
Xcode 3.0 or newer, included on the Leopard install DVD, included with the iPhone SDK and otherwise available free from the Apple Developer Connection site.
A sufficient degree of comfort in working in the Terminal application.
Administrative access.
A Few Notes Before We Start
Nearly all the instructions to follow require extensive use of the Terminal application which can be found in the /Applications/Utilities folder on your Mac. Each line in the code examples that follow should be entered into the Terminal and followed by the Return key.
Setting Up Your Environment
As with other Unix operating systems, Mac OS X uses the PATH environment variable to determine where to look for applications when working on the command line. It’s common to install custom builds of Unix software in /usr/local in order to avoid interfering with core system software. A big benefit being that you don’t have to worry about updates to Mac OS X inadvertently overwriting your custom software installs.
To set the PATH for your user account on your workstation, you will need to either create or edit a .bash_login file, which is commonly used to customize the default shell environment on a per-user basis. To open or create the file, in the Terminal, type:
pico ~/.bash_login
If the file does not exist, the following needs to be added at the end of the file in order to set the necessary PATH variables so that you will be able to use the various Subversion applications without needing to specify the full path to them on your systems.
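The export line itself is missing from this copy of the post; it should look roughly like the following:
export PATH=/usr/local/bin:/usr/local/sbin:/usr/local/aliasbin:$PATH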
The one oddball in the above PATH is the path to the aliasbin directory. We’ll explain what that’s all about later on. Patience grasshopper!
Save and close the file by typing Control-O and then Control-X. You’ll be returned to a new prompt in the Terminal. Close the window and open a new one to load your changes.
Xcode and Subversion
When you install Xcode 3.0 or newer, a version of Subversion (at the time of this writing, version 1.4.4) is also installed. Although you could use this version and skip a few steps, this tutorial is based on using the latest and greatest.
Step 1: Installing Subversion Prerequisites
Before installing Subversion there are a number of prerequisites which can or should be installed depending on your specific needs. In this particular case, the only one necessary is zlib which is used to add compression support to Subversion.
In order to keep things neat and tidy, source downloads can be saved to the Downloads folder in your home directory or wherever you prefer.
Installing zlib
To download, compile and install zlib, type the following in the Terminal:
cd ~/Downloads
curl -O http://zlib.net/zlib-1.2.3.tar.gz
tar -zxf zlib-1.2.3.tar.gz
cd zlib-1.2.3
./configure --prefix=/usr/local
make && sudo make install
cd ..
Once you get to the sudo make install command, you should be prompted for your administrator password. Enter that when requested in order to complete the installation.
Installing neon
If you want or need WebDAV support in Subversion, you can also install the neon HTTP and WebDAV client library. neon is entirely optional, but if you want to install it, type the following in the Terminal:
cd ~/Downloads
curl -O http://webdav.org/neon/neon-0.28.3.tar.gz
tar -zxf neon-0.28.3.tar.gz
cd neon-0.28.3
./configure --prefix=/usr/local \
--enable-shared=yes \
--with-ssl=openssl \
--with-libxml2
make && sudo make install
cd ..
At this point you should now have the two primary prerequisites installed, meaning you’re now ready to download, build and install Subversion itself.
Step 2: Installing Subversion
Compiling Subversion with all the necessary support libraries is straightforward. If you did not install neon as in the prerequisites above, be sure to omit that line in the configure command below.
cd ~/Downloads
curl -O http://subversion.tigris.org/downloads/subversion-1.5.4.tar.gz
tar -zxf subversion-1.5.4.tar.gz
cd subversion-1.5.4
./configure --prefix=/usr/local \
--disable-mod-activation \
--with-apxs=/usr/sbin/apxs \
--with-ssl \
--with-zlib=/usr/local \
--with-neon=/usr/local \
--without-berkeley-db \
--without-sasl
make && sudo make install
cd ..
You should now have Subversion installed on your system(s) in /usr/local/. You can verify this by checking the version of one of the Subversion applications. Type svn --version in the Terminal.
In order to create a complete client-server configuration with remote repository access, you will need to complete Steps 1 and 2 on both Macs. If you’ve got more than two Macs, repeat as necessary.
Step 3: Workstation Public/Private Key Creation
Public/private keys can be used to secure your network communications even more than relying on simple password authentication. In this particular case, these keys can be used to provide secure authentication to your repository. To create a public/private keypair, in the Terminal, type:
cd
mkdir ~/.ssh
ssh-keygen -t rsa
If you do not want to use a passphrase as an extra level of security, just press Enter when prompted. The ssh-keygen command will create two files in the .ssh directory, id_rsa.pub and id_rsa.
The first, with the .pub extension is your public key which you’ll need to copy to the Mac acting as the repository server into a file named authorized_keys. The second is your private key. Do not share this with anyone. Seriously. The private key will be unique to each system/user and identifies that particular Mac when authenticating to the server or to any other systems sharing the public key. Simply put, in order to authenticate successfully, you need both halves of the key.
Step 4: Setup Users and Groups on the Server
There are a few different ways users and groups can be managed: the Accounts system preferences panel, the command line, or the Mac OS X Server Admin Tools, which can also be used on the consumer version of Mac OS X and not just the server edition.
Launch the Workgroup Manager application from the /Applications/Server folder and press the Cancel button when prompted to login to the server. Instead, select View Directories from the Server menu and click the lock icon on the Workgroup Manager window to authenticate yourself as an administrator.
Create a Subversion Users Group on the Server
Before users can be given access to the repository, users all need to belong to a common group which will have read/write permissions for the repository on the server.
Creating a new group in Workgroup Manager
Click on the Groups tab to switch to the Groups view and then click the New Group button to create a new group. Give the group a Name and Short Name and press Save. Click on the Members tab to add users to the group or switch to the Users tab and add users to the group from there. Depending on how many users you need to provide access to, one method might be faster than the other.
Adding members to a group in Workgroup Manager
Create User Accounts on the Server
Unlike other Subversion authentication methods (file://, svn://), accessing a repository via SSH requires that real user accounts exist on the server. In theory at least, these users should be able to access the server via SSH as any other user, though this can be restricted. More on that later.
Create any needed user accounts by clicking on the New User button in Workgroup Manager.
Creating a new user account in Workgroup Manager
Under the Basic tab, enter a Name, Short Name, Password, and set Administrator Access. Under the Home tab, press the Add button and enter /Users/USERNAME in the Full Path field and press Ok. Save your changes and click the Create Home Now button. This should create a new user just as if you did so using the Accounts preference panel in System Preferences and also generate their home directory.
Setting a user’s home directory in Workgroup Manager
To finish configuring access for each user to allow passwordless access using their individual public/private keypair, the user’s public key needs to be copied to an authorized_keys file in a .ssh folder in their home directory on the server.
Copy each user’s public key file to the server into their home directory. Exactly how you do this isn’t particularly important, but putting the key in the right place, named correctly and with the correct permissions is.
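The command for appending the key is missing in this copy of the post; run as (or on behalf of) each user, it looks roughly like this, assuming the key was copied into the user’s home directory:
# Run as the user in question, or prefix with sudo and fix ownership afterward
mkdir -p ~USERNAME/.ssh
cat ~USERNAME/id_rsa.pub >> ~USERNAME/.ssh/authorized_keys
chmod 700 ~USERNAME/.ssh
chmod 600 ~USERNAME/.ssh/authorized_keys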
The cat command will take the contents from a file named id_rsa.pub and append it to the end of a file named authorized_keys or create a new file if it doesn’t exist. Repeat for each user needing access the Subversion server and replace USERNAME with the appropriate value. You can do this from a single administrative user account or by logging in as each individual user in sequence.
If a user has more than one computer which may require access to the repository, you can include more than one public key in the authorized_keys file; just ensure each is on its own line. Using the cat command above will do just that.
Step 5: Secure SSHD on the Server
Out of the box on Mac OS X, SSH is relatively secure, but there’s more we can do to improve its resilience, particularly on the server side of things. To enhance the security of the server, edit the /etc/sshd_config file in the Terminal.
cd /etc
cp sshd_config sshd_config.orig
sudo pico sshd_config
Locate and edit the following list of configuration properties for the SSHD daemon process so they appear as shown below. Press Control-O, then Control-X to save the changes.
Protocol 2
PermitRootLogin no
PasswordAuthentication no
X11Forwarding no
UsePAM no
UseDNS no
AllowUsers [list of users -- see Step 4]
The list of users to be allowed should be based on the user’s short name and separated by a space. Note that you can skip changing the PasswordAuthentication setting if you may need to provide password access.
Note: If you need to add a new user later, you will also need to add that user to the AllowUsers setting in the sshd_config file and restart the SSH process on the server. Also, if you really want to secure things a bit more, change the default port to something other than 22. The catch is that you will have to include the custom port as a parameter when connecting via SSH.
Step 6: Create Aliases of the Subversion Applications
In case you were wondering… this is where we get really nerdy.
To allow more than one user commit access to the repository, when logging in via SSH, each authenticated user will run their own instance of the svnserve process on the server. As such, the process needs to run with a specific umask in order to prevent permission problems.
There are two things we need to do in order to make this work:
First, create a few simple shell scripts that run the appropriate svn application using the required umask. This should be done for svnadmin, svnlook and svnserve.
Second, prefix each user’s public key in their authorized_keys file with a command that forces connections through the wrapped svnserve (covered in the next step).
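The commands for creating the wrapper scripts didn’t survive in this copy; assuming the aliasbin directory referenced in the PATH earlier lives at /usr/local/aliasbin, something like this creates it and opens the first wrapper for editing:
sudo mkdir -p /usr/local/aliasbin
sudo pico /usr/local/aliasbin/svnserve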
Then in the pico editor, type the following. Replace svnserve in the example with each of svnadmin, svnlook and svnserve.
#!/bin/sh
umask 002
/usr/local/bin/svnserve "$@"
Press Control-O and then Control-X to save your changes, quit the editor and return to a new prompt. Finally, set the necessary ownership and permissions on the scripts.
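The exact commands aren’t shown in this copy; something along these lines should do it (ownership by root and group wheel is an assumption based on the usual /usr/local conventions):
sudo chown root:wheel /usr/local/aliasbin/svn*
sudo chmod 755 /usr/local/aliasbin/svn*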
Step 7: Force Connections Through the svnserve Wrapper
In order to ensure that the new svnserve alias is used when a user is interacting with the Subversion server, a special command must be prefixed before each public key listed in a user’s authorized_keys file.
sudo pico ~USERNAME/.ssh/authorized_keys
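The exact prefix from the original post isn’t reproduced here; its general shape, assuming the wrapper lives in /usr/local/aliasbin and the repository root is /svn, is a single line like the following:
command="/usr/local/aliasbin/svnserve -t --tunnel-user=USERNAME -r /svn" ssh-rsa PUBLIC_KEY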
Replace USERNAME above with the specific user’s short name, and note that the command and the entirety of the public key must be on a single line with no line breaks. The value of PUBLIC_KEY should be the user’s existing public key. Save the changes by pressing Control-O and then Control-X.
Step 8: Create a Repository on the Server
You’re most of the way there now… you’re finally ready to create a new repository and project to test things out. The basics of this are no different than if you were using the basic file:// or svn:// methods to access the repository.
Note that you shouldn’t need to specify /usr/local/aliasbin before the svnadmin command because you should have that included first in your PATH variable. If you haven’t done that, go back to the environment setup section before proceeding any further.
To create a new repository and versioned project at the root of the server and set the necessary permissions (though technically you could really put this anywhere you wanted on the system), simply execute the following, replacing SVN_USERS_GROUP_NAME with the name of the group set in step four:
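The commands are missing from this copy of the post; a sketch consistent with the svnproj script shown earlier (repository at /svn) would be:
sudo mkdir /svn
sudo svnadmin create /svn/test_proj
sudo chgrp -R SVN_USERS_GROUP_NAME /svn
sudo chmod -R 770 /svn/test_proj
sudo chmod g+t /svn/test_proj/db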
The above commands create the repository directory itself, create a new test project (named “test_proj”) and then set the necessary permissions. The one critical command above is the last one which sets a sticky bit on the project’s “db” folder which ensures that permissions are maintained, particularly since more than one user will have write access to the project. This will save you frustration in trying to sort out why a second user all of a sudden cannot commit a change to the repository…
Finally, in order to secure the project so that only authorized users can read and write to it, you should edit the svnserve.conf file for the project and set the appropriate permissions as below. By default anyone who can login to the server should be able to access the repository in a read-only state, but no one has write access. This is clearly not right, so let’s fix that.
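The exact settings are missing from this copy of the post; they amount to letting authenticated users write while leaving anonymous access read-only, the same values the svnproj script writes out. Open the project’s svnserve.conf (the path assumes the /svn repository used above):
sudo pico /svn/test_proj/conf/svnserve.conf
Then set the following:
[general]
anon-access = read
auth-access = write
[realm]
realm = Projects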
Press Control-O and then Control-X to save your changes and return to a new prompt.
At this point you should have a basic project created and the necessary permissions set to ensure that all users will be able to access it as needed. One caveat of repository access using svn+ssh is that there is no mechanism to restrict access to specific users on a project-by-project basis; other access methods provide simple facilities for this using configuration files, but those configuration files are not used when accessing a repository via svn+ssh.
When you create a new project in your repository later on, simply repeat the process illustrated above. You can obviously skip the step of creating the repository directory itself.
Step 9: Check out Your Test Project
That’s it. Everything should be set and ready to roll. You can test that your Subversion server is configured properly by performing a simple checkout of your test project.
In a Terminal window on your local workstation, type:
cd ~/Sites
svn checkout svn+ssh://USERNAME@IP_OR_HOSTNAME/test_proj
If all goes well, the project should download securely over SSH to the Sites folder on your Mac workstation. You’re then free to test committing a change back to the server.
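For example, using a throwaway file (readme.txt here is just an illustration):

cd ~/Sites/test_proj
echo "Hello Subversion" > readme.txt
svn add readme.txt
svn commit -m "Adding a test file"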
If things work the way they should (cross your fingers), you should see a message indicating your change was committed to the server as version 1.
Wrapping Up and Final Notes
As it turns out, setting up secure access to a Subversion repository is not for the faint of heart, so congratulations if you made it this far.
As noted earlier, there are a few other things you might want to know about how things are configured, and your best bet is to grab a copy of the official Subversion book and read through the relevant chapters. In particular, although you’ve provided secure access using public/private keypairs and set a command value in the authorized_keys file which otherwise prevents normal SSH access into the server, it is possible that a user could gain SSH access through other methods. In order to grant as few permissions as possible, you may want to set a few more restrictions by adding additional options immediately after the command in the authorized_keys file. You can read more on this on page 168 of the official Subversion book.
Questions, comments, and any errors or typos in the above are encouraged to be noted in the comments. Finally, as with any such tutorial, YMMV.
One of the small tasks I set out to accomplish as part of moving this site (and numerous others) over to Slicehost was to fix a few plugin-related problems and template logic that broke at some point, possibly due to Movable Type updates, other template changes or just insufficient testing.
Fixing Gravatar support in the Notebook was one such problem. There are a number of versions of the Gravatar plugin for Movable Type floating around on the internets, but all are outdated and as such, incompatible with version 4.x. So I set out to figure out why, and as it turned out, the fix was simple and straightforward.
The problem came down to this: the URLs being constructed by the plugin were wrong, likely due to the plugin being developed long before Gravatar’s 2.0 re-launch a couple years back and whatever changes were introduced as part of that. So for anyone else who’s run into this issue, or wants this functionality for their own site, hopefully this saves you a few gray hairs.
This updated version of MT-Gravatar is also available from the Movable Type Plugins site, and the necessary documentation can be found within the plugin itself.
The Wishingline site and our Notebook may be inaccessible for a short period of time this weekend beginning around 12:00 midnight EST on Friday as we move everything over to our new slice at Slicehost, something we’ve been quietly coordinating for a few weeks now.
Just about everything should already be in place, so we’re hoping that the transition will be more or less seamless and hiccup-free. The big unknown is always “how long will the DNS take to propagate” once we flip the switch…
Whenever there’s been a few spare moments since we first released our Webkit CSS bundle for for TextMate, we’ve been diligently making progress at adding the handful of missing Webkit CSS properties and making minor adjustments to the organization of the bundle contents.
More than that though, we’ve been working on implementing the ability for the bundle to be easily updated from within TextMate itself without having to restart the application. Today, we feel confident in saying that it’s working properly, although with a couple minor caveats.
Caveats
In order to support auto-updating, and to get an initial build of the bundle itself, you’ll need to have Subversion installed somewhere on your system. The easiest way to get this is to install Apple’s Xcode developer tools or the iPhone SDK. Both are free downloads (and available on your Tiger/Leopard install DVDs).
The updater will do its best to locate the svn binary in order to perform updates, but if not found, will output a short error message.
The other caveat (which we hope to eliminate or make easier to manage soon) is that the updater expects the bundle to be installed in the Library/Application Support/TextMate/Bundles/ folder in your home directory, though technically you should be able to install bundles for all users on your computer in the Library folder at the root of your drive.
Updater Usage Notes
Updating the bundle periodically is simple. Select ‘Webkit’ from the Bundles menu, and then the ‘Self-update bundle’ command which will do the rest.
You can update the bundle directly within TextMate through the magic of version control
We’ve done a bit of testing in the wild on our own, but we’re of course interested to know if you’re using the bundle and run into any problems with the updater. Feel free to drop a note in the comments or file a bug on the project’s Google Code page.
Recently we’ve found ourselves working on a few projects that lended themselves to either allowing, or requiring us to use some newer Safari/Webkit-specific CSS3 features, and in the time since we’ve started to put together a bundle of language snippets for TextMate (our preferred editor) to make us more efficient, and to make remembering this stuff a bit easier.
The bundle, which currently contains nearly every new -webkit-prefixed property currently listed in Apple’s Safari/Webkit documentation along with a few snippets of code related to creating and using offline SQLite databases in Webkit is available via the project’s Google Code repository at:
http://code.google.com/p/webkit-css3-bundle/
In the spirit of open source, we’re releasing this software under the MIT license (which we hope is a suitable option), meaning you’re free to download, use, modify and redistribute it. Of course rather than distributing it yourself, we’d appreciate it if you’d instead simply refer folks to the project’s repository. No specific ownership or warranty is implied (YMMV) in the included language snippets.
Although Subversion access to the project is currently restricted, if you’re interested in contributing to improve and enhance this bundle, please get in touch with us and we can discuss providing access to the project. Errors and omissions should be reported via Google Code. Any general comments and feedback are welcome here in the comments though.
And no, we’re not dead. Busy. A bit dozy in the mornings, but starting to come up for air.
SXSW Interactive 2008 is almost upon us - only a couple days left before a large part of the population of design/web and interactive geeks from around the world descend into Austin for 4 days of panels, parties, and socializing.
SXSWi 2008 badge preview
The new (yay!) Wishingline Design Studio, Inc. office will be closed while I’m away for the conference and to spend some time with clients, but I’ll do my best to stay on top of e-mail and voicemail.
And if you happen to be in Austin for SXSW, please do say “hello”. Ask nice and I might have a button or two for you as well.
I really have no aversion to big prizes, adulation or going home with a nice trophy, so I’d appreciate your vote. You can toss one vote this way every day until March 9th when the awards are handed out. Make my mom proud!
Yesterday, SXSW announced the finalists for their annual Web Awards and guess what? The Wishingline designed and developed site for FiveRuns has made the short list under the CSS category! Needless to say I’m excited and frankly, just honored to be nominated.
Screenshot of the current FiveRuns homepage
The FiveRuns site (the one nominated) has undergone many changes since its inception back in 2006 — from a tiny pre-beta release site developed prior to the launch of FiveRuns’ flagship Manage product to the much more fully realized site that exists now. Of course there’s more to come in 2008.
Even though I don’t really expect to win (that’s the politically correct thing to say right?), I suppose I should write an acceptance speech just in case… :)
The Interactive Web Awards will be handed out by emcee Eugene Mirman on Sunday, March 9th at the Hilton Austin Downtown.
Way back in February 2006 I put together a pair of Mac folder icons for Rails developers consisting of one to use for Rails projects and another for the Lighttpd server folder. Due to the recent release of Leopard which completely changed the standard folder design used throughout the system (for better or worse depending on your point of view on the obvious accessibility problems this introduced), I’ve revised the icons so they’ll blend in more naturally with their new surroundings.
512px size Ruby and Rails and Lighttpd folder icons for Leopard
The new icon set includes the whole range of sizes from 16px all the way up to the giant 512px icon size. As is the case with any downloads I make available here, please do not redistribute the icons or attempt to pass them off as your own.
Though nearly two months from kickoff, 2008 conference fever is already ramping up with two big ones currently marked on the calendar, tickets purchased and hotels arranged with more surely to be added as the year goes on.
First, one of too few relevant and topical Canadian-based web/design-related conferences — Web Directions North. Unfortunately due to other commitments I missed the inaugural event last year, but after speaking with both Derek Featherstone and Dave Shea during SXSW, which only shortly followed WDN, I realized I couldn’t afford to miss it a second time.
Web Directions North 2008
Given the great lineup of speakers, can you afford to miss it? I’m excited — new faces, old friends, and no doubt spectacularly organized! Plus I haven’t been to Vancouver in over 10 years which is a treat in itself.
SXSW Interactive
And then there’s old reliable — South By Southwest down in lovely Austin, Texas. Last year, oddly my first year attending, was a blast and I’m looking forward to catching up with friends, hopefully generally more interesting talks and panels than last year and just an all-around good time. I’ll be at the Hampton and staying a couple extra days at the end of the Interactive portion of the conference to visit with clients and hopefully putter around Austin a bit with anyone staying for the week of music mayhem that starts when Interactive ends.
Hope to see you there at one or both conferences. Do say “hello” — I promise I don’t bite.
In starting (somewhat) fresh with this new version of the notebook, one critical thing on the list of must do items was to finally do away with the old popup window style comments. These were a throwback and perhaps unfortunate decision made when this site was first built on Movable Type 2.x and I chose to use monthly archives as the primary archive type instead of individual entries. Hindsight is 20/20 I’ve heard…
Upgrading to Movable Type 4 and cleaning out the attic presented an opportunity to rectify this problem. The primary archive type used throughout, now individual entries, allows inline commenting without requiring popup windows. Changing the commenting behaviour provided a second opportunity — to allow the use of John Gruber’s Markdown syntax instead of vanilla HTML in the comments, something I’ve wanted to do for some time now.
Essentially this means that plain old links dropped into the comments will be converted automatically, but will definitely receive the rel="nofollow" treatment.
My reasoning is simple. One — I use it myself. Every post in the notebook has been written using Markdown. Two — it’s easy to learn, use, and has the right amount of syntax flexibility in terms of what I’m willing to allow.
Movable Type 4 blog comment settings
While setting up commenting to allow Markdown formatted comments I discovered a problem: certain parts of Markdown’s formatting syntax were being ignored and converted into plain text. My first thought was that this was a bug in either Markdown or in Movable Type itself, but after a bit of digging using Google and in the documentation for Movable Type itself, I recognized the problem.
Out of the box, Movable Type’s comment feature will only allow certain HTML tags to be included. Anything else will be automatically stripped out — for example: code, blockquote, h4, h5, h6. To change this behaviour, it’s simply a matter of specifying your own subset of HTML elements which will be acceptable in comments and setting the appropriate text filter in the Movable Type blog comment settings. The specific details on how to do this are:
In your blog’s comment settings, choose Markdown for text formatting.
Click the “Allow HTML” checkbox to enable comments to accept plain old vanilla HTML.
Under the “Limit HTML Tags” options, use your own settings to specify the tags you want to allow in comments.
Uncheck the “Allow HTML” checkbox once you are finished entering tags in step 3. Save your settings and rebuild your entries.
Although I haven’t tested this, I suspect the same procedure will also work if you choose to use Textile formatting for comments.
Believe it or not, I’m in the midst of a not-insignificant design refresh of the blog (no, seriously!) and as part of that I’ve been looking at making some modest accessibility improvements under the hood. Part of that has been adding or improving accesskey support which I quickly discovered has changed in Leopard depending on if you are using the new Spaces feature.
Under normal circumstances accesskeys are triggered by pressing the Control key plus the specific alphanumeric key. If you’ve enabled Spaces, using Control and a numeric key will instead switch spaces, at least by default. Instead you need to use Control-Option plus the number key.
You can change the keyboard settings (use Control, Command, Option or none) from the System Preferences for Spaces to potentially avoid this issue entirely though using Command would conflict with Safari’s bookmark handling features.
On the other hand, using Control and some other alphabetic character should still work as expected and as they did in Tiger if Spaces is not enabled (which is the default in a clean, out of the box Leopard install).
For those in the web/design/interactive realm, SXSW is like Mecca. It’s this place you go every year — sometimes to hear great panel discussions, other times just to meet and hang out with your friends and contemporaries.
A few weeks back, the SXSW crew posted the 2008 panel picker giving you and anyone else who wants the chance to vote on the panels most deserving to be included in the SXSWi 08 lineup.
While it might be a bit of a popularity contest in some regards, you might be interested to know that my buds Brian Warren and Mark Bixby along with myself have an entry in there called ‘Finding a Niche vs Doing it All’ which we recommend you give high marks to.
Taking a cue from Shaun Inman, author of the original implementation, and the fellow who wrote this handy Rails helper, I’ve put together a plugin for Mephisto providing a new text filter/tag to bring better typography to headlines, lists, and more.
In conjunction with this, I’ve created a new git repository and made the plugin available publicly so any updates are handled more easily, at least from my end. The initial release is now available by running:
I’ve given the plugin some limited testing in an existing Mephisto install (running off a now slightly out of date build of Mephisto, later than the 0.7.3 release) with no problems noted. There’s nothing special in the plugin so it should work fine in 0.7.3 and higher. Of course, YMMV.
I just finished the first annual A List Apart 2007 Web Design Survey and you should too. The survey took less than 5 minutes to complete and you’ll be offered a chance to win tickets to an upcoming An Event Apart conference or a 30 GB iPod provided you pass along your e-mail address at the end.
I’m not quite done with Subversion yet and have a few more tutorial-type entries planned over the next while, provided the day-to-day comings and goings don’t get too much in the way, along with finally getting an article I’ve been very, very slowly chipping away at (sorry Caroline!) for the last few months out the door and onto the editor’s desk.
That said, today I want to cover a simple nicety you can add to your Subversion install allowing you to more easily stay on top of incoming changes. This is particularly useful when more than one person has commit access to a project.
Monitor Subversion commits via RSS
Today we’re going to generate RSS output of changes being committed to Subversion. As usual, you’ll need your web browser, a text editor such as TextMate, and a Terminal window.
Getting Our Tools Together
To accomplish our goal of having an RSS or Atom-formatted XML file of repository changes output, the first thing we need to do is grab a copy of svn2feed.py, a hook script that will do the heavy lifting for us.
You can download svn2feed.py here (Right or Control-click and choose “Download Linked File” from the contextual menu).
Now that you have the file downloaded to your Desktop, using the Terminal, copy (or move) the file to the “hooks” directory in your repository. Using the example from the previous tutorials, let’s assume that is /usr/local/svn/.
Note that there’s a “hooks” directory in each versioned project, but also a global one for the repository itself which is the one we’re interested in.
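Assuming the file is sitting on your Desktop, copying it there amounts to:

sudo cp ~/Desktop/svn2feed.py /usr/local/svn/hooks/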
Next, change the file permissions on the script to ensure it is executable.
sudo chmod ugo+x /usr/local/svn/hooks/svn2feed.py
Our script to do most of the work is now in place. Next we need to create a post-commit script which will be executed, you guessed it, after a user commits a change to Subversion.
Automating RSS
In this case, we’re only interested in generating a feed for one project in the repository. Using our previous example, let’s create a new file called post-commit in /usr/local/svn/metropolis_blog/hooks/ and open it in your preferred text editor.
Take the contents below and copy/paste it into your post-commit file.
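A sketch of that script is below; double-check the option names against the documentation in svn2feed.py itself, and swap in your own domain for the placeholder URLs:

#!/bin/sh
# Subversion passes the repository path and the new revision to this hook
REPOS="$1"
REV="$2"
/usr/bin/python /usr/local/svn/hooks/svn2feed.py \
  --format=atom \
  --max-items=100 \
  --revision="$REV" \
  --item-url="http://www.example.com/projects/metropolis_blog" \
  --feed-url="http://www.example.com/rss/metropolis_blog.xml" \
  --feed-file=/Library/WebServer/Documents/rss/metropolis_blog.xml \
  "$REPOS"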
You can see the full documentation for what each of these items do in the svn2feed.py script, but the gist of it is that we’re telling the script to execute svn2feed.py using Python (which is installed by default on Mac OS X), keep the last 100 entries, use the Atom format, set the revision number based on the commit, set a permalink using the item-url, the full address of the feed itself via http, and where to actually save the XML file that gets output.
The REPOS variable is the path to your project in the repository.
Save the changes and close the file. We’re almost done.
Creating The Output Directory
Lastly, create the rss directory on your web server (eg. in /Library/WebServer/Documents) and make sure it is writeable by the script.
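For example, with Apache’s default document root on Mac OS X (loosen the permissions further, or chown the folder, if the user running svnserve can’t write to it):

sudo mkdir -p /Library/WebServer/Documents/rss
sudo chmod 775 /Library/WebServer/Documents/rss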
You’re now ready and can easily verify that everything is functioning by committing a change to the repository. If no XML file is output, odds are that the permissions are not set correctly. Assuming everything works as it should, simply subscribe to your RSS or ATOM feed and enjoy!
Before I forget, you might want to make sure Apache is running prior to attempting to subscribe to the feed ;-)
In the earlier post we ran the svnserve daemon manually. While this is fine as a one-off event, you shouldn’t have to worry about remembering to start the process manually every time you restart your system. Instead you’ll want to automate it. Thanks to the powerful launch facilities built into Mac OS X, this is a simple process and I’ll make it even easier for you.
The preferred way of launching background processes in Mac OS X is to use launchd by creating LaunchDaemons and LaunchAgents: simple plist (Property List) files which instruct launchd on how to start or stop these processes. The important difference between the two is that LaunchDaemons are intended for processes which should remain running even with no users logged into the system; perfect for Subversion.
Download this LaunchDaemon plist file and copy it to the LaunchDaemons folder in the Library folder on your system. Open the file in your preferred text editor and look at line 16. If you followed the earlier post on setting up Subversion, then you don’t need to do anything. If you created your repository in another location, you’ll need to edit the path to the repository on that line. When you’re done, save the file.
We’re now ready to make sure it will work. If you’ve got the svnserve daemon running, open up the Activity Monitor and locate the svnserve process. Select it and press the Quit Process button in the Activity Monitor toolbar. You should be asked for your administrator password. When the process exits it will disappear from the list.
After the process has closed, switch back to the Terminal. We’re ready to test our LaunchDaemon to start it up again. In the Terminal, type the following:
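Assuming the plist’s file name matches its label, that command is:

sudo launchctl load /Library/LaunchDaemons/org.tigris.Subversion.plist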
Enter your administrator password. You should be returned to a new prompt in the shell if everything goes well.
To verify that our process is registered with launchd, we can print out a list of all the processes run with launchctl by running:
sudo launchctl list
You should see the org.tigris.Subversion item in the list. You can further test that the LaunchDaemon works by simply restarting your system and again checking the Activity Monitor to verify that the svnserve process is running.
Source control is often thought to be geared more towards developers and those doing more traditional computer science-type programming, especially when working in a team environment, but it is an invaluable tool for web designers and developers alike.
Source control comes in many flavours — the two most popular and widely used systems being CVS (Concurrent Versions System) and Subversion, a successor to CVS which significantly improves on the major gripes most people have with the CVS source control system.
The problem with both is mostly one of approachability. Once you get the hang of them it becomes natural, and there will be times when you wonder how you ever survived without source control, but until then, using (and more so, setting up) your own source control system is a daunting task.
First steps: pick one. Your best bet is Subversion as it has been gaining in popularity and is under active development in the open-source world. Ask anyone doing serious development in Ruby on Rails for example and I’d bet 10 out of 10 times they’ll say they’re using Subversion.
If you’re a lucky developer working on Mac OS X, getting up and running with Subversion is trivial provided you have a handle on a few basic Unix commands. In this mini tutorial, I’ll walk you through installing Subversion, creating a new repository and importing a project. Ready to get started?
Step 1: Installing Subversion
We can cheat here and go the quick route using an installer package created by Martin Ott of The Coding Monkeys, the fine folks behind SubEthaEdit. Once you’ve downloaded and un-zipped the .pkg installer file, double-click the installer and run through the setup screens. Subversion will be installed into /usr/local which is where you want it since it won’t mess with anything in the core Mac OS X install.
The Subversion binaries are installed in /usr/local/bin. Of interest are svn, svnadmin and svnserve. The first two are your administrative tools for interacting with Subversion, while the svnserve binary (application) will allow you to run your own Subversion server that you can work from.
At this point you should have Subversion installed.
Step 2: Customize Your $PATH
To make working in the Terminal easier, we should tell your shell of choice (typically Bash) where to look for executable programs such as the Subversion binaries. To do this, you need to create a file in your home directory (eg. /Users/your-user-name) named one of bash_login, bash_profile or bashrc.
In order for the file to be recognized by the shell as a configuration file it needs to be saved with a period (.) at the beginning of the file name. To create the file, open up your text editor (TextMate or BBEdit will do fine) and add the following:
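A single line along these lines is all it takes:

export PATH="/usr/local/bin:$PATH"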
When you’re done, save the file. Remember to prefix the file name with a period: .bash_profile, for example. You’ll need to open a new Terminal window for the change to be loaded. You can test that things are working by typing sv and press Tab. If the name auto-completes, you’re good to go.
Step 3: Setting Up A Repository
We’re now ready to create a new repository. This is where our files will be stored. This is not where we directly interact with and modify files though, but where data is pulled from and committed to when we make changes. If it’s not all clear, it hopefully will be shortly.
Open a new Terminal window (you can find the Terminal application in the Applications/Utilities folder on your computer). Type the following command:
sudo mkdir -p /usr/local/svn
This will create a new folder named ‘svn’ in /usr/local after you’ve entered your administrative password which you will be asked for. This will be our Subversion repository. If you’d rather use a different location, feel free to change the path. For example, an external drive or in your Home directory.
Assuming you followed the above, you’ll need to change the group ownership on the ‘svn’ directory in order to be able to write to it. To do this, type the following at a new prompt in the Terminal:
sudo chgrp -R admin /usr/local/svn
This changes the group associated with the main folder and recursively down into the folder to the admin group in Mac OS X. As long as you belong to that group, then you should be able to write to that folder.
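Depending on the default permissions, the admin group may not have write access to the folder yet; if not, one more command takes care of it:

sudo chmod -R g+w /usr/local/svn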
Step 4: Create a New Project in Subversion
We’re now ready to create our first project in Subversion. This will get us our initial setup from which to work. As an example, let’s say we’re creating a new blog for a client named “Metropolis & Co.”. We might name the project metropolis_blog. To create the new project, back in the Terminal, enter the following:
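That boils down to a single svnadmin call (no sudo should be needed if the group change from the previous step took):

svnadmin create /usr/local/svn/metropolis_blog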
If all is successful, you should be returned to a new prompt in the Terminal.
Step 5: Securing Our Project
The next thing you might want to do is secure access to your project, especially if you’re working in a team environment with different people on different machines or in different places. There’s a bunch of different things you could do here and I’m going to keep it simple for now. Just the basics — controlling read/write access and adding usernames and passwords.
In order for multiple people to interact with your new Subversion repository, you need to run svnserve on the system you ran through the previous steps on. So, before we start up the server, we need to configure the access details which can be done on a per-project level. So in our case, we want to edit the settings for our ‘metropolis_blog’ project.
In the Terminal, switch to the project directory by going to:
cd /usr/local/svn/metropolis_blog
In that folder you should find a series of directories and files. Right now we’re only interested in the conf directory’s contents.
cd conf
or
cd /usr/local/svn/metropolis_blog/conf
In the conf folder you will find three files: authz, passwd and svnserve.conf. We won’t look at the authz file now, and instead start by editing the svnserve.conf file:
sudo pico svnserve.conf
You can read through the usage notes in the file, but the basics of what we want to do here is enable read-only access anonymously and make write-access require a username/password which we will specify next. To do this simply change the matching lines in the svnserve.conf file by removing the preceding hash mark (thereby uncommenting the lines).
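Once uncommented, the relevant lines look like this:

anon-access = read
auth-access = write
password-db = passwd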
If you wanted, you could create a new password file in a different location and point to it in the file, but in this case, we’ll just use the defaults. Save the file by pressing Control-O and then Control-X to quit the pico editor. If you have TextMate installed you could alternatively edit these files with it.
Next, let’s create two user accounts for which we’ll grant write access to the repository. In the Terminal, type:
sudo pico passwd
Using the examples already present in the passwd file, add a couple new username/password combinations. For example:
user1 = password1
user2 = password2
These are obviously crummy account credentials. I trust you to come up with something a bit tougher to figure out. When you’re done, press Control-O and then Control-X to save and quit.
We’re now ready to start up our Subversion server and import some files into our project.
Step 6: Start the Subversion Server Daemon
Starting the background daemon process for Subversion’s built-in server is as simple as running:
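Assuming the repository lives at /usr/local/svn as set up earlier:

sudo svnserve -d --root /usr/local/svn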
Here we’re telling the daemon to run as the root user on the system, run as a daemon (background process) and use our repository (the --root here indicates the root of the repository, not the root user in Mac OS X which is simply implied by executing the command with sudo).
If you check in the Activity Monitor application on Mac OS X, you should see the svnserve process listed. If so, you’re set to go to the next step.
Step 7: Importing Files into our Project
Now let’s create a fictitious project structure which we can import into our project. On your Desktop, create a new folder called import. Inside that folder create three subdirectories named trunk, branches and tags. We’ll use this as the base for our import.
Once those folders have all been created, in the Terminal, type:
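A minimal import along these lines will do (you may be prompted for one of the username/password combinations created earlier):

svn import ~/Desktop/import svn://localhost/metropolis_blog -m "Initial import"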
Assuming all goes well, you should see some output in the Terminal indicating that your folders were added along with a revision number. Now is where the fun starts. Now we need to test that we actually, really did commit something to Subversion.
Step 8: Sanity Check
To verify that we did in fact commit something into our repository, the best way to do a sanity check is to check it back out somewhere. So let’s do that, make a quick change and commit the change back to Subversion.
To check out your project into a working directory (often called a ‘sandbox’), do the following in the Terminal:
svn co svn://localhost/metropolis_blog ~/Desktop/my_checkout
This should check out the contents of the project into a new folder called my_checkout on the Desktop. If it worked you should see a nice confirmation message in the Terminal and find a new folder on your Desktop named my_checkout with the previously imported folders inside. Cool, eh?
Now we want to create a new file, add it to our repository and then commit the file into the repository. You can add and remove files to your heart’s content with Subversion, but until you commit the changes you don’t actually affect the repository, only your local working sandbox.
So create a new file in your favourite text editor. In this case, let’s create a file called readme.txt inside the trunk subfolder. Now back in the Terminal we’re going to add the file and then commit it to Subversion (press Return after each line).
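Back in the sandbox, that amounts to something like:

cd ~/Desktop/my_checkout
svn add trunk/readme.txt
svn commit -m "Adding a readme file"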
As usual, Subversion should provide you with some feedback indicating whether your new file was added or not. If so, you’ll see a new revision number. At this point you’ve got a nice little development environment setup for source control for your projects. And now that we’ve done our sanity check, you can safely delete the import folder you started with.
If this was helpful or if you have any comments or corrections, please feel free to leave them. I do have another small piece to add to this tutorial but which will be included separately in the next day or so.
Updated on March 4, 2007 to add details for customizing your $PATH variable in the Terminal.
It’s amazing the little things you can learn when you RTFM. Like the other day when I came across mention of a new command in the RubyGems gem program (version 0.9.1 or newer) which makes it easy to check against a remote repository to see if your installed gems are up-to-date.
The command is straightforward: sudo gem outdated
You’ll get a nicely output list of your outdated installed gems showing the current version followed by the most recent version number. The usual gem update procedures apply from there. Happy updating!
For freelance designers, as with larger agencies, clients are our bread and butter. Without them there’s really not much point. Without them we’d all be queued up in the unemployment lines.
Design is this big unknown to people. They can usually recognize it or point out things that have been “designed”, but ask them to describe the process of getting from an idea to a final product and many wouldn’t have the first idea where to start.
It’s our responsibility to educate clients so that our working relationships are easier and the work more enjoyable — whether it be setting reasonable expectations, clarifying deliverables, ensuring clients understand that we can only do so much without requiring input from them, and making sure that they understand what they’re paying for and why it’s important.
This is something I think we’re collectively still failing to do.
What Problems?
There are a myriad of problems facing designers today. New technology, new communication mediums, uneducated clients, uneducated designers, too much work, too many distractions. The list goes on and on. Rather than try to cover an impossible amount of information, I’m going to take a stab at highlighting a few particular problem areas based on my own experiences.
These are:
The design process
Enticement (AKA Don’t Waste My Time)
Work on spec
Timely responsiveness and communications
The Design Process
Design can be a tricky thing. It’s hard to quantify and harder to explain. Every designer has their own process for getting from an initial brief (if you’re lucky enough to get one) to a final, billed and closed docket.
A project might involve research, user studies, competitive analysis, initial concept development, wireframing, design, code, database design, etc. There’s a million things that could go into any one project. Every project is unique in its own way with its own hurdles to leap over.
No wonder it’s difficult to educate clients on what we do.
Some clients, for the sake of reducing their costs might ask to cut out, for example, wireframing. They don’t see the value in it. They don’t get the warm and fuzzy feeling of seeing Photoshopped comps; something that looks “real”. Sure there are times when wireframes might not be necessary but it’s during projects where they could be critical that it becomes our responsibility to educate our clients as to why they should seriously reconsider.
Few clients understand the research process that should be included at the start of any design project. This usually means putting on your thinking cap and figuring out what the real problem you need to solve is and perhaps even scribbling down a few possible solutions. Research might mean doing some of the other things I mentioned earlier — like talking to the user base of a particular website (assuming there is one already) or even creating potential user profiles to understand who it is you’re going to design for, because we all know it really shouldn’t be the client themselves (although they are important in the equation too).
Not doing research up front is like writing an essay with no background on the topic. The up front process work is as important as everything else, including the outcome because if you get that wrong, there’s a good chance the final product won’t fit the bill either.
Enticing The Designer
Initiating contact with a designer can be a real problem. We have to remember that while it’s our job to foster a good relationship with our clients, they too have a role to play. It’s just as important for the client to provide value to the relationship — it’s not just why they’d want to work with you — it’s why you’d want to work with them in return.
An introductory e-mail such as the following does nothing to provide a reason to open a line of communication with a potential client.
Please call me asap regarding a new business concept.
That was the contents of a real e-mail I received — the entire e-mail. No phone number. No name either. Even better was this one:
how much?
Um… too much for you. If you have to ask then it’s definitely too much.
I get these regularly, and while these are extreme cases, the moderately bad ones aren’t much better.
When vying for the attention of a designer, here’s a few things to keep in mind — we need real information. Don’t waste our time with pointless e-mails like the examples above. Give us a problem to solve. Be clear. Concise. Tell us why we should be interested. Sell it to us. Why would we want to work with you? And assuming you get that far, commit to the project. Prove to us you’re serious.
The need to react quickly and make decisions in existing work and when dealing with new/potential work is a real challenge for designers. A lack of commitment from the client usually indicates problems down the road. And while your first instinct might be to just say “yes”, you’re better served by knowing when to say “no” and saying that more often.
No Spec!
Our time is not free. There — I said it.
Freelance designers and larger agencies are businesses and face similar problems to their clients — paying the bills being one of them. It’s not unusual for a client to ask for extra work to be done and be surprised when they receive an invoice for services rendered. Design is not a free ride and you get what you pay for. Billable time is more than time spent working in Photoshop or developing HTML or CSS. It’s also that up-front research and preliminary process work which is often overlooked, misunderstood, and rarely billed.
If a client asks for work on spec, just say no. You don’t want those clients no matter who they are. Doing work with no guarantee of a contract is not worth it and does nothing but hurt yourself and other designers by setting expectations which should never be there in the first place. It’s like asking a carpenter to build a bookshelf, deciding you aren’t happy with their workmanship and then going to a different carpenter to build the bookcase. Like I said, say no to work on spec.
Communicating and Responsiveness
Communications is a cornerstone of design. We use visuals to communicate ideas, values, and meaning. Design is more than just making something look pretty.
Steve Jobs said “design is how it works”, and while I agree, it is also about how it looks — at least that’s the belief held by many clients. Clients understand beauty; many don’t fully grasp how function fits into the picture.
It’s easy to find clients that can tell you they want a website that looks like “x site”, but it’s difficult to find one that can provide you with solid, rational thinking as to why that would be beneficial to them.
Clients are often good at saying, “I like this” or “I want something that looks like this”, but are challenged to tell you why with certainty or empirical evidence. It can be even worse when they don’t like something.
These are the same clients who may not fully comprehend what they’re asking of the designer. They’ve forgotten about the real key players — the people that go to their website and actually use it or buy their products. Those are the truly important people and the ones who often have no voice in the design process.
Ask a client why they want something (or don’t want something) and you shouldn’t be surprised if they can’t tell you. I think of this as a variant of “the customer is always right” — meaning, just do as you’re told. There’s a catch to consider though.
The client is (presumably) paying the bill. The job of the designer is, on some level, to please the client. The thing is though — it’s also our job to do what’s right. To do what’s right for the user — and that’s a tough thing sometimes because often, what is right for the real users of a particular website/application is something that is a tough sell for a client. The designer is typically the voice of the end user. Without us standing up for them, they have no voice. If we give in to the client every time, then the end-user loses but ultimately, so does the client even if they don’t recognize it right away.
There’s a certain amount of trust that needs to be established so that the client understands that, as the designer, you have theirs and their client’s best interest in mind rather than pursuing frivolous and selfish creative goals. Constant communication, debate and honesty are all good ways to foster trust with your clients and mitigate problems before they get out of hand.
We have to know when to fight for something and when to let go. To take something from the 37signals train of thought — ask yourself — “does it matter?” If the answer is yes, fight for it. If not let it go and focus on the important things.
On the topic of responsiveness — and this ties in with meeting deadlines — the client is just as responsible for keeping a project on track as the designer. A project can quickly come to a crashing halt when the designer is stuck waiting for feedback or the answer to a question from the client. The problem is that the designer is then expected to eat that wasted time and scramble to get the project done on time no matter what.
Communications should go both ways. Respond in a timely fashion. Everyone is busy — that’s a given. If you want your work to be taken seriously, you have to take it seriously and attempt to stay on top of it and provide responses so that things move forward, not stop dead in their tracks. Don’t assume the designer is a mind reader. We’re not. If you want us to do something, say so. Tell us why. And don’t wait two days to tell us either.
Designers should assume the same of their clients — spell things out in a way that people can actually understand. Treat your clients the same way you would like to be treated. If after a sufficient amount of time they aren’t responding accordingly, don’t be afraid to call them on it.
Wrap Up
Being a designer can be a fun and often exciting job. Being a designer, whether you work for yourself or an agency, means the general rules of business and etiquette apply. We don’t work for free. We expect commitments. We expect to be treated fairly and with the same respect we should be offering our clients. We expect honesty and integrity and are more than happy to educate clients on what it is we really do and why this is valuable to them.
Hopefully there are a few good lessons here. Feel free to share your own or your comments.
Like many others in the design, web, film, music and related industries, in March I’ll be making the trek down to Austin, Texas for SXSW. This will be my first time attending (finally) and I’m looking forward to meeting up with old friends, finally putting some proper faces to names, shaking hands and kissing babies.
Meet me at SXSWi
Seriously though, I’ve heard SXSW is a good time (lots of parties), and it looks like there’s a solid speaker/panel lineup, I just hope I can deal with all the people… WWDC is around 4000 — 4500 which is a lot. SXSW I’m guessing based on hotel availability will be quite a bit more.
And for anyone interested, I’ll probably bring a few mugs and CDs with me.
After a long transition, I’ve officially made the move over to TextMate from BBEdit during the last 6 months or so as the amount of Rails development I’ve been doing has increased. In that time though I haven’t had much opportunity to really dig into some of the power features or to really even get a handle on all the keyboard shortcuts, which brings us to the impetus behind creating this desktop — to help improve my (and possibly others’) TextMate skills.
I changed my mind since there’s been enough interest that I’ve put together a smaller 1280 × 800 size version for MacBook users (myself included). You can grab the smaller version here and a new, updated large version here (with a correction suggested by Wolf Rentzsch).
Comments and suggestions for improvement are welcomed.
Am I missing something or are the scrollbars in the standard theme in Windows XP 1 pixel wider than those in Windows 2000 and Mac OS X? I haven’t had any luck in getting an answer via Google so I’m turning to you, my ever-so wonderful readers to point me in the right direction.
On the Mac and in Windows 2000, scrollbars are 16 pixels wide. In Windows XP they appear to be 17 pixels wide based on a quick CSS test I performed earlier tonight. This is incredibly annoying to say the least.
Internet Explorer 6 has a lot of “quirks”. Some might call them bugs. Some might trade their soul for all users of said browser to give it up and just move to Firefox, skipping the just-released IE7 entirely. The web design community at large might get to keep more of their hair on their heads if that were the case. Sadly, not much chance of that.
One “bug” is an annoying flickering problem when images are specified as a background on links. The fix has been around for some time but I’ve seen very little mention of it anywhere. I came across it a few weeks back and finally had an opportunity to try it out in a real-world test. And guess what? It bloody well works!
So, here’s what you need to do.
In your site/app/blog/whatever — either encapsulate the bit of code below as a piece of inline Javascript or drop it into a linked JS file and you’re all set. No need to worry about onload handlers and the like. It just works.
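The snippet in question is a one-liner telling IE to cache background images rather than re-requesting them on every hover; wrapped in a try/catch so other browsers simply ignore it, it looks like this:

try {
  document.execCommand("BackgroundImageCache", false, true);
} catch (err) {}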
The renovations are done. The new Wishingline Design Studio, Inc. office looks great although we’re still not completely done with it yet. Everything turned out really well and we’re exceedingly happy to finally have an end to the dust, piles of 2×4’s and plastic sheets.
The newest Wishingliner seen here is now less than two weeks away. A big congrats to my buds Luke and Mathew on their latest additions.
On the business front, things have been a bit crazy. Work is good. Too much work all at once is also good, but in a painful kind of way.
Toronto Life redesign teaser
We recently completed some additional work for Toronto Life although it hasn’t gone live yet. We’ve also been working with some new and some old clients on identity design, web application and site designs and redesigns with more on the way.
Some of this work literally just wrapped so it’s still a bit early to really say much, but when it’s time you’ll hear about it. Until then, here’s a bit of a tease.
Hivelogic identity concepts
The Darns were nominated for a Toronto Independent Music Award for “Best Alternative Act”, but sadly did not win. Maybe next year. And the band is finally celebrating the release of ‘What It All Turns Into’ on November 18th with a big CD release bash in Toronto. Next up — something that hopefully resembles a tour.
There’s also been a few small changes and tweaks made to the site such as the newish homepage graphic, the little availability info on the homepage (also repeated elsewhere through the site), and an upgrade of Movable Type and the newly released phpFlickr 2.0 scripts which use Flickr’s new serialized API. I only had to modify one line of code to update my scripts to work with the new release which was a nice surprise.
That all aside, there’s still a boatload of work piled up and I should probably start in on it now. I’ll try not to let another month slip by…
Jon Hicks and Shaun Inman made me do it. I couldn’t bear to use Helvetica (too obvious), and since that would probably be considered contrary to the message, it’s Paralucent for this one.
Design is more than Choosing Nice Fonts desktop wallpaper
Back in early May I talked about what I dubbed ‘Sliding Door Buttons’. I’ve continued to evolve this technique to the point where it’s now behaving consistently across browsers and platforms.
Sliding Door buttons example preview
The essence of the technique and the reasons behind its usefulness remain the same, but there are now some additional enhancements that I think add to the implementation and provide basic design features that might otherwise be difficult to achieve using other methods.
Code Structure
The HTML code required is slightly rearranged and helps work around some basic problems in the previous implementation. But before we talk about any specific changes to the CSS, let’s look at the basic structure of a sliding door button.
<div class="buttons">
<a href="#" title="Add a new user" class="btn"><span>Add User</span></a>
</div>
<div class="buttons">
<a href="#" title="Cancel" class="btn-disabled"><span>Cancel</span></a>
</div>
The surrounding div element with the class “buttons” is not necessary; it’s simply included as part of this illustration. The basic code is an anchor with a span element inside it. Simple? Yes. Clean? Yes. Do we have the hooks we need to style it? Yes.
The only real difference between this version and the previous one is that I’ve reversed the order of the span and anchor and which element has the button class applied to it.
The CSS
As mentioned in the first part, the basic idea behind these buttons builds on Doug Bowman’s Sliding Doors of CSS technique but rather than being focused on site navigation, we’re instead focusing on a common UI element, the button. The approach is essentially the same: use simple HTML elements, two images (one for the left side and another for the right) and allow the button to expand as necessary to accommodate longer text.
The big change that helps resolve the consistency problem in the earlier implementation turned out to be very simple: use display: table-cell on the anchor element. For Windows IE, note that you’ll have to use display: inline-block since it’s the only browser to really support it (so far). You can do that simply with a conditional comment.
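The CSS boils down to something like the following (class names per the markup above; the image names and measurements are placeholders):

a.btn {
  display: table-cell; /* swap in display: inline-block for Windows IE via a conditional comment */
  background: url(btn-left.png) no-repeat left top;
  padding-left: 10px;
  color: #333;
  text-decoration: none;
}
a.btn span {
  display: block;
  background: url(btn-right.png) no-repeat right top;
  padding: 5px 12px 5px 2px;
}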
Following the example here, we could create as many variants as necessary with fairly minimal additions to the CSS code. To take it one step further, you could also add an inline image inside the space to add a simple icon to the button.
How To Create Disabled Buttons
The one real missing piece of the puzzle in terms of making the behaviour closer to a traditional input button is that we have no real way of disabling the button. It’s a link. It’ll always be a link. What we can do is use a bit of JavaScript magic to swap our sliding door button with a nulled button. In this case for the nulled button we remove the anchor and replace it with another span element (another inline element in HTML).
Here’s a quick example of this technique in action. I’m not demonstrating the JS swapping here. My suggestion there would be to look at Prototype for that sort of interactivity since it makes it very easy.
As before, I welcome your questions, comments and critiques. Simply drop a note in the comments.
Don’t get me wrong, Firefox is a great browser as are its numerous offspring, but like competitive browsers such as Safari and Internet Explorer, it has its own set of quirks and anomalies to frustrate web designers and developers.
Absolutely positioned scrollbars within Firefox
My current gripe is the way scrollbars are handled in relation to absolutely positioned elements. Simply: they’re not handled well.
The problem is that if you attempt to position an element (such as a DIV) above another element which has `overflow: auto` set and a lower z-index value, the scrollbar from the lower element pokes through the element with the higher z-index as you can see in the screenshot. Ugly.
Safari gets it right and there’s no show-through of the scrollbar since the z-index values seem to be honoured correctly. Heck, even IE6 gets it right.
I’ve scoured countless pages via Google looking to see if there’s an answer to this and haven’t found one. Even the latest build of Bon Echo, Firefox v2 still contains this bug/quirk.
The worst part is that I’m not even sure what the problem is other than it’s specific to Mozilla-based browsers. Is it an XUL problem? General CSS spec implementation problem? Something else…?
In setting up the launching pad for what I suppose will be my second endeavour in the “Web 2.0” application market following the initial beta launch of FiveRuns which went live last week, I quickly transformed the static “coming soon” page for remarkr into a simple Rails application to handle beta/information signups.
Local development with Rails is simple using either the default Webrick server or mongrel, but moving the application to my web-host of choice, Dreamhost, proved to be a bit frustrating. It all worked in the end, but was made tedious by some wonky documentation. In the hopes of saving someone else the same trouble I ran into, here are some additional notes on getting your Rails app running at Dreamhost.
Assumptions
I’m assuming you’ll be running the application from the web-root, meaning the main page of your app would be displayed if the user went to www.yourdomainname.com, and not app.yourdomainname.com.
Baby Steps
Before you do anything, be sure to create a database instance to use for your domain. Make a note of the address, username and password as you’ll want that information for the database.yml configuration file which should go under production.
You’re ready to upload your app via FTP. Dreamhost apparently requires most of the directories for your Rails app to have 766 permissions for folders and 664 for most files, the exceptions being your public directory and the log directory which can be set to 755. If you are getting weird errors and things aren’t running as expected, this is something to check.
Configuration
In your config/environment.rb file, be sure to uncomment line 5 to set the ENV['RAILS_ENV'] variable to ensure your app runs in Production mode.
ENV['RAILS_ENV'] = 'production'
While you’re at it, you may also want to store sessions in the database, and if so, uncomment line 28.
If you are planning on sending e-mail from your Rails app, you’ll need to set some defaults for ActionMailer as Dreamhost requires you to use smtp and to authenticate to the mail server in order to send mail. I recommend reading the mailer examples on the Rails Wiki.
Lastly, in order for URLs to be redirected properly, you may (this may be deprecated with Rails 1.1.2 and newer) need to add one last line to the end of the environment.rb configuration file. If so, it should look like this:
# Extra configuration to fix Dreamhost Routing problems
# Make sure to also uncomment the ENV variable (see line 5) above to set
ActionController::AbstractRequest.relative_url_root = "/appname"
Dreamhost Web Admin Configuration
Now you’re ready to make a small adjustment to the default domain setup. Essentially rather than leaving the Web Directory at “/”, we need to tell it to use the public directory for our app.
Dreamhost domain setup for Rails applications
So, if your application was called addressbook, your Web Directory would be domain.com/addressbook/public. Simple right? While you’re at it, make sure you have the Fast-CGI checkbox checked.
Dispatch!
The last thing we need to do is adjust how the application is dispatched, which means two more small adjustments.
Open up the .htaccess file in public and set the default redirect rule for dispatching Rails to use the fcgi script. Change line 28 to read:
RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
The last is changing the shebang line in dispatch.fcgi which you can find in the public directory of your application. For Dreamhost, it should be set to:
#!/usr/bin/env ruby
This will locate ruby and should generally work even in your development environment.
That’s it. Assuming all went well you should have a Ruby on Rails application up and running on Dreamhost.
One of the small tasks I set for myself in working on an upcoming web application project was to construct any buttons required in the app using simple anchors rather than using either input or button elements, handling the visual appearance with CSS.
This was a challenging task in some respects due to some cross-browser quirks (what else is new?) and the simple desire to not create excessive code for the sake of nice buttons.
In the end, a smattering of ALA technique and home-brewed trial and error did the trick and allowed a fairly robust and flexible system for constructing buttons while aiding accessibility and ideally making users with screenreaders happy as well.
The main designer/developer benefit is that these buttons are easy to style, can be easily repurposed to allow different styling and allow for translation into other languages without having to produce countless images. They also happen to work based on my testing in IE6/Win, Safari and Firefox. I haven’t done any testing in Opera, but I suspect that they should be fine in newer versions of that browser as well.
Cutting Up Your Buttons
Since this technique is based on Doug Bowman’s Sliding Doors technique, I suggest you give it a brush-up read if necessary. It lays the overall foundation for the sliding door buttons technique.
An example of Sliding door buttons with CSS
The short version is this: we need two images. A left side and a right side. The left side will occupy the space from the left edge of the button text to the left edge of the button background itself. You should get the general idea from the screenshot above.
One key thing to remember is to make the background of the button on the right side wider than you need. The reason for this is to allow the button to expand and contract with the length of the button text and to allow a certain amount of font scaling.
Mark Me Up
The basic markup is as simple as it gets. You need an anchor and a span. It looks a little something like the sample below.
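Something along these lines, where the class name and link target are placeholders:
<a href="/signup/" class="button"><span>Sign Up Now</span></a>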
Simple, no? To style the button, you apply the left background image to the anchor and the right background image to the span, remove the text-decoration from the links, add some padding so the entire button shape is visible, and set the span to display: inline.
The reason for placing the span inside the anchor is simple: doing it the other way around works fine until you get into IE and it all falls apart. Placing the span inside brings the added benefit of ensuring the entire button shape will be clickable by the user.
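A minimal sketch of the CSS described above; the image names, colours and padding values are placeholders rather than the exact values used here:
a.button {
	background: url("button-left.gif") no-repeat left top;
	padding: 5px 0 5px 10px;
	text-decoration: none;
}
a.button span {
	background: url("button-right.gif") no-repeat right top;
	padding: 5px 12px 5px 0;
	display: inline;
}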
Clean, (at least reasonably) semantic code is something I strive for when writing code as part of web projects at Wishingline Design Studio, Inc. It’s not always easy or possible due to numerous factors, but it’s a worthy goal nevertheless.
This is especially true of more marketing or information-driven sites where I feel there’s a greater likelihood of visitors using screen-readers or requiring enhanced accessibility. Try to provide a reasonably good experience for everyone — within reason. This is a philosophy I know is shared by many web professionals who care about standards, usability and accessibility.
What I’ve noticed of late is that a good portion of “Web 2.0”-style applications don’t necessarily follow those rules. Even 37 Signals’ applications are cluttered with non-semantic code, inline-styles and hordes of inline javascript. So much for the separation of content from style from behaviour.
What I’m curious about is how much does this matter? Is it bad or just personal taste? Do the requirements for web applications differ greatly compared to more informational pages (eg. blogs, marketing-oriented product websites, etc.)? Should they? Can we just get away with that sort of thing more easily with web applications than with regular vanilla web pages because of their general intended audience? Is it just a matter of the complexity of one type of web page vs another?
As part of a project I’m in the midst of, I needed to be able to take an array which may have one or more keys with empty values and remove those elements from the array in order to find its true length. In this case, I didn’t care about the actual data in the array, just the count() value returned.
After searching through the PHP documentation, I didn’t see any internal functions or methods that would do exactly what I needed, so out to Google I went and after a bit of searching, refining my query and testing, I found the solution which I thought I would share — if nothing else, because I hope it makes it easier for others to find.
The code looks like this:
// @param  $a  the array passed into the function
// @return the array with the empty elements removed
function array_trim($a) {
	$b = array(); // initialize the result so an all-empty input still returns an array
	$j = 0;
	for ($i = 0; $i < count($a); $i++) {
		if ($a[$i] != "") {
			$b[$j++] = $a[$i];
		}
	}
	return $b;
}
$b = array_trim($my_array_to_trim);
Simply create a variable, and then pass your array to be trimmed through the function. It takes care of the rest and will output a nicely compressed array of key/value pairs. The empty key/value pairs are removed whether they are at the start, middle or end of the array.
Of course, if there’s a better way, I’m open to suggestions!
A home renovations company flyer came through the mail slot the other day. In and of itself, this is not unusual. But the statement included near the end of the flyer stood out and is a good general business statement that I can’t say I’ve seen anyone really talk about, at least in terms of web design.
The quote is simply:
If we don’t take care of our customers, someone else will
In the web design/programming world this is very true. Designers and programmers are a dime-a-dozen. Face it, it’s true. Whether the majority of these people are true schooled or accredited designers/programmers is another matter, but there is always someone else waiting in the wings to pick-up a new client the moment you falter.
With this in mind, take a bit of time and think about how you can serve your client better; at least say thank you — keep them happy and keep them coming back.
Running your own local server for web development is a great thing, whether it be Apache, Lighttpd (“Lighty”) or something else. It makes it possible to develop and test under similar conditions to a deployment environment (unless of course you’re developing for a large-scale deployment across multiple load-balanced servers and such).
As I’ve said before, Mac OS X shines in these types of situations because of the flexibility of its Unix underpinnings. You can compile and run Unix-oriented applications as well as nice-looking GUI apps alongside Java, Perl, Ruby, PHP and more. And since the beginning, Mac OS X has come bundled with the Apache web server for hosting your own sites.
So. Let’s take stock of what we need to get a reasonable framework running for managing development environments with Apache, Virtual Hosts and good old-fashioned dynamic DNS.
Mac OS X (or some flavour of Windows if you really must).
Apache web server (for this example, but the concept should work for any reasonable web server).
“Web Sharing” enabled via the Sharing Preferences in Mac OS X.
Dynamic DNS account and client software. Various options are available, but we’ll look at using DynDNS’s services in this case.
Custom domain name or choose from one of the free dynamic hostname options.
Getting Your Domain Name
There are two ways to deal with this: register a domain via your usual registrar and point it at the appropriate DNS service, or register the domain with DynDNS. If you plan on using one of their free options, you just need to register for an account to get started. For the sake of this tutorial, let’s assume you’re using their Zone Level services and a custom domain name.
Create an account at DynDNS and go to the “My Services” section.
If you haven’t done so already, register a domain name. Once complete, it should appear under Domain Registration as well as under the My Zones section. You should see an indicator for “Custom DNS”. A subdomain of one of their stock domains will appear under the My Hosts section instead.
In the My Zones section, click on the “Custom DNS” link in the table. This will display your Custom DNS settings including your Hosts (A) records, Alias (CNAME) records, MX records, etc. This is where you can add however many custom subdomains you need.
To add a new Host, click on the “Add New Host” link above the listing of your Host records.
Enter the host you wish to use. For example: subdomain.
For host type, you can leave it at the default which should be dynamic unless you happen to have a static IP address, which unless you’re running off a business-grade internet connection, you probably don’t have.
Your current IP address should be detected automagically.
Click the Add Host button and you’re done.
The Dynamic DNS Client
The next thing on the list is to grab a copy of the Dynamic DNS client software. In our case, we’re going to use the official DynDNS Update software which is easier to use and requires less configuration than the alternative client options.
To setup the DynDNS Update software, install it on your system and launch the application.
Click on the Add button to enter your DynDNS account credentials. This is the same information as the DynDNS account you created earlier. Assuming your account info is accepted, any existing DNS addresses will be refreshed within the client.
You will be asked to install the DynDNS daemon which is the background process that will run on your system and update the DynDNS service when your IP address changes.
Click on a host in the sidebar list. The details of the host will appear on the right side of the window. Click the “Enable updating for this host” in order to keep a particular hostname updated.
Adjust the interface option as needed. Typically this should be set to “Web-based IP detection” if you wish to be able to access your system remotely (or to allow others to access your dev environment by name rather than IP).
DynDNS client software for Mac OS X
When you’re done, press the Add button to continue. Back in the main window, click on the Active checkbox for your domain. If the host is found it should return “Ok” and everything is ready to go. Next up — Apache.
Setting Up Apache
Although Apache’s configuration file is long and perhaps a bit drawn out for many, it’s still reasonably easy to read and understand and creating VirtualHosts is not difficult. My personal preference is to keep VirtualHosts separate from the main httpd.conf file for numerous reasons including OS upgrades, cleanliness and ease of management.
The structure I prefer is simple. Create a new folder in /etc/httpd/ called hosts. This is where we will keep our individual VirtualHost settings. One file for each domain. Next, open up the httpd.conf file in your favourite text editor and scroll way down to the end of the file. You should see a section that contains the Include directive.
# script as well as its *.html, *.css etc. files.
<Directory /Library/WebServer/Documents/validator/htdocs>
Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes MultiViews
AllowOverride None
AddHandler server-parsed .html
AddCharset utf-8
</Directory>
# Tell httpd that "check" is a CGI script
<Location "/validator/htdocs/check">
SetHandler cgi-script
</Location>
Include /private/etc/httpd/users/*.conf
# Include configuration files for VirtualHosts
Include /private/etc/httpd/hosts/*.conf
The first of those Include lines provides the necessary setup to allow each user account in OS X to have their own Sites folder where they can host their web site. We’re going to follow the same methodology with our hosts folder, using the second Include line shown above.
Now that we have Apache set to include all files named with a .conf extension, we can go about setting up our first VirtualHost configuration.
NameVirtualHost *
<VirtualHost *>
ServerName subdomain.mydomain.com
DocumentRoot /Library/WebServer/Documents/subdomain
RewriteEngine On
<Directory /Library/WebServer/Documents/subdomain>
Options -Indexes ExecCGI FollowSymLinks
AllowOverride None
Allow from all
Order allow,deny
</Directory>
</VirtualHost>
Change the settings you wish to use for the VirtualHost as needed. Copy files in to the appropriate directory for the host and you should be up and running in no-time flat. Questions, comments?
The browser cache is both our friend and our enemy. As web designers and/or developers, the cache is our friend because it’s useful for making sites render faster and therefore seem more responsive to the end user, but it’s our enemy because it can wreak havoc when changes to a CSS file are made and the browser doesn’t want to let go of the old, cached version.
It’s a pain to have to clear your cache all the time to make sure changes are working as expected during development as well as after deployment to a live environment. Thankfully there is a nice, simple solution aside from sending no-cache headers, which I’ve found don’t always work. I’ve used it myself on a number of occasions and have seen it used elsewhere.
So what is it you ask?
Simply add a query string parameter to the end of the link to your stylesheet(s). I like to use a version numbering type scheme myself. Something like www.yoursite.com/css/main.css?v=1.000. Because the browser treats each new version number as a different URL, bumping the number whenever the file changes ensures the browser grabs the latest version without having to clear caches and restart browsers.
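In the document head, that looks something like this (the path and version number are just examples); bump the version whenever the file changes:
<link rel="stylesheet" type="text/css" href="/css/main.css?v=1.001" media="screen" />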
Mac OS X is great in part for its flexibility and the ease with which you can get all different types of software running and working together, whether it is Unix, Cocoa, Carbon, Java, Ruby, PHP, etc. Dan, as always, did a great job of demonstrating how to get a custom Ruby on Rails environment set up on Tiger. His tutorial was good enough that I used it myself recently when I spent a couple of days rebuilding my primary development environment.
Lighttpd and Rails icons preview
For the coup de grâce, I decided to whip up a set of special icons: one for Lighttpd and another for Rails. I’m making them available for whoever wants them. Both icons included in the set contain open and closed folder states at 128, 32 and 16 pixel sizes.
In the last few months I’ve had the opportunity to explore the public Flickr APIs using Dan Coulter’s phpFlickr wrapper classes to handle the API calls and database caching to speed things up.
Although the Flickr APIs are constantly evolving, the phpFlickr classes have pretty much kept up with that evolution and made it very easy to search, view and manage your photos and photosets. As a web developer this is pretty handy because it means there’s another option for creating photo galleries or special applications without having to reinvent the wheel in terms of managing photos or managing multiple photo galleries with different content.
Just about everyone I know who’s seen Flickr thinks it’s great, so why not make the most of what it can do for you?
Getting Started/Installation
The phpFlickr classes are simple to use. Start by downloading the latest version and signing up for a Flickr Developer API key. You’ll need the API key to interact with the Flickr APIs, and it helps Flickr understand how the APIs are being used.
Once you’ve downloaded the phpFlickr package, you’ll need to un-tar the file and upload it to your web server (or drop it into your development environment). That’s it for installation; you’re now ready to start working with the classes.
Searching
The example we’ll use to get you started in using the class will be simple: find a set of photos based on a particular tag. Here’s the entirety of the code. I’ll explain it in a moment.
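A sketch of that kind of search script follows; the API key, tag, number of results and username are placeholders, the exact return format of people_findByUsername varies between phpFlickr versions, and the line numbers in the walkthrough below correspond only roughly to this version:
<?php
require_once("phpFlickr.php");
$f = new phpFlickr("your_api_key_here"); // placeholder API key

$tag      = "sunset";                    // tag to search for
$num      = 24;                          // number of results to return (up to 500)
$username = "your_flickr_nickname";      // full Flickr nickname
$i        = 0;                           // counter for the results

$user = $f->people_findByUsername($username);
$nsid = is_array($user) ? $user['id'] : $user;   // the user's NSID
$photos_url = $f->urls_getUserPhotos($nsid);     // base URL for the user's photos
$results = $f->photos_search(array("user_id" => $nsid, "tags" => $tag, "per_page" => $num));
// print_r($results);

echo "<ul>\n";
if (empty($results['photo'])) {
	echo "<li>Sorry, no photos matched your search.</li>\n";
} else {
	foreach ($results['photo'] as $photo) {
		echo '<li><a href="' . $photos_url . $photo['id'] . '">';
		echo '<img src="' . $f->buildPhotoURL($photo, "Medium") . '" alt="' . $photo['title'] . '" /></a></li>' . "\n";
		$i++;
	}
}
echo "</ul>\n";
?>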
Save the code into a new file named search.php into the phpFlickr directory on your web server or in your test environment and test accessing the file in a browser. You should have a number of images returned if your query found anything that matched.
The basics of what the code does is simple, so let’s look at it line by line.
Line 2: Include the phpFlickr classes. This needs to be called before the page content loads.
Line 3: Create a new instance of phpFlickr using your API key.
Line 5: Provide a $tag variable to search by. This can be any literal string.
Line 6: Provide a $num variable to tell the script how many results to return (up to 500).
Line 7: Provide a $username to tell the script whose photos to search. This is the full nickname of the user.
Line 8: Setup a counter variable to iterate through the results.
Line 9: Create an $nsid variable which will hold the NSID of the Flickr user based on their $username.
Line 10: Get the chosen user’s base photo URL.
Line 11: Execute the API call based on the above parameters.
Line 12: Although commented out, this would display the contents of the data array returned from Flickr.
Line 13: Start displaying the results.
Line 14: Print an error if no results are returned.
Line 17: Loop through the results if multiple photos are returned and print out a link to view the photo on Flickr.
Line 18: Display the Medium sized photo. Other options are “Square”, “Large” and “Original”.
Line 19: Add 1 to the counter variable’s current value.
Hopefully this, along with the other examples available will give you a start at using the Flickr API.
While it may be obvious, communication and collaboration are key to working with clients on design-related projects whether they be for the web or print. I think people forget that though; and for anyone handling the project management aspects of a job — especially if you’re a freelancer, this can be very frustrating and really drag down productivity.
Without client communication, who exactly are you designing for? How do you know if something is working, or right for the client’s target audience(s)? How do you get them to approve anything so that you can wrap up parts of a project and move on to the next component?
There’s nothing quite like the frustration of a project, especially one with a very short timeframe, coming to a screeching halt because the necessary communication just isn’t happening.
The thing with collaboration is that it’s a two-way street — it’s not one person sitting in a room talking to him or herself. It’s no fun having to chase after a client (or the other way around) to get an answer to a question or to get feedback to maintain a project’s momentum.
The thing is that everyone involved needs to understand the importance of being involved. If they don’t, they need to understand that projects will not finish on time, on budget and sometimes not at all without their support.
Collaboration tools, such as Basecamp can help ease the burden by enabling more frequent and timely collaboration and communications while improving your responsiveness as a designer. It lets clients be more involved in the process, makes you more accessible and better able to keep track of everything.
Design can happen in a vacuum, but only to a point. The client must get involved, whether it’s simply to bounce ideas off, to point out problems or to suggest improvements. There are few clients who will sign off on a project without reviewing your work and being happy with the end result.
Normally OS upgrades seem to go so smoothly… But this one left me (and I’m sure many others) with a nasty surprise — no communication between PHP and MySQL. Not nice. After a couple quick searches and no answers I decided to search out one myself. Discovering the problem was simple, and so apparently, was devising a solution.
If you fire up a quick PHP info file, you’ll see that the MySQL socket specified in the included version of PHP is wrong (or at least different and not what is expected) compared to older OS releases.
In 10.4.4, it is set as with-mysql-sock=/var/mysql/mysql.sock, whereas previously it was /tmp/mysql.sock.
Thankfully, there are at least two things you can do to remedy this.
Locate and edit the php.ini file (by default it should be in the /etc directory), find the mysql.default_socket line, add /tmp/mysql.sock after the equals sign, then save and restart Apache.
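The relevant line in php.ini would then read something like this:
mysql.default_socket = /tmp/mysql.sock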
In a plain text editor, create a new file in /etc/ named my.cnf. In it, include the following (on two lines):
[mysqld]
socket=/tmp/mysql.sock
Edit: The original second solution has been removed due to security concerns as indicated by Apple. A revised alternative solution has been added in its place.
I’ve spent the last couple days hacking around with phpFlickr for a project I’m working on. phpFlickr is a PHP wrapper class written by Dan Coulter which implements all of the available Flickr API calls and makes developing around Flickr fairly easy. Not really simple, but not too bad if you know what you’re doing.
The class also incorporates the ability to cache the API method calls locally to either the filesystem or a database to speed things up which is a nice touch and can really help with the overall performance. I’m going to be using the class extensively throughout the site I’m currently developing and will be incorporating some of that work later in this site as well.
If there’s any interest, I could be persuaded to post a few examples on how to use the class here once things wrap up and I can breathe again. Just drop a note in the comments.
On a side note, is it just me or is the FlickrExport plug-in for iPhoto busted? I’ve got the latest release installed and it hasn’t been working right for a while now. It probably is just me ;)
Given the last few posts here, backing up data and important files has obviously been on my mind. It’s coincidental more than anything, but I’ve continuously had problems with the primary removable Firewire drive I had bought to store my daily and weekly backups. So much so that it’s now in many pieces in the garbage with the disk platters more or less obliterated. It’s definitely unrecoverable and I feel much better given how much time was wasted repairing the drive and trying to get good, successful backups.
What I’m really interested in here, and the main point of this post is this: How are you backing up your important files?
In particular, this is for the web developer folks. How are you backing up your design files (Illustrator, Photoshop, Fireworks) and your code files (HTML, PHP, Rails, MySQL). Perhaps the real first question is: Are you backing up? If so, how often? And to what form of media? If not, why not?
Once you’ve completed a project and it goes live, what then? Do you make a full backup of all the project files? Do you keep data available “online” (on disk) so that it’s easy to make changes down the road? Are you using a version control system such as CVS or Subversion? Do you develop using a local environment such as is available on Mac OS X? Do you clone your backups and keep a second copy offsite somewhere?
I’m pondering how I want to proceed with backups since my experiences with a certain brand of Firewire hard disks have left me with an extremely low opinion of their hardware and service technicians. The immediacy and economical value of using hard disks as opposed to tape has become more apparent in recent years as disks have grown larger and the cost per GB has decreased.
Tape is a good longer-term archival medium, but in my experience I often have to retrieve files for old projects quickly to make minor changes. Being able to mount a hard disk, grab the file and make the changes is just so much more efficient than finding the right tape, un-archiving the file off tape, making the change and then re-archiving the file.
Perhaps it makes sense to use both. Tapes for archival purposes. Once a week, perform a full backup to tape as well as archive completed work. And do daily backups to hard disk. I guess it ultimately depends on needs and practicality.
Reading Jakob Nielsen’s Alertbox for September 19th this morning got me thinking about something which has always bothered me with web applications and web forms in general. Jakob mentions the problem of scrunched screen elements and effective use of screen real-estate often being a problem with web forms.
In particular, he refers to avoiding drop-down menus and scroll lists by instead using lists of selectable items where “all items are visible simultaneously” to reduce errors and make selection more immediate. This made that little lightbulb over my head start flashing repeatedly…
One of the biggest annoyances with web forms for me typically rears its ugly head when faced with an e-commerce transaction and having to select a state/province or country from an excessively long drop-down menu. Raise your hand if you also find this annoying and tedious.
Some might argue that you can get around this by using a standard input field as some do. Yes, but then you have to deal with the problem of spelling and possibly abbreviations. So what’s a web geek to do?
An Answer?
What if the best solution was a combination of the two approaches? A simple text field with the capability to autocomplete based on user entry?
An example country selection autocomplete form field
Rather than force the user to scroll through a really long list, let them type the first few letters and choose from a much shorter list of options (or a single option depending on what they entered).
This approach would permit you to check against a list of common abbreviations, country codes and misspellings and still be able to deliver a useful, intuitive and responsive interface with fewer errors and more completed transactions.
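As a rough sketch of how this could look using script.aculo.us (assuming prototype.js and script.aculo.us are already loaded; the field IDs and the /countries endpoint are made up for illustration):
<input type="text" id="country" name="country" autocomplete="off" />
<div id="country_choices" class="autocomplete"></div>
<script type="text/javascript">
// As the user types, fetch matching country names from the server
// and show them in the country_choices div for selection.
new Ajax.Autocompleter("country", "country_choices", "/countries", { minChars: 2 });
</script>
The /countries URL would simply return an unordered list of the matching country names.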
Perhaps the most significant downside to this approach is that there are still massive numbers of users using antiquated browser software which is incompatible with “Web 2.0” DOM scripting and AJAX.
Nevertheless, given that we have wonderful technologies such as Rails, Behaviour and Script.aculo.us to help solve such design problems and for creating innovative web applications I’m a little surprised I haven’t seen anyone really try to tackle this issue in a new way. I can’t possibly be the first to consider this, can I?
The Dashboard is a bit of a web developer’s paradise - standards-based code with only one browser required for development and testing. Plus, the use of web plug-ins as well as system level scripting languages (AppleScript, Ruby, Perl, etc.). The possibilities are almost endless really.
Widgets in the Mac OS X Dashboard
Creating widgets for the Dashboard isn’t really that hard, but there are a handful of useful things to know before you get started and I’ll try to outline a few that will hopefully save a bit of debugging and gray hairs along the way.
RTFM. Read the developer documentation. No, seriously there’s good, useful information in there.
Always, always, always have a default image in the main directory of your widget. Name it Default.png. It gets used as the drag image when a user decides to try your fancy new widget in the Dashboard.
Create a version.plist file and keep it up to date if you modify your widget.
Be sure to create an Icon for your widget to show in the Widget Bar. Name the file Icon.png and keep it in the main directory of your widget bundle.
Test your widget in Safari during development and keep an eye on the Console for debugging messages.
Download my Dashboard Widget Xcode Template (Works in Xcode 1.x and above, so yes, it works on Panther). Decompress the archive and place the contents in a new folder called Dashboard under this path:
The template will do a lot of the preliminary work for you. It creates the base HTML, CSS and JS files along with the necessary property list XML files - and will automatically modify certain properties in and of those files based on the name you give the project.
The Info.plist file contains all the current allowed properties for a widget. Disable or remove as necessary, but they’re all there to save you looking them up in the documentation.
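For reference, a stripped-down Info.plist looks something like this; the identifier, names and dimensions are placeholders:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CFBundleIdentifier</key>
	<string>com.example.widget.mywidget</string>
	<key>CFBundleName</key>
	<string>My Widget</string>
	<key>CFBundleDisplayName</key>
	<string>My Widget</string>
	<key>CFBundleVersion</key>
	<string>1.0</string>
	<key>MainHTML</key>
	<string>MyWidget.html</string>
	<key>Width</key>
	<integer>240</integer>
	<key>Height</key>
	<integer>120</integer>
	<key>CloseBoxInsetX</key>
	<integer>16</integer>
	<key>CloseBoxInsetY</key>
	<integer>16</integer>
</dict>
</plist>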
Well… what are you waiting for? Let’s see those widgets!
CSS can be a really great thing for web-based application design and development. At the same time it can very restrictive, mostly due to cross-browser compatibility and standards-compliance (or lack thereof). To this point, one of my big missions at Masterfile has been to help drive the site to be more standards-compliant, getting rid of some of the not-so-hot legacy front-end code and to slim down pages wherever possible.
Steps in that direction have been taken in the last few significant releases, but probably none more than the release which we just pushed out to the real world. This is not to say it’s perfect or that it’s 100% valid code, because it’s not — but it’s substantially closer than it ever was before.
Better page structures and the removal of nearly all table-based structure allowed us to do something we think is pretty cool, and something that until the week following us presenting it to the president, no one had yet seen on a stock photo site. I nearly fell out of my chair when I saw someone had actually beaten us to it! In the time since then it appears to have vanished off that particular site (which shall remain nameless).
Floating Thumbs — Oh My!
So… what feature is this you ask? It’s a little something we like to call “floating thumbs” or “floating boxes”.
Conceptually it’s simple, and probably very obvious: floated DIVs inside a container. Stretch the container wider and the DIVs rearrange themselves accordingly. Shrink the container and the DIVs rearrange themselves to fit the narrower space. In an attempt to keep things sane and from falling apart wherever possible, the min-width CSS property was used to restrict the content from collapsing in upon itself, at least in supported 5th generation browsers such as Safari and Firefox.
The all new Masterfile search results with floating thumbnail images
The floating thumb technique works beautifully and consistently in officially supported browsers. During development and prototyping, the tricky part was making it work with a statically positioned sidebar that is locked to the right side of the window. I spent a few weeks prototyping this and ended up using some additional DIVs as containers to keep everything happy, but thankfully it degrades nicely and didn’t add significant weight or complexity to the code.
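Stripped down to its essence, the CSS amounts to something like this (the selectors and dimensions here are purely illustrative, not Masterfile’s actual code):
/* a fluid container with floated thumbnail boxes */
#results {
	min-width: 480px; /* keeps the layout from collapsing in on itself */
}
#results div.thumb {
	float: left;
	width: 140px;
	height: 140px;
	margin: 0 10px 10px 0;
}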
But why did we do this?
In visiting with clients and doing some site statistics analysis we discovered a large proportion of visitors were using large monitors with higher screen resolutions. Considering a large portion of the market Masterfile serves are creative-type people and organizations, this made perfect sense.
We looked at the UI and knew that we weren’t doing enough to help these people see as many images as possible on the screens due to the design being limited to a fixed-width column (due mostly to the previous table-based layouts). Moving towards a fluid, standards-based page layout let us offer users a better experience without needing to set preferences or without needing any complex interactions. Just resize the browser window. For users with 20-30 inch displays, the net effect of this is enormous.
Making it easy and painless was the key. Not forcing the user to change settings somewhere, or making it difficult to revert if they don’t like it, will aid in adoption of the feature. It’s something simple and we know users will appreciate it.
Server-side parsing tools are fantastic and can save you enormous amounts of time and effort in producing large-scale websites. At the same time they tend to have their own bugs and intricacies which can confound and perplex the best of us. This was the case today.
Background and the Problem at Hand
We’re preparing to promote a number of changes, fixes, new features and general improvements tonight on the Masterfile.com site. In the testing of said features we’ve encountered the usual fare — bugs. The latest one being related to a small, but generally significant change we’ve been trying to get out the door for some time — moving the site completely to UTF-8.
In a nutshell, we encountered a problem where somewhere along the way, character encodings were getting mangled. As a result, text was not rendering properly and search links generated zero result queries. This is bad and obviously unacceptable.
The Solution
While perhaps not the most elegant solution, we found a little piece of JavaScript code which batch-translates the raw UTF-8 encoded pages into the equivalent HTML entities. It’s simple and not overly tedious. It kindly ignores the surrounding HTML code completely and only translates the accented characters.
Here’s the code and a brief description of how to use it:
Create a simple form in a new HTML page with two textarea fields and a submit button. The first will be the input, the second will display the output and should be set with the readonly attribute. The submit button should have an onclick attribute which calls the javascript function. Take a quick look at the sample page I’ve put together to see how it works.
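A minimal sketch of that kind of converter function, assuming the two textareas have the IDs input and output (the actual script may differ in the details):
function convertToEntities() {
	var text = document.getElementById("input").value;
	var result = "";
	for (var i = 0; i < text.length; i++) {
		var code = text.charCodeAt(i);
		// Leave plain ASCII (including the HTML markup) untouched and
		// convert everything else to a numeric character reference.
		result += (code > 127) ? "&#" + code + ";" : text.charAt(i);
	}
	document.getElementById("output").value = result;
}
The submit button then only needs onclick="convertToEntities();" to wire it up.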
Arvind at Movalog, along with some assistance, created a nice alternate version of the Livesearch functionality that integrates with Movable Type and still allows the traditional CGI-style search function to work. While I haven’t had much time to investigate this in depth, it doesn’t strike me as quite the same thing as I’ve implemented and described, but is instead closer to the Google Suggest functionality or something in between.
Nevertheless, it’s very interesting and I’m going to look at how these two options might be combined to overcome the shortcomings I outlined in the earlier post. I’d really like to see the original search functionality remain available as a fall-back alternative for users in unsupported browsers or with Javascript disabled.
It’s been about two weeks since I posted the Livesearch functionality tutorial here and in the time I’ve been looking at the SQL query to see if there’s room for improvement.
I noticed that some of the results for certain queries didn’t make sense. What I had forgotten was that it’s the raw, Markdown-formatted text that’s stored in the database tables, and the results reflect all positive matches to the string a user submits through the feature. This includes matching bits of URL strings…
In effect, what I thought was wrong, wasn’t. It was a matter of false expectations about what the results should be, that they weren’t as good as they should be or that I had made a mistake with the query. The query correctly checks against the entry_text and entry_title fields. Taking this a step further, we could add MySQL’s full-text search to make searching even more robust.
Another small catch to keep in mind is that searches are case-insensitive meaning that a search for “Apple” should yield the same results as “apple”.
The point I’m trying to make here is simply that you should look closely at the results to ensure that everything is working as you expect. Test a handful of queries using the command line or some other database management tool.
Other Examples Of Using XMLHTTPRequest
This XMLHTTPRequest thing is really starting to take off now that newer browsers are supporting the feature more consistently and that the web community has started to take notice.
As a result you’ll likely see the Livesearch functionality cropping up on more and more blogs/websites in the near future and in more innovative and creative ways. Two other excellent examples of the XMLHTTPRequest object can be found over at map.search.ch and of course Google Suggest.
Since I slipped up yesterday and inadvertently posted an entry destined for my ‘Hits, No Misses…’ links blog on the main blog, I encountered the problem of a small blip in the sequential numbering of my entry IDs. Thankfully this is fairly easy to correct and struck me as a good little mini-tutorial for anyone who may not know how to do this.
Using my own situation as an example let’s say you just posted an entry to your blog and it is recorded in the database as mt_entry number 439. You then notice you posted it to the wrong blog. Oops.
MT Entries table statistics in MySQL
To remedy the situation you re-post the entry to the correct blog (this is easy with ecto) which then leaves you with two entries. Next you delete the first entry (the one posted to the wrong blog) leaving the second entry with an ID of 440.
If you were to look at the mt_entry table for your Movable Type install using something like phpMyAdmin you’d notice the last two entries were now 438 and 440 and the next autoindex value would be 441. Clearly there’s a blip in the system, but it’s an easy fix.
Un-Blipping The System
To get things back to normal you’ll need to have access to your MySQL database through the command line or something like phpMyAdmin. Start by looking at the mt_entry table. Note the last entry ID (based on the example, this will be 440) and change the ID to the missing ID. In this case it would be 439.
Now that the IDs are back in sequential order there’s still the matter of the autoindex value which, if left alone, will result in the next new entry having an ID of 441. In order to fix this, we have to reset the autoindex value so the next ID will be 440 instead of 441. To do this, simply execute this query on the mt_entry table:
ALTER TABLE mt_entry AUTO_INCREMENT=1
You can now safely rebuild your indexes and archives to see the changes reflected in your site.
Alternatively, you can probably avoid at least part of this procedure by not double-posting the entry to two different blogs and simply change the entry_blog_id value and rebuild. Note to self for next time this happens…
It’s been a while since I’ve done a tutorial on anything so as a last-minute holiday treat I’ve got one cooked up on a feature that I’ve wanted to explore implementing here for some time.
Resources
Let’s get the resources out of the way so you know where to get the relevant pieces of open-source code and can see a nice implementation of what we’re going to build.
Garrett Murray: Maniacal Rage — For doing a nice implementation of this with a couple of extra features beyond those that come out of the box, which I have attempted to appropriate into my own version. Cheers, mate!
Laying The Groundwork
Follow along and you should have a basic framework for implementing livesearch functionality on your own site. Start by downloading the required BitFlux JavaScript file and take note of the installation details. We’ll go over them here, but read through the content on the topic there first.
Take the ‘livesearch.js’ file and copy it to your site wherever you keep your Javascript files. For the purposes of this tutorial, I’ll assume this file will be located in the root directory of your site.
In the livesearch.js file locate the first line that contains /livesearch.php?q= near the end of the file. To specify the search variable as something other than “q”, change this here and in the lines that follow. There are a total of three instances that will need to be changed.
As an example, to rename the variable to “s”, the replacement text would be /livesearch.php?s=. Also note that you can rename the livesearch.php file to something else if desired. This script will do the work of performing the search and returning the results back to the browser. Just be sure to make the same changes in the Javascript if you rename the file.
Determine the necessary SQL query to search your Movable Type entries. Exactly how you decide to do this is really up to you. The SQL here is more of a quick example and you may want something more robust.
SELECT entry_id, entry_title, entry_excerpt,
DATE_FORMAT(entry_created_on, '%Y_%m') AS date
FROM mt_entry WHERE entry_text LIKE '%$s%'
ORDER BY entry_created_on DESC
The $s variable included in the SQL should match the name passed from the search form. Exactly what is returned is entirely up to you. In this case, the query returns the entry ID, title and excerpt, with the results sorted by date in descending order.
To restrict the results to a single blog if you have more than one, you would also want to filter by the entry_blog_id in the WHERE clause. For example, WHERE entry_blog_id = '1' AND entry_text LIKE '%$s%', etc.
It would be useful to split the results into multiple pages if a large result set is sent back (eg. 10 results at a time) but that’s outside the scope of this tutorial.
The next piece of the puzzle is to return the results in a standard XML format to be parsed by the XMLHTTPRequest JavaScript object. The Wiki page describes the format required but you can see it more clearly here in context of the PHP you will need to return a correctly formatted result set.
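A rough sketch of the sort of PHP that does the job is below; the database credentials and archive URL are placeholders, and the wrapper element and class names that livesearch.js expects should be double-checked against the LiveSearch wiki (a plain unordered list is assumed here):
<?php
header("Content-Type: text/xml; charset=utf-8");
echo '<?xml version="1.0" encoding="utf-8"?>';

mysql_connect("localhost", "db_user", "db_password"); // placeholder connection details
mysql_select_db("mt_database");

$s = mysql_real_escape_string($_GET['s']);
$result = mysql_query("SELECT entry_id, entry_title, entry_excerpt,
	DATE_FORMAT(entry_created_on, '%Y_%m') AS date
	FROM mt_entry WHERE entry_text LIKE '%$s%'
	ORDER BY entry_created_on DESC");

echo '<ul class="results">';
while ($row = mysql_fetch_assoc($result)) {
	// Link format is illustrative; point it at your own archive URLs.
	echo '<li><a href="/archives/' . $row['date'] . '.php#entry_' . $row['entry_id'] . '">';
	echo htmlspecialchars($row['entry_title']) . '</a></li>';
}
echo '</ul>';
?>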
You may not want to copy this verbatim, but it should be enough to give you a good start on how to return the results. Include links and such where appropriate and watch out for whitespace parsing and validation issues with the XML returned. Note that you don’t have to use an unordered list here; it’s just a suggestion to keep things clean and arguably semantically correct. It’s also easy to style with CSS.
Once your PHP script is completed, test it manually to see if it’s returning results as expected. Upload the file to your server and call it along with the appropriate search query appended. For example,
yoursite.com/livesearch.php?s=querystring
If all goes well and matches are returned you should see a nicely formatted list of results. If not, you’ve got some SQL or PHP debugging to do.
Implementation
Implementing livesearch into Movable Type, replacing the default search functionality is straightforward, but presents a few challenges. The first thing to take into account is where search is located throughout the site. For now, I’m assuming that search functionality is only available within the Main Index template.
In that Main Index template, locate the search form code. The default search form in Movable Type does not include an ID/name attribute and uses a button we technically will no longer need. Change the form code to:
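Something along these lines should do the trick; the IDs here (searchform, livesearch, LSResult, LSShadow) follow the LiveSearch installation notes as best I recall them, so double-check against the current docs, and the action path is a placeholder:
<form method="get" id="searchform" action="/path/to/mt/mt-search.cgi">
<input type="text" id="livesearch" name="s" size="20" autocomplete="off" />
<div id="LSResult" style="display: none;"><div id="LSShadow"></div></div>
</form>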
Formatting is important here. Make sure the third line with the DIVs has any whitespace removed. Whitespace appeared to cause problems returning results during my testing.
Next, in the HEAD section of the template, include the livesearch.js file and initialize the livesearch script by attaching an onload event to the BODY tag.
<body onload="liveSearchInit();">
There are better ways to handle this, especially if you need to be able to execute multiple functions on the DOM when the page loads, but this is suitable for this simple example. Save the template changes and style as desired with CSS. Repeat as required wherever the search form appears.
Caveats
This isn’t foolproof of course. It doesn’t work in some browsers, or in some cases, not completely. It doesn’t work at all in IE 5 for the Mac or Opera 7.x (and one can assume earlier versions as well). It mostly works in OmniWeb 5.1 beta 5 based on very minimal testing. And there’s also the issue of graceful degradation. Users with Javascript turned off are out of luck with search since livesearch relies entirely on Javascript. Providing a fallback for those users would be a worthwhile addition.
Sorry for the lack of posts over the last few days. I’ve been occupied with a small (large?) issue that I (accidentally) noticed and which seems to have occurred during the period back in October when I moved and upgraded the site to Movable Type 3.1.
During the process of the move and upgrade, I cleaned up the post IDs (kind of accidentally actually) and during one of the import processes, the actual entries for the blog got a bit messed up. In the body/summary fields, certain words somehow ended up capitalized wherever they appeared. Not coincidentally these words were all special reserved keywords in MySQL — such as DISTINCT, SELECT, DELETE, CHECK, FROM, UPDATE, etc.
I’m not sure how this happened and it’s been a real nuisance to fix, but it did provide me with an opportunity to update the archives to all use the Markdown with SmartyPants text formatting plug-ins along with cleaning up some of the less than semantically correct HTML throughout.
Overall things are generally better now, allowing me to get back to the next big feature (and the last) that will likely make an appearance after this weekend or early next week given my current holiday schedule; leaving me time to finish the design and architecture phase of a long-overdue redesign. This will also coincide with the even longer overdue launch of the Wishingline Design Studio, Inc. site.
I’ve said it before, but the right dose of inspiration hit and I’m on a bit of a roll with things so far. Hopefully I can keep up the pace for as long as I need to get it done.
Anyway, back to MT… The lesson here is: be very careful when you import or export your entries. I don’t know which method is better — using a MySQL dump or the built-in Movable Type export/import functions. One of those two produced the problem I ran into and I’m debating testing with a local MT install to figure out which one. If I do, I will post the results here.
Update
It appears that Movable Type was able to export and import the entries cleanly and did not exhibit the problems I encountered when using phpMyAdmin.
I’ve been waiting for a good solution to secure ftp for a long time now and finding this link just made my day. The English translation isn’t perfect, but follow along with the Terminal commands and you should be good to go.
Do make a backup of any files first though — just in case.
During the process of making some of the small design and features changes that have seen the light of day on this site, I split out the sidebar calendar widget into its own include file. The Movable Type manual suggests this as a way of reducing the processing required during rebuilds of the index templates. Makes sense. Why do something more than once if you don’t have to.
One of the other things that made the cut was a complete refactoring of the actual HTML for the calendar since it was unnecessarily bloated. This was a good opportunity to play with the CSS a bit and give the calendar a bit more visual flair.
The thing I want to point out is that there’s a single TD element with an ID applied, indicating the current date. I made this change/addition but then realized that it’s not practical if I don’t post every day or I don’t rebuild the template every day to have the ID change positions appropriately.
Given that I’m currently using static publishing through Movable Type, I needed a way to automatically rebuild this template daily and ideally without me needing to remember to actually do it myself. Due to limitations at my host, I discovered that my options for automating this were limited so I rolled my own solution using a bit of AppleScript and a cron job on my main development system which stays on pretty much all the time.
Turn On System Events Scriptability
For this to work, you first need to turn on the option for “Enable Accessibility for Assistive Devices” in the Universal Access preferences in Mac OS X which allows you to target menus and execute keyboard commands programmatically using AppleScript. Essentially this allows you to make just about anything scriptable whether an application supports it natively with a built-in scripting dictionary or not.
Mac OS X Universal Access for assistive devices preferences
How It Works
The general rundown of how the script works is this: the script launches via cron and opens a specified URL (the path to the Movable Type rebuild script along with the template ID). Once the page has loaded, the button on the page is clicked using an accesskey, which rebuilds the template, at which point the script closes. Sounds simple, right? There was one snafu along the way, but luckily it was easy to resolve.
That one snafu? I had to modify one of Movable Type’s internal templates (specifically /tmpl/cms/rebuild_confirm.tmpl) to add the necessary accesskey which would allow the script to programmatically press the button causing the template to be rebuilt. Not having the accesskey meant that there was no mechanism to actually press the button on the page.
The script then, once the page has fully loaded (checked by using do JavaScript to confirm that document.readyState is "complete"), uses the keystroke command to execute the keyboard command for the accesskey, which in this case is set to Control-S in the template.
I had a tough time sorting out exactly how to format the keystroke command (lousy AppleScript docs…), but for future reference and anyone else struggling it is:
tell application "System Events" to keystroke "s" using {control down}
I was missing the tell statement. It was late when I was working on this part so I’ll blame it on being tired…
Making It Run
Mac OS X, being a Unix-based OS, includes the cron scheduling utility. cron is used by the system to run a series of regular tasks, such as cleaning up and archiving log files, but it’s really a general-purpose scheduling utility and a perfect fit for automating this script. To add the task to the schedule, it’s simply a matter of editing your crontab file and adding the new command. In this case, I set it to run every day at 12:01 AM.
CronniX, cron process management for Mac OS X
Note that you may want to prepend the command with /usr/bin/open in order to actually open the application bundle (the AppleScript) via the shell. This shouldn’t be necessary since the path is already included in the default Mac OS X shell environment, but it’s probably good form just in case.
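Assuming the script was saved as an application bundle at /Users/yourname/Scripts/RebuildCalendar.app (the name and path are just an example), the crontab entry would look something like this:
1 0 * * * /usr/bin/open /Users/yourname/Scripts/RebuildCalendar.app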
The crontab file can be edited by typing crontab -e in the Terminal or you can use a GUI application such as CronniX which is a bit easier and also makes testing things easy. If everything works, you’re good to go.
A (Minor) Caveat
One minor caveat to all this is that you need to have saved your Movable Type administration login information in your keychain so the browser can access it without intervention. I don’t recommend running something like this on a public computer, but in a secured environment (e.g. a home computer behind a firewall), you should be sufficiently safe.
Code Download
You can download a generic version of the script via the link below.
Taking cues from an old project originally started by Aaron Faby, I’m making a new version of the MySQL Preference Pane available for public consumption as part of my MySQL Tools package.
The MySQL Tools package includes a Startup Item and a custom PreferencePane for Mac OS X 10.2 or 10.3. The software is being distributed via an installer package which will install the files in the /Library folder at the root level of your hard drive. Some manual configuration is currently required following the installation though I am going to work on adding the necessary preflight/postflight scripts to the installer to take care of this.
MySQL Server PreferencePane for Mac OS X
There is also one potential security issue with the software (see the Read Me portion of the installer for additional details) which I am also intending to address so I do not recommend using this in potentially high-risk deployment environments at the moment. It should be fine for local testing and development though.
New in this version is:
Full Mac OS X installer for the software
First implementation of a Software Update mechanism to look for new versions of the software
Link to get more information on MySQL
Small improvements to the StartupItem
Code cleanup and bug fixes
I have a few ideas for additions and fixes to improve the tools which will appear in future releases.
The new Masterfile.com site is live and happily purring away. All that hard work, sweat, blood and lost/gray hair was worth it. Seeing as how this was a much larger undertaking than we anticipated I think it’s worthwhile to discuss the point and the process.
Starting Points
We’ve known this was coming for some time now, but the project didn’t really hit our desks until around six or seven weeks ago when we started to get glimpses of what was coming from the designers that were hired to take care of the company identity and re-branding work. They were also given an opportunity to help re-skin the website; to give it a fresh coat of paint.
The new Masterfile wordmark by Underware
At the time they were told that they couldn’t really change anything too drastically. The site works and is respected throughout the stock industry as one of the best. I’d say it’s in the top two, but I’m a little biased.
The site also has some of the best and most intelligently implemented features. Doing any damage to that — making the site any less usable was not an option. The equation we like to use comes from Ole Eichhorn’s Critical Section site and goes like this:
W=UH
Where W = wrongness, U = ugliness and H = hardness. In plain English, this means: “if something is ugly or hard, it’s wrong”. In the world of software development and web design, we should all hold this to be true. This is our internal mantra at least, and our task has been to ensure that the site is never any less usable than in a previous incarnation. I, and numerous clients and people outside the development team who have used the new site, seem to agree that we succeeded in that aim.
Overall Goals and Expectations
The site changes are just a part of the overall re-branding project previously mentioned. The first step was the introduction of the new corporate identity, new stationery, marketing collateral, magazine advertisements (see the back covers of the current issues of HOW Design, Print and other popular industry publications) and re-establishing the company culture to be more reflective of the employees. Although the official launch is not until later this week, pieces of this have started to make their way out into the world.
In discussions to get a grasp on the task of actually doing the work of re-skinning the website, we were told the point of the design changes was focused around colour. The previous site was about photos and making them as prominent as possible while lessening the impact of everything else. This makes sense from a business point of view since that’s what the company sells. Getting in the way of users looking at photos is bad. This is still true, but now we’re reintroducing the idea of colour back into the site design to make the site more vibrant, to improve visibility of features and to improve usability.
The new design and identity are more representative of the company and its internal culture. Based on the old identity and site, you might think that from a culture point of view that Masterfile was a bit stuffy, old-school and took itself a little too seriously. On the contrary. The company is primarily made up of younger, creative employees who work hard and are knowledgeable and passionate about photography and design. So the new identity — which is funky, playful, and a little retro fits the bill. The web site also needs to express that same idea.
Site Changes Recap (Or The Things We Did To Make This Happen)
This could easily be a long, drawn out and technical explanation of the things we did to get from where we were to where we are now with the site, but (for now) I’m going to keep it reasonably brief.
The Masterfile homepage before and after
One of the notable goals we had with the new site was to make it possible to re-colour the site. CSS to the rescue! The catch — nearly 30 localized languages along with a fair bit of legacy code still using nested tables and other un-semantic markup. The plan is to switch up the site colours every so often to keep it fresh and fun. If you don’t like the current colour scheme, maybe you’ll like the next one better.
Compatibility is important and we had to make sure the site worked reasonably well in as many browsers as possible. In doing this though we had to be realistic and some browser stats helped us stay focused on the particular browsers we needed to target.
Yes, there are small quirks and inconsistencies in the new design, but upgrading to new(er) browsers clears up most or all of these issues. It’s also likely a number of these quirks are font-related issues for which there’s not much we can do. Our CSS font stack was designed to help minimize these issues and was based on research, but unfortunately there’s not really a good way to accommodate every possibility and permutation. Why can’t everyone just use Firefox or Safari :)
Other things we did to help the overall site usability based on research and user testing was to automatically select the search field so it’s ready for input when the user loads the page, completely reorganizing the information pages and updating the help documentation.
We also reworked the visual hierarchy of the pages by more effectively using page header styles and providing the utility boxes (Search, Categories, Lightboxes, Last Searches, etc.) with clearly defined headings to make them more easily identifiable. They were also re-skinned to be more visually neutral and not “in your face” so that they’re visible but not in the way. This is one of my personal favourite features.
A new price icon was added below image thumbnails for North American users allowing quick access to pricing information (RM calculator or RF pricing) via the enlarged image preview. This functionality is not available for international clients since certain collections are not available in all countries.
We also looked at how we could lighten the load of the site itself by reducing the sheer number of image assets required throughout the site design. Part of this was accomplished by moving to text-based site navigation (an unordered list) and using transparent images for buttons, leaving much of the actual visual styling to the CSS instead. With roughly 30 international versions of the site, the number of images requiring changes multiplies quickly, and time is better spent improving the site or adding new features rather than modifying image assets. We expect users would tend to agree.
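As a sketch of the idea (the markup, class names and URLs are invented for illustration), the navigation becomes a plain list and the visual treatment lives entirely in the stylesheet, so a colour change never touches an image:

<ul id="nav">
  <li><a href="/search/">Search</a></li>
  <li><a href="/categories/">Categories</a></li>
  <li><a href="/lightboxes/">Lightboxes</a></li>
</ul>

/* illustrative styles for the text-based navigation */
#nav li { display: inline; }
#nav a { float: left; padding: 4px 10px; background-color: #7db304; color: #fff; text-decoration: none; }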
Other Little Bits
Of course there were lots of little things. We’ve probably forgotten half of them, but here are just a few.
Link colours added (Active, Hover, Visited)
Improved text-based tabs
Simplification of the overall page layouts
General code cleanup and validation improvements (still lots of room for improvement here)
Static information pages moved from HTML 4 to XHTML Transitional
Improved the underlying structural hierarchy of static content
What’s Next?
At the moment we’re in bug fix mode and preparing for the inevitable 1.0.1 release sometime in the near future. I doubt we’ll get everything, but we’re working on it. The major issues will be dealt with and lingering smaller issues will be logged in the bug database.
One of the things we’re trying to accomplish with the big re-branding project I’m working on is facilitating global colour changes throughout the entire site via CSS (obviously). One of the items on the list causing problems is Opera’s handling of the input type=image element. Everything works as expected in older versions of the browser (as it does in every other browser out there, including IE5), but not the 7.x series which appears to contain a bug.
Implementation Details
The site currently (and going forward) uses a lot of image-based input objects (buttons) as opposed to the native OS-level widgets. These are also translated into around 40 locales, so there’s a lot to deal with. The newer buttons have been created as transparent GIFs set in a pixel font. The idea is to apply a background colour and border to finish off the buttons with CSS.
Transparent image buttons
In Safari, IE, OmniWeb, Firefox, Camino and Mozilla, applying a background-color property to an assigned class or ID on these input elements works as expected, but in Opera 7.x the property appears to have no effect. I’m assuming part of this is related to the fact that the property isn’t necessarily appropriate for input elements, but every other browser seems to support it when the input type is set to ‘image’, so why doesn’t Opera (anymore)?
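Here’s a stripped-down sketch of the technique (the class name, file name and colours are made up for illustration): the GIF supplies only the transparent label, and the CSS paints the rest, which is exactly the declaration Opera 7.x seems to ignore.

<style type="text/css">
/* illustrative only: the background-color line is the one Opera 7.x drops */
input.btn { background-color: #f26522; border: 1px solid #b34400; }
</style>

<input type="image" src="search.gif" alt="Search" class="btn" />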
I took a look through some of the Opera docs and I’m not entirely sure what to make of things. For an experiment I tried creating a regular input element and styled the background. Of course it bloody well worked in Opera but failed in Safari. Very frustrating.
Up For Suggestions
While I continue to ponder this, does anyone have any ideas or pointers to articles or tips that might help? We’re looking at the feasibility of converting the inputs to plain vanilla images with links, but I’m not sure it will be possible given the use of a lot of complex Javascript. That, and the short timeline we have until launch.
Following in the footsteps of John Gruber I’ve put together a codeless language module for BBEdit 8 for the Velocity template engine. I have a few thoughts on the new version of BBEdit but I will save them for another day.
For those unfamiliar with Velocity (the large majority I suspect), it is, according to the Velocity overview “…a Java-based template engine. It permits anyone to use the simple yet powerful template language to reference objects defined in Java code.”
Continued, “When Velocity is used for web development, Web designers can work in parallel with Java programmers to develop web sites according to the Model-View-Controller (MVC) model, meaning that web page designers can focus solely on creating a site that looks good, and programmers can focus solely on writing top-notch code. Velocity separates Java code from the web pages, making the web site more maintainable over the long run and providing a viable alternative to Java Server Pages (JSPs) or PHP.”
I’ve been busy at work preparing for a fairly large-scale re-branding and have been working with one of the more highly regarded and respected design/branding agencies in Canada (at this time I can’t name names and will attempt to keep this as anonymous as possible). They’ve been busy working with management to help redefine the company culture, which has unfortunately been somewhat misrepresented by the current identity: very corporate and kind of bland (at least from an identity standpoint).
Start With The Good Stuff
The current brand is not representative of the employees, who are mostly younger (late twenties to mid-thirties types) and of an artistic temperament. The identity mark is also nearly ten years old and showing its age. On the plus side, the designers seem to have done a good job developing an updated company identity: one that is younger, somewhat hip, and better targeted towards the company’s primary markets now and in the future.
They’ve put together some nice promotional pieces, advertising and packaging that make good use of the new identity and that I think will go over well and should result in an increase of traffic to the website. Some of this should start appearing very soon in magazines and design-related publications.
The Downside
On the downside though (and getting more to the point) is that their understanding of web design and web application design is sadly disappointing, though not terribly surprising. “Typical print designer” comes to mind.
You have to understand that these are primarily print designers. They understand branding, identity, advertising and package design. Pixels are a different language to them, at least in terms of the web. They are clearly more experienced in Flash-style sites where pixel perfect layout is a realistic expectation and where the sky’s the limit in terms of possibilities. They also apparently like to mock up web designs in Adobe InDesign. Odd, IMHO.
The interesting problem, besides trying to explain the ideas of usability, visual hierarchy and importance, has been the idea of doing no harm to the site as a whole. A simple but interesting equation was pointed out to me by my manager which makes a good statement for our overall design/development process.
The equation comes from Ole Eichhorn’s Critical Section site and goes like this:
W=UH
Where, W=wrongness, U=ugliness and H=hardness. In plain English, this means: “if something is ugly or hard, it is wrong”. In the world of web design or software development, we should all hold this to be true. The success of Apple’s iApps, and Apple software in general is a perfect illustration of this point.
Unfortunately, this is where our (initial) disappointment in working with the supposed bigwig designers kicked in. We knew up front that they liked the site and didn’t want to change much. We thought that sounded good and it gave us the warm and fuzzy. We were expecting more of a re-skinning of the site rather than a major undertaking such as a redesign.
In reality what’s happened is that we’ve ended up somewhere in between due to the proposed requirements and overall usability needs along with small feature changes we’d like to implement to improve the site. Remember that equation? It was doubtful the designers had seen that before or had enough understanding of the needs of web applications compared to those of marketing-oriented sites.
What was initially presented to us, we later discovered, was only a first iteration, but it was immediately accepted by upper management with no questions asked. I guess if you’re paying the bigwigs the big bucks you assume they know what they’re doing. Maybe they do sometimes, and maybe in the case of the site changes they’re a little off. Now I’m not saying they haven’t done good work in the past on other sites; it’s just that, at least to this point, it’s been less than spectacular.
Why You Can’t Trust Printouts From Designers
The printouts we were given to look at looked OK. Not spectacular, but OK. They had obvious problems, such as the pixel-perfect precision of everything and an overall heaviness which troubled us. The site had been there before, and there was no going back without risking harm to the user experience. Visual hierarchy and importance were the big issues here.
Based on the branding, Helvetica was being advocated as the primary font of choice in the CSS for everything. Sorry, been there, done that. We just got away from that and are not interested in going back. We’re pretty happy with the font stack in the CSS file currently. We are considering it for some larger image-based headings, but the overall font selection in place currently will likely not change since we’ve improved readability of the content quite a bit since the last major functionality update a little over a month ago.
The biggest problem discovered with the printouts provided by the designers — the only thing the managers had seen to this point — was that they were nowhere near colour accurate. This actually made things worse. Once we found out just how much heavier the pages looked with the real colours, I think we all were even more disheartened with the experience. Seriously — these are bigtime, expensive designers who were being paid for crap work and seemed to be missing the mark completely with the website changes.
Rebranding teaser screenshot
There have been numerous opportunities where concerns have been expressed and ideas shared. Things are starting to get cleared up, but considering the schedule indicates that we’re launching this in less than 3 weeks… I’m still nervous. There’s still a lot to decide and even more work to actually do, along with technical hurdles to overcome.
We’ve got our CVS branch set up for maintaining the existing site while we work on the re-branding, as well as our tasks database for keeping track of everything along with an extensive inventory of what needs to be added, deleted or changed as part of this exercise. It’s complicated and I hope it goes smoothly. The CVS stuff worries me a little, but more because of people being lazy and not taking their time when doing updates and testing. One wrong commit and we could have bits of the re-branding mixed up with the current live site. That would not be pretty.
In the time since this was all revealed, there have still been obvious disconnects between our team and the designers. They do not understand the differences between static marketing sites versus application-based sites. You can get away with more on Flash-based sites than you can in HTML-based sites.
We’ve since taken our own path and reworked the design keeping in mind what we knew about the overall intentions for the changes being proposed.
Martin Pittenauer of The Coding Monkeys, tired of looking at poorly formatted source directly in Apple’s Safari web browser, cooked up a small hack which hijacks Safari’s source view and displays it in SubEthaEdit instead. He was also good enough to post the source code for this, making it possible to change it and allow the source to be viewed in applications other than SubEthaEdit.
While I think SubEthaEdit is really cool, it’s not my primary editor. I’m a BBEdit man myself as are many of my contemporaries. And so out of SubEthaFari comes my switched-up bundle renamed (for lack of a better idea at the time), “BBEditSource”.
Why is this useful you ask? The answers should be at least partly obvious. Automatic syntax colouring, line numbers, and the ability to sort out rendering problems and code validation (especially if you use a lot of includes and need to be able to figure out if there are missing tags and such).
Being the responsible developer I strive to be, I’ve made the bundle available for download along with my modified source code. The download links are available at the end of this post.
Installation Notes
BBEditSource can be installed in one of two locations in Mac OS X: your user Library folder, or the system-level Library folder found at the root level of your hard drive, where it will be available to all users on the system.
BBEditSource is known to work in Safari 1.2 and the 1.3 Developer Beta. It may work in earlier versions, but without access to those versions I currently have no way to confirm or deny it. Your mileage may vary.
Note that you should not have SubEthaFari and BBEditSource installed at the same time to avoid potential conflicts. If you are experiencing problems or unexplained crashes using BBEditSource, leave a post in the comments.
Create a new folder called InputManagers inside the ~/Library folder or in /Library/ at the root level of your computer.
Drag the folder named “BBEditSource” to the InputManagers folder to install the bundle.
Launch Safari and visit a web site. In the View menu, choose View Source. The source code should be displayed inside a local copy of BBEdit instead of within Safari.
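For the Terminal-inclined, the equivalent installation looks roughly like this, assuming the unpacked bundle is sitting on your Desktop:

mkdir -p ~/Library/InputManagers
cp -R ~/Desktop/BBEditSource ~/Library/InputManagers/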
Downloads
Files are available for download as compressed Stuffit (.sit) archives. Click on the link to download the files to your Desktop. The source code has been saved in Xcode 1.2 format and may not be fully compatible with older versions. Visit Apple’s Developer Connection web site to download the latest version of Xcode for Mac OS X (free).
Apple recently posted a great tutorial on their Developer Connection site on how to get the W3C HTML Validator running locally on Mac OS X. If you want or have a need for local access (particularly when offline) to the validator tool, I recommend following along and setting it up on your own systems.
The process involves checking out the project from CVS, updating a few files included in a provided disk image, editing your Apache configuration file, installing OpenSP and a number of required Perl modules and lastly a missing library in Mac OS X called libiconv.
Additional instructions, especially related to Perl can be found at David Wheeler’s site.
“This should be fairly easy and straightforward” I thought to myself when I first started looking at trying to do away with using MacCVSClient to manage collaborative CVS based projects. It bothers me because (a) it’s slow when dealing with large projects and (b) it uses a proprietary binary format for the CVS information files.
In the end it was a fairly simple procedure but I found myself running in circles trying to decipher information from various sources and turn it into something cohesive that was clear enough to a CVS newbie such as myself. Hence this tutorial, which will hopefully grace the Google archives for some time and save others the same frustrations I suffered trying to get this to work.
A Few Assumptions
Although this tutorial is intended for Mac OS X users, the majority of it applies to any Unix-based operating system which can run the bash shell (the default shell in Mac OS X). BBEdit is only available on the Mac though so that portion is of limited widespread appeal.
I’ve purposely left out things related to other shell environments to keep the article length down, but if anyone is stuck there and wants to know how to do this using other shells, drop a note in the comments and I’ll post some additional information or point you to where to get it.
Setup Your Environment
To save yourself some work you will want to set some environment variables to simplify interacting with the CVS binaries. To do this, fire up the Terminal application and type pico .bash_login to create a new file (or edit an existing one) in your home directory.
The first variable to set is CVSROOT which is used to indicate the location of your repository. Assuming you’re dealing with a CVS server running on a second machine and being authenticated using the pserver method, that variable would look like:
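export CVSROOT=:pserver:username@cvs.example.com:/usr/local/cvsroot

(The username, hostname and repository path above are placeholders; substitute the details for your own server.)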
The second variable that you may need to set is CVS_CLIENT_PORT, though unless you need to use a port other than the default (2401), you can probably skip this step. If you are using a Firewall (hardware or software) make sure to allow traffic through this port.
export CVS_CLIENT_PORT=2401
The third variable is CVS_PASSFILE, which is also optional but useful if you want to store your unencrypted CVS password somewhere other than the default location in your home directory. Keep in mind that the pserver method sends data in plain text format, so it’s possible that it could be read if intercepted by unauthorized parties, both your account information and the repository data itself. For details on improving this situation, see the Bonus Tip below.
The name of the file by default is .cvspass. The information for the CVS_PASSFILE variable should look like this:
export CVS_PASSFILE=~/.cvspass
The last variable we need to be concerned with setting is the CVS_EDITOR variable. Since we’re going to be using BBEdit for managing CVS, you can set this option to use the bbedit shell command instead of another CLI-based text editor such as vi or pico.
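Assuming you’ve installed BBEdit’s command line tools, the line looks something like this (the --wait option tells bbedit to hold off returning control to cvs until you close the file; drop it if your version doesn’t support it):

export CVS_EDITOR="bbedit --wait"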
Save the changes to your .bash_login file by pressing Control-O, then exit the editor with Control-X. To apply the changes to your shell, either open a new Terminal window or, in the current window, type source .bash_login which will reload your shell and associated environment variables.
You can check that the new variables are set by typing env which will output a list of all the available environment variables. Assuming the new variables appear in the list, you’re ready to log in to the CVS server and check out a project.
Caveats
The big caveat with this procedure is that you need to log in to CVS via the Terminal or a BBEdit worksheet to perform any actions with its CVS tools. If you’re adventurous you could easily write an AppleScript, Perl or shell script that will do that work for you and make the process easier, but if you aren’t, then just type cvs login and enter your CVS user password (if you have one).
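Done by hand, the login and checkout steps look something like this (the module name is just a placeholder):

cvs login
cvs checkout mymodule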
Keep in mind that many public CVS servers such as those at SourceForge typically provide anonymous access without the need for a password. You can get the full scoop in the case of SourceForge at their site.
The second caveat (and I’m not sure how much this still applies on Panther or with newer versions of BBEdit) is that you may also need to create a special environment file specifically for GUI apps, since they can’t inherit these variables from the shell.
RCEnvironment PreferencePane for Mac OS X
There are two ways of doing this: either through the command line (by creating a file named environment.plist in a new hidden directory in your home folder called .MacOSX), or the easier route of using RCEnvironment, a preference pane which provides a GUI for creating and editing this file. Unless you know what you’re doing, I’d suggest using the GUI to get started.
Be sure to logout and login again to make the variables available in the GUI environment.
Bonus Tip
Let’s say you want to ensure that all your CVS-related traffic is tunnelled through SSH. All you need to do is add one more line to your .bash_login file and make a small change to your CVSROOT environment variable. That additional line looks like this:
export CVS_RSH=ssh
The revised CVSROOT variable should use this format:
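export CVSROOT=:ext:username@cvs.example.com:/usr/local/cvsroot

(Again, the username, hostname and repository path are placeholders.)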
You should not have to run cvs login, but you will be prompted for your SSH password instead.
Additional Information
Here’s a few additional links for everyone’s benefit (myself included). I will likely revise this tutorial at some point, but until then let me know if something doesn’t work for you or if there is anything notable that appears missing or incorrect.
Although there’s been quite a lot of talk around the Konfabulator / Dashboard controversy, there’s been much less talk about the new features added to the WebKit framework, the underlying rendering engine used in Safari. I suppose this is in part due to NDAs and all that, but since this stuff does seem to be somewhat public knowledge (and is showcased on Apple’s website), I’m curious about others’ thoughts on these new additions.
New UI Objects in the updated WebKit for Mac OS X
Specifically, I’m referring to the new UI widgets — the Search Field and the Range Slider Control. Although I’m happy to see something new in the way of form widgets for the web (really, has anything new happened on this front in the last, oh, five or six years?), it’s frustrating because they’re (currently) not a part of the W3C spec and may not be for some time, if at all, and therefore obviously won’t validate without tweaking your DOCTYPE declaration.
Do you think they’re useful outside of Safari or Dashboard? Would you like to see these form widgets available in other browsers such as Internet Explorer or Firefox? How can you see using the slider control in a real-world application? Are these widgets a step towards allowing developers to build even more expansive applications on par with desktop equivalents?
The search input field is clearly designed for a specific purpose, but I think, implemented appropriately, it provides a more user-focused way of adding basic text search to websites that is both intuitive and actually useful. The ability to provide a search field that remembers search queries across sessions with no special programming or scripting is a real boon for both designers and developers. Although we could debate how many ways the same task could be implemented, in the end, would it not be better to work from a standardized, lightweight method that gets the job done and works across browsers and platforms? Perhaps with a bit of luck and a bit of time we’ll get there.
So, what to do.
I’ve been debating implementing the search field as a test, though its general usefulness would be limited since only a small percentage of visitors would see the benefit. The upside is that the field degrades gracefully, so it behaves the same as a regular text input field in unsupported browsers. Safari 1.3 or the beta of Safari 2 is required to use the new WebKit features in Mac OS X.
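For the curious, a minimal version of the markup looks something like this (the name, results and autosave values are only examples); browsers that don’t understand the extra attributes simply render a standard text field:

<input type="search" name="q" results="5" autosave="com.example.search" />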
Although I haven’t checked my site stats specifically looking for visitors using either of those browser versions, I will be monitoring things and maybe these changes will appear… or maybe not.
Shaun Inman, who is close in the race to become my new web hero, posted a few smokin’ (read: extremely useful) web development favlets for your browser a couple of days ago. For the uninitiated, favlets are typically small snippets of Javascript code which can be stored as bookmarks in your browser. Favlets can do things like adjust the size of your browser window, change elements on a site: really just about anything you can do with Javascript in general.
To add any of Shaun’s favlets to your browser bookmarks, simply drag the links on the page to the bookmarks bar and reorganize/customize as desired. Be sure to read through the comments for additional browser compatibility notes.
Andy Budd put up a great post on his blog today focused on the margin property in CSS. In the post, he uses a series of examples illustrating how the margin property is supposed to behave along with how to get around some peculiarities found in certain browser implementations.
If you’ve done any research on writing semantic markup (XHTML) and styling it with CSS, you’ll know the best way to start debugging rendering issues is to first test in a standards-compliant browser such as Firefox or Safari and then work back to Windows IE or whichever browser(s) need to be supported, to avoid using unnecessary hacks or workarounds.
Although I’ve got a good handle on using the margin property and how and when margins are expected to collapse, I did learn a few things I didn’t know and gained some valuable insight on how to work around margin-related rendering issues. Be sure to read through the comments for a few other useful tips.
Welcome to the last part in our three-part tutorial series on building a database-driven photo gallery system using PHP and MySQL.
In the first part of this series we looked at defining and setting up the structure of our database tables and briefly discussed how the different fields relate to each other. In the second part, we looked at actually writing the necessary SQL queries required to extract the information from the database. So what now?
If part 2 was about SQL, part 3 is about writing PHP to execute those queries and render the results returned from them. Although this may sound like programming, it’s really not that difficult and I will attempt to explain everything along the way. At this point you may wish to skip to the end and download the final source code examples.
Listing Categories
Following a similar pattern from part 2, our first task is to take the database connection file we created and include that in the main gallery index page. This is done using PHP’s require_once function to essentially merge the two files at runtime helping reduce duplicated code. Our connection file was named connection.php.inc and contained the basic information needed to allow PHP to communicate with MySQL, thus permitting queries to be executed and the results returned to the browser in a viewable form.
The code used to include a file inline inside another PHP file is very simple and looks like this:
<?php
require_once('connection.php.inc');
?>
Place this line as the first line of your HTML file (above the DOCTYPE tag). In fact, all queries will go above that part of the document because they need to be executed before the page is rendered in order for the actual content to be available.
The second step is to create the page used to render the thumbnail view of the individual photo categories. In order to produce clean, readable URLs, this should perhaps all be contained within a directory named galleries, with the main thumbnail view in a file named index.php. Be sure to use the .php extension in order to ensure that the PHP parser will do its magic on the server end of things. Alternatively you can change the file extension to html provided you’ve instructed Apache to also parse HTML files.
In the index.php file, directly below the require_once directive, place the following code snippet:
<?php
mysql_select_db($database, $galleries);
$query_rsCategories = "--GET CATEGORY LIST QUERY--";
$rsCategories = mysql_query($query_rsCategories, $galleries)
or die(mysql_error());
$row_rsCategories = mysql_fetch_assoc($rsCategories);
$totalRows_rsCategories = mysql_num_rows($rsCategories);
?>
Since the specific queries were covered in part 2, the above code is using a simple descriptive placeholder. You can see the full code used in the tutorial source download.
Displaying the Category Thumbnails Returned
At this point, if requested, this page should successfully return a set of results but would essentially do nothing since there is currently no display code in the page.
There are a number of ways to structure the display code — tables, lists, etc. In this example, the display code is structured as a simple list. Each list item contains the category thumbnail preview wrapped by a link to display the photo viewer window and the category description. The code will loop through the results returned (1 iteration for each result returned).
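A sketch of what that loop might look like (the previews directory and surrounding markup are illustrative; the do/while pattern works because the first row was already fetched by the code above):

<ul id="categories">
<?php do { // one list item per category returned by the query ?>
  <li>
    <a href="view.php?category_id=<?php echo $row_rsCategories['category_id']; ?>&amp;photo_id=<?php echo $row_rsCategories['id']; ?>">
      <img src="previews/<?php echo $row_rsCategories['preview']; ?>" alt="" />
      <?php echo $row_rsCategories['description']; ?>
    </a>
  </li>
<?php } while ($row_rsCategories = mysql_fetch_assoc($rsCategories)); ?>
</ul>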
After executing the query it’s a good idea to free the result set so the information isn’t retained in memory unnecessarily. Just below the closing HTML tag on the page, add the following:
<?php mysql_free_result($rsCategories); ?>
That’s everything in the index.php page and it’s time to move on to the photo viewer page to display the large preview along with the associated photo metadata.
Displaying Photos And Meta Data
As outlined earlier, this is a two-file solution, although the code used to build this photo gallery could be merged so that everything is included in the main index file. The second file, named view.php, will be used to display the selected category of photos along with the large preview of each. As per the previous code samples, the required PHP code all belongs above the page’s DOCTYPE so that it is parsed and available prior to the page being rendered.
The first step is to again include the require_once statement to initiate the necessary database connection. The page receives two variables from the index page links: category_id and photo_id. The first query uses the category_id to return the category information and looks like this:
sprintf("SELECT description FROM categories WHERE id = %s", $colname_rsCategory)
The second query returns the actual photo information for a single photo under the specified category — such as the photo ID, filename and comment.
SELECT id, filename, comment FROM photos WHERE category_id = %s
AND id = %s ORDER BY id ASC LIMIT 1
The third and final query returns data which can be used to create navigation links to browse a single category of photos.
SELECT photos.photo_num, photos.id FROM photos
WHERE photos.category_id = %s ORDER BY photos.id ASC
Each query is run through PHP’s sprintf function, which returns a formatted string; that’s where the %s placeholders in the code come from.
Putting it all Together
At this point all that’s left to do is wrap up the final pieces of display code, style the layout using CSS and start testing to make sure everything is working as expected. The remainder of this tutorial won’t focus on the visual design aspects but will cover the remaining pieces of display code in the HTML for the view.php file.
To display the category description, simply echo the results of the first query in the page using:
<?php echo $row_rsCategory['description']; ?>
Next, to display the category navigation, create a second list element and loop through the results from the third query, outputting a new list item for each photo returned. The code looks like this:
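Something along these lines, assuming the third query was executed into a result set named $rsNavigation with its first row fetched into $row_rsNavigation (following the same pattern as $rsCategories earlier), and that $colname_rsCategory holds the category_id passed in from the thumbnails page; the names are mine, so use whatever yours are called:

<ul id="photo-nav">
<?php do { // one list item per photo in the selected category ?>
  <li>
    <a href="view.php?category_id=<?php echo $colname_rsCategory; ?>&amp;photo_id=<?php echo $row_rsNavigation['id']; ?>">
      <?php echo $row_rsNavigation['photo_num']; ?>
    </a>
  </li>
<?php } while ($row_rsNavigation = mysql_fetch_assoc($rsNavigation)); ?>
</ul>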
Displaying the photo numbers is also quite easy. We’re going to use a table since technically this is tabular information and semantically we’re not breaking any rules. We also need to apply a horizontal loop to generate the rows and columns for the table. Each item in the table will link to one of the photos returned from the third query. The code looks like this:
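Here’s one way the horizontal loop could be written (variable names and the five-cells-per-row count are just examples). Note the mysql_data_seek call, which rewinds the result set since it was already looped once for the navigation list above:

<table id="photo-numbers">
<?php
mysql_data_seek($rsNavigation, 0); // rewind to the first row
$row_rsNavigation = mysql_fetch_assoc($rsNavigation);
$columns = 5; // cells per table row
$i = 0;
do {
  if ($i % $columns == 0) { echo "<tr>\n"; }
  echo '<td><a href="view.php?category_id=' . $colname_rsCategory .
       '&amp;photo_id=' . $row_rsNavigation['id'] . '">' .
       $row_rsNavigation['photo_num'] . '</a></td>' . "\n";
  $i++;
  if ($i % $columns == 0) { echo "</tr>\n"; }
} while ($row_rsNavigation = mysql_fetch_assoc($rsNavigation));
if ($i % $columns != 0) { echo "</tr>\n"; } // close a partially filled row
?>
</table>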
Lastly, output the large photo itself along with its associated comment metadata. This is just a matter of echoing the results of the single-photo query to the page in the appropriate places. It looks something like this:
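In sketch form, assuming the single-photo query was run into $rsPhoto with its row in $row_rsPhoto, and that the image files live in a photos directory (adjust both to suit your setup):

<div id="photo">
  <img src="photos/<?php echo $row_rsPhoto['filename']; ?>" alt="<?php echo $row_rsPhoto['comment']; ?>" />
  <p class="comment"><?php echo $row_rsPhoto['comment']; ?></p>
</div>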
As in the index.php file, the last bit of code required is to free the result sets used on the page. Below the closing HTML tag, add something like the following (using whichever result set names you chose):
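<?php
// Free each result set used in view.php; the names below match the sketches above.
mysql_free_result($rsCategory);
mysql_free_result($rsPhoto);
mysql_free_result($rsNavigation);
?>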
And… that’s it. Visually it may not look like much but you now have a simple photo gallery which can be customized to your liking. You can download the full source code for this tutorial below. If you have suggestions, code improvements, additional features or any questions on completing the tutorial, please leave a note in the comments.
One of the things I’ve had on my mind recently has been form design and layout. Specifically in relation to both web and application design though I’m going to stick to web apps here since talking about desktop applications opens another can of worms. In particular I’ve been pondering buttons and form field naming conventions and their usage in guiding users through a form or application.
Form button elements in Safari for Mac OS X
I’ve built quite a few form-based web applications during the last few years and something I notice on occasion when visiting other sites or using such applications is that buttons are often inconsistently named or placed in a way that is confusing for the user.
A good example might be a Reset or Cancel button. To the uninitiated user, these two buttons could be the same thing and in some cases they are. At the very least, their meanings (intentions) can be easily confused. A more experienced user might know that the Cancel button is used if they change their mind and want to cancel an operation they’re in the midst of whereas the Reset button is used to return the form or application to its default state; as it was when they initiated the session. In many cases, eliminating either button is a good place to start since they’re often not needed at all.
I’ve seen many users inadvertently press the Reset button in a form-based web application for changing their e-mail password, thinking that pressing it will change their password, when it actually resets the form and clears any entered information. Inevitably this frustrates the user. Changing the label of the button to “Clear Form Values” or something like that might be more appropriate.
Alternatively, highlighting the Submit button or switching the Reset button into a text link might help indicate to the user which button to push to actually submit the form. Using the tabindex attribute to force a tab order may help avoid such mistakes, but for users who tend to move around using the mouse alone, it provides no protection, and browser defaults may also affect exactly how tabindex values are interpreted. Simply providing a clear button label such as “Change Password” or “Submit Changes” might be enough.
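For example, something as simple as this (the field names, labels and URLs are generic, not from any particular application) removes the ambiguity: a clearly labelled submit button, an explicit tab order, and the destructive action demoted to a plain link:

<form action="change-password.php" method="post">
  <label for="password">New password</label>
  <input type="password" id="password" name="password" tabindex="1" />
  <input type="submit" value="Change Password" tabindex="2" />
  <a href="account.php">Cancel</a>
</form>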
Dealing With Form Fields
Naming form fields appropriately also goes a long way to making forms readable, understandable and therefore usable. I’ve seen forms where a field labelled “Name” might not actually be asking for your name. There’s also the question of splitting up names as First and Last name. Is it better to record that information as one value and then use a JavaScript, PHP, MySQL, Java, Perl, ASP or ColdFusion (etc.) function to split the value into first and last components by breaking it at the space? What if the user includes a middle initial? What’s easier for the user who actually has to fill out the form?
One field is certainly easier (and probably faster) to fill out than two, and provided it has an appropriately descriptive label associated with it, it’s likely the best choice. Yes, it may require a bit more back-end programming depending on how the recorded data is used, but it’s typically just as flexible as if the data were recorded into separate fields. This particular case is as much a question of business logic as it is of ease of use. A good, experienced DBA can help here, as can a solid understanding of database normalization techniques. A little forward thinking, experimentation and analysis never hurts either.
Application Flow
The overall flow of an application can also be a determining factor in whether or not someone completes the form and presses the ever-elusive Submit button. This may result in a missed sale if your form doesn’t flow properly or is too complex.
Forms can be simplified by structuring them into related sections and subdividing data collection across multiple pages, by removing unnecessary information, or by giving users ways to lessen the burden of data entry. An example of progressive disclosure in form design can be easily produced using the Accordion component included with Flash: progressively display, validate and guide a user through the form. Make the process quick and as painless as possible and you’re more likely to complete a transaction than to have a user jump ship.
Excessive complexity is a leading cause of incomplete e-commerce checkout transactions. If your form covers more than 3 or 4 screens, users will frequently give up and leave. Giving a user too much time to reconsider by having long or complex forms will no doubt result in lost sales, errors in user input and a poor overall user experience.
Building Better Forms
Aside from taking part in user testing with applications, there are a number of simple things that can be done to improve forms, especially ones where certain information may be requested more than once (eg. billing address, delivery address). For example, providing a method, whether transparent to the user (Javascript, cookies) or obvious in the UI of the form itself, to keep track of certain information such as name, address, postal/zip code, city, state and country can go a long way to reducing the time to complete a form along with improving the user experience.
The key is to make sure that the information collected is:
Complete
Correct
Relevant
Usable across a site for multiple purposes
Forward thinking
Having multiple accounts for a single site can compound problems so consolidation of such information is a real boon for users. Apple has done this with their AppleID and O’Reilly just finished a similar project. High profile sites like Apple and O’Reilly generally provide good examples for form-based tools. Macromedia’s site also contains good examples of form-based applications providing a good user experience on their site using both Flash and vanilla HTML.
I think I’m really just starting to scratch the surface of this topic and planting the seeds for further discussion. If you’re interested in talking about this more or have something to add, leave a note in the comments. For now I’ll stop here with a few questions you may want to ask when building form-based applications.
What kind of naming conventions should be employed?
Where should the form buttons appear and in what order?
Are accesskey and tabindex necessary for form elements?
Are there related legacy applications which should be used as a guideline in designing forms?
How does the form flow in terms of readability and data entry?
Are buttons and other interactive form elements labelled properly?
Are there too many pages/steps to complete a transaction?
In the hope of solving my own problem, I put together a straightforward cheat sheet for John Gruber’s Markdown text formatting software. The cheat sheet, available in PDF format, has been designed for 11×17 printers though it will scale happily to 8.5×11 for those wanting a smaller version. Print it out, hang it on your wall, share it with your friends.
I’m considering revising it to include more extensive, in-context examples though comments and suggestions will help lead me either way.
In part two of this tutorial series on building your own database-driven photo gallery we’ll cover the SQL queries needed to return the various pieces of information used to generate the gallery display and the individual photo pages.
I’m assuming at this point that you already have MySQL and Apache installed and configured. For more information, and to download the required software for your platform, see their respective websites.
Connecting to MySQL
As you might recall from the first part, the database tables allow simple categorization of photos to allow viewing small segments of all the available photos listed in the database.
The first thing needed is the actual database connection, which facilitates communication between PHP and MySQL so that queries can be run and the results rendered by the Apache web server and returned to the browser.
The connection information should generally be separated from the real guts of the application, both for clarity and, more importantly, because it adds a layer of security and could help prevent the access credentials from accidentally being compromised. You may also want to add the necessary information to your Apache .htaccess file(s) to protect this file from being viewable, depending on your level of paranoia. But really, it is a good idea.
This file, named connection.php.inc will contain the necessary database access credentials and set up some variables we can use elsewhere to reliably connect to MySQL and execute queries against the database. This file won’t be used until part 3, but it’s valuable to create it now.
To create the file, using your preferred plain text editor (BBEdit, pico, vim, Dreamweaver, etc.), copy the contents below. This is a minimal version; the variable names just need to match the ones used in the queries later on.
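<?php
// connection.php.inc: database access credentials for the gallery.
// The variable names here are one reasonable choice; $database and $galleries
// are the two referenced by the queries later in the series.
$hostname = 'localhost';
$database = 'photo_gallery';
$username = '[YOUR_DB_USERNAME]';
$password = '[YOUR_DB_PASSWORD]';
$galleries = mysql_connect($hostname, $username, $password) or trigger_error(mysql_error(), E_USER_ERROR);
?>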
Replace the [YOUR_DB_USERNAME] and [YOUR_DB_PASSWORD] with the appropriate details for your system.
Return Photo Categories
The first query we need to use returns a list of the categories listed in the database along with the auxiliary information used to display each category along with a link to launch the photo viewer. For the time being we’ll just focus on the SQL and leave the presentation code to part 3.
SELECT DISTINCT photos.id, photos.category_id, categories.preview, categories.description
FROM photos, categories WHERE photos.category_id = categories.id AND photos.filename
LIKE '%01.jpg' ORDER BY filename ASC
This first query returns the id, category_id, preview and description for each category. The two ID values are passed on to the viewer queries to locate the correct photos for the selected category. A small preview is displayed for each category using the preview field along with the description data for the category.
The query makes use of a simple wildcard to locate the appropriate preview images. The % character is used to ensure the query returns anything matching filename01.jpg or anotherfile01.jpg but not morefiles1.jpg.
Everything needed to create a simple category-based thumbnail preview for each category of photos will be returned by that one query and we can now move on to the main photo viewer file — view.php.
Photo Enlargement
This second file (view.php), the photo viewer, is built around three queries. The first returns the category description, the second returns the large photo to be displayed and the last returns a list of all the photos associated with the selected category.
Return the Category Name
To return the selected category information, we are passing the category_id value from the main thumbnails page which is reflected in the query.
SELECT description FROM categories WHERE id = %s
Return a single Large Photo
To find and return a single full-size photo and its associated metadata, the SQL query requires two values — the category_id and the photo_id from the thumbnails display page. Because we only want to render a single photo, we also need to restrict the results returned to a single entry.
SELECT id, filename, comment
FROM photos WHERE category_id = %s AND id = %s
ORDER BY id ASC LIMIT 1
This query is executed each time a user clicks on a link from the returned list of photos in a selected category.
Return a Category’s Photos
The final query returns a list of every photo associated with a single category. As before, the category_id value is used to return the relevant results.
SELECT photos.photo_num, photos.id
FROM photos WHERE photos.category_id = %s
ORDER BY photos.id ASC
Well, that’s it for now class. In part three we’ll tie everything together and show how it all works along with a few samples of the presentation code used here. At that point I’ll also make source code available for the gallery project. See you next time!
I was in a meeting today where we ended up discussing file formats and what’s appropriate depending on the intended usage. This wasn’t a web-specific conversation but instead was focused around using a digital asset management system to allow repurposing files of various formats for use in PowerPoint presentations, web sites, print or other media.
There’s a definite lack of understanding of the multitude of graphic file formats out there by the general populace. I find that I take for granted the years of experience I’ve had with many of those formats including some of the more obscure ones like Scitex CT and forget that not everyone has seen or used these formats. I get a little dumbfounded when people don’t know what a TIFF or an EPS file is. Don’t even bother trying to explain the different levels of PostScript… Then there’s the whole issue of dpi, lpi and things appropriate for print not being suitable for on-screen use and so on.
The Preview application and various supported graphic file formats
When it comes to the web, although image formats in use have been largely dominated by JPEG and GIF, most modern browsers (Windows IE, I’m looking in your general direction…) also support rendering the PNG format. Despite that support, PNG use is generally more limited because IE doesn’t completely support the alpha transparency channel in 24-bit PNG images.
The single largest problem with PNG images, aside from the current lack of alpha support in IE6 and below, is file size. PNGs can get big. For users with a broadband connection this is less of an issue, but making sites accessible for users on slower connections should still be a concern (within reason). The colour depth and file dimensions have an effect, as does any embedded metadata. PNGs can potentially be 5 to 6 times the size of an equivalent JPEG.
At the same time as PNG is beginning to gain more widespread use, the new JPEG 2000 format has appeared on the landscape. Although I don’t have any practical experience with the format, it sounds as though it provides all the benefits of normal JPEG, plus an optional lossless mode, without the usual compression artifacts. At the moment I’m unsure of browser support for the JPEG 2000 format, so perhaps someone out there cares to comment on that.
Do you have any preference for web file formats? Why one over the other? Are you using PNGs, and if not, why?
Welcome to part one of a three part tutorial series on how to build a dynamic, database-driven image gallery. You should be able to repurpose these instructions based on a different visual design since I’m not going to cover anything specific with regards to the visual aspects of the layout.
The tutorial has been broken down into three pieces for simplicity. Although we’ll only be dealing with two files and could combine all of the back-end scripts into one, for aesthetic reasons I’ll keep things separate. Let’s get started…
Setup
To complete the tutorial, you will need to have two pieces of software installed and running on your computer or a server somewhere. For the database component, you will need the community edition of MySQL installed. Version 4.0 or newer should all work. For the web server, I recommend using the Apache web server, included out of the box with Mac OS X. You will also need to ensure that the PHP module for Apache is enabled and configured appropriately. Follow any provided documentation for specifics on security-related issues.
Setting up the MySQL Database
Before doing anything else, the database needs to be created for this tutorial. To keep things easy, I’ll also assume you’ve set up phpMyAdmin to manage your databases.
Login to phpMyAdmin in your web browser.
Create a new database and name it photo_gallery. We’ll make reference to this later once we start developing the SQL queries that will bring things to life.
Select the database you just created from the pop-up menu or listing on the left sidebar of phpMyAdmin. In the right frame, click on the SQL tab. This will open up the SQL editor view.
To add the necessary tables to the photo_gallery database, copy the SQL queries below into the SQL editor window and press the Go button.
CREATE TABLE categories (
id int(11) unsigned NOT NULL auto_increment,
description varchar(100) NOT NULL default '',
preview varchar(64) NOT NULL default '',
PRIMARY KEY (id)
) TYPE=MyISAM COMMENT='Gallery Categories';
CREATE TABLE photos (
id int(11) unsigned NOT NULL auto_increment,
filename varchar(64) NOT NULL default '',
comment varchar(255) default NULL,
category_id int(11) unsigned NOT NULL default '0',
PRIMARY KEY (id)
) TYPE=MyISAM COMMENT='Gallery Images';
Assuming the SQL is executed successfully, you can close the SQL editor window.
Now that the database tables have been created, you’re ready to start adding data. Start by creating a series of categories, entering a description for each. The required ID value will be created automatically since the categories table is set to auto-increment its id field.
Once you’ve created a few categories, you can start adding photos and assigning them to the previously created categories. The filename field should contain the actual name of the image file including the file extension (eg. .jpg, .gif, .png). The comment field is used to add a short description of the photo, and the category_id field is a foreign key which should contain the appropriate ID value from the categories table. This allows us to provide a simple category mechanism, though it currently only allows assignment of a single category.
Questions? Stuck?
If you get stuck, leave a comment and I will (within reason) try to provide adequate assistance.
I was thinking about favicons today while working on a few minor design details for this site. Safari and Internet Explorer on Windows along with a few other browsers support favicons, a simple detail that can be used to help develop a site’s “brand” (I use the term loosely) and identity. They also afford a way to allow sites to stand out among others when browsing bookmarks.
For people like myself who keep extensive, organized and categorized bookmarks sets, the ones that stand out the most are the ones that include favicons. And like logos and other identity marks, they can be beautifully designed and appropriate for the end use or be a complete mess, unrecognizable and otherwise inappropriate for the intended purpose. In the case of favicons, being restricted to only 16 pixels square on average (you can technically create them at larger sizes), the simpler the better.
Favicons for wishingline.com
For Wishingline I created a set of favicons using variations on a single visual thread: colour. The orange version is used on the main Wishingline site, a blue-gray version for the notebook, and a green version on our development server. The variations make it easy to identify which server we’re looking at when editing and testing, while helping develop brand recognition. That, and it’s a small detail that, when added to the other minor details, goes to make up a great site.
For example, Apple, Adobe, Macromedia, and many other widely recognized companies are using favicons on their sites, just as the independents are. Take a look through your Safari bookmarks and see which ones stand out. I bet you’ll say it’s the ones with favicons. Now that you’re a believer — where’s your site’s favicon?
A Few Rules When Creating Favicons
First, keep it simple. Second, don’t go nuts with colour; stick to standard web colours or things that will display properly across platforms. Third, remember to include the appropriate link tag in the head section of your pages.
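The basic tag looks like this (adjust the path and file name to wherever your icon actually lives):

<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />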
Keep in mind you can add as many favicons as you want — just adjust the link tag as needed. For example, different favicons for different site sections, subdomains — use your imagination.
How To Create Favicons
If you want to learn how to create favicons, The Iconfactory has some great resources along with their IconBuilder Pro plugin for Photoshop which can export the necessary .ico files. IconBuilder Pro is available for Mac OS X and Windows.