The case against gratuitous Lorem Ipsum text

My Lorem Ipsum tattoo on CheckOutMyInk.com

Let me preface this article by clearly stating that I’m not a designer. My background is solidly in the implementation side of things, which might be why I’m approaching this issue from what seems like a very biased viewpoint. If you are a designer, and you feel I’ve made some sort of horrible mistake, please don’t hesitate to let me know in the comments. Also, most of what I’m discussing here applies specifically to WordPress sites, which tend to be heavily publisher-focused and therefore contain user-generated content that the designer and engineer cannot control.

We’ve all seen the perils of using Lorem Ipsum or generic placeholder text to build static Photoshop mockups. Things like “Sample Title” never seem to work out right when translated to the web. A real headline, say “The case against gratuitous Lorem Ipsum text”, is a heck of a lot longer than the sample, and forces the developer or web engineer to deal with the fallout.

The reason for this fallout, of course, is that the original design doesn’t account for flexible title lengths, forcing whoever implements it in the dynamic environment of the web to guess at how the title element should flex to accommodate real-world content. In other words, using generic text reinforces the idea, for the client and even for the designer who created it, that the static mockup is just that: static.

Of course, as web design moves solidly away from static mockups and towards in-browser design, there’s a tendency to assume that the issue of design being too static is automatically eliminated. I’d argue this couldn’t be further from the truth. If an in-browser design uses generic titles and Lorem Ipsum content, anything can happen.

I’ve seen markup created that relied on a fixed-height title element. This created a huge problem when editors wanted to write an article with a title over 30 or so characters. I’ve also seen elements that relied on minimum content length that the generic copy provided. Once that markup intersected with real-world (sometimes very short) content, the sidebar collapsed underneath it. These are just a couple of examples–there are many more out there, and I’m sure everyone has their own stories about problematic user-generated content.

All of these problems end up being discovered during the implementation process, well into the process of designing and building a website. As a subscriber to Kevin Hoffman’s “iterate early” methodology, this late-round revision of markup and (maybe) design strikes me as highly inefficient and a major risk to on-time, on-budget project delivery.

So what’s the solution? Well, if you’re redesigning an existing site and have access to content, it’s probably best to use several examples of that content to test your in-browser design. It also helps (and this is admittedly difficult without trial and error) to think through possible content outliers and how they can affect design. While there are some seriously esoteric CSS bugs that will inevitably surface when a design is implemented into a dynamic website, testing a few common things can really help with most publishing-style sites:

  • A really short headline, e.g. “49ers Win”
  • A really long headline, e.g. “Local residents up in arms over county officials’ stated desire to begin eradicating local mongoose population”
  • Content ranging in length from a few hundred characters up to several lengthy paragraphs
  • Enough navigation links in the various nav bars to represent the actual number of options the site will have
  • A photo that is wider than it is tall, and a photo that is taller than it is wide

The good news here is that all of these things can be tested easily in your in-browser designs using some simple JavaScript. For example, by setting the text of the article title to several different lengths, you can easily test whether your design can handle flexible titles.
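
For example, a minimal sketch along these lines will cycle a few test headlines through your design (the headline strings come from the checklist above; the h1.entry-title selector is just a stand-in for your own markup):

```javascript
// Headlines worth testing, per the checklist above.
var testTitles = [
  '49ers Win',                                    // very short
  'The case against gratuitous Lorem Ipsum text', // typical
  'Local residents up in arms over county officials’ stated ' +
    'desire to begin eradicating local mongoose population' // very long
];

// Pick the i-th title, wrapping around so the cycle can run forever.
function nthTitle(titles, i) {
  return titles[i % titles.length];
}

// In the browser (assumes jQuery and an h1.entry-title element):
// var i = 0;
// setInterval(function () {
//   jQuery('h1.entry-title').text(nthTitle(testTitles, i++));
// }, 3000);
```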

Similarly, if you’re unable to obtain actual short and long content examples (e.g. if you’re building an entirely new site), creating a small paragraph of sample text that you can then clone to create longer and longer articles is a handy trick for testing how your design handles different article content. I’ll be posting a couple of examples of these JS snippets in a follow-up to this article.
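
In the meantime, here’s a rough sketch of the cloning idea (the helper name and the selector in the usage comment are my own inventions):

```javascript
// Repeat one sample paragraph to fake articles of any length.
function buildSampleBody(paragraphHtml, copies) {
  var parts = [];
  for (var i = 0; i < copies; i++) {
    parts.push(paragraphHtml);
  }
  return parts.join('\n');
}

// In the browser (assumes jQuery and an .entry-content container):
// jQuery('.entry-content').html(
//   buildSampleBody('<p>A short sample paragraph.</p>', 12)
// );
```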

If you’re looking for some additional tips, this article provides a great walkthrough of how to overcome the Lorem Ipsum “crutch.”

Image source: http://www.checkoutmyink.com/tattoos/marquin23/my-lorem-ipsum-tattoo

Display Admin Notices in custom locations in the WordPress Dashboard

If you’ve used WordPress for a while, you’re probably familiar with the nice-looking notices (warnings and errors) that appear at the top of various admin pages.

These notices are great for posting generic information relevant to a particular page up near the top, where it’ll be easily seen. But what if you want to use the .updated and .error classes to create messaging in other parts of the page? If you just add the markup, it will get moved up underneath the page heading by the following JS that runs in wp-admin/js/common.js:

// Move .updated and .error alert boxes. Don't move boxes designed to be inline.
$('div.wrap h2:first').nextAll('div.updated, div.error').addClass('below-h2');
$('div.updated, div.error').not('.below-h2, .inline').insertAfter( $('div.wrap h2:first') );

This explains why messages get pushed to the top of the page regardless of where they actually appear in the server-side markup. It also reveals a really easy way to control placement: just add a class of .inline and your message will stay right where you put it.
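
As a small sketch of the idea (the helper name and the element ID in the usage comment are my own inventions), you can build a notice with the .inline class so common.js leaves it alone:

```javascript
// Build admin-notice markup that common.js will not relocate,
// thanks to the .inline class alongside .updated or .error.
function inlineNotice(message, type) {
  // type should be 'updated' or 'error'
  return '<div class="' + type + ' inline"><p>' + message + '</p></div>';
}

// In an admin page script (assumes jQuery and a #my-settings element):
// jQuery('#my-settings').after(inlineNotice('Settings saved.', 'updated'));
```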

Fixing Wi-Fi upload issues with a Sonic.net ZXV10 W300 modem/router

I have Sonic.net for my ISP here in Oakland, and I’ve been pretty happy with the speed and consistency of the DSL service they provide. After returning from my latest visit to the East Coast, however, I discovered I had little to no upload bandwidth on my MacBook Pro. Trying various test sites yielded download speeds in the 12-14Mbps range, but the upload timed out on all of them. I was unable to send a test email from Apple Mail. If I plugged my MBP directly into the ZXV10 W300 modem/router combo that Sonic.net uses, everything worked fine, including standard upload speeds of 1Mbps (that’s why I’m only pretty happy with Sonic.net, by the way–I could really go for a slightly faster upload speed). I also noticed that in the Status menu of the router configuration page, there were a lot of Rx and Tx errors under the Wireless tab, which just didn’t seem good.

A few different things I tried didn’t help:

  • Checked and re-configured the modem according to the Sonic.net wiki (the ‘Bridged’ setting was ‘Disabled’, so I changed it to ‘Enabled’; no effect)
  • Restarted the modem
  • Restarted the MBP
  • Changed the channel on the WLAN from ‘Auto’ to 7, 8, and 11 in case someone had a router hard-coded to channel 1 (my auto-selected channel)
  • Put the MBP really, really close to the router

Here’s what did work:

  1. Log in to the router config page (type 192.168.1.1 into your browser URL bar, default username/password are both admin)
  2. Go to Interface Setup -> Wireless
  3. In the Multiple SSIDs Settings section, Authentication Type was set to WPA-PSK/WPA2-PSK
  4. The menu item directly below that, Encryption, was set to TKIP. When I pulled that menu down, the options available were TKIP and AES.
  5. I switched the Authentication Type to Disabled. NB: don’t leave your router set this way, unless you’re in the middle of the woods somewhere
  6. After re-testing the connection, I was able to upload normally.
  7. When I went to re-enable authentication, I re-chose WPA-PSK/WPA2-PSK.
  8. To my surprise, the Encryption pull-down now included the TKIP/AES option, which wasn’t there before. When I chose this option, my connection worked perfectly.

So, what could have caused this? I suspect that somehow the modem configuration got corrupted, or alternately TKIP-only encryption was working fine for a while, but stopped working with my MBP after a recent OS X 10.8 software update (I applied one while out of town). Either way, I don’t really care what broke, as long as I know how to fix it. If it happens again, that’ll point to the router as the culprit for sure, though.

Solving ‘org.apache.httpd: Already loaded’ error on Mac OS X Snow Leopard

I recently ran into a problem where my local Apache instance wasn’t responding to requests. Trying to restart or start it with sudo apachectl restart yielded an error message like this:

org.apache.httpd: Already loaded

Checking running processes, I noticed that Apache wasn’t actually running, which seemed a bit strange. Luckily, apachectl offers a helpful command for checking your config syntax: apachectl configtest. Sure enough, it turned out I’d modified httpd.conf a couple of weeks earlier but never restarted Apache, and the file contained a syntax error. Commenting out the offending line and starting Apache fixed the problem, and I’m back up and running.

Facebook JavaScript SDK “Uncaught RangeError: Maximum call stack size exceeded” error

I’ve been dealing with an issue for a couple of months where the Facebook JavaScript SDK wouldn’t function properly on my local development instance, even though it was working fine in our testing and production environments. I tried all the obvious things: confirmed the correct URLs in the Facebook App settings, made sure I was using the right App ID and Secret, and so on. The weird thing was that, according to the console, FB was an object and FB.XFBML was an object, but parse() was not a method of FB.XFBML.
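
A quick way to check for that exact symptom from the console is a sketch like this (the helper name is my own):

```javascript
// When the old FeatureLoader.js loads alongside the new all.js,
// FB and FB.XFBML can both exist while FB.XFBML.parse goes missing.
function hasWorkingXfbml(fb) {
  return !!(fb && fb.XFBML && typeof fb.XFBML.parse === 'function');
}

// In the browser console: hasWorkingXfbml(window.FB);
```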

I wasn’t seeing any of the usual JS errors in the console in Chrome either, which was a bit confusing, at least until I opened Safari and saw this:

Uncaught RangeError: Maximum call stack size exceeded

According to this question on StackOverflow, the problem is caused by running the old Facebook SDK (FeatureLoader.js) alongside the new one (all.js). I was positive that FeatureLoader.js wasn’t loading anywhere in my codebase, and a quick check with ack didn’t show anything either. FeatureLoader.js definitely was getting loaded though, and when I double-checked I saw that it was being loaded by a locally-installed dev plugin that I have running (but that isn’t on our dev or production sites). Plugin removed, problem solved.

On Safari Mobile for iOS 5, event.pageY is the new event.clientY

We have some tooltips at my work that are used to render sharing buttons when a user clicks or taps on them. When iOS 5 came around, the tooltips stopped rendering properly.

After running into a few problems with jQuery Tools and the iPad, I came up with a solution for getting the tooltips to appear next to the anchor element like they were supposed to. By using the event.clientY value from the touch event, I was able to detect where in the DOM the touch had happened, and simply position the tooltip right next to it, with something like this:

$('#tooltip').css('position', 'absolute !important').css('top', event.clientY);

In iOS 4.3, event.clientY was reporting the absolute position of the touch event relative to the entire document. In iOS 5, I discovered that it was reporting the position of the touch event relative to the window. So, if you tapped a tooltip way down on the page, but near the top of the current viewport window, the tooltip would appear right near the top of the document, completely off screen.

A little digging on Google yielded this page on the Apple site. The reference to an event.pageY property made me think that maybe that would do the trick, and it seems to work.

$('#tooltip').css('position', 'absolute !important').css('top', event.pageY);

Now, with iOS 5, the touch event was properly setting the tooltip’s top value to the position of the touch event within the entire DOM, not just the viewport. I’m not exactly sure what’s changed between iOS 4.3 and 5, but at least now I have something that works for both.
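
One way to guard against both behaviors is a small fallback like this sketch (the helper name is mine, and the scroll offset is passed in explicitly so the logic stays easy to test):

```javascript
// Prefer pageY when the event provides it (document-relative);
// otherwise rebuild it from clientY (viewport-relative) plus the
// current scroll offset.
function touchPageY(event, scrollY) {
  if (typeof event.pageY === 'number') {
    return event.pageY;
  }
  return event.clientY + scrollY;
}

// In a touch handler:
// jQuery('#tooltip').css('top', touchPageY(event, window.pageYOffset));
```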

Lazy-load a LinkedIn Sharing button after the JavaScript window.load event

Adding social networking sharing buttons to your site has become an almost ubiquitous step in web development, to the point where some designers have stopped thinking about the performance impact of rendering multiple buttons via JavaScript while a page is still loading. The delay might not be noticeable for one or two buttons, but when you’re rendering many buttons per page (a share button for each individual Tweet on a page, for example), it can get out of hand (turns out JavaScript crashing the browser creates a user-unfriendly experience for most people).

The solution is to lazy-load the buttons when you need them, either when a user clicks to expose a previously hidden div, or at the very least after the window.load JavaScript event, to avoid slowing down your page load. Here’s an example of a simple way to lazy-load a LinkedIn Share button on window.load:

First, include the necessary scripts (LinkedIn’s in.js and jQuery). You can do this in the footer if you want…after all, you’re not doing anything with them until much later in JavaScript-time:

<script src="http://platform.linkedin.com/in.js" type="text/javascript"></script>
<script src="http://code.jquery.com/jquery.min.js" type="text/javascript"></script>

Next, add some jQuery in a script tag that looks for any script tag with a type of ‘unparsed-IN/Share’ (the name doesn’t matter, as long as it’s NOT IN/Share, since the whole point here is you don’t want the in.js script to parse the tag). Depending on the size of your DOM, you may want to be more specific with your jQuery selector…a div or a section of content is fine, and you can bind to a click event, a scroll event, or whatever else you’d like to initiate the parsing of your LinkedIn buttons:

 jQuery(window).load(function(){
    jQuery('script[type="unparsed-IN/Share"]').each(function(){
      jQuery(this).attr('type', 'IN/Share');
    });
    IN.parse(document); // or pass document.getElementById('some-id') to parse a single element
  });

Finally, you just need to add your LinkedIn sharing tags with the script type changed from IN/Share to unparsed-IN/Share (or whatever you chose in the jQuery above). This prevents the tag from being rendered when in.js loads, and lets you control when the tag is actually parsed via IN.parse (which can be applied to the document as a whole, or to a single element retrieved with the built-in document.getElementById method).

<script type="unparsed-IN/Share" data-url="http://example.com/your-article"></script>


Update: As Howard points out in the comment section, if you don’t need to load the in.js script to render any LinkedIn buttons or content earlier, you can always accomplish lazy-loading by simply deferring the script load until you want to render the buttons. This allows you to avoid parsing and replacing the ‘type’ on each JavaScript snippet. If you need LinkedIn content to render both before the onload event as well as after, though, you’ll still need to do the replacement.
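
Here’s a rough sketch of that deferred-load approach (the helper name is mine, and the document is passed in as a parameter to keep the helper testable):

```javascript
// Inject a script tag only when you're ready to render; until then,
// in.js never loads at all.
function loadScript(doc, src, onload) {
  var s = doc.createElement('script');
  s.src = src;
  s.onload = onload;
  doc.getElementsByTagName('body')[0].appendChild(s);
  return s;
}

// In the browser:
// jQuery(window).load(function () {
//   loadScript(document, 'http://platform.linkedin.com/in.js', function () {
//     IN.parse(document);
//   });
// });
```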

Install Siege on Mac OS X Snow Leopard

I recently needed to install Siege on my MacBook Pro. Here’s what I did:

wget ftp://ftp.joedog.org/pub/siege/siege-latest.tar.gz
tar -xvf siege-latest.tar.gz
cd siege-2.70/
./configure && make && make install

That’s it! Now you should be able to do something like this:

siege -c50 http://yourserver.com

That will simulate 50 concurrent users hitting that particular URL. After siege runs for a bit, you can hit Ctrl+C to kill it and it will output some handy stats about performance.

TomTom randomly disconnects during update on Mac OS X 10.6 Snow Leopard

TomTom Manage My Device

When I went to update my TomTom XL to the latest version of its operating system, the update failed and the device was stuck at a blinking screen with a red X across it. I Googled the problem a bit, but didn’t find any good solutions. One Apple forum thread was tackling the same problem, so I applied its first two suggestions, but to no avail. Following the instructions on TomTom’s site for repairing a bricked device was similarly unsuccessful.

Then I started watching the TomTom every time I ran the update, and I noticed that the little hard drive icon would stop flashing before each disconnect, not the other way around. This led me to believe the problem started on the device. When I went into the Manage my device option in TomTom HOME, I noticed the device’s memory was nearly full.

I think the problem is that the TomTom update/map is larger (or at least requires more room during unpack) than the previous version of the OS or map. In my case, the problem was solved in the following way:

  • In TomTom HOME, choose More->Manage my device
  • Make sure you’re looking at Item on my device (not on the computer)
  • Navigate through and delete any unnecessary files (in my case, deleting extra downloaded voices was enough)
  • Go back to the HOME menu and choose Update my device

The update should finish. If it fails again, check whether it failed on a different file (that means the update got further but still ran out of space), and delete more files. If your device is already bricked (like mine was), deleting the application itself (see screenshot below) couldn’t hurt; in fact, this might’ve been necessary for my OS update to install successfully.

Fixing eventfd() failed error after YUM nginx upgrade

After running a Nessus scan on my VPS last night, I ran a yum update to fix a few security holes patched in newer software packages. It was pretty late, so I went to sleep after the upgrade, because everything seemed to be working fine. This morning, when I went to log in to this site’s admin dashboard, I discovered that none of my sites were working. Pings were working fine, but a quick check of the nginx error log revealed this:

2011/03/28 15:18:30 [emerg] 2661#0: eventfd() failed (38: Function not implemented)

A quick Google search turned up this forum, which indicated that the problem was related to the fact that the YUM build of nginx 0.8.53 is compiled with the --with-file-aio option, which relies on libraries that apparently weren’t installed on my system. The solution was to reinstall nginx by downloading the latest source and compiling it.

Once I did this, I changed the value of the nginx variable in /etc/init.d/nginx from /usr/sbin/nginx to /usr/local/nginx/sbin/nginx (the location of the new executable). Running service nginx restart did the trick, and my sites were back up and running.