We’re in an odd place with the whole typography thing.
Most people who look at computers on a daily basis have an idea of what fonts are. They just have no idea about how beautiful they can be.
The reason is that screens suck. They are either too low resolution (we can see the pixels) or too tiny (phones). And everyone reads screens nowadays, not paper.
Screens have sucked forever. Just that word, “screen”, is nasty. It’s like a mesh, or a veil; something you have to look through to see the real thing.
If you want a good screen, you’d better be rich. Which is why typography as an art form still only exists in print. If we remove the barrier of location, which the screen claims to have done, artwork should look the same to anyone, anywhere, at any time.
Put the same printed piece in someone’s hands, regardless of this restriction, and she will see the same thing. Stand in front of a painting, sculpture, heck, even digital art, and you see the same thing as anyone else in the gallery. Share a link to your design on Behance, Dribbble, or your website? Forget it. Everyone’s seeing something different, and most of them are seeing garbage.
Print is inherently ubiquitous. Physical location and space are definable. Screens are nowhere near being able to make the same claim. Screens suck, and it’s pointless to use them as a forum to discuss the typographic merit of any design until screens get better, way better.
In the beginning, format was king. The mere fact that we were reading something on the web made it important. It had somehow come to occupy this new medium, which in itself was novel and beautiful and confusing. Whoever put it there had to be smart, and therefore the content as well.
Then, at some point, maybe the early 2000s, content became king. Your format is getting in the way of our content, we’d say. Enough of the tables, the Flash, the JPEG-rendered text. Let us read <pre>-formatted Courier and be fulfilled.
Then context became king; it was more important where, when, and how readers got the content than what the content actually was. Can I read it on my iWatch? Because that’s how I read stuff nowadays. Is it RTL compatible? Pft, how dare we ignore the readership of half the world (if not more)?
Contrast is next. It’s all we have left. Is it different than what I’ve seen before? Does it stand out? In my daily sea-of-noise, what clambers to the surface, bobbing aggressively for attention like some snagged snapper float? That’s what I’ll read.
I see a singularity in media, and it comes in the form of a special combo visor/glove.
Parts of the visor can be activated, or it can completely take over your visual space. It acts as your phone, home theater, gaming system, TV screen, etc. This way the visor can enhance the real world or replace it entirely. Its interface is managed by the glove, with endless combinations of finger movements. Audio is transmitted through the visor’s earpieces.
I can’t see us going in any other direction. When people talk about mobile media, I get a little queasy thinking about the tiny screens. I think about David Lynch’s iPhone rant. I imagine the sore neck/eyes/back/hand I’ll have from staring at a tiny thing in my hand. I also think about wireless data charges running wild, $1000 monthly bills for all the news, video, music, and movies I’ve watched, but that’s another issue I guess…
Anyways, here’s a sketch with my idea for how this all works:
Anyway, I’m sure it’s been thought of before, so I can’t wait for my visor!
The Dashicons font created for the MP6 plugin has pretty much taken up all of my time the last few weeks, even with help from Mel Choyce, Joen Asmussen, and the rest of the WP design team.
There are quite a few resources on how to do this, but most of the ones I’ve read, although I’m sure they worked for some, went against a few of my own design principles. So I set out to find the perfect workflow for me, and here it is.
When I design icons in Photoshop (AP), the end goal is a PNG sprite. Using a split window, I can zoom in on one window and see the actual-size icon in the other. I can click on an anchor point and nudge it with the arrow keys, getting sub-pixel placement just right and having absolute control over the end product.
The move to vector as the final source has been really weird and challenging. In Illustrator, vectors don’t anti-alias the same way as they do in Photoshop. That is, if you draw a rectangle with its edges halfway between the edges of a pixel, you might get a different grey value in Photoshop than you would in Illustrator (AI). And pixel snapping is inconsistent. I would copy/paste a shape from AP to AI, and my perfectly sharp edges would become fuzzy, even though the paths were in exactly the same place. If I click-dragged an anchor over a pixel, then dragged it back, it would become sharp again. Very weird.
The tutorials I read through, https://github.com/blog/1135-the-making-of-octicons and http://glyphsapp.com/blog/importing-from-illustrator/, were really helpful, but as I said earlier, there are fundamental flaws with those workflows, at least as far as I’m able to incorporate them into mine. For some reason, 16×16 is this imposing number that icon designers hold sacred. It’s a good target, but it really does overly constrain design. I decided 20×20 was a much easier canvas to work within, and as long as I left a pixel or two of breathing room around the icons, they look great in a 16×16 space: not too big, but not leaving out important details in the name of absolute limits.

I also got nervous scanning through the octicons article at all the weird numbers: 2048, 2052, -17something. Do we really need all those arbitrary-looking numbers? As for the glyphsapp.com article, while drawing directly in Glyphs may be the best way, it’s gonna be tough to put aside my AI experience and learn a new method for drawing vectors until I have lots of time on my hands.
After trying all sorts of settings, I came to the conclusion that 20×20px icons should be designed in a 2000 UPM font. The Glyphs article points out that 1 AI point = 1 Glyphs unit, so I worked in points.
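To convince myself the numbers hang together, here’s my own back-of-envelope arithmetic (a sketch based on the values above, not anything from the Glyphs docs):

```python
# Sanity-check math for the 20pt-design -> 2000 UPM workflow.
# Assumptions (mine): 1 AI point maps to 1 Glyphs unit, and the
# icon font is ultimately rendered at a 20px size.

design_size_pt = 20   # AI artboard: 20pt x 20pt
font_upm = 2000       # units per em in the Glyphs file

# How much the AI artwork must be scaled before pasting into Glyphs:
scale_factor = font_upm / design_size_pt
print(scale_factor)   # -> 100.0, i.e. scale the 20pt icon up to 2000x2000

# At a 20px rendering size, one on-screen pixel spans this many font units:
units_per_px = font_upm / design_size_pt
print(units_per_px)   # -> 100.0

# A 1/8pt keyboard nudge in AI becomes this many font units after scaling,
# which is an eighth of a pixel at the 20px rendering size:
nudge_units = (1 / 8) * scale_factor
print(nudge_units)    # -> 12.5
```

Which is why the keyboard increment of 1/8pt works out: every nudge in AI moves the final vector by exactly an eighth of a rendered pixel.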
Another oddity was the Glyphs vs. Glyphs Mini (GM) inconsistencies. Joen Asmussen, who I can’t thank enough for all his help on this, designed the icon font you see at WordPress.com, using GM and following the octicons article pretty closely. I had a trial version of Glyphs, but decided to spring for GM for the sake of consistency. Although I started the font in Glyphs, bringing it into GM was an eye opener. The major issue was that GM doesn’t allow you to change the font’s UPM setting; so when I opened the .glyphs file I created in Glyphs, I was stuck with the 1000 UPM I was originally working with. When I decided to try 2000 UPM, so that it mapped more (theoretically) naturally to the 20pt × 20pt AI designs, my glyphs all got cut to 50% of their size. Resizing in Glyphs Mini was not something I wanted to learn how to do, so I returned my workflow to Glyphs.
Screenshots:
- New AI file, 20pt × 20pt, with points as units
- Keyboard increment set to 1/8 of a point, so I can nudge vectors between pixel edges as needed
- Gridline every 10pt, with 10 subdivisions; snap to grid as needed
- Select all your glyphs in Glyphs Mini and set widths to 2000
- Choose Font Info in Glyphs Mini; set ascender/cap height to 2000, giving you a perfect square for each glyph
- Icon drawn in AI, with a 20pt × 20pt box drawn around it as a bounding box
- Scale it up to 2000 × 2000; note the little chain lock ensuring proportional scaling
- Copy the icon, double-click a glyph in Glyphs Mini, and paste into the glyph window; make sure the x/y/w/h are exactly as shown
- Double-click on the “bounding box” and delete it
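For the “make sure the x/y/w/h are exactly as shown” step, this is the check I’m relying on (my own sketch; it just restates the numbers from the steps above, assuming the glyph origin sits at x = 0 on the baseline with no descender):

```python
# After pasting, the 20pt x 20pt AI bounding box (scaled 100x) should
# exactly fill the em square defined by the font settings above.

upm = 2000
glyph_width = 2000   # width set for every glyph in Glyphs Mini
ascender = 2000      # ascender/cap height from Font Info

# Expected metrics of the pasted bounding box:
expected = {"x": 0, "y": 0, "w": 20 * 100, "h": 20 * 100}

assert expected["w"] == glyph_width  # fills the full advance width
assert expected["h"] == ascender     # fills the full em height
print(expected)  # -> {'x': 0, 'y': 0, 'w': 2000, 'h': 2000}
```

If the pasted values are off by a few units, the artwork wasn’t scaled to exactly 2000 × 2000 before copying, and the edges will land between pixels.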