Category Code

First Look at Adobe Edge


On Monday, Adobe opened up the beta trial of Adobe Edge, its latest tool for creating animations for the web. With Flash getting hammered on all sides, Edge instead uses HTML5/CSS/JavaScript as its animation engine. HTML-based animation has gotten dramatically better; my test animation ran surprisingly well across browsers and mobile devices. Whether this is Adobe’s little white flag on the end of Flash is yet to be seen, but for now we have options.

At first glance Edge seems closer in structure to After Effects than to Flash, which could be a nod to where Adobe is heading or just a way to separate the apps; again, we’ll have to wait and see. The application is laid out in four sections: Properties, Timeline, Elements, and the editor window, which is Webkit-based. The application’s basic premise is simple: add items to a stage, then animate them via keyframes along a timeline. There are no interactions or ability to add scripts in this version of the application. Nor can you create new objects/shapes beyond rectangles, so all your assets will need to be created externally.

The timeline offers auto-keyframing, and each attribute can be animated separately. Edge supports the standard set of easing functions, which can be applied individually or to a group via multi-select. Each object/layer is a different color, and its attribute list can be collapsed/expanded to maximize workspace. Attribute values are displayed both in the Properties panel and inline within the timeline. Similar to After Effects, you can filter the timeline by typing within the timeline’s search field. Another nice AE-style feature is the ability to scale the timeline and manipulate your existing keyframes in bulk.
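Under the hood, easing functions like these are just curves that remap normalized time. Here’s a minimal JavaScript sketch of the idea (my own illustration of the general technique, not Edge’s actual code):

```javascript
// Easing functions map normalized time t (0..1) to normalized progress (0..1).
// These are the classic curves most timeline tools offer.
function linear(t) {
  return t;
}

function easeInOutQuad(t) {
  // Accelerate for the first half, decelerate for the second.
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

// Interpolate a property between two keyframe values using a chosen easing.
function tween(from, to, t, ease) {
  return from + (to - from) * ease(t);
}

console.log(tween(0, 100, 0.25, linear));        // 25
console.log(tween(0, 100, 0.25, easeInOutQuad)); // 12.5 — slow start, same endpoints
```

The endpoints are identical for every easing; only the path between keyframes changes, which is why a tool can let you swap easings per attribute without touching the keyframe values themselves.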

Since the main editor is Webkit-based, text is true HTML text, as are the CSS styles connected to it. Adobe’s handling of object positioning within the stage has a few quirks, the biggest being that an object’s zero point is based on where it was initially introduced to the stage rather than its top-left corner or another standard. This makes it difficult to quickly create animations or layouts based on numerical coordinates, whether from external applications (Fireworks) or from object to object. Gradients are another gap: for now you’ll need a PNG/JPG to create them. I’m not sure if this is due to the complicated CSS needed to support gradients across browsers or just something missing from this release. There are also a few glitches around scaling images, but text and divs work as expected.

Code-wise, Adobe hides most of the magic in pre-minimized JavaScript files, with the external CSS file only defining the initial elements. As long as you keep the stage and associated includes untouched, you can edit the rest of the page at will, which will only help Adobe gain support for this among larger web shops. Despite this flexibility, HTML-based animations still lack the self-contained nature of Flash’s SWF, which means to view my sample animation you’ll need to click to a page outside of WordPress’s CMS.

Overall, this is a great example of where I think web content creation apps need to go. Live HTML in the editor means there is no guessing about how things will look when they hit a browser. I hope they follow this thinking with the next version of Fireworks, allowing less web-savvy designers to get a better feel for how their designs will really look outside the false perfection that current design tools present.

Though Edge is only in its first beta release, it’s a pretty solid app and could easily be used to create complex animations for websites, banners, and other strongholds of Flash. You can view the test animation I created for this review. Future versions promise increased support for SVG and the canvas tag, which will only make this a more powerful tool for web animators. Will HTML5 really be the death of Flash? If so, Edge is a smart bet for Adobe to stay relevant in this new world.

3rd Party Developers Feeling Twitter’s Growing Pains

My buddy Arpit and I were discussing Twitter’s suggestion that 3rd party developers stop creating new Twitter clients (more at Ars Technica), and he wrote a blog post titled Thoughts on an open Twitter replacement: Concentrate on what’s done poorly. Below is my response to both his post and Twitter’s actions.

– – – – –

Not to defend Twitter’s recent actions, but this is just the next step in the evolution of Twitter from a social service to a destination. As Twitter’s popularity has grown, they need to change to support their new users. They are no longer targeting the early adopters and techies that helped Twitter grow during its early years. There were hints of this last year during the launch of the “new” Twitter, and earlier when they bought and rebranded the most popular 3rd party app as official.

No doubt Twitter is looking at how users experience Facebook through the official sites and apps. There are alternative Facebook clients, but no one is using them, so if Facebook killed access to these clients there might be some rumblings from a few developers, but overall no one else would care. Based on the numbers provided by Twitter in September, this may already be true for Twitter as well. The thinking then was that users of these alternative clients were the power users, creating the lion’s share of the content seen on Twitter, so it makes sense that Twitter still supports 3rd party apps for adding content.

Overall, Twitter can be improved, and third party apps have helped fill these voids. I like some of the ideas posted, and it’d be great for either Twitter or another service to bring them to the forefront. At the end of the day, most users may not be directly affected by this latest change, at least not immediately. In the long term this will change what Twitter is and how it gets used. In the short term this seems to be about Twitter trying to take control of their service and finally make some money off it, which they have every right to do. I don’t agree with their tactics, and it does make me wonder: if the 3rd party apps were such a small percentage of the users, what does Twitter gain by cutting them out of the equation?

UPDATE: Seems like I just found a partial answer to my own question. Mashable is reporting that “Only 58% of tweets come from official Twitter clients.”

This time it’s personal

Web 2.0 was about making the web social (and glassy buttons); now let’s make it personal, relevant, and about the user.

There’s so much content that it’s hard to filter out the noise and get to the stuff you want, whether it’s Twitter, your Facebook news feed, or what to watch on TV. The growth of smartphones only exacerbates the need for a personalized experience. Our phones have become an extension of ourselves, though their smaller form factor means we only put the important stuff on them. The way we use our phones also demands faster access to the important things. Besides streamlining the features and the design, the mass of content needs to be streamlined as well. Quicker access to the things that matter to you is the core concept behind Microsoft’s current ad campaign for Windows Phone 7.

The mobile space isn’t the only place where this streamlining is welcome; take Netflix, for example. They’ve grown from a simple DVD-by-mail service to one of the biggest online streaming services. There’s a reason people love Netflix: it’s not about the number of movies they have, but that they showcase the videos you may actually want to watch. When you first sign up for the service, you’re asked to rate a few movies so it can begin to make recommendations. Netflix even ran an ongoing contest looking for anyone who could significantly improve their (already lauded) recommendation algorithms. In September 2009 they had a winner, but the real winners were Netflix and its customers.


Personalization doesn’t always need to be complicated; even the smallest touches will do wonders for the user experience. The latest browsers have replaced their default homepages with quick views of the sites you visit most. Above is the message Safari displays when you launch it for the first time. Like Netflix, its aim is to show you content that’s relevant to you based on your actions. Below is a basic recommendation system I created to demonstrate how user actions can be used to bias content toward items of a similar nature (in this case, color).
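The core logic of that kind of demo is tiny. As a sketch (my own minimal reconstruction of the idea, not the demo’s actual code): tally each user action per color, then rank candidate items by the accumulated preference.

```javascript
// Minimal action-biased recommender: record user clicks per color,
// then rank candidate items by how often their color was clicked.
function makeRecommender() {
  const clicks = {};
  return {
    record(color) {
      clicks[color] = (clicks[color] || 0) + 1;
    },
    recommend(items, n) {
      // Sort a copy of the items by the user's accumulated preference.
      return items
        .slice()
        .sort((a, b) => (clicks[b.color] || 0) - (clicks[a.color] || 0))
        .slice(0, n);
    },
  };
}

const rec = makeRecommender();
rec.record("blue");
rec.record("blue");
rec.record("red");

const items = [
  { name: "A", color: "green" },
  { name: "B", color: "blue" },
  { name: "C", color: "red" },
];
console.log(rec.recommend(items, 2).map((i) => i.name)); // ["B", "C"]
```

Real systems (like Netflix’s) weigh far more signals, but the principle is the same: every action nudges future content toward what the user has already shown interest in.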

Looking to game consoles to predict the future of the iPhone

Comparing the Smart-phone market to that of the game consoles as a way to understand the implications of section 3.3.1 of Apple’s iPhone SDK terms of service.

Apple revisits New York Times’ homepage

Apple/New York Times - Ad integration 5-18-2009

Once again Apple has paired up with the New York Times to create an ad users actually want to see. This time it’s a homepage integration/takeover featuring multiple ads all working in unison. Similar to Apple’s TV ads, this site integration features John Hodgman (PC) and Justin Long (Mac) talking about their differences. In this case John is commenting on the results of a Forrester Research poll, shown in the ad space above theirs, when two characters from yet another ad space join in on the conversation. Before they start talking, they blend into the page’s background, drawing little to no attention. When the main ad is complete, the two secondary ads fade to an unobtrusive white panel with a floating Apple logo, allowing those who keep the NY Times open all day (to see news updates) not to be barraged with Apple, Apple, Apple.

Though this isn’t the first time for Apple, it’s still worthy of the viral attention it’s getting. It’s cleanly designed and executed, continues the sense of humor that has made these ads a hit for the last few years, and makes great use of its environment. It may only run a single day, but I’m sure both parties come out winners each time they meet.

Links:
Apple
New York Times
John Hodgman
Justin Long

The Rapid Concepting/Development Experiment

Monday was the start of Lab Week for the developers at CIM. The idea is that developers get a chance to work on stuff that isn’t part of their everyday routine. It’s similar to the 20% time Google has made famous, but here the time is pooled and used as a team over the course of a week (Lab Week).
See how it all worked out.

Image extraction via RegEx

RegEx
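The original post’s code isn’t shown here, but the general technique can be sketched in a few lines of JavaScript: run a global regex over an HTML string and collect each img tag’s src attribute.

```javascript
// Extract the src attribute of every <img> tag in an HTML string.
// Note: a regex is fine for quick extraction like this, but a real
// HTML parser is safer for arbitrary markup.
function extractImageSources(html) {
  const re = /<img\b[^>]*\bsrc\s*=\s*["']([^"']+)["']/gi;
  const sources = [];
  let match;
  // exec() with the /g flag advances through the string match by match.
  while ((match = re.exec(html)) !== null) {
    sources.push(match[1]); // capture group 1 is the src value
  }
  return sources;
}

const html = '<p>Hello</p><img src="a.png"><img alt="x" src="b.jpg" />';
console.log(extractImageSources(html)); // ["a.png", "b.jpg"]
```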

Papervison 3D: first blush

Papervision 3D has been around for some time. I’ve been interested in checking it out but never had a project to bring it beyond just a thought. I now have a project that calls for some simple Flash 3D. To start out I hit GoToAndLearn; as usual, his tutorials are quick and concise. I was shocked at how simple the basics were. Sure, things will get more complicated once you move beyond the box.

My first snag was with a basic scene and a simple cube. I applied specific materials to each face of my cube to emphasize the feeling of depth. Upon export I wasn’t seeing the front of my cube, but rather the back face. I wasn’t doing any rotations yet, so I’m not sure why it was rendering as if from behind. Not finding any explanation, I instead readjusted the faces themselves to look as I wanted. Every other transformation worked as expected; it was only the mirrored view of the cube’s faces. If anyone has an explanation, I’d love to hear it.

Links:
Papervision 3D
GoToAndLearn

The joy of coding

Part of writing code is knowing the vocabulary of the particular language. I’ve been working with ActionScript for years and started the transition to AS3 about a year ago. I’m currently working on my first AIR application. AIR apps can be either Flex-based or pure ActionScript. Since I’m using Eclipse, which by default only supports Flex-based AIR apps, I was in Flex’s world, and not knowing its little quirks, I was hitting a wall.

Not knowing the quirks or vocabulary of Flex, nor all of AIR’s additions, my search for answers in the documentation left me empty. What I was searching for seemed so basic and simple that I didn’t expect not to find it. The answers could have been there; I just never found them.

So what was this mysterious nugget I was looking for? It was a simple little thing: adding a Sprite to the stage. I found how to make various window types and other interesting tidbits, but none of the samples were based on a Flex-based AIR app, which is what I was building. To say the least, it was very frustrating.

Turns out that to add a child to a Flex window you can’t just use addChild(), as the Flex window is a Flex component. Instead, Flex containers expose a rawChildren property whose addChild() adds a non-Flex display object to the Flex component. From there you can add sub-children via addChild() as you would in an ActionScript-based project. It’s always the little things.

This simple answer/lesson was given to me in less than 5 minutes by my co-worker, Arpit. He also showed me that it’s possible to do a non-Flex-based AIR app in Eclipse, so all this was for naught.

Link:
Arpit