Month: May 2011

Looking back at the TechCrunch Disrupt Hackathon


The idea behind the hackathon was to turn an idea into reality in 24 hours. Well, at noon Arpit, Gabo and I were still trying to figure out which of our ideas we should work on. Luckily, by the time we checked in we had come to a consensus. We settled on creating a commenting platform that would be site agnostic and make it simpler to find relevant content. The full concept includes integrating with blogs and sites, replacing their siloed systems with one that helps spread the word and lowers the bar for participation. Obviously, the full package couldn't be completed in 24 hours, so we focused on building and testing the basic concept.


We got off to a rough start, plagued by technical glitches and an overloaded wifi network. Since our project, called Yatr (pronounced "yatter"), relied on a number of web-based APIs, the wifi was kinda important. As the night went on, our table mates decided to call it quits, as did others. Despite the late hour and reduced numbers, there was still an energetic vibe in the room. No doubt the cans of Red Bull and endless coffee had something to do with this.


When the sun started rising, my eyes wanted to do the opposite. Thankfully, a quick walk outside helped me get my energy back. At that point we were wiring up the designs to the back-end and dealing with some minor bugs, so we were feeling good about making the 9:30 deadline. By the time 9:30 hit I was busy working on the presentation and making sure I could explain our work within 60 seconds. An hour later we piled into the auditorium (of sorts) where each team shared with the world what they had been working on for the previous 24 hours.

The first one out of the gate was Docracy, an online way to validate legal documents. It was a very cool idea and definitely set the bar for both concept and delivery. Not surprisingly, they were also one of the winners for the day. Sixty-nine teams later it was my turn to present. Almost no one likes presenting to a crowd, let alone trying to do so while compressing 24 hours into one minute. Since I had been practicing for a while, I felt ready. Still, 60 seconds is both forever and over in an instant.

Yatr didn’t win, but it’s not just about winning. Instead, we walked out with a working product and an architecture to take it to the next level. We also got a chance to see what other people feel strongly enough about to spend 24 hours building a solution for. There were some really great projects beyond the few that got called out on TechCrunch, and, exhausted or not, staying for all the presentations was just as rewarding as turning Yatr into a working product.

UPDATE: If you would like to know more about Yatr, see how it works, and find out why we built it, check out Arpit’s post Yatr: Our hack for the Techcrunch NYC Hackathon.

5 Reasons Why Gesture isn’t happening

Movies like Minority Report make controlling your computer with little more than the swipe of your hand look easy. With the release of the Xbox Kinect, the dream of this power coming to the masses has finally come true… well, not exactly.

The Kinect has become the fastest-adopted technology to date, and people are using their Kinects for everything from gaming to self-guided robots. A quick glance at YouTube is all you need to see tons of videos showing off all the Kinect can do. So why isn’t this the launching point for controlling our computers with a wave of our hands?

In the real world, body-based gestures are anything but simple and smooth. Though many people are successfully experimenting with the Kinect, many of these experiments don’t translate into real-world feasibility. I know this firsthand, as I too have enjoyed hacking the Kinect, as well as working with physical gesture-based UIs in more legitimate settings. In both cases it’s clear that, technical limitations aside, you won’t be controlling much beyond your Xbox with gestures. Below are the five biggest reasons why gestures won’t be breaking out of the box anytime soon.

  1. Accuracy: To be blunt, the Kinect is ridiculously underpowered. The combined resolution of its two cameras is under one megapixel (read: garbage), which means the images used to create the 3D environment are blotchy and inaccurate. To make matters worse, even still objects are hard to define because their edges dance about from frame to frame. The actual (circa 1994) video/webcam being used is nearly useless in low light (read: your living room), and its poor quality doesn’t provide enough useful information to supplement the 3D data.
  2. The Lazy Factor: Face it, people are lazy. No one wants to jump up and down and flail their arms just to control their TV or computer. Lazy or not, it’s physically tiring to hold your arm outright and use it like a pointing device; even while filming Minority Report they had to keep taking breaks because of this. Still doubtful? Hold your arm straight out in front of you for a minute or two. Part of the beauty of the mouse, trackpad and small touch screens is the limited amount of movement needed to control everything on the screen.
  3. No Sensory Feedback: Think of how simple it is to use a standard TV remote or dial a standard phone. You know where the buttons are, you can feel the difference between each button and you feel the button depress beneath your finger. None of that exists between you and the air, so it’s all a guessing game and muscle memory. Touch screens have a similar problem, but to a much smaller degree, since you can look to see where your fingers are and the device can respond to signify it received your input. Some touchscreen devices even employ haptic feedback to give their users a sign that their touch has triggered an action.
  4. Children: They love touch screen devices, since touching things is primal, and even there UIs need to account for their high-energy actions. To the Kinect’s cameras a moving child is a bundle of potential gestures, or they can simply block the camera from seeing yours. Either way, a little child is a potential plethora of problems. Older children bring their own issues: their curiosity and eagerness to explore new things is a plus, while their shorter attention spans and limited patience clash with the limited abilities of today’s devices.
  5. Is this thing on?: On the technical side, there’s a lot of guessing involved in figuring out whether the user is gesturing to control the device or just waving hi to a friend. Most of the videos showing off the cool things you can do with the Kinect are short and shot in a controlled environment, so this issue doesn’t become obvious to the viewer, but rest assured the folks in those videos know exactly what I’m talking about (a rough sketch of one common workaround follows this list).
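
To make that last point concrete, here is a minimal sketch of one common workaround: ignore hand motion until the user has deliberately held their hand still for a moment, and only then start treating movement as commands. This is purely illustrative Python; the class name, the thresholds and the assumption that hand positions arrive from some skeleton tracker are my own, not details from this post or any particular Kinect SDK.

    # Dwell-to-activate sketch (assumed setup: a tracker feeding one
    # (x, y) hand position per frame at roughly 30 fps). All names and
    # threshold values below are illustrative assumptions.
    from collections import deque
    from math import dist

    DWELL_FRAMES = 30      # hold still for ~1 second before gestures are "armed"
    JITTER_RADIUS = 0.05   # allowed wobble (in tracker units) while holding still


    class DwellGate:
        """Reports when hand motion should count as intentional input."""

        def __init__(self):
            self.history = deque(maxlen=DWELL_FRAMES)
            self.armed = False

        def update(self, hand_xy):
            """Feed one hand position per frame; returns True once armed."""
            self.history.append(hand_xy)
            if len(self.history) < DWELL_FRAMES:
                return self.armed
            # Arm only if every recent position stayed close to the oldest
            # one, i.e. the hand was deliberately held still.
            anchor = self.history[0]
            if all(dist(anchor, p) <= JITTER_RADIUS for p in self.history):
                self.armed = True
            elif not self.armed:
                # Big movement before arming: probably just waving at a friend.
                self.history.clear()
            return self.armed

Even a gate this crude hints at the trade-off: make the dwell too short and stray waves trigger the UI, make it too long and the interface feels unresponsive, which is exactly the guessing game described above.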

The Kinect also offers voice support, which brings its own set of complications. On their own, gesture and voice have a long way to go before they permeate the market enough to matter. Both of these technologies are great as secondary or companion input tools rather than as the primary option. Regardless of effectiveness, they offer a new and fun way to interact with the technologies around us.