Archive for the 'Software development' category

A brief history of five TouchSmart generations–pioneering ideas for Windows 8

September 26, 2011 10:15 am

A few weeks ago I attended Microsoft’s BUILD conference to get ready for what’s coming in Windows 8. As I was sitting in the first day’s keynotes and big picture sessions, I couldn’t help but think back on the work HP has done with its TouchSmart software and notice areas where the TouchSmart software pioneered ideas that Microsoft is now building into Windows 8 for the new Metro style of programming and the new touch-first Start screen. I decided to dig a little deeper and give you a brief tour of the history of TouchSmart and highlight some of the ideas now in Windows 8 that we put into the TouchSmart software a long time ago. I’ll put a [+Win8] marker by the ideas as I go along. Let’s get started!

TouchSmart 1, aka SmartCenter, aka LaunchPad (January 2007)

The first version of TouchSmart was not called that. It was named SmartCenter and shipped with the very first modern all-in-one touch-enabled PC, the HP TouchSmart IQ770.


This machine was one of the so-called “Dream PCs” for Microsoft’s introduction of Windows Vista in January of 2007. I’ve written about this version of SmartCenter before, so I won’t repeat much of that here.

Touch-first [+Win8]

Of course, the main point of even embarking on a project such as the SmartCenter software was that Windows wasn’t even remotely ready for touch interactions. Every app on the Windows Desktop requires the precision that the mouse pointer provides. Fingers and touch can’t hit the tiny controls accurately enough. So SmartCenter was designed with that in mind, and as a result had large targets all throughout its user interface. Here are some sample screenshots:

[screenshots: SmartCenter 1 home screen, Personalize screens, and the Weather app]

Note that all buttons, checkboxes, radio buttons, scrollbars, etc. are large enough to be easily tapped with a finger. Note also that, for example, the on-screen keyboard that is used for entering a ZIP code in the Weather app defaults to the correct layout, i.e. the numeric one.

Live app data in shortcuts [+Win8]

This idea wasn’t really all that new, of course. Snippets of live app data displayed in a mini-view of sorts had been introduced with Windows Sidebar gadgets and other widget-like UIs on other operating systems, but SmartCenter was the first to use live data as part of the shortcut that launches an app. You could say the shortcuts were more like mini-versions of the full app. Live data is of course hard to demo with screenshots, so here is a small video clip of the SmartCenter home screen (or start screen, if you will), showing shortcuts that update their information as time passes:

This major version of the SmartCenter software was delivered with four total releases: 1.0, 1.1, 1.2, and 1.4. Towards the final delivery of version 1.0, it became clear that a standardized way of getting the live information from the apps was needed. This became a major area of investigation and investment for the next major version of the software.

 

TouchSmart 2 (June 2008)

The second generation of TouchSmart software, 2.x, was introduced with IQ500/IQ800 series hardware. These two hardware models marked the beginning of the monitor-like appearance of the TouchSmart PCs. The IQ770 was a “multi-volume” chassis – these new models had a “single volume” design, supported by the “easel” style feet that were used in the follow-on generation as well.


The 2.x series of software was released in three versions: 2.0, 2.5 and 2.8.

Fixed layouts for apps [+Win8]

With SmartCenter 2.0, we introduced the concept of fixed-size layouts for the TouchSmart apps. We initially picked three: small, medium and large. You can see two of the three illustrated by this screenshot:

[screenshot: SmartCenter 2.0 home screen]

The Tutorials, Canvas and Calendar apps are shown in medium size, while the remaining apps are shown in small size. By tapping on an app, you would go to the large size:

[screenshot: an app in large layout]

This layout is purposely not called full screen, since there is a reserved area at the top of the screen for navigation, app name/time and music playback controls.

Tiles concept [+Win8]

In order to make it clear that the app representations in SmartCenter were not just icons, we decided to call them tiles, or rather “live tiles.” This term was used in the developer documentation that was produced to help other people plug their apps into SmartCenter, and so we had “small tiles,” “medium tiles” and “large tiles.” For each tile size we gave guidance about how to use it appropriately. We introduced the term “layouts” to suggest that each tile size should use a different layout of basically the same content or information. As you notice from the screenshots above, when the Weather tile is small, it shows only basic information. In the large tile, the information is more full-featured and also provides access to settings for the Weather app. The medium tile for Weather looks like this:

[screenshot: medium Weather tile]

As you can see, this layout for Weather includes only the current conditions and the forecast for the day.

With TouchSmart 2.0, a big investment was made to produce media consumption applications: Music, Video and Photo (often shortened to “MVP”) as well as a WebCam and DVD app. The screenshot above shows other apps that were published later (Netflix and Recipe Box, for example), but that just goes to show that following development guidelines has benefits: newer apps can work with older SmartCenter versions…

Other changes from the 1.0 version include the top and bottom row of “tile scrollers” and the music playback control set (aka. “media plate”) that I already mentioned. The tile scrollers had two different behaviors, depending on how full they were. If enough tiles were present, the scroller would become an infinitely looping container. If not enough tiles were present, it would have “snap-to” endpoints.
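The behavior switch can be sketched in a few lines of JavaScript (illustrative only; the function name and the pixel-based test are my own, not SmartCenter's actual code):

```javascript
// Pick the scroller behavior based on whether the tiles overflow the
// visible area: overflowing scrollers loop endlessly, sparse ones clamp
// to fixed "snap-to" endpoints.
function chooseScrollerMode(tileCount, tileWidth, viewportWidth) {
  return tileCount * tileWidth > viewportWidth ? "loop" : "snap";
}
```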

The TouchSmart 2.0 software was unveiled at a big press event in Berlin, Germany. Several of my colleagues were invited to attend to make sure everything went smoothly from a technical perspective. The most nerve-wracking part was that the TouchSmart IQ500 was to rise out of a pedestal on stage after sitting inside said pedestal for an extended period of time before its unveiling – no one was sure the thermal design could handle that little exchange of air. Here's a video from the introduction to give you a better idea of what I'm talking about (skip to around 1:18 or so to see the pedestal and the TouchSmart lifting out of it):

As you can see, everything worked out pretty well. This was the biggest introduction ever made for a TouchSmart PC line. No event after that had that much effort put into it.

 

TouchSmart 3 (October 2009)

With the third generation of SmartCenter, we piggybacked onto the 600/300 series of hardware. The enclosures still used the easel stand design with three feet for support, and the exterior was tweaked a bit along with the screen aspect ratio (now 16:9 instead of 16:10).


Generally, though, the hardware concept was largely the same; the big changes were in the software. A big investment was made to produce more apps for the TouchSmart software suite, which brought us apps like Canvas, Twitter, Hulu, Live TV, Link, Movie Store, Recipe Box and a bunch of others. The TouchSmart software development guidelines were expanded into a proper SDK with app samples, installer samples and more guidance.

New layout

SmartCenter 3.0 introduced another layout that we called wide-interactive. You see, in SmartCenter 2.x there was no way to interact with the medium-sized tiles in the upper tile scroller (except in the browser, but that's a small detail). In this version we wanted to let you interact with the app right in the upper scroller. To do that properly, we needed a bigger tile and a new layout with enough space for the interaction to make sense. Here's a screenshot of 3.0 (running on a 16:10 screen, not the aspect ratio it was designed for – so circular elements look "squished"):

[screenshot: SmartCenter 3.0 home screen]

In SmartCenter 3.0 the touch scrollers no longer “looped” infinitely, but each had a “snap to” end regardless of how many tiles were present; each wide-interactive tile was given a colored title bar to add a little splash of variety and visual interest. In addition, the “media plate” and other control elements on the home screen were redesigned to appear a bit lighter than before. Also, standard button glyphs were introduced for closing and minimizing SmartCenter. Oh, and the clock was moved around and given a day of the week display. Phew – at least the Personalize button stayed almost in place…

The final big change was that tiles in the bottom scroller no longer used the small layout. They were simply icons to launch the app into large layout directly. This was done to improve performance and load less stuff at the startup of SmartCenter.

 

TouchSmart 4 (September 2010)

Okay, so here we are, almost at the last chapter of this brief history (which is turning out not so brief after all…) TouchSmart 4.0 was introduced with the TouchSmart 310 (and 610) series of hardware. These departed from the easel-type stand and went to a single-foot design (I know there’s a better term for it, I just can’t think of it at the moment).


TouchSmart 4 didn’t see much investment in new apps, but focused on new capabilities provided by the SmartCenter framework.

Infinite Canvas [+Win8, sort of, on the Metro Start screen]

A major goal of the SmartCenter framework software had been to provide an almost limitless space for apps to live in. With SmartCenter 4.0 that goal was finally realized. Not only did the framework provide for an infinitely expanding space for hosted apps to live in, it also did away with the upper tile scroller and let the apps be positioned freely on the canvas. This is what TouchSmart 4.0 looks like after initial startup:

[screenshot: TouchSmart 4.0 after initial startup]

And once again, things were moved around on screen: The clock from lower left to lower right (and it was given a function: click to show a mini-calendar), personalize from lower right to lower left (and the word personalize removed). The “media plate” music playback controls were removed and put into the music app instead. The volume control was separated out from the media plate and put in the upper left. The bottom carousel was redesigned and had the infinite looping re-introduced (to allow for a bit of visual and interactive playfulness). Tapping a tile launches the corresponding app:

[screenshot: an app launched from the carousel]

Apps can be moved around freely and the carousel shows a colored highlight for each running app:

[screenshot: apps moved freely on the canvas]

If you look at the above shot closely, you’ll notice the Weather app in what looks like another layout. What’s happening there is not a new layout, though. It’s simply the wide-interactive layout, shrunk down to an “inactive” size. Thus we called it “shrunk layout” or “shrunk view”.

The button next to personalize in the lower left can be used if the app you’re looking for in the carousel is hard to find: QuickLaunch is sorted alphabetically:

[screenshot: the QuickLaunch list]

Parallax background [+Win8, sort of, on the Metro Start screen]

Scrolling the canvas (or panning it, if you prefer) is done by grabbing empty space (with mouse or touch) and moving from side to side. To add a little visual interest to this, and to demonstrate the departure from the 3.0 tile scrollers, we added a parallax effect to the background to give you the illusion of looking into the distance on your screen. Several sets of parallax backgrounds were developed for variety’s sake, to be picked in the personalize area.
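The effect itself boils down to simple math: each background layer moves at a fraction of the canvas pan distance, and the farther "away" a layer is meant to appear, the smaller its fraction. A sketch of the idea (my own illustration, not the shipped WPF code):

```javascript
// Compute per-layer offsets for a parallax background. Layers with a
// small factor move slowly and appear distant; a factor of 1 moves in
// lockstep with the canvas (the foreground).
function parallaxOffsets(panOffset, layerFactors) {
  return layerFactors.map(function (factor) {
    return panOffset * factor;
  });
}
```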

Magnets

Another major feature of SmartCenter 4.0 was the introduction of something we called “magnets”. These represent active content that originally came either from an app or from SmartCenter itself (in the case of Graffiti magnets). Magnets eliminate the need to start an app when you want to enjoy a favorite piece of content, be it a photo, video or some music you want to keep handy for quick enjoyment. Here are a few magnets placed on the canvas (they can be “pinned” so they always stay visible or “unpinned” to scroll with the canvas):

[screenshot: magnets placed on the canvas]

Here’s what it looks like after panning a bit (while playing the fireplace video):

[screenshot: the canvas after panning]

You can see the pinned magnets haven’t moved and the background looks slightly different (the islands have moved at different paces to give the illusion of depth as they’re moving).

Okay, let’s see what it looks like in action:

 

TouchSmart 5 (September 2011)

And that brings us to the latest generation of SmartCenter (as of this writing), i.e. 5.0. This version of the TouchSmart framework software was brought to market with the recently introduced 520/420/320 series of TouchSmart PCs. The exterior of the machines has been updated once more to keep up with design trends, but otherwise the single-volume enclosure is still the chosen form.


Integration of Windows apps, desktop icons

The biggest change in SmartCenter 5.0 is the blending of two environments that were previously separate: SmartCenter and the Windows Desktop. This means you no longer need to exit the SmartCenter environment when you want to run Windows apps. Here's a screenshot of SmartCenter 5.0:

[screenshot: SmartCenter 5.0 home screen]

Note that the Windows 7 taskbar is fully visible and that you can use it for launching apps and seeing what apps are running. The SmartCenter app carousel now has the icon highlight turned on permanently and only shows a short animated starburst as an app is launched. You also see all your desktop icons represented on the SmartCenter canvas. As you can see, the magnets overlap the desktop icons, which can be a bit of a clutter issue. No worries, you can turn off the desktop icons via Settings, if you don’t like them on the canvas. Or you can rearrange your magnets so they occupy different space:

[screenshot: magnets rearranged on the canvas]

In general, SmartCenter 5.0 attempts to bring the touch-first environment of past generations together with the traditional, mouse-centric desktop. That’s a value-proposition you don’t have in Windows 8, which is most likely not available until sometime in late 2012 anyway…

Automatic panning/scrolling

One additional thing SmartCenter 5.0 does is automatically pan the canvas/desktop whenever an app is launched. This removes the need to rearrange app windows frequently when you switch from one app to another. The canvas pans automatically to make more room for every app you start. To return to an app, you just click on it in the taskbar or the app carousel. Another video might explain it a bit better:

This behavior can be turned off in Settings as well, in case it’s not useful to you. There are many, many areas that I haven’t touched on in this post, such as all the personalization and customization aspects that SmartCenter contains and how they changed over time. Or the fact that you can make your own parallax backgrounds (not documented anywhere, unfortunately, but pretty easy to figure out for enterprising souls). Or the easter eggs, oh yes…

Let me make some general remarks about the last four generations of SmartCenter: any apps written to observe the guidelines of SmartCenter 2.0 are able to run on SmartCenter 2.0 through 5.0. A nice compatibility feature. Of course, older versions of apps needed updates as new SmartCenter functionality was introduced (or removed, as with the media plate removal in 4.0), but as you've seen, the Netflix app (which was published with SmartCenter 3.0) runs just fine in SmartCenter 2.0 and 5.0 as well. What's more, if you know what you're doing, you can have all the versions of SmartCenter 2.0 – 5.0 running on the same system. That's how I was able to collect screenshots and videos for this post.

Oh, and the technology underlying all these versions of SmartCenter is Microsoft's Windows Presentation Foundation (WPF), versions 3.0, 3.5 and 4.0. The various apps were written in anything from compiled-to-native-code Python to WPF to Adobe Flash. The software development process used since about SmartCenter 2.5 is anchored in Scrum, an Agile software development framework.

This concludes my brief history of the TouchSmart software. As you have seen, Windows 8 definitely picked up a lot of the features that the SmartCenter framework pioneered: live tiles, fixed layout sizes for apps, parallax scrolling with an expandable space and touch-first design. Until Windows 8 is available, the TouchSmart 5.0 software suite is most likely the best alternative for touch – combined with new thinking on how to add something more to the desktop environment – that you'll find on an all-in-one PC anywhere.

WebOS and Windows Phone 7 development – Part 2: Windows Phone 7

August 11, 2011 11:20 am

This is part two of a “miniseries” on my forays into mobile development. Part one is here.

[screenshot: Windows Phone 7 app]

My interest in Windows Phone 7 development grew partly out of my experience with writing a simple app for WebOS and partly out of conversations with a friend at work who was really excited about what at the time was the “forthcoming” new mobile OS from Microsoft. I hadn’t paid much attention to Microsoft’s moves in the mobile space, since I’d always been a fan of Palm PDAs and didn’t own a cell phone for a really long time. I figured I was reachable either at my desk or at home most of the time, so why carry a phone and pay another monthly bill on top of all the other ones?

A long conversation on a BART ride got me curious, though, so I checked out the announcements and demos Microsoft gave at the Mobile World Congress 2010 in Barcelona. All I can say is – I was hooked. The user experience presented by Microsoft made sense to me, the user interface was clean, simple and fresh, and the development toolset / technology was something I was pretty familiar with (Silverlight being the close cousin to WPF, which I’ve worked with intensively over the last few years as part of creating the TouchSmart software UI framework.)

Since I had already gotten my feet wet writing a simple app for WebOS, I thought it would be fun to write the same app (more or less) for Windows Phone 7. I had to wait a while for the tools to come out, though, so I had some time to read and learn more in the meantime.

My friend at work heard about a group that was forming around some people from the Silicon Valley Bay.NET user group who wanted to study and learn Windows Phone 7 app development. He had already joined the group, which had its first meeting on June 15, and encouraged me to join as well, so I did, somewhere around late June 2010. The group was incredibly useful in pointing out resources, encouraging people to follow a sort of curriculum and generally keeping one’s spirit up. Not to mention getting to know the Windows Phone 7 developer evangelists in Silicon Valley, William Leong, Kenny Spade and Doris Chen. Without the group, I’m not sure I would have stuck with it.

When I started work on my app, I took advantage of what I had done on the WebOS predecessor. As it turned out, Microsoft’s phone app templates use a close cousin to the WebOS Model-View-Controller pattern that’s very familiar to WPF/Silverlight developers: Model-View-ViewModel. Transferring some of the business logic (the Controller) was relatively straightforward. But because of the differences between C# (the language initially supported by WP7) and JavaScript (the WebOS business logic language) I decided I could do better with my data model than I had done in JavaScript. Ah, the joys of a typed language with excellent tooling support (Visual Studio 2010 Express)! So I rewrote most of the business logic and added a proper unit testing project to my solution. Producing the user interface was an entirely different matter, of course. On WP7, the UI has to be built in either Silverlight or XNA Game Studio. I went with Silverlight, since I already know WPF quite well.

Unfortunately, the version of Silverlight on the phone (version 3, "plus") leaves out lots of good stuff from WPF, so I couldn't do some things that I would have liked to do. One thing I had come to appreciate in particular from WebOS was "editable" text blocks: the normal mode of operation is that the text is simply displayed without any adornments, but when you tap on it, it turns into an edit box where you can change the content. I liked this control so much, I just had to write my own version of it. The Silverlight limitations on WP7 (I can't remember at this late stage if it was the lack of style inheritance or something else) made the result not quite as elegant as it would have been with WPF, but it ended up working well enough. Mobile apps are all about removing clutter and unnecessary steps, so eliminating the need for an edit screen seems to be a good choice, even if the control that enables this isn't a "standard" control everyone knows about.

On the WebOS app, I didn't have to worry too much about application lifetime management; in other words, I didn't have to write much code to save and restore the state of the app. WebOS provides multitasking for apps; Windows Phone 7, on the other hand, only allows a single app to run at a time (at least for Silverlight apps – "native" apps have more advanced capabilities, including the ability to do things in the background, but non-OEM developers can't currently write "native" apps [Microsoft will remedy some of this with the now-final "Mango" update]). Writing the required "tombstoning" code was some extra work, but Microsoft had provided good sample code at a free developer event that I attended. Part of that sample code also included methods that make it easy to work with "isolated storage", which is what's used for storing an app's data. Thankfully, I didn't have to resort to typeless JavaScript objects, but could use fully typed first-class objects with methods and persist them in isolated storage without having to write my own translation code like I had to with WebOS.

After I had made good progress bringing the Open app to the same level of functionality that my WebOS app had, I noticed that there was a Bing Maps control available from Microsoft, and thought it would be interesting to see what I could do with that. The Open app allows the user to enter a store address. Wouldn’t it be nice if the app could draw you a map to the store, and based on the route’s duration tell you if you can get to the store in time before closing or if the store will be open by the time you get there? Certainly! It was surprisingly easy to use the Bing Map control (except for one thing that I’ve blogged about before), and I had the new feature implemented in a matter of hours. I think what took longest was to get my API key to actually be approved/deployed by Microsoft.
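The logic behind that feature is straightforward arithmetic on the route duration. A sketch (in JavaScript for symmetry with the WebOS version – the real app was C#/Silverlight, and these names are my own):

```javascript
// Given the current time, the route duration in minutes, and today's
// opening hours, report what you'll find when you get to the store.
function arrivalStatus(now, routeMinutes, openTime, closeTime) {
  var arrival = new Date(now.getTime() + routeMinutes * 60 * 1000);
  if (arrival < openTime) return "not yet open when you arrive";
  if (arrival < closeTime) return "open when you arrive";
  return "closed by the time you arrive";
}
```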

After testing the finished app and checking it against Microsoft's publishing guidelines, I proceeded to the Windows Phone 7 Marketplace (now called AppHub) to start the publishing process. Because of my involvement with the peer learning group hosted at Microsoft, I had gained access to the second wave of "early access" certification for Windows Phone developers. This meant that I was able to work through the submission process before it was open to the public. I had a few hiccups getting that far, but eventually got approval to start submitting my app for certification.

Publishing an app is probably about the same amount of work for both WebOS and Windows Phone. I get the impression that Microsoft’s testers are quite thorough, at least they were when I went through the process. They’re also pretty fast. After preparing all the required materials (several app icons, background image for the Phone Marketplace, marketing text, etc.) and submitting the app, it took about 3 days to get it approved, if I remember correctly. According to my xap file timestamp, I produced the 1.0 version on October 18, 2010 and it was released on October 21, 2010. The awesome folks at wp7applist.com (thanks Luigi!) helped me track down that it was among the first 2000 apps submitted, at #1983 or so. [Incidentally, I published a second app, called Countdown, which also took 3 days to get approved (submitted on December 26, 2010 and published on December 29, 2010; it was #5123 in the Marketplace).] When I updated one of my apps to version 1.1, I got a failure report back (I hadn’t tested tombstoning well enough) and was impressed by the quality of the report. It really helped me find and reproduce the issue quickly.

I have not had time to update either of my apps any further since publishing version 1.1, but perhaps some of the new features in “Mango” will encourage me to do so. A live tile for the Countdown app has been requested in the reviews of the app, for example, and producing that functionality without Mango would require me to create and host a web service, not something I’m willing to pay for at the moment. With Mango, the app itself will be able to update its tile…

Speaking of payment, you may wonder if this venture has been worth it from a monetary perspective. I would say “not quite”, but since I haven’t spent anything on promoting my apps I don’t know if it could have gone better. Open is $1.99 and Countdown is free. Open has a trial version, which is ad supported and Countdown is free with ads. I’ve sold 5 copies (at this point) of Open and made perhaps fifteen dollars in ad revenue from both apps (so no payouts on either front yet). I was lucky enough to get my $99 Marketplace fee refunded due to publishing two apps by a certain deadline for a Microsoft promotion. But figure in the time I spent on creating the apps, and this definitely has been an exercise more for the sake of learning and personal enjoyment than for the sake of financial gain.

Finally, since this is part two of a series on mobile development, I need to comment a little on the two experiences of doing WebOS versus Windows Phone. To me, the phone/OS experiences on the two come pretty close. WebOS is similar to the HP TouchSmart 2.x/3.x concept of an app carousel and works beautifully. I like WebOS a lot from a user perspective (I just REALLY wish there was a WebOS phone model closer in size to the iPhone or my current LG Quantum or the Samsung Focus), but developing for WebOS is hampered (for me at least) by the relative lack of good development tools. Windows Phone provides a unique user experience, hampered a little by the lack of multitasking, but absolutely SHINES in the area of development tools. Microsoft also invests a LOT into the developer ecosystem, as evidenced by the evangelists participating (on their own time, no less) in peer learning groups, such as the one I participated in. They use this as a vehicle to give people early access to phone hardware for testing and to keep the energy and motivation up among developers. I’ve not been aware of such support existing for WebOS.

WebOS and Windows Phone 7 development – Part 1: WebOS

August 10, 2011 11:20 am

This is the first of a two part “miniseries” of my forays into developing for mobile platforms. Part two is here.

[screenshot: store list in the WebOS app]

After Phil McKinney announced a WebOS app development contest at HP's internal technology conference, Tech Con ’10, I was somewhat drawn to trying my hand at this unknown “beast” (lure of the prize? Maybe.) In a conversation with Jon Rubinstein on the first evening of Tech Con I had mentioned how Microsoft’s tools provide incredible developer productivity and I asked if Palm’s toolset provides something similar. Jon mentioned project Ares and encouraged me to try it out. More on that later.

Over lunch that last day of Tech Con, I mentioned in a conversation with my colleagues that I was going to develop an app that helps you keep track of store opening hours. After lunch I had a little bit of time before my flight back to California, so I rudely ignored my fellow travelers and started downloading and installing the “regular” Palm WebOS tools: Java, VirtualBox, Eclipse, the SDK toolset, Google Chrome and the Aptana Studio plugin for Eclipse. I didn’t start writing code right away. I had just finished installing stuff when it was time to get on the shuttle for the airport.

My next steps were to read up on the overview documentation that Palm provides at http://developer.palm.com and to start running the emulator and toolset. I’m no stranger to (D)HTML/CSS and JavaScript. One of my first projects at HP was developed almost entirely using that combination. Admittedly, that was quite some years ago. I’m a little surprised that someone would build a mobile platform based on technology that old, but I guess the rationale is sound: anyone who can develop a webpage can now develop mobile apps. (I’m not entirely sure I’d want just anyone who theoretically can do it to actually do it. Sorry. Little digression.) So, I’m no stranger to the technology, but I still needed to brush up. So I went off to www.w3schools.com to check out the JavaScript references (in particular the Date class docs) etc. Part of the journey also took me to a few articles at Linux Magazine (WebOS is based on Linux – another decades-old technology stack, hmmmm – but then, so is the Windows kernel and a bunch of other pieces of software) where some of the details around data persistence were explored. I knew that I’d have to store the data locally, since I couldn’t possibly support running a web/cloud service anywhere. Some other detours led me to the JSON website and the Prototype framework.

My first tentative steps were to get the app from the Linux Magazine articles up and running, which didn’t take too long. Then came experimenting with my “business logic”. Palm apps are nicely partitioned according to the Model-View-Controller software pattern, so trying out some “Model” approaches was worthwhile. During all this, I kept bouncing back and forth between the Linux Mag articles, the SDK documentation, Palm’s developer forums and the JavaScript documentation at w3schools.

After working with the TimePicker widget for a bit (store opening hours are central to the app, after all), I settled on using Date as the main “Model” for the app. Unfortunately JavaScript can’t store Date in the local persistence layer of WebOS. What can be persisted are object primitives (strings, integers, lists, arrays and such), and Date is not one of those. The persistence format in WebOS is JSON (JavaScript Object Notation), which is a string representation of a JavaScript object that the JavaScript interpreter can “rehydrate” by calling “eval()” on the string that’s retrieved from storage (or a web service call). Date objects don’t persist well, so I had to work out a way to “dehydrate” and “rehydrate” my Date-based data model. I’m sure there are better ways to do it than what I came up with, but my method is basically to “dehydrate” by calling Date.getTime() and storing that away. “Rehydration” is the reverse: construct a Date object from the stored getTime() value (which is the number of milliseconds since the “epoch”, Midnight on January 1, 1970).
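In code, the round trip looks something like this (a minimal sketch of the idea – I'm using JSON.stringify/JSON.parse here, where the WebOS persistence layer of the day did the JSON conversion itself, via eval()):

```javascript
// "Dehydrate" a Date to a plain number (milliseconds since the epoch),
// which JSON can store, and "rehydrate" it back into a Date object.
function dehydrateDate(date) {
  return date.getTime();
}

function rehydrateDate(millis) {
  return new Date(millis);
}

// Round trip through a JSON string, as the persistence layer would:
var stored = JSON.stringify({ opens: dehydrateDate(new Date(2010, 4, 31, 9, 0)) });
var reopened = rehydrateDate(JSON.parse(stored).opens);
```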

After settling on the data model, I started some work on the business logic. I figured out the rules for determining a single day’s open/closed status and did debugging on that. This is where one of my frustrations with the toolset started to surface. Debugging is pretty painful on WebOS at first. It seemed that all I had at my disposal were “tracing” statements in combination with looking at log files in the emulator. To do that, I had to connect to the emulator running the app by using Putty (an SSH client that’s included in the toolset) to localhost port 5522. And every time I made a code change, I had to re-deploy the app, etc. It wasn’t until the end of my project that I discovered the semi-standalone log viewer from Palm, hosted at http://ares.palm.com/AresLog, and the corresponding debugger at http://ares.palm.com/AresDebug. The unfortunate thing, of course, is that these two only work if you have a live Internet connection. The other unfortunate thing is that my data model is an object that none of the tools know how to “visualize”. By that I mean that even though AresDebug can show me my Date object, it can’t show me the various interesting “parts” like the Date, Month, Year or Day.

After making progress on the logic for one day of opening hours, I worked my way toward the logic for a whole week of opening hours. This meant starting to work with arrays of objects and that made the debugging situation worse. Now I had to trace a set of Date objects seven times in order to make headway. Seeing the log output from that was really messy.
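To make the single-day logic concrete, here is a simplified sketch of an open/closed check. The app’s actual rules were more involved; the names and data shape here are my own:

```javascript
// Simplified open/closed check for one day. A day's hours are expressed
// as minutes-from-midnight, so the check is independent of which calendar
// day the Date being tested falls on.
function isOpenAt(hours, date) {
  if (hours.closed) {
    return false; // closed all day
  }
  var minutes = date.getHours() * 60 + date.getMinutes();
  return minutes >= hours.open && minutes < hours.close;
}

var monday = { closed: false, open: 9 * 60, close: 17 * 60 }; // 9:00-17:00
console.log(isOpenAt(monday, new Date(2010, 4, 31, 12, 30))); // true
console.log(isOpenAt(monday, new Date(2010, 4, 31, 18, 0)));  // false
```

Scaling this up to a week then means keeping an array of seven such objects, which is exactly where tracing the state through log statements got messy.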

In parallel to the business logic work, I started sketching out the UI flow and settled on four scenes/cards to use for creating and editing store opening hour information. Most of these scenes were easy enough to come up with. The main problem was aligning items in list widgets so their placement was pleasing to the eye. That sometimes required padding and tables in the HTML code, along with general CSS tinkering. While using <div> elements with certain Palm CSS class styles (“palm-group” in particular), I discovered that a self-closing <div /> element could keep the UI from rendering properly; I had to use an explicit opening <div> and closing </div> pair to get the correct rendering. Another thing I found a bit maddening was that I had to resort to padding in list rows to get items centered vertically. The style inheritance tree was just too much for me to wade through. I tried a couple of times, using the Palm Inspector, but it didn’t get me very far.

After most of the UI was settled, I had to finalize the business logic. This took the bulk of my development time, and it was quite frustrating because of how cumbersome it was to trace execution and read the traces using the Palm log tool via SSH. I ended up spending all of Memorial Day weekend on this (except for a few hours on Sunday where I got away to spend some quality time at a pool party). Memorial Day was another full working day, at the end of which I thought I had finalized all the business logic…

Alas, I discovered in preparing my app for submission to the Palm site that there were still bugs lurking and that I needed to tinker a bit more with the UI. So I added a few images, twiddled icon sizes around, wrote up the required “marketing” text, etc. Each morning and evening I tested the app only to conclude that there were still calculation bugs.

Finally I convinced myself that it was time to formalize my testing efforts, so I put together a table on paper, sketching out various valid and invalid/tricky test data scenarios. I then coded these up in some “unit tests” (really just part of the app’s logic, but the tests only run if a certain flag is set in the startup code).
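As a sketch, the flag-gated tests looked something like this (the flag name, the assertion helper, and the stand-in function under test are all my own inventions, not the app’s actual code):

```javascript
// "Unit tests" that live inside the app but only run when a flag is set
// in the startup code.
var RUN_TESTS = true; // in the real app, set during startup

// Example helper under test (a stand-in for the app's real business logic).
function minutesFromMidnight(date) {
  return date.getHours() * 60 + date.getMinutes();
}

// Minimal assertion helper: logs PASS/FAIL and reports overall success.
function assertEqual(actual, expected, label) {
  if (actual !== expected) {
    console.log("FAIL: " + label + " (got " + actual + ", expected " + expected + ")");
    return false;
  }
  console.log("PASS: " + label);
  return true;
}

function runTests() {
  var ok = true;
  ok = assertEqual(minutesFromMidnight(new Date(2010, 4, 31, 9, 30)), 570, "9:30 am") && ok;
  ok = assertEqual(minutesFromMidnight(new Date(2010, 4, 31, 0, 0)), 0, "midnight") && ok;
  return ok;
}

if (RUN_TESTS) {
  runTests();
}
```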

Other finishing touches included making the store opening hours definition less repetitive and labor-intensive, adding a splash of color here and there, making it possible to delete the entire database, and enabling two buttons in the UI based on the store data the user enters: if a phone number is entered, a button brings up the dialer app for a quick call to the store; if an address is entered, a button takes the user to a map of the store in the built-in mapping app. With all those things in place, I finally submitted the app to the Palm catalog on June 4, 2010.

You’ll notice that I didn’t mention the Ares development tool yet. That’s because I didn’t use it much. Once I started down the path of using the SDK tools, I was unable to “round-trip” the app between Ares and the SDK toolset. I could upload the app just fine, but the app’s UI didn’t show up in the Ares environment. So perhaps I should have started out using Ares, but then I would have been limited to developing only while having a live Internet connection. Not something I find very comforting.

How much time did I spend on this adventure? Since I didn’t keep a log, I can only make rough estimates, but here’s the breakdown from memory:

Reading SDK docs: 2–4 hours
Download and install tools: 2 hours
Reading other articles: 1–2 hours
Reading JavaScript docs: 4–6 hours
Coding: 6–8 hours
Debugging: 30–40 hours
Refining UI, testing: 8 hours
Preparing for submission: 2–4 hours
So that’s somewhere between 55 and 74 hours. A lot of effort for a simple app? Probably. Worth the time, considering the value of the prize? Perhaps not. Great value in learning the ins and outs of a new platform and having some serious geek fun? Absolutely!!!

Why the big number on Debugging? This is where I get back to the productivity question I posed to Jon Rubinstein. Debugging was so painful and time-intensive because the tools just didn’t provide what I needed. What I would have wanted was a coding and debugging environment that helps track down bugs in a matter of minutes: variables easily inspected, breakpoints set and made conditional, and so on. The Palm Ares debugger provides some of this, but there is still lots of room for improvement.

All in all, it was great fun writing a WebOS app and learning about the platform. I highly recommend you do it yourself, if you are so inclined.

Windows Phone 7 – No “editable” TextBlock

October 16, 2010 5:11 pm

As I’m diving into Windows Phone 7 development and making notes for myself on how WP7 compares to WebOS, I’ve come across one little wrinkle that works really nicely in WebOS (out-of-the-box) and doesn’t work so well in WP7 (out-of-the-box).

I’m talking about a control (actually, a Widget in WebOS) that initially looks like a regular text label, but when you tap on it, it turns into a text box that lets you edit the contained text. WP7 does not have anything like this out-of-the-box. So I decided to create my own.

I made a UserControl that consists of a TextBlock and a TextBox. The TextBox is normally Collapsed (Hidden doesn’t exist on WP7; you’d have to use Opacity="0" instead). When the user taps on the TextBlock, the TextBlock is collapsed and the TextBox is made visible. Once the TextBox loses focus, the reverse happens, and the text from the TextBox is transferred to the TextBlock. Since it can be useful to style the TextBlock and to provide an InputScope, I’ve also added a few DependencyProperties to enable that. The code is a little “smelly”, perhaps, because it could be refactored into a proper CustomControl, but what I have so far works well enough for me.
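Stripped of the XAML, the control’s mode switching can be modeled like this (a framework-agnostic JavaScript sketch of the behavior, not the actual WP7 code; the names are my own):

```javascript
// Models the editable-label control: a "label" mode (TextBlock visible)
// and an "edit" mode (TextBox visible), with the text handed off between
// the two on tap and on loss of focus.
function EditableLabel(initialText) {
  this.text = initialText;  // what the "TextBlock" displays
  this.mode = "label";      // "label" or "edit"
  this.draft = "";          // the "TextBox" contents while editing
}

// Tap on the label: hide the TextBlock, show the TextBox seeded with the text.
EditableLabel.prototype.tap = function () {
  this.mode = "edit";
  this.draft = this.text;
};

// TextBox loses focus: commit the draft back to the label and swap back.
EditableLabel.prototype.blur = function () {
  this.text = this.draft;
  this.mode = "label";
};

var label = new EditableLabel("Store name");
label.tap();            // user taps: now editable, draft seeded
label.draft = "HP Store"; // user types new text
label.blur();           // focus lost: text committed, label shown again
console.log(label.text); // "HP Store"
```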

If you want to use it or just have a look, feel free to download the source code for TextBlockEditable.