Showing posts with label Kerry Johnson. Show all posts

HTML5 SDK for the QNX CAR 2 platform — the back story

Kerry Johnson
Today, at SAE Convergence, QNX Software Systems announced the new HTML5 SDK for the QNX CAR 2 application platform. I’d like to provide some insight into this announcement, describe what you can expect to find in the SDK, and explain how it builds on the HTML5 capabilities already available in the QNX CAR 2 application platform.

Enabling apps for the car
Almost every consumer who owns a smartphone or tablet is familiar with the app experience: you go to an online marketplace, find apps of interest, and download them onto your device. With the HTML5 SDK, the automotive team at QNX is creating an analogous experience for the car.

Just as Apple, Android, and RIM provide SDKs to help vendors develop apps for their mobile platforms, QNX has created an SDK to help vendors build apps for the QNX CAR 2 application platform. The closest analogies you will find to our HTML5 SDK are Apache Cordova and PhoneGap, both of which provide tools for creating mobile apps based on HTML5, CSS, JavaScript, and other web technologies.

App developers want to see the largest possible market for their apps. To that end, QNX also announced today that it will participate in the W3C’s Web and Automotive Workshop. The workshop aims to achieve industry alignment on how HTML5 is used in the car and to find common interfaces to reduce platform fragmentation from one automaker to the next. Obviously, app developers would like to see a common auto platform, while automakers want to maintain their differentiation. Thus, we believe the common ground achieved through W3C standardization will be important.

It bears mentioning that, unlike phone and tablet apps, car apps must offer a user experience that takes driver safety into consideration. This is a key issue, but beyond the scope of this post, so I won’t dwell on it here.

So what’s in the SDK, anyway?
As in any SDK, app developers will find tools to build and debug applications, along with APIs that provide access to the underlying platform. Specifically, the SDK will include:

  • APIs to access vehicle resources, such as climate control, radio, navigation, and media playback
  • APIs to manage the application life cycle: start, stop, show, hide, etc.
  • APIs to discover and launch other applications
  • A packaging tool to combine application code (HTML, CSS, JavaScript) and UI resources (icons, images, etc.) with QNX CAR APIs to create an installable application – a .bar file
  • An emulator for the QNX CAR 2 platform to test HTML5 applications
  • Oh yeah, and documentation and examples
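To give a feel for the programming model, here is a minimal, self-contained sketch of the vehicle-API style described above. The `car.hvac` namespace and its `get`/`set` methods are illustrative stand-ins I've invented for this example, not the SDK's actual interface:

```javascript
// Illustrative sketch only: "car.hvac" and its methods are hypothetical
// stand-ins for the kind of vehicle APIs the SDK provides.
var car = {
  hvac: (function () {
    // In a real system this state would live in the vehicle, behind
    // a native service; here it is mocked so the sketch runs anywhere.
    var state = { fanSpeed: 1, temperature: 21 };
    return {
      get: function (key) { return state[key]; },
      set: function (key, value) { state[key] = value; }
    };
  })()
};

// An HTML5 app would read and adjust climate settings through such an API:
car.hvac.set('fanSpeed', 3);
console.log(car.hvac.get('fanSpeed')); // 3
```

The point of the pattern is that the app never touches the hardware directly; it only sees a small JavaScript surface that the platform controls.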

The development and deployment flow looks something like this:




Emulator and debugging environment
The QNX automotive team has extended the Ripple emulator environment to work with the QNX CAR 2 application platform. Ripple is an emulation environment originally designed for BlackBerry smartphones, which RIM has open-sourced on GitHub.

Using this extended emulator, application developers can test their applications with the correct screen resolution and layout, and watch how their application interacts with the QNX CAR 2 platform APIs. For example, consider an application that controls audio in a car: balance, fade, bass, treble, volume, and so on. The screenshot below shows the QNX CAR 2 screen for controlling these settings in the Ripple emulator.


Using the Ripple emulator to test an audio application. Click to magnify.

In this example, you can use the onscreen controls to adjust volume, bass, treble, fade, and balance; you can also observe the changes to the underlying data values in the right-hand panel. And you can work the other way: by changing the controls on the right, you can observe changes to the on-screen display. The Ripple interface supports many other QNX CAR 2 features; for examples, see the QNX Flickr page.
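That two-way flow is easiest to picture as both views (the on-screen controls and the emulator's data panel) observing a single data model. A minimal sketch of the idea; the `AudioModel` type here is purely illustrative, not part of Ripple:

```javascript
// Sketch of the two-way flow described above: the HMI screen and the
// emulator's data panel both observe one shared data model.
function AudioModel() {
  this.values = { volume: 50, bass: 0, treble: 0 };
  this.listeners = [];
}
AudioModel.prototype.onChange = function (fn) {
  this.listeners.push(fn);
};
AudioModel.prototype.set = function (key, value) {
  this.values[key] = value;
  // Notify every registered view, whichever side made the change
  this.listeners.forEach(function (fn) { fn(key, value); });
};

var model = new AudioModel();
// Each view registers a listener; here we just log the update:
model.onChange(function (key, value) {
  console.log(key + ' -> ' + value);
});
model.set('volume', 75); // prints "volume -> 75"
```

Because both sides go through the same model, a change made in either panel is immediately visible in the other.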

You can also use the emulator in conjunction with the Web Inspector debugger to do full source-code debugging of your JavaScript code.

Creating native services
Anyone who has developed software for the QNX Neutrino OS knows that we offer the QNX Momentics Tool Suite for creating and testing C and C++ applications. With the QNX CAR 2 application platform, this is still the case. Native-level services are built with the QNX Momentics suite, and HTML5 applications are built with our new HTML5 SDK. We've decided to offer the suite and the SDK as separate packages so that app developers who need to work only in the HTML5 domain needn't worry about the QNX Momentics Tool Suite and vice versa. Together, these toolkits allow you to create HTML5 user interface components with underlying native services, where required.

Moving beyond the browser: HTML5 as an automotive app environment

If you’ve already visited this blog, you’ll know that we are bullish on HTML5 as a way to implement infotainment system HMIs. Not surprisingly, I’ve spent a fair amount of time searching the Web for facts and opinions on using HTML5 in the car, to see how this idea is catching on.

Overall, people see numerous benefits, such as the ability to leverage mobile app development to keep pace with consumer demands, the availability of a large pool of knowledgeable developers, and the attractiveness of a truly open specification supported by many possible vendors.

But when it comes to the challenges of making HTML5 a reality in the car, I found a common thread of questions, mostly rooted in the erroneous belief that an HTML5 application environment is “just a browser.” Everyone is familiar with the concept of a browser, so it’s easy to see why people take this point of view.

So what are the key differences between a browser and an HTML5 application environment? Here’s my quick view.

The experience
Everyone is familiar with the browser experience. You navigate to a web site through bookmarks, a search engine, or direct entry of a URL. The browser implements a user interface (aka the chrome) around a rendering engine and provides bookmarks, URL entry, back and forward, scrolling and panning, and other familiar features.

An automotive HMI based on HTML5 provides a different experience — just look at the accompanying screen shots and decide for yourself if they look like a browser. In fact, the user experience of an HTML5-based HMI is similar to that of any other purpose-built HMI. It can consist of a main screen, window management, navigation controls, and other typical user interface widgets.


A radio tuner and a media player from the QNX CAR 2 application platform. Both apps are based on HTML5, but beyond that, they neither act nor look like a web browser.

A system that uses an HTML5-based HMI can include:

  • core applications that look and act like native applications
     
  • add-on (downloaded and installed) applications that have controlled interfaces to the underlying hardware
     
  • “web link” applications that simply link to a cloud-hosted application that can be downloaded on demand and cached

The web link approach makes it easy to update applications: just update the server and the remote client systems will automatically pull the application when needed.
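A rough sketch of that update flow, with `fetchFromServer` mocked as a stand-in for a real HTTP request (the names and the version-number scheme are my own illustration, not a platform mechanism):

```javascript
// Sketch of the "web link" idea: serve the app from a local cache,
// pulling a fresh copy only when the server publishes a newer version.
var cache = {};

// Mocked stand-in for an HTTP request to the cloud-hosted app
function fetchFromServer(url) {
  return { version: 2, body: '<html>...app...</html>' };
}

function loadApp(url) {
  var remote = fetchFromServer(url);
  if (!cache[url] || cache[url].version < remote.version) {
    cache[url] = remote;   // cache the newer version
  }
  return cache[url];       // subsequent loads are served locally
}

console.log(loadApp('https://example.com/app').version); // 2
```

Updating the app then really is just updating the server: every client picks up the new version on its next load.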

Local resources
Web browsers pull text, images, and other content from the web and render it on the user’s machine. The process of loading this remote content accounts for much of the user’s wait time. This paradigm changes with a local HTML5 application environment — because resources can exist locally, images and other components can load much more quickly.

What’s more, screens and user interfaces can be designed to fit the platform’s display characteristics. There is no need for panning and scrolling, and only limited need for zooming. Resources such as RAM can be optimized for this experience.

Security and sandboxing
Browsers load content and executable JavaScript code dynamically. This really is the power of web technologies. The problem is, dynamically loaded code represents a threat to an embedded platform.

Browsers are designed to be sandboxed. By default, JavaScript code can execute only in the context of a browser engine, and cannot access the underlying operating system primitives and hardware. This approach changes in an HTML5 application environment. To give JavaScript code the ability to behave like a native application, the environment needs interfaces to the underlying OS through to the hardware. Plugins are used to implement these HTML5-to-OS interfaces.
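One common way to implement such controlled interfaces is a plugin registry that mediates every call from JavaScript into native code, in the style popularized by Cordova/PhoneGap. A sketch of the pattern; all names here are illustrative, not the platform's real API:

```javascript
// Sketch of the plugin idea: JavaScript reaches the OS only through
// interfaces that have been explicitly registered. Names are illustrative.
var pluginRegistry = {};

function registerPlugin(name, impl) {
  pluginRegistry[name] = impl;
}

function callNative(name, method, args) {
  var plugin = pluginRegistry[name];
  if (!plugin || typeof plugin[method] !== 'function') {
    // Anything not registered is simply unreachable from JavaScript
    throw new Error('access denied: ' + name + '.' + method);
  }
  return plugin[method].apply(plugin, args || []);
}

// The native side exposes a single, vetted capability:
registerPlugin('audio', {
  setVolume: function (level) { return 'volume=' + level; }
});

console.log(callNative('audio', 'setVolume', [40])); // volume=40
```

The registry is also the natural place to hang a permission check, which leads directly to the security point below.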

Nonetheless, access to the underlying platform must be carefully controlled. Hence, a security scheme forms a critical component of the HTML5 application environment.

Application packaging
The app experience has become familiar to anyone who owns a smartphone or tablet. An HTML5 application environment in the car can also support this kind of experience: developers create and sign application packages, and users can download those packages from an application store. In an automotive context, authenticity of the applications and control over what they can or cannot do is critical. Again, a security model that enforces this forms a key part of the HTML5 application environment.

So, how should you think of an HTML5 application environment?
From my perspective, an HTML5 environment is like any other traditional HMI toolkit, but with much more flexibility and with inherent support for connected applications. In an HTML5 application environment, you can find technologies similar to those of any proprietary toolkit, including:

  • a rendering engine (HTML5 rendering engine)
  • a set of content authoring and packaging tools
  • layout specifications (HTML5 and CSS3)
  • a programming language (JavaScript)
  • an underlying data model (DOM)

The difference is, these components are developed with a web experience in mind. This, to me, is the most significant benefit: the web platform is open, scalable, and well understood by countless developers.

Gearing up for CES

I arrived in Las Vegas last night, gearing up for the CES show. I know I must be in Vegas: When I woke up at 4:30 am (my body is stuck in eastern time), there was still a buzz around the hotel — people just do not stop here.

I’m looking forward to the show. Our automotive development team has been hard at work on some exciting new technology, and I can hardly wait to show it off.

First, we are demonstrating our new concept car, based on a Porsche Carrera. This thing is loaded with goodies to demonstrate how you can use your smartphone and tablet to improve the driving experience. For instance, the car supports Near Field Communications (NFC) pairing: You simply touch your phone to the car and the two become instantly paired — no more fumbling with unintuitive menus and security codes.


The new concept car features one-touch smartphone pairing, tablet-based rear-seat
entertainment, ultra HD voice technology, and a reconfigurable instrument cluster.


We’re also showing a level of integration beyond a simple voice-dialing list. For instance, you can use your phone’s contact list to direct your navigation system or to automatically contact meeting invitees when your car knows you’ll be late for an appointment. We will also demonstrate our ultra HD voice technology, which provides full stereo sound for handsfree calls — you’d have to hear this to get the real impact. It’s like you're sitting right next to the person on the other end of the call. (I hope this works well in the noisy show floor environment!)

We're also launching the QNX CAR 2 application platform, which will allow automakers to leverage the power of the mobile development community and to keep in-car infotainment software fresh for consumers. We are doing some pretty unique things with HTML5, including the ability to write and package applications for deployment to the car. Another cool feature is the ability to dynamically detect and play media that is added to the system – try doing that with a standard browser!

Just writing this has me looking forward to getting started. See you at 2012 CES!
 

When will I get apps in my car?

I read the other day that Samsung’s TV application store has surpassed 10 million app downloads. That got me thinking: When will the 10 millionth app download occur in the auto industry as a whole? (Let’s not even consider 10 million apps for a single automaker.)

There’s been much talk about the car as the fourth screen in a person’s connected life, behind the TV, computer, and smartphone. The car rates so high because of the large amount of time people spend in it. While driving to work, you may want to listen to your personal flavor of news, listen to critical email through a safe, text-to-speech email reader, or get up to speed on your daily schedule. When returning home, you likely want to unwind by tapping into your favorite online music service. Given the current norm of using apps to access online content (even if the apps are a thin disguise for a web browser), this raises the question: when can I get apps in my car?

Entune takes a hands-free approach to accessing apps.
A few automotive examples exist today, such as GM MyLink, Ford Sync, and Toyota Entune. But app deployment to vehicles is still in its infancy. What conditions, then, must exist for apps to flourish in cars? A few stand out:

Cars need to be upgradeable to accept new applications — This is a no-brainer. However, recognizing that the lifespan of a car is 10+ years, it would seem that a thin client application strategy is appropriate.

Established rules and best practices to reduce driver distraction — These must be made available to, and understood by, the development community. Remember that people drive cars at high speeds and cannot fiddle with unintuitive, hard-to-manipulate controls. Apps that consumers can use while driving will become the most popular. Apps that can be used only when the car is stopped will hold little appeal.

A large, unfragmented platform to attract a development community — Developers are more willing to create apps for a platform when they don't have to create multiple variants. That's why Apple maintains a consistent development environment and Google/Android tries to prevent fragmentation. Problem is, fragmentation could occur almost overnight in the automotive industry — imagine 10 different automakers with 10 different brands, each wanting a branded experience. To combat this, a common set of technologies for connected automotive application development (think web technologies) is essential. Current efforts to bring applications into cars all rely on proprietary SDKs, ensuring fragmentation.

Other barriers undoubtedly exist, but these are the most obvious.

By the way, don’t ask me for my prediction of when the 10 millionth app will ship in auto. There’s lots of work to be done first.

 

What’s next for the connected car?

It’s been almost three years since QNX Software Systems launched its connected car concept, and I thought it would be an interesting exercise to look at what has been accomplished in the automotive industry around the connected car and how some of the concepts are evolving. When the QNX CAR Application Platform was introduced, we provided a simple way to look at a connected car, using four “dimensions” of connectivity:
  • Connected to portable consumer devices for brought-in media and handsfree communications
  • Connected to the cloud for obvious reasons
  • Connected within the car for sharing information and media between front and rear seats, between the center stack and the cluster, and for other similar functions
  • Connected around the car for providing feedback to the driver about the environment surrounding the car, be it pedestrians, other cars, or vehicle to infrastructure communications
We’ve seen significant advances and evolutionary thinking on all fronts. Although QNX is not (and cannot be) at the forefront of all of these, our primary emphasis has been on cloud and consumer device connectivity. Nonetheless, it is interesting to look at each area.

Connected to consumer devices, connected to the cloud
Why lump these two together? There is not exactly a clear line between the two since consumer devices are often just extensions of the cloud. If my car connects to a smartphone which, in turn, draws information from the cloud, is there much point in creating a distinction between consumer device and cloud connections? Although it made sense to differentiate between cloud and consumer device connections when phones provided only handsfree calling and simple music playback, today the situation is quite different.

Device integration into the car has been a beehive of activity over the last few years. Smartphones, superphones, and tablets are providing entertainment, social networking, news, and access to a myriad of other content and applications to consumers anywhere, anytime. Automakers want to take advantage of many of these capabilities in a responsible, non-distracting way.

The primary issue here is how to marry the fast-paced consumer electronics world to the lifecycle of the car. At present, there are solutions at the opposite end of the spectrum: standardized Bluetooth interfaces that allow the car to control the smartphone; and screen replication technologies (iPod Out, VNC/Terminal Mode/Mirror Link) where the smartphone takes control and uses the car as a dumb display.

Neither of these scenarios takes full advantage of the combined processing power and resources of the car and the brought-in device. This, to me, is the next phase of car and cloud connectivity. How can the power of the cloud, brought-in devices, and the in-car systems be combined into a cooperative, distributed system that provides a better driver and passenger experience? (If the notion of a distributed system seems a bit of a stretch, consider the comments made by Audi Chairman Rupert Stadler at CES 2011.)

When looking for technologies that can bring the cloud, devices, and car together, you need look no further than the web itself. The software technologies (HTML5, JavaScript, AJAX, peer-to-peer protocols, tooling, etc.) that drive the web provide a common ground for building the future in-car experience. These technologies are open, low cost, widely known, and widely accessible. What are we waiting for?

Connected within the car
The integrated cockpit has emerged as a prevalent automotive design concept. It is now commonplace in higher-end vehicles to see seamless integration between center stack functions and the instrument cluster and driver information displays. For example, turn-by-turn directions, radio tuning, and now-playing song information are all available to the driver on the cluster, reducing the need to constantly glance over to the main display. One such example is the Audi cluster:

Connected around the car
Three years ago, systems in this category were already emerging, so there really wasn’t much of a crystal ball required here. Adaptive cruise control has become one of the most common features that illustrate how a car can connect to its surroundings: it detects the cars in front of you and adjusts your speed accordingly. Other examples include pedestrian detection (offered in the Volvo S60 and other models), automatic parking, lane departure warning, and blind spot detection/warning systems.

These Advanced Driver Assist Systems (ADAS) will become more common as cost reductions take place and the technology is provided in lower-end vehicles.

In contrast, vehicle-to-infrastructure communication, which requires industry-wide collaboration, is proceeding at the pace you’d expect of an internationally standardized effort.