Better ROI per tap
Stop giving users too much homework.
With new sensors and kits arriving in upcoming iPhone and Android releases — like HealthKit and Google Fit — apps are about to get a lot more helpful at analyzing data, and far more sophisticated at reporting it. Despite the new technology, many app makers will still require users to input data the same way we did 40 years ago. You don’t need to be one of them.
Problem: No one likes to manually input data into your app.
Case in point: Bikers’ favorite app, Strava, has one of the highest tap-to-benefit ratios. It requires a mere 3 taps to capture biking data that translates into massive value.
Investment: Count the taps. One tap to launch the app, a second to begin GPS data capture, and a third to finish and save.
Return: After the user’s simple, 3-tap capture, Strava does a lot of work on the back end and delivers great value:
- total feet climbed
- distance traveled
- average speed
- top speed
- low speed
- compares to your previous runs
- gives you awards for improvement
- not only does it track previous runs, but Strava also subdivides trails into segments so you can receive reports on portions of the trail
- shares your event publicly so you can get comments, praise, etc.
- automatically pairs you with peers in your biking group so you don’t need to add them to your event
- recognizes which trails or paths you’ve traveled
- puts you on a comparison board so you can know your rank and more
That is a huge ROI per tap.
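The tap-count framing above can be made concrete with a toy calculation: divide the number of insights the app hands back by the number of taps the user invested. The insight names below simply restate the list above; nothing here comes from Strava’s actual API.

```python
# Toy "ROI per tap" calculation. Insight names are illustrative,
# restating the article's list — not taken from any real API.
TAPS_REQUIRED = 3  # launch, start capture, finish & save

derived_insights = [
    "total feet climbed", "distance traveled", "average speed",
    "top speed", "low speed", "comparison to previous runs",
    "improvement awards", "per-segment reports", "public sharing",
    "automatic peer pairing", "trail recognition", "leaderboard rank",
]

# The more the back end derives from the same few taps, the higher the ROI.
roi_per_tap = len(derived_insights) / TAPS_REQUIRED
print(f"{len(derived_insights)} insights / {TAPS_REQUIRED} taps "
      f"= {roi_per_tap:.1f} insights per tap")
```

The lesson is in the ratio: the numerator grows on the server, while the denominator — user effort — stays fixed at three.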
Questions to consider:
How can I provide more value for every piece of data entered by my users?
Apps that deliver far more benefit than the labor required to produce the data entice users to return again and again and enjoy every tap.
Remedy: Collect data passively.
Sensors embedded within the device allow data to be gathered without interrupting the participant, a process referred to as passive data collection. Obviously, app makers should only capture data with the user’s permission. What’s not as obvious is how to get that permission without annoying the user.
These sensors won’t make users look like a bionic man; instead they will be unnoticeable within the phone hardware, feel like a natural accessory, or even pass as unobtrusive jewelry. Some of the most fertile ground in technology right now is tools that help us all cross the huge chasm between digital and physical environments. Here are a few digital-physical bridges that can spark ideas for making your next user experience a joy.
- For example, the headphones released by Intel boast great tonal bass beats but can also capture heartbeats. Samsung’s Note 4 brings heart-rate sensing without external sensors.
- Re-using common hardware, like the omnipresent camera, enables face detection for login, fruit recognition at grocery store scanners (instead of barcodes), tombstone reading, and even visual microphones (that last one isn’t available on common device-sized cameras … yet).
- Every team at the World Cup wore sensor-equipped shoes like these to track position on the field and kicking velocity. [Lionel Messi infographic]
- Car sensors, like Automatic, slide into existing hardware to give you real-time reports — all without ever needing to lift a finger to record the data.
- Node environmental sensors are external devices that easily monitor barometric pressure, room temperature, ambient light, and humidity, all for you to analyze later.
- Nearables, or beacons, carry an ARM Cortex-M0, BLE, and motion and temperature sensors, so they can be activated to send info to your phone if they move, twist, heat up, or stray too far from your phone — all without ever needing to tap on screen prompts.
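The pattern every bridge in the list above shares can be sketched generically: a background task samples a sensor on a schedule, and the user only ever sees the derived summary. The `PassiveCollector` class and the simulated heart-rate reading below are stand-ins for illustration, not any real device API such as HealthKit or Google Fit.

```python
# Minimal sketch of passive data collection, under the assumption of a
# generic sensor callback. PassiveCollector is a hypothetical stand-in,
# not a real hardware or platform API.
import random
from dataclasses import dataclass, field


@dataclass
class PassiveCollector:
    """Samples a sensor in the background; the user never taps."""
    read_sensor: object                       # callable standing in for real hardware
    samples: list = field(default_factory=list)

    def tick(self):
        # In a real app, a timer or OS background task would call this.
        self.samples.append(self.read_sensor())

    def summary(self):
        # Back-end work: turn raw samples into user-facing value.
        return {
            "count": len(self.samples),
            "average": sum(self.samples) / len(self.samples),
            "peak": max(self.samples),
        }


# Simulated heart-rate sensor (beats per minute), standing in for a
# real wearable or phone sensor.
collector = PassiveCollector(read_sensor=lambda: random.randint(60, 180))
for _ in range(10):
    collector.tick()
print(collector.summary())
```

The design point is the split: `tick` costs the user nothing, while `summary` is where all the visible value appears — the same zero-tap investment, high-return shape Strava achieves.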
Question to consider:
How can I make it drastically easier for users to capture their data?