Kai Kunze talks about the current state of the art of wearable computing.
Currently we are in a state where computing is embedded in more or less all everyday devices, be it your phone, your car, your washing machine and so on.
We think it should support us in everyday life, but usually it doesn't look like that: these devices tend not to support us but to disturb us.
The limiting factor is no longer processing power or memory but user attention. Whenever something rings, it disturbs your attention.
How can this be done better?
Computing „learns“ about the real world using infrastructure and body-worn sensors.
Actuators are used for feedback (acceleration, location etc.).
He then shows a demo done by Josef Neuburger. Whenever he moves his arms, a man on the display also waves. The same is true for sitting down. It's implemented with sensors on his body (accelerometers). All processing is done on the chip in 64k of memory.
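This is not the actual 64k on-chip implementation, but as a rough sketch of the idea: a simple variance threshold over a window of accelerometer samples can already separate movement from rest (the threshold value here is invented).

```python
def window_features(samples):
    """Mean and variance of one axis of accelerometer samples."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var

def classify(samples, threshold=0.5):
    """Hypothetical two-class rule: high variance = 'waving', low = 'still'."""
    _, var = window_features(samples)
    return "waving" if var > threshold else "still"

# Simulated one-axis windows (values in g): a resting arm and a waving arm
still = [1.0, 1.01, 0.99, 1.0, 1.02]
waving = [0.2, 1.8, -0.5, 2.1, 0.1]

print(classify(still))   # -> still
print(classify(waving))  # -> waving
```

A real system would of course use more axes and richer features, but this is the kind of computation that fits comfortably into a small microcontroller.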
Another hybrid approach is the Sputnik project (OpenBeacon/OpenAMD), which uses RFID.
Real life application support
The goal is to recognize high-level user activity, like everyday living (intelligent fridge etc.), healthcare, work and collaboration, or sport/lifestyle (virtual trainers).
He then shows a healthcare prototype demo which was actually in use for 3 weeks.
Healthcare prototype from MrTopf on Vimeo
One problem, though, was the gesture support: a gesture can quickly end up looking like you are crossing yourself. The next prototype therefore used a capacitive sensor instead. What doctors don't want is to touch a stylus or similar, as they need their hands free for their patients.
Then he showed a similar video of an industry prototype from Toyota's assembly line, where workers test the car. This was done together with ETH Zürich. The worker has quite a lot of sensors on him: muscle activity, his position at the car, a velocity sensor for e.g. closing doors and so on. The system also helps the worker by explaining what to do.
The last application scenario is about sports. They are measuring kung-fu movements. This might also be interesting for the entertainment sector like games.
The Context Recognition Network Toolbox allows you to do rapid prototyping of context recognition systems.
- data-flow oriented
- component based
- open-source (LGPL)
- written in C++
- requires POSIX threads
- available at http://crnt.sf.net
Done by David Banach et al.
It also runs on e.g. the iPhone, iPod touch etc.
Disclaimer: this is research work, so the documentation might not be that up to date.
Another thing he wanted to pitch is a context logger for iPhone/iPod touch, but it has not been accepted by Apple yet, so it's not on the App Store.
What it does is let you log accelerometer data for different types of activities. Then you can look at the data and see what accelerometer data looks like, in case you want to use it for something. So it's more interesting for research people, but there is more to come.
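As a minimal sketch of what inspecting such logged data could look like (the record format here is invented, not the app's actual format): group samples by activity label and look at a simple statistic per activity.

```python
from collections import defaultdict

def summarize(rows):
    """rows: (activity, x, y, z) accelerometer samples in g.
    Returns {activity: (sample count, mean acceleration magnitude)}."""
    groups = defaultdict(list)
    for activity, x, y, z in rows:
        groups[activity].append((x * x + y * y + z * z) ** 0.5)
    return {a: (len(m), sum(m) / len(m)) for a, m in groups.items()}

# Invented example log
rows = [
    ("walking", 0.3, 0.9, 0.4),
    ("walking", 0.5, 1.2, 0.2),
    ("sitting", 0.0, 1.0, 0.0),
]
print(summarize(rows))
```

Even a summary this crude makes it visible that different activities produce differently shaped accelerometer traces.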
This is some of his PhD work.
The problem right now is the setup of the sensors: it might take 40-50 minutes to set everything up, and things can fail quite easily. To solve this they are trying to use existing devices that already have sensors embedded, like phones. But then you don't know the device's exact location on the body, so displacement is a problem. The same is true for the orientation of the device. What are the coordinates?
„A mobile phone ringing or vibrating sounds different depending on where it is.“ From this you can derive its symbolic location.
This works as follows:
It samples acceleration and sound signatures by:
- playing a sound fingerprint
- sampling the sound
- activating the vibration motor
- sampling acceleration and sound
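The sampling steps above could be sketched like this; `play_sound`, `record_audio`, `vibrate` and `record_accel` are hypothetical platform hooks, not a real API.

```python
def sample_signatures(play_sound, record_audio, vibrate, record_accel,
                      duration=1.0):
    """Collect one sound and one vibration signature of the phone's
    current location (desk, pocket, bag, ...)."""
    play_sound("fingerprint.wav")       # emit a known sound fingerprint
    sound_sig = record_audio(duration)  # how the surroundings color it
    vibrate(duration)                   # excite the surface mechanically
    accel_sig = record_accel(duration)  # how the surface damps the vibration
    return sound_sig, accel_sig

# Demo with stub hooks standing in for real device I/O
calls = []
sig = sample_signatures(
    lambda f: calls.append("play"),
    lambda d: "audio",
    lambda d: calls.append("vibrate"),
    lambda d: "accel",
)
print(sig)  # -> ('audio', 'accel')
```

The point of the active probing is that both the emitted sound and the vibration are known, so all variation in the recorded signatures comes from the environment.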
He showed the curves of the different accelerometer data, and indeed they look different depending on the surface.
Classification does not run on the device, though, but is done by batch processing with e.g. SciPy. There are two modes:
- pre-trained locations (result: „phone is on your office desk“; over 30 distinct locations possible)
- abstract classes (result: „phone is in a closed wooden compartment“; surface types: padding, glass, metal, … + compartment: …)
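The talk doesn't say which classifier is used; as an illustration of the batch-processing idea, here is a nearest-centroid sketch over made-up two-dimensional signature features.

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_label(sample, centroids):
    """centroids: {label: feature vector}. Returns the label whose
    centroid is closest to the sample (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lab: dist(sample, centroids[lab]))

# Invented 2-D features (e.g. vibration damping, sound brightness)
train = {
    "office desk": [[0.9, 0.8], [1.0, 0.7]],
    "trouser pocket": [[0.2, 0.1], [0.3, 0.2]],
}
centroids = {lab: centroid(vs) for lab, vs in train.items()}
print(nearest_label([0.95, 0.75], centroids))  # -> office desk
```

A real pipeline would extract many more features from the recorded signals, but the train-offline/classify-later split is the same.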
Is it really useful for normal people?
They did an experiment with a maintenance scenario: heads-up displays and three maintenance tasks on one machine, with skilled technicians as participants. They compared three modalities: a paper manual (500 pages, but slimmed down a bit), speech recognition (previous/next, a basic type), and context (some errors were defined up front and the technician gets a warning). All this was displayed on the heads-up display.
They are also trying to open-source this system, but no code is available yet.
They did this experiment several times.
Results: Context is the fastest modality, then comes speech (30% slower) and then paper (50% slower).
Question: Would you like to wear the system in everyday work? (13 yes, 1 no because of a contact lens problem, 4 indifferent)
Question: Which modality did you dislike the most? (17 disliked paper most, 1 context/speech)
So the net result is: It’s useful!
Privacy (special CCC topic)
First: there ain't no such thing as a free lunch (if I want to use a mobile phone, credit card etc., I need to live with the consequences).
Then he wants to raise awareness. Many people don’t know what information can be datamined from e.g. a phone.
Last point: Making big brother smaller, meaning helping people to make the right decision for the right system.
Small example: keep the last puzzle piece, i.e. do the final computation on the device the person wears. He shows a video where a camera tracks persons anonymously; combined with the accelerometer data on your phone, the tracking from the camera can be matched to you (e.g. if you stop, the camera notices it and your sensor does too, so your own device can work out which track is you).
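A toy sketch of that matching idea, with invented per-second moving/stopped flags: the phone compares each anonymous camera track against its own accelerometer signal and keeps the best match, so only the wearer's device learns the identity.

```python
def agreement(camera_moving, phone_moving):
    """Fraction of time steps where an anonymous camera track and the
    phone's accelerometer agree on moving vs. stopped."""
    matches = sum(c == p for c, p in zip(camera_moving, phone_moving))
    return matches / len(camera_moving)

# Invented per-second moving(1)/stopped(0) flags
track_a = [1, 1, 0, 0, 1, 0]   # anonymous camera track A
track_b = [0, 1, 1, 1, 0, 1]   # anonymous camera track B
phone   = [1, 1, 0, 0, 1, 0]   # this phone's own motion signal

best = max(("A", track_a), ("B", track_b),
           key=lambda t: agreement(t[1], phone))
print(best[0])  # -> A
```

The privacy point is that the camera side never needs to know which track belongs to whom; the correlation step, the "last puzzle piece", stays on the user's device.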
Technorati Tags: 25c3, ccc, wearablecomputing, bcc, berlin, conference, congress