
Long Number Entry and Equal Opportunity

August 30th, 2009 by

I recently worked on a security application in which employees used a swipe card to identify themselves at certain locations. Each identification event was transmitted to a server, whose database contained the mapping from the serial number of the swipe card to the name of the employee.

All of the employee records existed in a database. As we added cards to the system, the long and unwieldy swipe card number had to be entered manually into the employee’s record to create the mapping. As with any long-number entry, there was a risk that the number would be entered incorrectly. Checksums within the serial number detected some faults, but it was still not very satisfactory. And if the mistake was not discovered immediately, it led to hassle for the employee and supervisor the first time the card was used.

As we thought through this challenge, one option would have been to add a card reader to the server, or to the PC accessing the server, and allow a card to be swiped instead of typing in the number. This introduced a number of unwelcome complications. The reader would have to be different from the already installed readers, since the communications link was different. The server was accessed via the web, so you could not be sure where the user would be located, and restricting them to one location or PC would be troublesome.

Many solutions look so obvious after the fact and this was one of those cases. We eventually realized that the scenario where a card is unrecognised also provided the ideal opportunity to enter the correct user name.

So when the card is not recognised, instead of simply rejecting the user, the supervisor has the opportunity of picking a name from a list of employees, and the mapping from card number to employee is then created. This solution requires no extra hardware. The supervisor no longer has to type in long, error-prone numbers. The card can be swiped at any location, so when a new employee receives his card he simply goes to his nearest access point and, over the phone, tells the supervisor that he is about to swipe for the first time. The swipe is unrecognised, and the supervisor sees the new serial number and its location, and links it to the appropriate employee.
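As a rough sketch of how that unrecognised-swipe path could be handled on the server side, consider the C fragment below. The data structure and function names are my own inventions for illustration, not taken from the real system, and the supervisor’s pick from the employee list is stand-in code where a real UI would present a menu.

#include <stdio.h>
#include <string.h>

#define MAX_EMPLOYEES 100

/* Hypothetical record mapping a card serial number to an employee name. */
struct card_mapping {
    char serial[32];
    char name[64];
};

static struct card_mapping table[MAX_EMPLOYEES];
static int num_mappings = 0;

/* Return the employee name for a serial, or NULL if the card is unknown. */
static const char *lookup_card(const char *serial)
{
    for (int i = 0; i < num_mappings; i++) {
        if (strcmp(table[i].serial, serial) == 0) {
            return table[i].name;
        }
    }
    return NULL;
}

/* Called when a swipe arrives from a reader.  If the card is unknown, the
 * supervisor sees the serial number and location, picks a name from the
 * employee list, and the mapping is stored. */
static void handle_swipe(const char *serial, const char *location)
{
    const char *name = lookup_card(serial);
    if (name != NULL) {
        printf("Access: %s at %s\n", name, location);
        return;
    }

    printf("Unrecognised card %s at %s - supervisor to assign\n",
           serial, location);

    /* In the real system the name would come from a list in the UI;
     * here a stand-in for the supervisor's choice is hard-coded. */
    const char *chosen = "New Employee";
    if (num_mappings < MAX_EMPLOYEES) {
        strncpy(table[num_mappings].serial, serial, sizeof table[num_mappings].serial - 1);
        strncpy(table[num_mappings].name, chosen, sizeof table[num_mappings].name - 1);
        num_mappings++;
    }
}

int main(void)
{
    handle_swipe("0049-1123-7765", "Main entrance");   /* unknown: gets mapped */
    handle_swipe("0049-1123-7765", "Main entrance");   /* now recognised */
    return 0;
}

The point is that the serial number travels straight from the reader into the mapping table; nobody ever types it.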

This is a case of applying a principle called equal opportunity: something that was delivered as output to the user can be turned around and used as input, so the user never has to re-enter data they have already been given. Another example would be receiving a call on your cell phone from an unrecognised number and being allowed to add it as a contact without having to type in the number all over again.

My online tutorial on equal opportunity provides a number of other examples of this useful technique:
http://www.panelsoft.com/tut_equal/index.htm

Measurement Changes Everything

August 8th, 2009 by

Introducing an electronic or computerized system to a human activity often opens up opportunities to measure aspects of the activity that could previously not be monitored in any cost-effective way. This column strays a little from pure usability issues: the nature of the measurements you take is partly a feature-set decision, but it also dramatically changes the relationship between your device and the people using it. We will look at systems as diverse as call centers, pulse oximeters, and running aids that tell athletes their pace and distance.

Consider a call center routing phone calls to a team of support or sales staff. Once the call routing is computerized, it is possible to measure the exact duration of each call, the time before a call is picked up, and the amount of time between calls. These numbers can be used to measure the productivity of employees. Once that productivity can be measured, many steps can be taken to increase it, such as linking pay with the percentage of a person’s time spent on the phone. A case can be made that such measurements make the workplace more pressurized, less pleasant, and might occasionally cause the most talented employees to seek employment elsewhere. That is a quasi-ethical discussion that I am not interested in addressing here – the main point is that the measurements, which are a side effect of the call-routing system, can have a profound effect on the control of the system.
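To make the point concrete, here is a small sketch of the kind of per-agent statistics that fall out of computerized call routing almost for free. The record layout and the figures are invented for illustration.

#include <stdio.h>

/* Hypothetical call record: timestamps in seconds since the shift started. */
struct call_record {
    int ring_start;   /* when the call started ringing at the agent */
    int answered;     /* when the agent picked up                   */
    int ended;        /* when the call finished                     */
};

int main(void)
{
    struct call_record calls[] = {
        {   0,    8,  190 },
        { 250,  255,  410 },
        { 500,  512,  640 },
    };
    int n = sizeof calls / sizeof calls[0];

    int talk_time = 0, answer_delay = 0, idle_time = 0;

    for (int i = 0; i < n; i++) {
        talk_time    += calls[i].ended    - calls[i].answered;
        answer_delay += calls[i].answered - calls[i].ring_start;
        if (i > 0) {
            idle_time += calls[i].ring_start - calls[i - 1].ended;
        }
    }

    printf("Average call duration : %d s\n", talk_time / n);
    printf("Average time to answer: %d s\n", answer_delay / n);
    printf("Total idle time       : %d s\n", idle_time);
    return 0;
}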

In other cases the measurement is not an incidental by-product but the core purpose of the system. In medical devices, many innovations have been in the area of providing real-time measurements of attributes that had previously been only occasionally measurable. Pulse oximeters give real-time feedback on blood oxygenation levels. Previously a blood test was required, which limited the number of samples that could be taken, and by the time the sample data came back from the lab the patient’s condition could have changed. Modern respiratory therapists can make minor adjustments to a patient’s lung ventilator or to their medication, and then observe second-by-second the impact of the adjustment. It even opens up the possibility that the control loop could be closed by automatically adjusting lung ventilator parameters in response to changes in blood oxygenation levels. This has been implemented in experimental cases, but is not a mainstream solution.
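Purely to illustrate what closing the loop means, here is a toy proportional controller that nudges a hypothetical oxygen setting towards a target saturation. The gains, limits and the one-line patient model are all invented; a real ventilator control loop would need far more than this, which is part of why it is not yet mainstream.

#include <stdio.h>

/* Toy proportional loop, for illustration only: nudge a hypothetical oxygen
 * setting (FiO2) towards whatever keeps the measured SpO2 near a target. */
int main(void)
{
    const double target_spo2 = 94.0;    /* percent */
    double fio2 = 0.30;                 /* fraction of inspired oxygen, 0.21 to 1.0 */
    double spo2 = 88.0;                 /* simulated starting reading */

    for (int minute = 0; minute < 8; minute++) {
        double error = target_spo2 - spo2;

        /* Small proportional adjustment, clamped to a plausible range. */
        fio2 += 0.005 * error;
        if (fio2 < 0.21) fio2 = 0.21;
        if (fio2 > 1.00) fio2 = 1.00;

        /* Crude stand-in for how the reading might respond to the change. */
        spo2 = 88.0 + 150.0 * (fio2 - 0.30);
        if (spo2 > 100.0) spo2 = 100.0;

        printf("minute %d: SpO2 %.1f%%  FiO2 %.2f\n", minute, spo2, fio2);
    }
    return 0;
}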

At one point in my career I thought that the cutting edge of medical device development was working on therapeutic devices that delivered treatment to the patient, but I now realize that measurement can have just as big an impact on patient outcome. If doctors’ decisions are based on guesswork rather than data, they are going to make less precise diagnoses.

Another area that has been revolutionized by measurement is running. In the past few years, consumer devices which tell you how far and how fast you run have come to market and proved extremely popular. Some are based on a foot sensor that measures how many paces you have taken, and others are based on GPS technology to measure the distance and route of the run. This is not just a replacement of a stopwatch – because the feedback is real time during the run, these devices can provide a motivating influence that is almost as good as having a running companion who is always a couple of seconds quicker than you.

One of the big advantages of using a gym (which I do not visit often enough, but that is another story) is that most activities are precisely measurable. If I am on the treadmill for 20 minutes, it can tell me precisely how far I ran and how fast. If I return to the gym tomorrow (well OK – next week), I can try to equal or better that run, which is a huge motivational factor. Even the gym activities that do not involve electronics allow me this precision of recording. The number of chin-ups or the weight that I bench press are numbers that are easy to record.

Road running always contains a vagueness that does not exist in the gym. Unless I run precisely the same route, I cannot compare times. Even on the same route, it is difficult to know how I am performing while I am on the run. I want real-time feedback, not just a result at the end – too late to motivate me to do a final sprint. A system that measures my pace as I run and tells me whether I am faster or slower than my target speed revolutionizes road running. I can now pick any road and just go.

It is interesting to note a couple of the differences between the two main technologies. A shoe sensor requires calibration to match your typical stride length. A GPS-based system avoids this issue and also offers the advantage that it can record the actual route and later superimpose it on a map when the data is downloaded to your PC.
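The calibration step amounts to learning a stride length from one run of known distance and then multiplying it by the step count; the sketch below shows the arithmetic, with the numbers invented for illustration.

#include <stdio.h>

/* Distance from a shoe sensor is just step count multiplied by a stride
 * length, so the stride length has to be learned from a run of known
 * distance. */

static double calibrate_stride(double known_distance_m, unsigned steps)
{
    return known_distance_m / (double)steps;    /* metres per step */
}

static double estimate_distance(double stride_m, unsigned steps)
{
    return stride_m * (double)steps;
}

int main(void)
{
    /* Calibration run: one 400 m track lap took 520 steps. */
    double stride = calibrate_stride(400.0, 520);

    /* Later run: the sensor counted 6500 steps. */
    printf("Stride: %.2f m, estimated distance: %.0f m\n",
           stride, estimate_distance(stride, 6500));
    return 0;
}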

The Nike+ system is a collaboration between Nike and Apple. At first glance I was surprised that Apple did not opt for the GPS solution. They are the kings of simplicity, and would have wanted to avoid the calibration step. While I am not sure what the rationale was, a case could have been made that GPS raised other complexity issues – battery life means that you have to remember to charge the device between runs, at least for the Garmin wristwatch-based devices. While GPS adds lots of extra information because it knows the route, it could be argued that most of that information is superfluous – the runner just wants to know how far and how fast. So maybe their rationale was to measure the things that help motivate the runner, while avoiding flooding them with so much data that only a statistics junkie would be interested.

There is a good article on the motivational effect of the Nike+ at http://www.wired.com/medtech/health/magazine/17-07/lbnp_nike

I also wondered about the effect of a shoe sensor that might have accuracy issues as terrain, weather and fitness levels varied. It then struck me that the accuracy would rarely be tested, and so an error of a few percent would go unnoticed by the runner. If you are trying to break your personal best for five miles and your monitoring device tells you that you have improved by ten seconds when in fact you have improved by five, you will never know. The important thing is that the motivational effect of seeing an improvement is still there. Accuracy only matters if you are comparing the results to some reference, and these devices are not used to record world records, so no one will really care.

Adding measurement and recording to a device can also introduce a bond between the device and its owner, which has advantages in terms of marketing – the owner feels the device is like a pet dog that actually knows them. The disadvantage for the owner is that it can become impossible to share the device. If I borrow my wife’s Garmin 405 to go for a run, it cannot tell the difference between my run and the ones my wife has done, and so her weekly mileage stats will be messed up. They could have incorporated a multiple-user feature, but that adds complexity, and I guess the marketing people figured it might also reduce sales. They would obviously prefer that each runner buy their own rather than have one device shared between two or more runners.

Have a look at your own designs and see if there are opportunities for game-changing measurements that will alter the way your system motivates the user.

More Smooth Sounds

May 31st, 2009 by

People are not very good at distinguishing the pitch of two sounds unless they hear them close together. My Casio watch makes good use of a fairly subtle change in tone. The watch has several modes: normal time, stopwatch, set alarm and dual time. The mode button cycles from one to the next. When I am finished using a mode, I can press the mode button to leave it, but I have to visit each of the remaining modes before I get back to normal mode. At first I found this a bit frustrating. The number of button presses required to get back to normal mode varied depending on which mode I was leaving. If I had just used the stopwatch, I had to iterate through set alarm and dual time to get to normal mode, so that took three presses, but if I had just been using dual time it would only take one press. Of course, one press too many means that you have to iterate through the whole list again.

Then I noticed that the tone of the key beep when entering normal mode was slightly different from the key beep when entering any other mode. The interface designer was giving me an easy way to know when I had reached the end of the list. So now, rather than watching each mode appear on the display, I press the button quite quickly and listen for the beep to change.
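A minimal sketch of that behaviour is below: the same button advances through the mode list, but the beep played on arriving back at the normal time display has a different pitch. The mode names and frequencies are my guesses, not Casio’s.

#include <stdio.h>

enum mode { MODE_TIME, MODE_STOPWATCH, MODE_SET_ALARM, MODE_DUAL_TIME, MODE_COUNT };

static void beep(unsigned freq_hz)
{
    /* Stand-in for driving a piezo buzzer. */
    printf("beep at %u Hz\n", freq_hz);
}

static enum mode mode_button_pressed(enum mode current)
{
    enum mode next = (enum mode)((current + 1) % MODE_COUNT);

    /* Different tone when we arrive back at the normal time display. */
    beep(next == MODE_TIME ? 2700u : 2000u);
    return next;
}

int main(void)
{
    enum mode m = MODE_STOPWATCH;
    for (int presses = 0; presses < 3; presses++) {
        m = mode_button_pressed(m);   /* set alarm, dual time, then normal time */
    }
    return 0;
}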

I have used other devices that have different tones to convey different messages, but they rarely succeed because it is too tricky to remember what each tone represents. There are, however, some ways to encode meaning into sounds, sometimes humorously called ‘earcons’. A rising tone suggests success or happiness, while a descending tone implies the opposite.

Beeps can get closer together to imply that some threshold is about to be met. Anti-bump sensors in cars provide these beeps while reversing. Note that the driver cannot usually map the spacing of the beeps to the distance from the object behind. However, the driver can judge the spacing of the beeps relative to the spacing a second earlier, so he knows he is getting closer. This changing sound also has a natural limit: the shrinking gap between beeps eventually leads to a constant tone, implying that collision is imminent, or perhaps has already occurred.
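A sketch of that kind of mapping is shown below: the gap between beeps shrinks as the obstacle gets closer and collapses into a continuous tone below some minimum distance. The thresholds are invented for illustration.

#include <stdio.h>

/* Map distance to the gap between beeps. */
static unsigned beep_gap_ms(unsigned distance_cm)
{
    if (distance_cm < 30) {
        return 0;                    /* continuous tone: collision imminent */
    }
    if (distance_cm > 150) {
        return 1000;                 /* far away: slow, relaxed beeping */
    }
    /* Linear mapping from 30..150 cm to 100..1000 ms between beeps. */
    return 100 + (distance_cm - 30) * 900 / 120;
}

int main(void)
{
    for (unsigned d = 200; d > 10; d -= 20) {
        printf("%3u cm -> gap %4u ms\n", d, beep_gap_ms(d));
    }
    return 0;
}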

A much less critical application is a kid’s music keyboard. We have one at home with five volume levels. When you turn the device on it always defaults to five – the loudest. It is always easier to convince a kid to turn up the music than to turn it down, so why not default to a quieter volume? An even better solution would be to remember the last used volume, but that might have a cost impact since some non-volatile storage would be required. Without that ideal option, pick a default volume that is the least disruptive.
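If the cost of a little non-volatile storage can be justified, the remember-the-last-volume idea only takes a few bytes. The sketch below assumes a simple EEPROM-style read/write interface (the access functions here are placeholders for whatever the hardware provides) and falls back to a quiet default when nothing valid has ever been stored.

#include <stdio.h>
#include <stdint.h>

#define VOLUME_DEFAULT  2u      /* quiet-ish, out of 1..5 */
#define VOLUME_MAGIC    0xA5u   /* marks the stored value as valid */

static uint8_t fake_eeprom[2];  /* stand-in for real non-volatile storage */

static void eeprom_write(unsigned addr, uint8_t value) { fake_eeprom[addr] = value; }
static uint8_t eeprom_read(unsigned addr)              { return fake_eeprom[addr]; }

static uint8_t load_volume(void)
{
    if (eeprom_read(0) == VOLUME_MAGIC) {
        uint8_t v = eeprom_read(1);
        if (v >= 1 && v <= 5) {
            return v;            /* last volume the user chose */
        }
    }
    return VOLUME_DEFAULT;       /* never stored, or corrupted */
}

static void save_volume(uint8_t v)
{
    eeprom_write(1, v);
    eeprom_write(0, VOLUME_MAGIC);
}

int main(void)
{
    printf("Power-on volume: %u\n", (unsigned)load_volume());   /* default: 2 */
    save_volume(4);                                             /* user turns it up */
    printf("Next power-on : %u\n", (unsigned)load_volume());    /* remembered: 4 */
    return 0;
}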

These scattered examples of the use of sounds in the user interface might influence your thinking the next time you add a beep.

Sounds Exciting

May 22nd, 2009 by

Sounds, beeps, buzzes and clicks can be a useful addition to your user interface, or they can be its most exasperating parts. Subtle sounds can give feedback that a button press was detected, and less subtle sounds can inform you that urgent action is required or your machine, or car, or patient might be permanently damaged. But badly designed sounds can annoy the user, and everyone near the user.

One common assumption is that your system is the central focus of the user’s attention. That leads to the assumption that if your device is making an ‘urgent’ noise, it will be attended to quickly. This violates another rule that I often refer to: the designer should always assume that the user is very busy. While you are designing one piece of medical equipment, it is easy to forget that the user may be responsible for thirty pieces of equipment spread across six patients. When your device starts beeping urgently, it may mean that a patient needs urgent attention, or it might mean that the machine is sitting at the side of the room waiting to be used, not interacting with a patient in any way. So for urgent noises it is important to investigate the worst-case, most dangerous scenario, but it is also important to investigate the most benign case, to find out whether the harmless case is going to lead to nuisance noises.

Another common decision is to sound a buzzer if the device is stuck in reset, or in some other ‘not working properly’ mode. I recently worked on an after-market automotive device that did this. If some configuration parameters were not set up correctly, the device would beep constantly. It seemed reasonable, since you never wanted to drive down the road with the wrong configuration parameters. The problem was that the device would often be in this state in the workshop while the vehicle’s wiring was being installed. While the vehicle was being worked on, no one cared about the configuration, but the noise drove everyone nuts – they used to ring me up rather than e-mail me, just so I would hear how annoying the background noise was and be persuaded to add an override feature.

You should always give some thought to whether noises can be silenced for these special cases. For the service guys, the ideal thing would be a simple DIP switch to turn off the buzzer, but of course the danger with that sort of measure is that it would be left turned off when the device shipped (or a user would turn it off) and then the buzzer would be disabled in the field. So you have to find a balance between ‘easy to override’ and ‘hard to accidentally override’. The solutions tend to be application specific.
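One possible compromise, sketched below, is to make the service mute a RAM-only flag that expires after a fixed time and is lost at power-up, so it can never ship disabled. The duration and the tick arrangement are invented for illustration; other applications will need different trade-offs.

#include <stdio.h>
#include <stdbool.h>

#define MUTE_DURATION_TICKS  (8u * 60u * 60u)   /* roughly one working day, 1 s ticks */

static unsigned mute_ticks_left = 0;

/* Deliberate action taken in the workshop, e.g. a hidden key sequence. */
static void service_mute_request(void)
{
    mute_ticks_left = MUTE_DURATION_TICKS;
}

/* Called once per second from the system tick. */
static bool buzzer_allowed(void)
{
    if (mute_ticks_left > 0) {
        mute_ticks_left--;
        return false;
    }
    return true;                 /* mute has expired, or was never requested */
}

int main(void)
{
    service_mute_request();
    printf("buzzer allowed now?   %s\n", buzzer_allowed() ? "yes" : "no");

    mute_ticks_left = 1;         /* fast-forward to the end of the mute period */
    buzzer_allowed();            /* consumes the last tick */
    printf("buzzer allowed later? %s\n", buzzer_allowed() ? "yes" : "no");
    return 0;
}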

If the user cannot override the sound, it can lead to undesired behaviour. Consider a passenger seat with a weight sensor that will beep if the vehicle moves while the seat belt is unbuckled. Now a driver who regularly leaves a bag of shopping on the passenger seat gets beeped at even though there is no passenger present – just a bag of groceries that weighs enough to be a small person. So the driver plugs the seat belt in and leaves it that way permanently, just to make sure the beep never happens. Of course this makes it more difficult for a real passenger to put on the belt on the occasions when this driver has a passenger, and it might even discourage that passenger from using the belt at all, which is the opposite of the original design intention. If the original warning sound had been only temporary and more subtle, the driver might not have felt obliged to work around it, and the problem might have been avoided. Again there is a trade-off between making the sound so annoying that the user cannot ignore it, and allowing for the case where that sound is actually a false alarm.
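As a sketch of the gentler alternative, the reminder below only chimes while the seat is loaded, the belt is unbuckled and the vehicle is moving, and it gives up after a fixed number of chimes instead of nagging forever. The thresholds and limits are invented for illustration.

#include <stdio.h>
#include <stdbool.h>

#define SEAT_LOADED_KG      25
#define MAX_CHIME_COUNT     10

static unsigned chimes_played = 0;

static bool should_chime(bool moving, int seat_weight_kg, bool belt_buckled)
{
    if (!moving || belt_buckled || seat_weight_kg < SEAT_LOADED_KG) {
        chimes_played = 0;       /* condition cleared: re-arm the reminder */
        return false;
    }
    if (chimes_played >= MAX_CHIME_COUNT) {
        return false;            /* give up politely; a false alarm stays bearable */
    }
    chimes_played++;
    return true;
}

int main(void)
{
    /* 70 kg load on the seat, unbuckled, vehicle moving: chimes up to the limit. */
    for (int i = 0; i < 15; i++) {
        if (should_chime(true, 70, false)) {
            printf("chime %u\n", chimes_played);
        }
    }
    return 0;
}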

When designing a sound, always consider whether it is just for the ears of the user, or whether others will hear it. If you have a ‘failure sound’ (usually implied by a descending tone) when the user presses an illegal key, then the people near the user will hear it and get the impression that the user is doing badly (if the user is a doctor, and the patient can hear the ‘wrong key’ noise, that can reduce the patient’s confidence in the doctor!). No user will thank you for announcing to the world that they pressed the wrong button, and these users will quickly want to silence the device. Having an easy-to-access mute or volume facility is vital for any noise-making device. Beeps that might be very subtle in a noisy lab environment may be deafening in a library.

For the urgent noises that you might not want to put under volume control, consider having long silences between the beeps. If a noise is too intrusive, the user’s priority becomes silencing the noise, not solving the problem. In medical devices I often see staff silence an alarm before they make any attempt to read the alarm message or to assess the patient. It is obvious that their first thought was ‘How do I stop the noise – it is driving me nuts’ when it should have been ‘What is the patient care issue that needs to be addressed?’. Design noises to inform the user, not to bully them into responding.

My next post will point out some particularly bad uses of sound, and some particularly good ones. In the meantime, if you have any examples of your own, let me know.

(Code) Size isn’t everything…

May 7th, 2009 by

I have been looking at some code sizes recently and wondering why GUI code gets so darn big. I can understand that compiled-in fonts and bitmaps are bulky, so the executable can get big, but even when measuring lines of code, the GUI always seems to take up more lines than the rest of the system put together.

I pointed this out on one project where we had about 12 KLOC (thousand lines of code) for the GUI and about the same for the rest of the system. One of the other engineers quite reasonably asked if I had included the third-party library that we were using. No – that was another 30 KLOC that I had left out, since it was not ‘our’ code (i.e. we did not write or maintain it). That 30 KLOC dwarfs our system, though I guess we probably only used about 20% of it. Still, a fair chunk of the graphics code was done for us before we even started.

So even when a lot of the drawing and filling routines and screen drivers are left out, you still find yourself with tons of code to manage what is visible on the screen. And I have seen this pattern repeat on many projects.

“So what?” you might say. The interesting thing for developers is that if the company is making photocopiers, then some of its programmers are going to have knowledge specific to photocopying, ink pump control and the like. But once their photocopiers get more complex and have a GUI on board, the company is going to spend just as much on GUI expertise. If they do not purchase some third-party tools, they are going to spend an awful lot of time, money and resources on graphics code.

All this is worth knowing when you are hiring a new engineer, or deciding whether to buy a GUI library or write one yourself. Having a sense of the size of the challenge helps drive good project management decisions.

In fairness, the 50/50 split might be slightly exaggerated. Controls code is often shorter but more difficult to write than GUI code, so a thousand lines of one is not equivalent to a thousand lines of the other.

The numbers still tell me that there is a real benefit in third-party tools that dramatically reduce the workload of the GUI portion of the project, since that is a big chunk of the total, and there are even cases where you can justify spending more on the hardware if it makes the programming job easier.

If you have used any tools that you think tipped the balance on the amount of GUI code that you had to write, let me know.