I’m proud to be the latest member of the embeddedgurus.net team. Welcome to my blog, Area 0x51.
Given the name Area 0x51, there can only be two topic choices for my first blog. I can either discuss alien embedded engineering techniques (AEET) or the venerable 8051 processor. Let’s go with the 8051. As luck would have it I’ve recently been working on a ZigBee one-chip-wonder based on the 8051. In general this is an amazing chip. It runs at a speed that could only be imagined when the original 8051 was introduced. In addition to the ZigBee radio it contains a pile of peripherals and a reasonable amount of RAM and FLASH ROM. Indeed, this chip allows a fully functional ZigBee device to be built with very few components.
But all is not well in this 8051 kingdom. The chip is true to its 8-bit CPU ancestry and suffers from the 64K address limit… and from there things go downhill.
One of the chip’s on-board peripherals is a relatively standard UART. It has a holding register to buffer the next character while one is being sent from the transmit register. When sending of the current byte is finished, the content of the holding register is loaded into the transmit register and the process of sending the next character begins. When the byte in the holding register is moved into the transmit register, a bit is set in the UART status register indicating the holding register has room for a new byte. All this is very nice and very traditional.
Unfortunately, the geniuses who designed the chip decided that the power-up default value of the UART status register would indicate the holding register was full. This is downright silly. What were they thinking? The default state of the chip says the UART cannot accept a character to transmit. The programmer has to just KNOW that no meaningful byte will be overwritten. Sending the very first byte on the UART peripheral is an act of faith. One must write to a register when the status says that register is full.
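A minimal sketch of the driver-side consequence. The register names (`U0_STATUS`, `U0_TX_HOLD`) and the status bit are hypothetical, and they are modeled here as ordinary variables rather than memory-mapped SFRs so the example is self-contained; on the real chip they would be SFR declarations from the vendor header. The point is the one-time "act of faith": a naive poll-until-empty loop would hang forever on the very first byte.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical UART registers, modeled as plain variables for illustration.
 * On real hardware these would be memory-mapped special function registers. */
#define TX_HOLD_FULL 0x01u                 /* 1 = holding register occupied */

static volatile uint8_t U0_STATUS = TX_HOLD_FULL;  /* the quirk: powers up "full" */
static volatile uint8_t U0_TX_HOLD;

/* Send one byte. Because the status bit lies after reset, the driver must
 * skip the empty-wait exactly once and write anyway. */
void uart_putc(uint8_t c)
{
    static bool first_byte = true;

    if (first_byte) {
        first_byte = false;                /* act of faith: write despite "full" */
    } else {
        while (U0_STATUS & TX_HOLD_FULL)
            ;                              /* normal case: poll until empty */
    }
    U0_TX_HOLD = c;
    U0_STATUS |= TX_HOLD_FULL;             /* hardware sets this on each write */
}
```

An alternative workaround is to clear the status bit once during UART initialization, which keeps the transmit routine itself clean; either way, the driver carries a scar from the hardware's bogus reset state.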
A more serious challenge is the basic chip architecture. The interrupt vector table resides in FLASH memory. For most programs this presents no problem. Just burn the program and the associated interrupt vectors into FLASH and start running. However, this architecture makes an entire class of programs much more difficult to implement. Programs that may load different interrupt service routines must use an indirection table, since writing a new interrupt vector into FLASH is prohibitively complex. This complexity includes the necessity of erasing the FLASH page before it can be written, which means content must be saved, interrupts must be disabled, and more. Writing a bootloader (my recent task) becomes extraordinarily difficult. The bootloader must never be overwritten, so its interrupt vectors must always be available in some fashion. The solution amounts to an arcane mixture of tables and function pointers, intercepted interrupts, and manual tracking of bootloader and application interrupt vectors. It would have been so, SO much easier if the interrupt vectors were in RAM and initialized by a few bytes of code executed early in the power-up process.
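The indirection-table idea can be sketched as follows. All names are illustrative, not from any vendor SDK: the fixed flash vectors belong to the bootloader and never change; each one is a tiny stub that jumps through a function-pointer table in RAM, which the bootloader or the application rewrites at run time. On a real 8051 the stubs would be two-instruction assembly trampolines at the hardware vector addresses; here plain C functions stand in for them so the sketch runs anywhere.

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_VECTORS 8

typedef void (*isr_t)(void);

/* The indirection table lives in RAM, so it can be rewritten at will. */
static isr_t ram_vectors[NUM_VECTORS];

/* Catch-all for interrupts nobody has claimed yet. */
static volatile unsigned unexpected_irqs;
static void default_isr(void) { unexpected_irqs++; }

/* Called early in power-up, before interrupts are enabled. */
void vectors_init(void)
{
    for (size_t i = 0; i < NUM_VECTORS; i++)
        ram_vectors[i] = default_isr;
}

/* Bootloader or application installs (or swaps) a handler here. */
void vectors_install(size_t n, isr_t handler)
{
    if (n < NUM_VECTORS)
        ram_vectors[n] = handler;
}

/* A stub like this sits at each fixed flash vector address; the bootloader
 * owns it forever, and it simply dispatches through the RAM table.
 * Vector number 4 is an arbitrary choice for this example. */
void flash_stub_uart(void) { ram_vectors[4](); }

/* Demo handler used below to show a swap-in. */
static volatile int demo_fired;
static void demo_handler(void) { demo_fired = 1; }
```

The cost is one extra indirect call per interrupt, plus the bookkeeping of which table entries belong to the bootloader and which to the application — exactly the "arcane mixture" described above, but at least contained in one place.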
Both the UART and the interrupt vector architecture issues come under the category of “what were they thinking”? Like a T-shirt manufacturer sewing a scratchy tag into the neck of the shirt, it is clear the manufacturer did not actually use the product themselves, or perhaps has no structure in place to hear complaints from early testers. In a competitive world, why would you work so hard to get such amazing capabilities into the chip, yet stumble on something as simple as a UART, a design that has been standardized for decades? It only makes sense that a set of experienced eyes should take a close look at new designs before they go into production. This is an extraordinarily cheap insurance policy that would prevent “stupid mistakes” and “dumb ideas” from being shipped. You have to wonder why management didn’t make use of experienced internal reviewers or perhaps an outside review by a company like Netrino. Was it hubris, or was it the ever-present short schedule that allowed no time for such niceties as a review or a “stop-and-think”?
In my career, I’ve seen so many occasions where project management has driven an immature project to market that I can hear the echoes of their demands as I type: “Hurry, hurry, hurry… We have no time for a design review, or that code review, or to fix that bug. We must ship TODAY”. So many of the companies where I heard this are out of business now that you would think it would be blatantly obvious that rushing bad products to market eventually has a devastating effect on the company.
Unfortunately, this learning rarely takes place, since those who prematurely rushed buggy products to market are, in fact, often congratulated for their skill and ability to drive a project to completion. Indeed, the corporate bonus structure is often based on the ability to deliver on commitments and stay on schedule. Furthermore, hiding bugs, or perhaps not looking very hard for them, allows the team and team leaders to bask in the glow of successfully meeting their schedule. The damage inflicted on their company is subtle and may take years to kill it. By that time the culprits are too far removed to be tarnished by their misdeeds.
Over and over, I’ve seen job descriptions that require candidates to have a demonstrated ability to keep projects on schedule. I must say that I have NEVER seen a job description that required candidates to have a demonstrated ability to deliver quality products. Many companies talk the talk of quality, but few actually care enough to walk the walk of ensuring their engineers have the training and time to deliver quality products.
I’m sure the 8051 wonder chip I recently used will sell because it has amazing capabilities. The problem is this chip, as good as it is, did not achieve its true potential. Like so many products, it can only dream of what might have been.