Software-defined radio: The computer meets wireless

Farpoint Group –

I've mentioned software-defined radio (SDR) several times over the past few years, but I was surprised recently to discover I've never written a column on it. This is perhaps a minor oversight - SDR already plays a role in many radio designs, but it's not yet mainstream, for reasons I'll cover below. Regardless, SDR is one of the most important directions in wireless today, and one that will shape the products you'll be buying in just a few years.

What is SDR? Well, imagine a very-high-performance computer designed to run software that emulates (becomes) a radio. Imagine an antenna on, for example, the back of your PC, and a little hardware inside to convert the analog signals of radio into corresponding digital signals. Then we process this digital representation of the airwaves with software that performs all of the major functions formerly done by analog hardware in the radio. That's about it. And, in reality, many radio chipsets today, including those in your cell phone and wireless LAN, already do something a lot like this - almost all have digital cores that run firmware (software stored in the chip itself, and difficult or impossible to change) performing key radio functions.

But let's suppose for the moment that all we have is analog processing - radio waves are, after all, an analog phenomenon, since they belong to the "real world." Keep in mind here that all this "digital" stuff you hear about is just another way of representing that analog information. Analog is typified by continuous waves; digital represents a sampling of the amplitude (signal strength) of these waves at discrete points in time. The more samples we take per unit of time, and the more bits of resolution in each sample, the better the result - up to a point of diminishing returns. In general, sampling at more than twice the highest frequency in the analog signal is sufficient (this is the Nyquist criterion), but "oversampling" is now popular in many applications (particularly digital audio).
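To make sampling and resolution concrete, here's a minimal sketch in Python - the tone frequency, sample rate, and bit depth are all made up for illustration, not radio-grade parameters:

```python
import math

def sample(freq_hz, rate_hz, n_samples):
    """Sample a unit-amplitude sine wave at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n_samples)]

def quantize(samples, bits):
    """Map each sample to a signed integer code at the given resolution."""
    levels = 2 ** (bits - 1) - 1          # e.g. 8 bits -> codes in [-127, 127]
    return [round(s * levels) for s in samples]

# A 1 kHz tone sampled at 8 kHz -- comfortably above the 2 kHz Nyquist minimum.
tone = sample(1000, 8000, 8)
codes = quantize(tone, 8)
```

Sampling slower than twice the tone's frequency would alias it to a different frequency entirely, and fewer bits per sample would add quantization noise - exactly the trade-offs the paragraph above describes.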

Anyway, in a purely analog receiver (we'll only discuss receivers here due to limited space, but the transmit side isn't all that different), we first use an antenna to capture the signal (see the figure). This signal is typically very, very weak (billionths of a watt at best), so we use a special low-noise amplifier (LNA) to boost it. By low noise, we mean that only minimal distortion, if any, is added to the signal. One of the problems in analog processing is noise, so we need to make sure that when processing these very weak signals we don't do any damage. Low-noise amps are often based on gallium-arsenide technology, which is particularly good at linear, low-noise performance. We then feed the boosted signal to a mixer, which heterodynes the signal with a local oscillator that is used to select the particular frequencies of interest. This design goes back to its inventor, Edwin Armstrong, who did his work around 1918 - and it's still used, in various forms, in many radios today. This stage is also sometimes called downconversion, because we convert relatively high radio frequencies to lower frequencies that are easier to process. It's also sometimes called the first detector, because it's the first stage to actually deal with the radio signal itself.
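A mixer is, mathematically, just multiplication: multiplying two cosines produces new components at their sum and difference frequencies, and the receiver keeps the lower (difference) one. The sketch below demonstrates this numerically - the 100 kHz "radio" signal, 90 kHz local oscillator, and 1 MHz sample rate are invented for illustration:

```python
import math

RATE = 1_000_000          # 1 MHz sample rate (illustrative)
RF, LO = 100_000, 90_000  # 100 kHz incoming signal, 90 kHz local oscillator
N = 1000                  # 1 ms of samples

def tone(freq, n, rate):
    return [math.cos(2 * math.pi * freq * k / rate) for k in range(n)]

def mix(rf, lo):
    """A mixer multiplies: cos(a)cos(b) = 1/2[cos(a-b) + cos(a+b)]."""
    return [x * y for x, y in zip(rf, lo)]

def level_at(signal, freq, rate):
    """Magnitude of one DFT bin -- a crude 'is this frequency present?' probe."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * k / rate) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * k / rate) for k, s in enumerate(signal))
    return math.hypot(re, im) / n

mixed = mix(tone(RF, N, RATE), tone(LO, N, RATE))
```

Probing `mixed` shows energy at 10 kHz (the difference, which the IF stages would keep) and 190 kHz (the sum, which filtering would discard), and essentially none anywhere else.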


We then feed the output of the mixer to one or more intermediate-frequency (IF) stages, which perform additional filtering and amplification. After all of this, we're ready to demodulate the signal, extracting the original waveform representing music, voice, digital bits, or whatever was sent. This is sometimes called baseband processing.
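The column doesn't name a particular modulation scheme, so as one concrete example of demodulation, here's the simplest case - recovering the audio from an amplitude-modulated (AM) signal by rectifying it and smoothing the result. All the frequencies and the window size are invented for illustration:

```python
import math

RATE, CARRIER, MSG = 100_000, 10_000, 500   # Hz; illustrative numbers
N = 2000                                    # 20 ms of samples

# An AM signal: a carrier whose amplitude follows the message (depth 0.5).
am = [(1 + 0.5 * math.sin(2 * math.pi * MSG * k / RATE))
      * math.cos(2 * math.pi * CARRIER * k / RATE) for k in range(N)]

def envelope(signal, window):
    """Rectify, then smooth with a moving average -- a crude envelope detector."""
    rect = [abs(s) for s in signal]
    out = []
    for k in range(len(rect)):
        lo = max(0, k - window + 1)
        out.append(sum(rect[lo:k + 1]) / (k + 1 - lo))
    return out

demod = envelope(am, 10)   # window of one carrier period (10 samples)
```

The smoothed output rises and falls with the original 500 Hz message, which is exactly the waveform - music, voice, or bits - that baseband processing hands onward.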

The problem with all this analog stuff is that it's not very precise. It's subject to issues with thermal stability, for example, where the ambient temperature can seriously affect the ability of the radio to stay on frequency. Some radios even have built-in heaters to deal with this problem! But, mostly, analog is just expensive and complex. Digital, on the other hand, is cheap. Many engineers are great with digital but know only the basics of the analog world. Computers perform functions like this all the time. So, how can we apply digital techniques to a radio?

As you can see in the figure, it's actually pretty easy. We still need the antenna and the LNA, of course. At this point, we could use an analog-to-digital converter (ADC, or just A/D) to turn the waveform into bits using the sampling methodology we described above. But the frequency is still very high, so the design of this converter would be difficult and likely very expensive (to say nothing of power-hungry!). So we still need some form of downconversion in order to reduce the signaling rate to something more reasonable. After that, we can convert the signal directly to digital, and then do all of the additional filtering and demodulation in the digital domain. This technique is sometimes called direct conversion, as it eliminates the IF stages.
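One common way to do this in practice is to mix the incoming signal with a local oscillator at the signal's own frequency, which shifts the signal of interest all the way down to 0 Hz (DC), where cheap digital filtering can extract it. A minimal sketch, with invented frequencies, using a complex-valued oscillator to represent the usual I/Q pair:

```python
import cmath
import math

RATE, RF = 1_000_000, 100_000   # illustrative: 1 MHz sampling, 100 kHz signal
N = 1000                        # 1 ms of samples

rf = [math.cos(2 * math.pi * RF * k / RATE) for k in range(N)]

# Multiply by a complex local oscillator at the signal's own frequency:
# the component of interest lands at DC; the unwanted image lands at -2*RF.
baseband = [s * cmath.exp(-2j * math.pi * RF * k / RATE) for k, s in enumerate(rf)]

# Averaging is the simplest possible low-pass filter; only the DC term survives.
dc = sum(baseband) / N
```

The recovered DC value is half the original amplitude (the other half went to the image frequency, which the averaging filtered out) - the same sum-and-difference arithmetic as the analog mixer, now done entirely in software.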

What all of this means is that we're going to write software and run it on a special, high-performance processor to emulate the functions previously done in the analog world. The processor is usually what's called a digital signal processor, or DSP, a microprocessor that is particularly good at the addition and multiplication operations that are the essence of signal processing.
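Those addition-and-multiplication operations usually take the form of a multiply-accumulate loop, the workload DSPs are built to execute in a single cycle. A finite-impulse-response (FIR) filter - the bread and butter of digital radio filtering - is nothing more than that loop, sketched here with an illustrative four-tap moving average:

```python
def fir_filter(samples, taps):
    """FIR filter: each output sample is a sum of products over a short
    delay line -- the multiply-accumulate pattern DSPs are optimized for."""
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]      # shift the new sample into the delay line
        out.append(sum(h * t for h, t in zip(history, taps)))
    return out

# A 4-tap moving average smooths out sample-to-sample jitter.
smoothed = fir_filter([1, 5, 1, 5, 1, 5, 1, 5], [0.25] * 4)
```

Once the delay line fills, the alternating 1/5 input settles to its average of 3.0 - a crude low-pass filter in eight multiplies and adds per sample, the kind of operation a radio's DSP repeats billions of times a second.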

The real beauty of this approach is in the ability to change a radio's very nature by simply changing the software. Your cell phone could be entirely independent of GSM, CDMA, 3G, or whatever's next - just load new software, and you're off. It would also be easier to fix bugs and add new features, simply by downloading new software. Handset manufacturers and carriers could conceivably use this as a source of incremental revenue, something they're always after. The regulators are justifiably concerned, though, about hackers, viruses, and assorted other threats enabled by a fundamentally software-centric paradigm. Hijack a cell phone and use it for some nefarious purpose? Without appropriate security, such is indeed possible. But the FCC, for example, is very interested in the technology, and I think they'll be able to accommodate it when it becomes practical in consumer devices.

The real challenge of this approach, however, is battery life. How can we build a DSP that can execute perhaps 20 billion operations a second (15 years ago, that was supercomputer territory) without draining a battery in an instant, and without the handset becoming too hot to (literally) handle? Well, as the saying goes, they're working on it, and I expect steady progress in this space over the next few years. I would guess that by 2010, we'll see SDR in at least some handsets. It's already common in base stations, where power isn't such a big issue.

So, you're going to be hearing a lot more about SDR over the next few years. If you'd like to learn more, a search for "software-defined radio" will yield a lot of hits. I'd suggest, however, that you start with the SDR Forum, a trade association. Two companies you might also want to look at are Vanu and picoChip. Vanu is the leader in producing the software for SDR, and picoChip is building high-performance DSPs for wireless applications.
