Northern Utah WebSDR
Observations of the SDRPlay RSP1a when used at HF frequencies
What this page is not:
What follows is not an in-depth technical review of the RSP1a, but rather a series of observations. Various signal-handling parameters such as intercept point, RMDR and NPR under particular conditions are not discussed, being better left for other venues - see the links at the bottom of this page.
For this discussion, the focus will primarily be on its use on the HF spectrum which, for our purposes, will encompass 1-30 MHz. Because we are interested in the highest possible performance, we will use only the "14 bit" mode, which is available at sample rates between 2 and 6 MSPS. Furthermore, because our emphasis is on using the RSP1a with the PA3FWM WebSDR server, we'll be making our observations using the Linux operating system (Ubuntu 20.x) and version 3.07 of the SDRPlay API.
Using the RSP1a on HF:
The SDRPlay RSP1a is a relatively inexpensive (US$120 at the time of writing)
USB Software Defined Radio tuner capable of tuning from audio
frequencies through 2 GHz. Considering its cost, it has pretty
reasonable performance, having an advertised "14 bits" of A/D
conversion depth, selectable input filtering, a 0.5 ppm TCXO for
frequency stability and the ability to inhale up to at least 8 MHz of
spectrum at once - the bit depth being dependent on the actual sample
rate.
Exactly how this receiver performs and under what conditions is not well defined in its specifications, and observations that might shed light on this topic appear to be scattered across the web and/or are subjective in nature, so I decided to make some measurements of my own.
"Is it really 14 bits":
Immediately upon researching the RSP1a you will note that it doesn't actually use a 14 bit A/D converter per se: the MSi2500 delta-sigma converter is rated for 12 bits. Because a delta-sigma converter is, at its heart, a 1-bit A/D converter (comparator) surrounded by other circuitry, it's necessary for it to do its internal clocking at a rate far higher than the actual sampling rate. What this can mean is that given appropriate on-chip hardware (e.g. available bits in a shift register, low enough internal noise), this type of A/D converter can, when operated at a lower output sampling rate, yield additional LSBs (Least Significant Bits) of usable data in its conversion process - and this isn't really a "trick", but rather a useful property of this converter architecture that has been put to use for decades. If we take the SDRPlay folks at their word, that means that running the A/D converter's internal sample clock at a higher rate in proportion to a given output sampling rate can yield a greater bit depth.
For more information on this topic, see the write-up at RTL-SDR.com here: https://www.rtl-sdr.com/a-review-of-the-sdrplay-rsp1a/
16 bit data is available:
Even though a full 16 bits isn't available from the A/D
converter itself, the converter is actually run at a rate higher than
the sample rate specified in the driver configuration. For
example, if one specifies an output sample rate of 768 kHz, the RSP1a's
A/D converter is run at higher than that - and as an example, let's
presume that it is operating at 3072 kHz - precisely four times the
output rate. According to signal theory, 4x oversampling will
effectively yield an additional bit of A/D resolution
- and presuming decent hardware and software implementation, this extra
bit isn't just "made up", but is real. In the process of decimation - by which a higher sample rate is converted to a lower one - the data isn't simply reduced by throwing away three out of four samples. Rather, a sort of averaging of the data commonly occurs, preceded by low-pass filtering to prevent "aliasing" of the original wide-bandwidth data into the now-lower bandwidth - and in this process a "remainder" is generated mathematically which can contain more low-order bits than were available in the original. The ultimate result is that the RSP API presents 16 bit signed data to its client programs: the value of the raw I/Q data will be between -32768 and 32767.
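The decimation step described above can be sketched as follows. This is a minimal illustration of the principle and not the RSP1a's actual DSP: the function name and the crude boxcar filter are my own choices, and a real implementation would use a proper low-pass filter.

```python
import numpy as np

def decimate_by_4(samples_14bit):
    """Decimate by 4 with a crude boxcar low-pass (illustrative only).

    Summing each group of four 14-bit samples keeps the fractional
    "remainder" that plain division would discard, so the output
    carries real information in bits below the original 14-bit LSB -
    and conveniently occupies a 16-bit range (14 bits + log2(4)).
    """
    x = samples_14bit.astype(np.int32)
    sums = x[: len(x) // 4 * 4].reshape(-1, 4).sum(axis=1)
    return np.clip(sums, -32768, 32767).astype(np.int16)

# A noisy 14-bit test signal: after decimation, low-order bits of the
# 16-bit output are populated with usable data.
rng = np.random.default_rng(0)
t = np.arange(4096)
raw = np.round(4000 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 2, t.size))
out = decimate_by_4(raw.astype(np.int16))
```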
Is it real?
Whether it is really 14 bits or 16 bits or its equivalent is a matter of debate. To be
sure, if
you look at the specifications of almost any high-speed A/D converter -
such as the LTC2248 14 bit A/D used in the KiwiSDR - you'll notice that
it falls slightly short of the theoretical S/N specifications for the
number of bits (e.g. 6 dB/bit should yield 6 * 14 = 84 dB)
- but this is mainly due to the failure of real-world hardware to
exactly meet theoretical specifications owing to contributed noise and
nonlinearity in the signal path. What can be difficult to
ascertain is the real dynamic range of the system as
a whole owing to oversampling. For example, with just 14 bits,
the "theoretical" dynamic range will be the 84 dB value discussed
before - but this applies only at the full sample rate. Because
the sample rate of the A/D converter is much higher than the detection
bandwidth of the user (around 3 kHz SSB bandwidth) the answer is somewhat complicated.
If, for example, we were to sample at 3 MHz, but use a 3 kHz receive
bandwidth, this represents a 1000:1 ratio - and oversampling theory
tells us that the square root of this ratio represents the effective
gain of bit depth (e.g. approx 32)
- which, itself, represents about 5 bits of extra range. Assuming
14 bits of acquisition - such as is the case with the RSP1a and our
KiwiSDR example - our theoretical bit depth is now 19 bits which is
capable of representing 114 dB. To be sure, this is an
oversimplification of a complicated topic, but the synthesis of bit
depth is real and very apparent - especially in those cases where just
eight bits are used to convey signal data as is done with the RTL-SDR
dongles or the RSP1a when using an 8-bit signal path.
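The arithmetic in the paragraph above can be reproduced directly. This simply restates the text's simplified math (square root of the oversampling ratio, 6 dB per bit); the function name is mine.

```python
import math

def effective_bits(adc_bits, sample_rate_hz, detection_bw_hz):
    # Oversampling ratio: e.g. 3 MHz / 3 kHz = 1000.
    ratio = sample_rate_hz / detection_bw_hz
    # sqrt(1000) ~ 32, or roughly 5 extra bits of resolution.
    extra_bits = math.log2(math.sqrt(ratio))
    return adc_bits + extra_bits

bits = effective_bits(14, 3_000_000, 3_000)  # ~19 bits
dynamic_range_db = 6 * bits                  # ~114 dB
```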
RF Front-end filtering within the RSP1a:
The RSP1a contains several hardware L/C filters preceding most elements of the RF signal path that are of interest to users at HF, but they are of limited use owing to their breadth, being specified as:
- 2 MHz low-pass
- 2-12 MHz band-pass. There are front-end filtering issues on this band and 12-30 MHz - see the text below.
- 12-30 MHz band-pass
- MW Filter: Attenuation of >30dB from 660-1550 kHz
None of these filters is very sharp. For instance, one could cover the 160 meter amateur band (1.8-2.0 MHz) with either the 2 MHz low-pass filter or the 2-12 MHz band-pass filter, although there might be a slight bit of roll-off at the top of the band with the former, or at the bottom of the band with the latter. The MW filter appears to do a pretty good job in removing the majority of signals within the AM broadcast band, although if you are unfortunate enough to have a very strong station at either end of the band (e.g. around 550 kHz or above 1600 kHz) you might need to seek other means of knocking them down.
If you have a large and effective HF antenna, the two band-pass filters
- 2-12 and 12-30 MHz - are going to be of only limited usefulness as
the coverage of each encompasses a lot of spectrum which can harbor
very strong signals - not only from the shortwave broadcast bands,
where signals can be extremely strong (particularly if you live in Europe)
but also the total energy of summer lightning static which is, itself,
very broadbanded: Applying such a wide swath of spectrum to a
frequency mixer - even a fairly "strong" one - is not the best idea
when propagation is favorable and the total amount of energy present may, at times, overwhelm the input circuitry.
The problem with the "2-12 MHz" and "12-30 MHz" band filtering:
While the SDRPlay receivers seem to work fine across HF, there is a "gotcha" to be considered - particularly when using the HF frequencies between 2 and 12 MHz - and that problem is related to harmonic response at odd multiples of the tuned frequency. This topic was discussed in a recent blog post of mine ( https://ka7oei.blogspot.com/2023/05/characterizing-spurious-harmonic.html ) but aspects of that post are repeated below.
The sensitivity to harmonics was tested with the RSP1a's local oscillator (but not necessarily the virtual receiver) tuned to 3.7 MHz. For reasons likely related to circuit symmetry, it is the odd harmonics that elicit the strongest response, which means that the receiver will respond to signals around (3.7 MHz * 3) = 11.1 MHz. "Because math", this spurious response will be spectrally inverted - which is to say that a signal 100 kHz above 11.1 MHz - at 11.2 MHz - will appear 100 kHz below 3.7 MHz, at 3.6 MHz.
(It's likely that there are also weaker responses at frequencies around
5 times the local oscillator, but these are - for the most part -
adequately suppressed by the filtering.)
In other words, the response to spurious signals follows this formula:
Apparent signal = Center frequency + ((Center frequency * 3) - Spurious signal)
Where:
- Center frequency = The frequency to which the local oscillator on the RSP is tuned. In the example above, this is 3.7 MHz.
- Spurious signal
= The frequency of spurious signal which is approximately 3x the center
frequency. In the example above, this is 11.2 MHz.
- Apparent signal = Lower frequency where signal shows up. In the example above, this is 3.6 MHz.
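The formula above can be written as a small function (names are mine; frequencies are in MHz, matching the example):

```python
def apparent_frequency(center_mhz, spur_mhz):
    # Apparent signal = Center + ((Center * 3) - Spurious); the sign
    # flip on the spur's offset is the spectral inversion noted above.
    return center_mhz + (center_mhz * 3 - spur_mhz)

# LO at 3.7 MHz: a real signal at 11.2 MHz appears at 3.6 MHz.
f = apparent_frequency(3.7, 11.2)
```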
In our example - a tuned frequency of 3.7 MHz - the 3rd harmonic would be within the passband of the 2-12 MHz filter built into the RSP1a, meaning that the measured response at 11.2 MHz will reflect the response of the mixer itself: the 2-12 MHz filter won't really affect an 11 MHz signal and, according to the RSP1a documentation (link), its roll-off really doesn't "kick in" until north of 13 MHz.
In other words, in the area around 80 meters, you will also be able to see the strong SWBC (Shortwave Broadcasting) signals on the 25 meter band around 11 MHz.
How bad is it?
Measurements
were taken at a number of frequencies and the amount of attenuation is
indicated in the table below. These values are from measurement of a
recent-production RSP1a and spot-checking of a second unit using a calibrated signal generator and the "HDSDR" program:
LO Frequency | Measured Attenuation at 3X LO frequency | Attenuation in "S" Units
2.1 MHz      | 21 dB (@ 6.3 MHz)  | 3.5
2.5 MHz      | 21 dB (@ 7.5 MHz)  | 3.5
3.0 MHz      | 21 dB (@ 9.0 MHz)  | 3.5
3.7 MHz      | 21 dB (@ 11.1 MHz) | 3.5
4.1 MHz      | 23 dB (@ 12.3 MHz) | 3.8
4.5 MHz      | 30 dB (@ 13.5 MHz) | 5
5.0 MHz      | 39 dB (@ 15.0 MHz) | 6.5
5.5 MHz      | 54 dB (@ 16.5 MHz) | 9
6.0 MHz      | 54 dB (@ 18.0 MHz) | 9
6.5 MHz      | 66 dB (@ 19.5 MHz) | 11
12.0 MHz     | 21 dB (@ 36.0 MHz) | 3.5
12.5 MHz     | 21 dB (@ 37.5 MHz) | 3.5
13.5 MHz     | 22 dB (@ 40.5 MHz) | 3.7
14.5 MHz     | 26 dB (@ 43.5 MHz) | 4.3
15.5 MHz     | 31 dB (@ 46.5 MHz) | 5.2
16.5 MHz     | 35 dB (@ 49.5 MHz) | 5.8
17.5 MHz     | 39 dB (@ 52.5 MHz) | 6.5
18.5 MHz     | 43 dB (@ 55.5 MHz) | 7.2
19.5 MHz     | 46 dB (@ 58.5 MHz) | 7.7
20.5 MHz     | 50 dB (@ 61.5 MHz) | 8.3
21.5 MHz     | 53 dB (@ 64.5 MHz) | 8.8
Table 1: Measured 3rd harmonic response of the RSP1a through the 2-12 and 12-30 MHz filters
Interpretation:
- In the above chart we see the local oscillator frequency in the left column, the measured attenuation of the 3rd harmonic response (and its frequency) in the center column, and that amount of attenuation expressed in "S" units in the right column. Here, an "S" unit is based on the IARU standard (Technical Recommendation R.1) of 6 dB per S unit, which is reflected in programs like SDRUNO, HDSDR and many others.
- The attenuation of the 3rd harmonic response was measured by
first noting the signal level required to obtain a given reading -
typically "S-9" near the fundamental frequency - and then observing the
level required to obtain that same reading - within +/-1dB - near the
3rd harmonic frequency, using the relationship formula, above.
- Below the cutoff frequency of the relevant filter (nominally 12
MHz for receive frequencies in the range of 2 to 12 MHz, nominally 30
MHz for receive frequencies in the range of 12 to 30 MHz) the harmonic
response is limited to that of the mixer itself, which is 21 dB.
- We can see that on the 2 to 12 MHz segment, the attenuation of the 3rd harmonic response doesn't reach 40 dB (the low end of what I would call "OK, but not great") until one gets above about 5 MHz (which translates to 15 MHz), and it doesn't get to the "goodish" range (50 dB or more) until north of about 5.5 MHz - which is borne out by the filter response charts published by SDRPlay.
- On the 12 to 30 MHz band the filter has practically negligible effect until one gets above the 20 meter band, at which point it gets into the "OK, but not great" range by about 18 MHz, and it doesn't really get "goodish" until north of 20.5 MHz. What this means is that strong 6 meter signals may well appear in the 16.5 to 17.5 MHz range as frequency-inverted representations.
- If there
is a relatively strong signal source in the area of the 3rd harmonic
response, it will likely appear at the lower receive frequency where the
attenuation of the filter is less than 40 dB or so. The severity of
this response will, of course, depend on the strength of that signal,
the amount of attenuation afforded by the filters at that frequency, and
the amount of noise and other signals present in the range of the
fundamental frequency response.
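The dB-to-S-unit conversion used in the table above is straightforward, given the 6 dB per S unit convention cited in the interpretation notes:

```python
def db_to_s_units(attenuation_db):
    # IARU Region 1 Technical Recommendation R.1: one S unit = 6 dB.
    return attenuation_db / 6.0

# The 21 dB mixer-only floor from the table works out to 3.5 S units.
s_units = db_to_s_units(21)
```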
Based on the above data, we can deduce the following:
- When the RSP1a is tuned between 2 MHz and (below) 12 MHz, it is
using its "2-12 MHz" filter. In this range - and below approx. 4 MHz -
the 12 MHz cut-off of the filter has negligible effect in reducing 3rd
harmonic response.
- What this means is that signals from 6-12 MHz will appear
more or less unhindered (aside from the 21 dB reduction afforded by the
mixer) when the local oscillator of the receiver is tuned between 2 and
4 MHz.
- The
3rd harmonic response across 2-4 MHz - which is the 6-12 MHz frequency
range - can contain quite a few strong signals and noise sources such as
those from shortwave broadcast stations.
- When the RSP1a is tuned between 12 MHz and (below) 30 MHz, it
is using its "12-30 MHz" filter. Below about 14 MHz, the 30 MHz
cut-off of the filter has negligible effect in reducing 3rd harmonic
response.
- Signals from 36-40 MHz will appear with just 21-26 dB attenuation when tuned in the range of 12-13.5 MHz.
- In
most cases there are probably few signals in the 36-40 MHz range that
are likely to be an issue when tuning in the 12-13.5 MHz range.
The recommendation: If you are using this receiver on just a single HF band - as you would with a WebSDR - it is strongly suggested that it be preceded with a filter that is intended to pass just the frequencies of interest.
"Band pass" filtering in the RSP1a:
The RSP1a also includes what is called "band pass" filtering that must
be used appropriately to minimize aliasing and to maximize dynamic
range by rejecting signals outside the bandwidth of interest.
This filtering is implemented in hardware in the "tuner" chip (which precedes the A/D converter)
and is actually a low-pass filter present on each of the I/Q channels -
but since raw I/Q data can represent "negative" frequencies (with respect to the center) a low-pass filter of half the stated bandwidth will appear to be a band-pass filter.
There are two groups of filters - one being "narrow" and the other being "wide":
- Narrow: 200, 300,
600 and 1536 kHz. The stated bandwidths are at their -0.5dB
points and are down 39, 71 and 100 dB at the 2x, 4x and 8x bandwidths,
respectively.
- Wide: 5, 6, 7 and 8 MHz. The stated bandwidths are actually 4.6, 5.6, 6.5 and 7.4 MHz (respectively)
at their -0.5dB points so the 5, 6, 7 and 8 MHz attenuation is going to
be higher than that. The specifications indicate -43, -63 and
-100 dB at the 2x, 3x and 4x bandwidths, respectively.
Using the RSP1a with the PA3FWM WebSDR:
At the Northern Utah WebSDR, we have been able to make use of the
wider-bandwidth capabilities of the RSP1a to provide receive bandwidth
on the 16 bit data path up to 768 kHz - significantly wider than the
previous 192 kHz limit of the more traditional softrock+sound card
approach. The obvious advantage is that rather than having to cover many of the bands in discrete segments (e.g. 3 "bands" to cover the U.S. 80 meter band, 2 "bands" to cover each of the U.S. 40 meter and 20 meter bands, etc., with just 192 kHz being available), just one receiver per band may be used to cover the bands through 12 meters in their entirety: 10 meters, being 1.7 MHz wide, would still require several receivers for full-band coverage.
What this means is that not only is the user presented with a
continuous tuning spectrum of each band, the "eight band" limit of the
WebSDR software itself is less of a hindrance. For example:
A WebSDR that had covered 160, 80, 40 and 20 meters - which would have required eight 192 kHz-wide receivers - can now do the same with just four receivers, permitting that same server to cover four additional bands.
This proposition is discussed in more detail on this page: Operating a WebSDR receiver at 384 or 768 kHz using the 16 bit signal path - link.
Observations on receiver performance:
What follows are various observations on receiver performance using the
RSP1a operating at 768 kHz on a WebSDR system using version 3.07 of the
SDRPlay API for Linux.
Raw I/Q format:
The raw I/Q receive signal data from the API is in the form of a 16 bit signed number: the range of values is, not surprisingly, -32768 to +32767. This may come as a surprise in light of the fact that 14 bits are available from the A/D converter within the receiver, but keen observers will note from the specifications that the minimum sample rate of the RSP1a is actually 2 MSPS. Although the API is a bit opaque, it would seem that for lower rates like 768, 384 and 192 kHz the raw sample rate across the USB is actually around 3072 kSPS, meaning that for every sample at 768 kHz, there are four actual A/D samples. According to sampling theory, 4x oversampling will effectively yield one more bit of A/D resolution (read about oversampling here: https://en.wikipedia.org/wiki/Oversampling ) - but since the resampling produces fractional remainders, the data can contain even more than the 15 bits that you might expect, which is why examination of the raw I/Q data will show no missing LSB values.
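One way to check the "no missing LSB values" claim is to look at which bit positions ever toggle in a captured buffer. This is a sketch with a synthetic buffer standing in for real samples captured from the API; the function name is my own:

```python
import numpy as np

def occupied_bits(iq_int16):
    """Return a 16-element list: 1 if that bit position ever toggles."""
    bits = iq_int16.view(np.uint16)
    return [int(np.any(bits & (1 << b))) for b in range(16)]

# Synthetic stand-in for a buffer of raw I/Q samples from the API;
# with real 16-bit RSP1a data, the lowest bits should show as occupied.
rng = np.random.default_rng(1)
buf = rng.integers(-2000, 2000, 4096).astype(np.int16)
usage = occupied_bits(buf)
```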
Lowest possible A/D reading:
With an "ideal" A/D converter, the "no signal" input condition should
yield an A/D converter reading of zero. In reality, this is
almost never the case with higher-resolution A/D converters (at least those that are affordable!) as noise from the "system" (e.g. surrounding circuitry, the A/D converter itself) will inevitably "tickle" some of the lowest-order bits.
Considering that the A/D converter is, at heart, "officially" only 12
bits, what might we see if we remove as much signal from the input as
possible? To properly do this, it would be necessary to go onto
the circuit board and disconnect/terminate the input of the converter -
something that I'm not wont to do - so instead, what if we minimize the
input signal level as much as possible?
To do this, we must:
- Set "gain reduction" to the maximum value - which is 59 dB.
- Set "LNA gain reduction" to the maximum value.
- Set the "bandwidth" value to 600 kHz. This is apparently a filter interposed within the I/Q sample chain.
Doing this will remove as much signal as possible from the input of the A/D converter (within the bandwidth specified), and with these settings the signed 16-bit value that I saw being output by the API is around +/-40 at a tuned frequency of 7.2 MHz, representing a bit more than six bits of noise.
Comment: This minimum value may vary with the sampling rate and filter bandwidth. After all, with a given noise power (dBm/Hz), a higher bandwidth will naturally intercept more noise energy.
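As a sanity check on the "a bit more than six bits" figure: output toggling over roughly -40 to +40 spans about 80 codes, and the base-2 log of that span gives the number of bits the noise occupies.

```python
import math

peak = 40                          # observed +/- excursion of the raw output
noise_bits = math.log2(2 * peak)   # log2(80), about 6.3 bits of the 16
```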
With the RSP1a set in this state, I took a look at the noise spectra on
the resulting waterfall and saw very little in the way of coherent
noise, implying that at least a significant portion of the "tickling"
of these bits is caused by uncorrelated
noise. This is important as this means that signals that might be
present in this noise floor will be mixed with what is essentially
Gaussian (random, white)
noise rather than discrete spectral components, reducing the likelihood
of "birdies" appearing due to the limited quantization depth - a common
problem with too-low signal levels on the input of an 8-bit receiver
such as the RTL-SDR dongle. It's worth noting that many
higher-end, high-resolution A/D converters actually include circuitry
to add noise to the input for
this very reason - although it's unlikely that they would need to add 6
bits worth! It's worth noting that many 16 bit sound cards -
including those of good quality - will often have several bits of LSB
noise - and this noise is often not particularly Gaussian!
The source of these 6 LSBs of noise is hard to divine, however:
Is it from the A/D converter itself or from the signal processing
stages (amplifiers, attenuators, filters)
that precede it? Based on its apparent "Gaussian-ness" I'm
guessing that a good chunk of it is likely contribution from the
circuitry that precedes the A/D converter.
What about less "gain reduction"?
This is a good time to mention that one of the peculiarities of the
documentation of the SDRPlay is that they don't refer to gain, but
rather "gain reduction" - a bit backwards from what one might think, as
one might normally call this "attenuation". Apparently, the RSP1a is designed to have all amplification enabled and no added attenuation as its "normal" state - and to reduce signal appropriately, attenuation is added. This philosophy has some
interesting implications when the uninitiated person attempts to
configure an RSP1a on HF!
Considering that the vast majority of the frequency coverage of the
RSP1a is at VHF and above, it's not surprising that it would be
designed to operate effectively at such frequencies. At VHF/UHF,
the limiting factor of receiver sensitivity - given the lack of
environmental noise from nearby electronic devices - would be the
receiver system
noise figure which encompasses losses at the antenna, the feedline and
the noise contribution of the receiver itself. According to the
RSP1a's specifications, the rated noise figure at, say, 450 MHz, can be
as low as about 4 dB, which while not "moonbounce-worthy" (where one needs system noise figures of well under 1 dB) really isn't too bad, and is on par with a reasonably good multi-mode amateur transceiver.
On HF, the situation is quite different: The very nature of the
ionosphere and the geomagnetic field make that RF environment much
noisier. For example, if one has access to an "absolutely quiet"
receive location (no man-made noise),
one can expect to be able to easily hear the natural, background noise
with a receiver system noise figure of 15 dB at 30 MHz - and the lower
the frequency, the noisier it gets: At 15 meters, a 20dB system
noise figure will suffice and at 80 meters, a 37 dB system noise figure
will also suffice.
Clearly, one need not attain the lowest-possible noise figure values at
HF to "hear everything there is to hear" - and to do so might imply the
application of too much
gain in the signal path which would waste the limited dynamic range of
an A/D converter on noise energy below which there would never be any
signals. With this in mind, the designers of the RSP1a seem to
have limited the absolute sensitivity at frequencies below 60 MHz - but
the question remains: If the "gain reduction" is reduced, how much noise - presumably from the amplification - appears at the A/D converter?
For this test, the following settings were used:
- Set "gain reduction" to 20, the minimum value
- Set "LNA gain reduction" to 0, the minimum value.
Under these conditions, the minimum A/D value is around 2000-2500.
Again, visual inspection of the noise floor in the waterfall
implies that the noise is uncorrelated, leading me to think that in
this configuration, much of the signal is from the intrinsic noise of
the RF amplifier and, possibly, thermal noise of attenuators/filters.
It was observed that with the bandwidth set to 600 kHz, there was "edge" darkening on the waterfall due to roll-off of the noise amplitude, but when the bandwidth was switched to 1536 kHz, this edge darkening was absent - further pointing to the noise source as preceding the mixer/converter chip and originating in the RF amplifier.
This minimum value changes most dramatically with the "gain
reduction" parameter, further showing that this minimum noise level is
being established in the signal path prior to the A/D converter (it will attain the value of about "40" when "gain reduction" is set to 59, regardless of the LNA setting) and hovers around 350-400 when the LNA value is set to 6 and "gain reduction" is set to 20.
The conclusion:
- Do NOT use a higher "LNA gain reduction" value than necessary to hear your antenna system's noise floor.
- If practical to do so, do not set the "gain reduction" value
lower than needed to attain your antenna system's noise floor - but
make sure that there is enough headroom such that the strongest signals
that might occur are accommodated if the AGC needs to set gain
reduction to the highest value (e.g. 59)
- A "gain reduction" value below 30 may not be beneficial in the sense that the circuitry noise is starting to dominate: Setting the "gain reduction" value below this has no effect on sensitivity as the A/D converter is already "seeing" the noise floor, so further reduction will mainly serve to represent that device noise floor with more A/D bits. In certain cases, doing this may help reduce sampling artifacts, but it's not likely to be very beneficial in the general sense.
Minimum discernible signal:
Many years ago, receiver manufacturers would tout sensitivity as a
major selling point, but with the advent of "quieter" amplifying
devices (solid state amplifiers, certain types of vacuum tubes)
this became less important as it became almost trivial to have a
receiver sensitivity that would exceed the noise floor of even the
quietest of antenna systems - even poor ones.
Typically, "MDS" (Minimum Discernible Signal) is measured in a certain bandwidth, but to make this measurement easier to replicate by anyone else who wishes to do so we
will use DL4YHF's "Spectrum Lab" program with its built-in SINAD
test and setting the -6dB bandwidth of the WebSDR software to 2.80 kHz.
Audio was looped from the web browser to Spectrum Lab using
"Virtual Audio Cable".
With these settings, the measured 12 dB SINAD sensitivity at 7.2 MHz, under various configurations of the LNA with "Gain Reduction" remaining constant, is as follows. Note that these values apply to the RSP1a only:
Table 1:
"LNAstate" versus the absolute receiver sensitivity and signal path attenuation, measured at 7.2 MHz.
See comments, below, about the "calculated MDS".
LNAstate value (atten in dB) | Gain Reduction value | Signal level for 12 dB SINAD | Calculated MDS | Relative S-meter reading
0 (0 dB)  | 30 | -111.5 dBm | -128.0 dBm | 0 dB
1 (6 dB)  | 30 | -107.0 dBm | -123.5 dBm | -6.5 dB
2 (12 dB) | 30 | -101.0 dBm | -117.5 dBm | -13 dB
3 (18 dB) | 30 | -96.0 dBm  | -112.5 dBm | -19 dB
4 (37 dB) | 30 | -87.5 dBm  | -104.0 dBm | -37 dB
5 (42 dB) | 30 | -75.5 dBm  | -92.0 dBm  | -42 dB
6 (61 dB) | 30 | -65.0 dBm  | -81.5 dBm  | -61 dB
Important: The "LNAstate" attenuation PRECEDES the LNA built into the RSP1a, so attenuation values applied there directly impact absolute sensitivity - see below.
Another test was done with LNAstate = 1, this time checking the SINAD and the relative S-meter reading while "Gain Reduction" was varied:
Table 2:
"Gain Reduction" versus absolute receiver sensitivity and signal path attenuation, measured at 7.2 MHz
See comments, below, about the "calculated MDS".
Gain Reduction value | Signal level for 12 dB SINAD | Calculated MDS | Relative S-meter reading
20 | -107.0 dBm | -123.5 dBm | 0 dB
30 | -107.0 dBm | -123.5 dBm | -10 dB
40 | -106.7 dBm | -123.2 dBm | -20 dB
50 | -106.1 dBm | -122.6 dBm | -30 dB
59 | -103.1 dBm | -119.6 dBm | -39 dB
Comments:
- "LNAstate" is a value used by the API to set the LNA gain/configuration and is the "l" (lower-case "L") parameter in the "sdrplayalsa" program - see below for more information about this program.
- "Gain Reduction value" is the setting used to set the signal path
attenuation via the API and is the "g" parameter in the "sdrplayalsa"
program.
- Note: Some forks of the "rsp_tcp" program include the "l" and "g" parameters, but not the version from SDRPlay - see below for more information.
- As noted above, the SINAD reading is based on the WebSDR's bandwidth being set to 2.8 kHz at the -6dB points.
- The "Relative S-meter reading" is referenced from the top
line of each table, respectively and is a reading from the WebSDR's
S-meter, which is known to properly follow/scale signal levels within
the detection passband.
- The measurement of "MDS" (Minimum Discernible Signal) is, according to the ARRL Test Procedures Manual (link), typically made at an (S+N)/N of 3 dB in a 500 Hz bandwidth. The approximate conversion of the SINAD numbers given in tables 1 and 2 to this MDS measurement is to subtract 16.5 dB from the values shown: This is the source of the data in the "Calculated MDS" column. Because we are dealing with "white" noise, this should be a valid conversion if I've done my math right.
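The 16.5 dB figure can be checked: going from the 12 dB SINAD criterion to the 3 dB (S+N)/N criterion accounts for 9 dB, and the narrower 500 Hz ARRL measurement bandwidth (versus the 2.8 kHz used here) accounts for the rest.

```python
import math

criterion_db = 12 - 3                        # 12 dB SINAD vs 3 dB (S+N)/N
bandwidth_db = 10 * math.log10(2800 / 500)   # ~7.5 dB less noise in 500 Hz
correction_db = criterion_db + bandwidth_db  # ~16.5 dB total

# Applied to the first row of Table 1: -111.5 dBm -> about -128 dBm.
mds_dbm = -111.5 - correction_db
```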
The above data gives a bit of insight into the way the RSP1a's signal path works. Looking at the upper table, we can see that the "LNAstate" value has the expected effect on the absolute signal level as indicated by the relative S-meter reading.
According to the v3.07 API specifications, the values of 0-6 will
provide 0, 6, 12, 18, 37, 42 and 61 dB of attenuation, respectively -
and this is very close to what was measured. In each case, we
also measure the signal level required for a 12 dB SINAD value - which
is, in this case, a true
reference of absolute receiver sensitivity. In looking at the
lower table, we hold the "LNAstate" value at 1, but vary the "gain
reduction" value, inserting up to 39 dB of additional "gain reduction" (a.k.a. attenuation) into the signal path.
If we change the "gain reduction" value from its minimum of 20 to its maximum of 59 we see only a slight (4 dB)
decrease in absolute sensitivity - while the S-meter difference clearly
shows that we are changing the signal level. In comparison, if we
change the "LNAstate" value by about the same amount (from "0" to "4") the S-meter changes by about the same amount, but the actual receiver sensitivity changes from a reasonable (for HF use) 12 dB SINAD sensitivity of about -111.5 dBm (a bit below "S-2") to a terrible (!) -87.5 dBm (approximately S-6)
- about 24 dB. Comparing a roughly similar change in total gain
when we compare an "LNAstate" of 3 and 6, we see that the S-meter
changes by 42 dB and the 12 dB SINAD changes by 31 dB.
What this tells us is something that isn't readily divined from the block diagram of the RSP1a: the attenuation associated with the "LNAstate" value precedes the LNA in the signal path, while it would appear that the "Gain Reduction" value follows it. This supposition is evidenced by the fact that changing the "gain reduction" parameter - which follows the built-in LNA - barely has an effect on the actual sensitivity of the receiver, but "LNAstate" does! One can infer this by reading between the lines of the RSP1a's "General Specifications" document in the section where the gain, noise figure and "IIP3" (3rd order intercept point) are tabulated - and the fact that the IIP3 values shift by the same amount as the relative S-meter reading would seem to verify this.
Appropriately setting the receiver gain parameters:
Tried the "rsp_tcp" driver and found it to be terrible? Here's why:
For anyone who has tried to use an RSP1a "out of the box" on a Linux
computer, the result was likely that it performed very badly with the
default gain settings found in the original "rsp_tcp" driver (see the Git from SDRPlay here) - and for good reason: The "LNAstate" value - which is not configurable from the command line - is set to 4, and the "Gain Reduction" value (also not settable from the command line) is set to 40. These settings result in a uselessly-deaf receiver, as one can infer from Table 1, above. If you wish to use "rsp_tcp" for ANYTHING, make certain that you obtain a version/fork that allows configuration of these parameters.
If you get nothing else from this document, PLEASE NOTE THE FOLLOWING:
For almost ANY instance where HF reception is anticipated with either no system RF signal gain or only a moderate amount (10-15 dB) between the antenna and receiver, the ONLY
usable values for "LNAstate" are likely to be 0 through 3, with 0 or 1
being appropriate for the higher HF bands (20-10 meters) and a value of 3 possibly
being appropriate for the lowest HF bands (160 and 80 meters) in situations where the noise
floor of the local receive environment is likely to be fairly high.
A useful tool for ANY
operator of an SDR is the ability to check the "high water mark" of the
raw A/D conversion: Without this, it is very difficult to
appropriately distribute gain preceding the A/D converter. ALL SDR software should include the ability to see - as much as practical - the raw I/Q data to determine appropriate system gain distribution.
Fortunately, the author of the PA3FWM WebSDR software knew this.
Not all WebSDR operators are aware that they can check the A/D
values of their receivers: this is done by going to the "status" web
page of the WebSDR, which may be found at this URL: http://websdr.ip.addr:8901/~~status
where "websdr.ip.addr:8901" is the IP address/host name of the WebSDR
in question. This page will give you something that looks like
this:
name overruns readb. blocks timestamp users 10sec_ADmin/max ever_ADmin/max source
80M 1 8 5657577 1636589970.412967 6 -9452 8767 -32613 32411 0,$plug:f_loop0_out
60M 0 20 3926416 1636589970.408832 2 -512 508 -512 508 5,!rtlsdr 127.0.0.2:65144
40M 1 2 6837680 1636589970.399954 24 -21440 20772 -32708 32693 0,$hw:26,0,0
The columns are as follows:
- name - This is the name of the band as it appears in the "websdr.cfg" file.
- overruns - The number of
times, since the server was started, that there was more data from the
receiver than could be handled by the server. It's common for
several overruns to occur when the server starts up, but gradually
increasing values may indicate that the WebSDR service does not have
enough CPU power, or that there is something else on the server
consuming resources.
- readb. - I'm not sure what this is, but it may be based on multiples of 128 kHz of receiver bandwidth.
- blocks - The number of
blocks from the receiver since start-up. This number may not
increment unless someone is using that particular receiver.
- users - The current number of users on the receiver
- 10sec_ADmin/max - These are the minimum and maximum A/D values seen in the most recent 10 second block.
- For ALSA devices: These values can range from 0 (no signal at all - usually bad!)
to the full-scale "-32768 32767" - which likely indicates that the receiver's
A/D converter has clipped within the past 10 seconds - which can also
be bad!
- For RTL_SDR devices: For RTL-SDRs or SDRPlay devices using the "RTL_SDR" interface, these values can range from 0 (no signal - bad!) to "-512 508": they are the 8 bit signed values (-128 to 127) multiplied by FOUR.
- ever_ADmin/max - These are the minimum and maximum A/D values seen since the WebSDR was last restarted, using the same numbering as the "10sec_ADmin/max" column.
- Note: Some drivers - typically RTL-SDR - are prone to "glitching" upon start-up and may show a maximum value from the beginning.
- source - This is the name
of the "audio" device as defined in the audio system and used in the
"websdr.cfg" file.
- Devices using the 8-bit RTL_TCP interface will
show up as type "rtlsdr" regardless of the actual hardware (e.g. an actual RTL-SDR, or an SDRPlay using the "rtl_tcp" interface of the WebSDR - which is 8 bits ONLY, no matter the device).
- Device type "hw" is a physical sound card: The actual name of the sound card might appear rather than its device number.
- Device type "plug" is a virtual sound card created using "snd-aloop" (or similar)
and is defined using the ".asoundrc" file. This is the type of
interface used for feeding I/Q data from an RSP1a or other receiver,
into ALSA and the WebSDR server.
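To make spot-checks easier, the fixed set of columns above can be parsed mechanically. Here is a minimal, hypothetical Python sketch that splits one "~~status" row into named fields and flags likely A/D clipping on a 16-bit (ALSA) path; the field names are my own, not from the WebSDR software:

```python
FIELDS = ("name", "overruns", "readb", "blocks", "timestamp", "users",
          "ad10_min", "ad10_max", "ever_min", "ever_max", "source")

def parse_status_line(line):
    """Split one ~~status row into a dict of named fields."""
    parts = line.split(None, len(FIELDS) - 1)
    row = dict(zip(FIELDS, parts))
    for key in ("overruns", "readb", "blocks", "users",
                "ad10_min", "ad10_max", "ever_min", "ever_max"):
        row[key] = int(row[key])
    # Flag values at (or within one LSB of) the 16-bit rails as probable clipping.
    row["clipped_10s"] = row["ad10_min"] <= -32767 or row["ad10_max"] >= 32766
    return row

row = parse_status_line(
    "80M 1 8 5657577 1636589970.412967 6 -9452 8767 -32613 32411 0,$plug:f_loop0_out")
print(row["users"], row["clipped_10s"])  # -> 6 False
```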
When setting the gain parameters for the RSP1a, it's recommended that you do this under two different conditions for each band:
- When the band is closed and only weak signals are present - if there are any at all. This is the time to set the "maximum" gain. The best test is to disconnect the antenna from the input of the receiver system and note the change in the S-meter reading: If the noise level is about 6 dB higher when the antenna is connected, your overall gain is probably close - but if you see a change well in excess of 10 dB, you probably have too much
receiver gain and should reduce it. This could also mean that you
have a high local RF noise source: If that source is intermittent,
re-run this test when it is gone.
- While you are doing this, you should also disconnect the RF input of the receiver itself while the antenna system is disconnected. If you see a notable decrease (more than 6 dB) in signal level when the receiver is disconnected from your (antenna-less) feed system, you
either have an excess of RF amplifier gain or, if you don't have an RF
amplifier in your system, you have issues with your cabling and noise
ingress.
- Remember: If, without an amplifier, you can see "antenna noise", you probably do not need the amplifier!
- Under low/no signal conditions, you can expect the 10sec_ADmin/max
value to be somewhere in the range of 250-500 with the antenna
connected, and it should noticeably reduce when the antenna is
disconnected. A value greater than 1000 or so when no signals are
present is indicative of too much gain.
- Comment: If you are using an 8 bit signal path (e.g. RTL-SDR or RSP1a on the "RTL_TCP" input)
your "band is closed with no signals" A/D reading should never drop
below 20 or so at the lowest or else you can expect a lot of
quantization noise and distortion on the weak signals that might be present.
- When the band is wide open and signals are strong.
This is harder to do as band openings naturally vary so it's
worth checking frequently after first setting up a receiver - or if
conditions on a particular band improve dramatically with the change of
season and/or solar activity. If you see the 10sec_ADmin/max
value frequently hitting "-32768 32767" with an ALSA-based receiver as
you refresh the "~~status" page, it's likely that you have too much overall gain - and that's what AGC is for!
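One way to turn the raw A/D numbers into something more intuitive is to express them as headroom below clipping, in dB. A small sketch, assuming a 16-bit (ALSA) signal path with a full-scale value of 32768:

```python
import math

FULL_SCALE = 32768  # 16-bit signed full scale (ALSA path)

def headroom_db(ad_min, ad_max, full_scale=FULL_SCALE):
    """Return the dB of headroom between the largest observed A/D
    excursion and full scale; 0 dB means the converter is at clipping."""
    peak = max(abs(ad_min), abs(ad_max), 1)  # avoid log(0) on a dead input
    return 20.0 * math.log10(full_scale / peak)

# Values from the example "80M" status line shown earlier:
print(round(headroom_db(-9452, 8767), 1))    # 10-second peak -> 10.8 dB
print(round(headroom_db(-32613, 32411), 1))  # "ever" peak -> 0.0 dB (clipped)
```

By this measure, the "quiet band" target of roughly 250-500 counts corresponds to about 36-42 dB of headroom, and a 10-second peak repeatedly landing within a fraction of a dB of full scale is the same warning sign as seeing "-32768 32767" on the status page.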
Conclusion:
While setting the gain of the receiver may seem tricky, it's quite
manageable with the available tools - notably the "~~status" page from
the WebSDR server. This adjustment is an iterative process and
it's unlikely that the first attempt at setting things up will be the
final configuration as observations are made over time during both low
and high signal conditions.
About the "sdrplayalsa" program:
This is a driver that interfaces with the SDRPlay API, allowing
configuration of the receiver in terms of frequency and sample
rate. It also provides a built-in AGC that may
be used to regulate the signal levels applied to the A/D
converter, allowing operation under widely-varying signal conditions.
See the page "Getting the raw I/Q data from the receiver" - link - for more information.
More to follow.
Additional information:
- For general information about this WebSDR system -
including contact info - go to the about
page (link).
- For the latest news about this system and current issues,
visit the latest news
page (link).
- For more information about this server you may contact
Clint, KA7OEI using his callsign at ka7oei dot com.
- For more information about the WebSDR project in general -
including information about other WebSDR servers worldwide and
additional technical information - go to http://www.websdr.org
Back to the Northern Utah WebSDR landing page