Northern Utah WebSDR
Observations of the SDRPlay RSP1a
when used at HF frequencies

What this page is not:

What follows is not an in-depth technical review of the RSP1a, but rather a series of observations.  Various signal-handling parameters like intercept point, RMDR and NPR under particular conditions are not discussed, being better left for other venues - see the links at the bottom of this page.

For this discussion, the focus will primarily be on its use on the HF spectrum which, for our purposes, will encompass 1-30 MHz.  Because we are interested in the highest-possible performance, we will use only the "14 bit" mode, which is available at sample rates between 2 and 6 MSPS.  Furthermore, because our emphasis is on using the RSP1a with the PA3FWM WebSDR server, we'll be making our observations using the Linux operating system (Ubuntu 20.x) and version 3.07 of the SDRPlay API.

Using the RSP1a on HF:

The SDRPlay RSP1a is a relatively inexpensive (US$120 at the time of writing) USB Software Defined Radio tuner capable of tuning from audio frequencies through 2 GHz.  Considering its cost, it has pretty reasonable performance, having an advertised "14 bits" of A/D conversion depth, selectable input filtering, a 0.5 ppm TCXO for frequency stability and the ability to inhale at least 8 MHz of spectrum at once - the bit depth being dependent on the actual sample rate.

Exactly how this receiver performs - and under what conditions - is not well defined in the specifications, and observations that might shed light on this topic appear to be scattered across the web and/or are subjective in nature, so I decided to make some measurements of my own.

"Is it really 14 bits":

Immediately upon researching the RSP1a you will note that it doesn't actually use a 14 bit A/D converter per se; rather, the MSi2500 delta-sigma converter is rated for 12 bits.  Because a delta-sigma converter is, at its heart, a 1-bit A/D converter (comparator) surrounded by other circuitry, it's necessary for it to be doing its internal clocking at a rate far higher than the actual sampling rate.  What this can mean is that given appropriate on-chip hardware (e.g. available bits in a shift register, low enough internal noise), this type of A/D converter can, when operated at a lower output sampling rate, yield additional LSBs (Least Significant Bits) of usable data in its conversion process - and this isn't really a "trick", but rather a useful property of this converter architecture that has been well-used for decades.  If we take the SDRPlay folks at their word, that means that running the A/D converter's internal sample clock at a higher rate in proportion to a given output sampling rate can yield a greater bit depth.

For more information on this topic, see the write-up at RTL-SDR.com here:  https://www.rtl-sdr.com/a-review-of-the-sdrplay-rsp1a/

16 bit data is available:

Even though a full 16 bits isn't available from the A/D converter itself, the converter is actually run at a rate higher than the sample rate specified in the driver configuration.  For example, if one specifies an output sample rate of 768 kHz, the RSP1a's A/D converter is run faster than that - as an example, let's presume that it is operating at 3072 kHz, precisely four times the output rate.  According to signal theory, 4x oversampling will effectively yield an additional bit of A/D resolution - and presuming decent hardware and software implementation, this extra bit isn't just "made up", but is real.  In the process of decimation - by which a higher sample rate is converted to a lower one - the data isn't simply reduced by throwing away three out of every four samples:  Typically the data is first low-pass filtered, to prevent "aliasing" of the original wide-bandwidth data into the now-lower bandwidth, and then a sort of averaging occurs - and in this process a mathematical "remainder" is generated which can contain more low-order bits than were available in the original.
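
To illustrate how decimation can yield extra low-order bits, here is a minimal sketch in Python/NumPy of a 4:1 "average and reduce" decimator.  This is only an illustration of the arithmetic - the actual filtering in the RSP1a's hardware and driver is certainly more sophisticated - but it shows how whole-number samples become finer-grained fractional values after averaging:

import numpy as np

rng = np.random.default_rng(0)

# "12 bit style" integer A/D samples at 3072 kHz:  a tone whose amplitude
# is only a few LSBs, plus about one LSB of noise.
fs_in = 3_072_000
n = 4096
t = np.arange(n) / fs_in
tone = 2.5 * np.sin(2 * np.pi * 10_000 * t)
samples = np.round(tone + rng.normal(0.0, 1.0, n))   # whole numbers only

# Decimate 4:1 by averaging each group of four samples - a crude stand-in
# for "low-pass filter, then reduce the rate".  The averages take on
# fractional values (multiples of 0.25) that the integer samples lacked.
decimated = samples.reshape(-1, 4).mean(axis=1)

print(samples[:8])     # integers only
print(decimated[:4])   # fractional values:  extra low-order bits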

The ultimate result is that the RSP API presents 16 bit signed data to its client programs.  In other words, the value of the raw I/Q data will be between -32768 and 32767.

Is it real?

Whether it is really 14 bits, or 16 bits or its equivalent, is a matter of debate.  To be sure, if you look at the specifications of almost any high-speed A/D converter - such as the LTC2248 14 bit A/D used in the KiwiSDR - you'll notice that it falls slightly short of the theoretical S/N for its number of bits (e.g. 6 dB/bit should yield 6 * 14 = 84 dB) - but this is mainly due to the failure of real-world hardware to exactly meet theoretical specifications, owing to contributed noise and nonlinearity in the signal path.  What can be difficult to ascertain is the real dynamic range of the system as a whole owing to oversampling.  For example, with just 14 bits the "theoretical" dynamic range will be the 84 dB value discussed before - but this applies only at the full sample rate.  Because the sample rate of the A/D converter is much higher than the detection bandwidth of the user (around 3 kHz for SSB) the answer is somewhat complicated.

If, for example, we were to sample at 3 MHz, but use a 3 kHz receive bandwidth, this represents a 1000:1 ratio - and oversampling theory tells us that the square root of this ratio represents the effective gain of bit depth (e.g. approx 32) - which, itself, represents about 5 bits of extra range.  Assuming 14 bits of acquisition - such as is the case with the RSP1a and our KiwiSDR example - our theoretical bit depth is now 19 bits which is capable of representing 114 dB.  To be sure, this is an oversimplification of a complicated topic, but the synthesis of bit depth is real and very apparent - especially in those cases where just eight bits are used to convey signal data as is done with the RTL-SDR dongles or the RSP1a when using an 8-bit signal path.
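
The arithmetic above is easy to replicate.  A minimal sketch in Python, using the 3 MHz sample rate and 3 kHz detection bandwidth from the example:

import math

fs = 3_000_000   # A/D sample rate, Hz
bw = 3_000       # SSB detection bandwidth, Hz

ratio = fs / bw                           # 1000:1 oversampling ratio
extra_bits = math.log2(math.sqrt(ratio))  # sqrt(1000) ~ 32, i.e. ~5 bits
total_db = 6.0 * (14 + extra_bits)        # using the 6 dB/bit rule of thumb

print(f"~{extra_bits:.1f} extra bits -> ~{total_db:.0f} dB dynamic range")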

RF Front-end filtering within the RSP1a:

The RSP1a contains several hardware L/C filters - preceding most elements of the RF signal path - that are of interest to users at HF, but they are of limited use as they are specified as:
- A low-pass filter covering up to 2 MHz
- A 2-12 MHz band-pass filter
- A 12-30 MHz band-pass filter
- A switchable MW (AM broadcast band) notch filter
None of these filters are very sharp.  For instance, one could cover the 160 meter amateur band (1.8-2.0 MHz) with either the 2 MHz low-pass filter or the 2-12 MHz band-pass filter, although with the latter there may be a slight bit of roll-off at the low end of the band.  The MW filter appears to do a pretty good job of removing the majority of signals within the AM broadcast band, although if you are unfortunate enough to have a very strong station at either end of the band (e.g. around 550 kHz or above 1600 kHz) you may need to seek other means of knocking it down.

If you have a large and effective HF antenna, the two band-pass filters - 2-12 and 12-30 MHz - are going to be of only limited usefulness as the coverage of each encompasses a lot of spectrum which can harbor very strong signals - not only from the shortwave broadcast bands, where signals can be extremely strong (particularly if you live in Europe) but also the total energy of summer lightning static which is, itself, very broadbanded:  Applying such a wide swath of spectrum to a frequency mixer - even a fairly "strong" one - is not the best idea when propagation is favorable and the total amount of energy present may, at times, overwhelm the input circuitry.

The problem with the "2-12 MHz" and "12-30 MHz" band filtering:

While the SDRPlay receivers seem to work fine across HF, there is a "gotcha" to be considered - particularly when using the HF frequencies between 2 and 12 MHz - and that problem is related to harmonic response at odd multiples of the tuned frequency.  This topic was discussed in a recent blog post of mine (  https://ka7oei.blogspot.com/2023/05/characterizing-spurious-harmonic.html ), but aspects of that post are repeated below.

The sensitivity to harmonics was tested with the RSP1a's local oscillator (but not necessarily the virtual receiver) tuned to 3.7 MHz.  For reasons likely related to circuit symmetry, it is the odd harmonics that elicit the strongest response, which means that it will respond to signals around (3.7 MHz * 3) = 11.1 MHz.  "Because math", this spurious response will be inverted spectrally - which is to say that a signal that is 100 kHz above 11.1 MHz - at 11.2 MHz - will appear 100 kHz below 3.7 MHz, at 3.6 MHz.  (It's likely that there are also weaker responses at frequencies around 5 times the local oscillator, but these are - for the most part - adequately suppressed by the filtering.)

In other words, the response to spurious signals follows this formula:

Apparent frequency = Center frequency + ((Center frequency * 3) - Spurious signal frequency)

Where:
- "Center frequency" is the frequency to which the receiver is tuned (3.7 MHz in our example).
- "Spurious signal frequency" is the frequency of the strong signal near three times the center frequency (11.2 MHz in our example).
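
Expressed as code - a trivial Python function implementing the formula above, checked against the 3.7/11.2 MHz example from the text:

def apparent_frequency(center_mhz, spur_mhz):
    # A strong signal near 3x the tuned frequency appears in the passband,
    # spectrally inverted about 3x the center frequency.
    return center_mhz + (center_mhz * 3 - spur_mhz)

print(apparent_frequency(3.7, 11.2))  # -> 3.6 (MHz)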

In our example - a tuned frequency of 3.7 MHz - the 3rd harmonic falls within the passband of the 2-12 MHz filter built into the RSP1a, meaning that the measured response at 11.2 MHz reflects the response of the mixer itself, with little effect from the filter:  The 2-12 MHz filter won't really affect the 11 MHz signal as, according to the RSP1a documentation (link), this filter doesn't really "kick in" until north of 13 MHz.

In other words, when tuned in the area around 80 meters, you will also be able to see the strong SWBC (Shortwave Broadcasting) signals of the 25 meter band around 11 MHz.

How bad is it?

Measurements were taken at a number of frequencies, and the amount of attenuation is indicated in the table below.  These values are from measurements of a recent-production RSP1a - spot-checked against a second unit - using a calibrated signal generator and the "HDSDR" program:

LO Frequency    Measured attenuation at 3x LO frequency    Attenuation in "S" units
 2.1 MHz        21 dB (@  6.3 MHz)                          3.5
 2.5 MHz        21 dB (@  7.5 MHz)                          3.5
 3.0 MHz        21 dB (@  9.0 MHz)                          3.5
 3.7 MHz        21 dB (@ 11.1 MHz)                          3.5
 4.1 MHz        23 dB (@ 12.3 MHz)                          3.8
 4.5 MHz        30 dB (@ 13.5 MHz)                          5
 5.0 MHz        39 dB (@ 15.0 MHz)                          6.5
 5.5 MHz        54 dB (@ 16.5 MHz)                          9
 6.0 MHz        54 dB (@ 18.0 MHz)                          9
 6.5 MHz        66 dB (@ 19.5 MHz)                         11
12.0 MHz        21 dB (@ 36.0 MHz)                          3.5
12.5 MHz        21 dB (@ 37.5 MHz)                          3.5
13.5 MHz        22 dB (@ 40.5 MHz)                          3.7
14.5 MHz        26 dB (@ 43.5 MHz)                          4.3
15.5 MHz        31 dB (@ 46.5 MHz)                          5.2
16.5 MHz        35 dB (@ 49.5 MHz)                          5.8
17.5 MHz        39 dB (@ 52.5 MHz)                          6.5
18.5 MHz        43 dB (@ 55.5 MHz)                          7.2
19.5 MHz        46 dB (@ 58.5 MHz)                          7.7
20.5 MHz        50 dB (@ 61.5 MHz)                          8.3
21.5 MHz        53 dB (@ 64.5 MHz)                          8.8
                      Table 1:  Measured 3rd harmonic response of the RSP1a with the 2-12 and 12-30 MHz filters

Interpretation:

Based on the above data, we can deduce the following:
- Near the bottom of each band-pass filter's range (e.g. an LO in the 2-4 MHz area for the 2-12 MHz filter, or 12-14 MHz for the 12-30 MHz filter) the 3rd harmonic response is suppressed by only about 21 dB - roughly 3.5 "S" units.
- Suppression improves as the 3rd harmonic moves farther beyond the filter's passband, but only reaches 50 dB or more when the harmonic is well clear of it (e.g. above about 16 MHz for the 2-12 MHz filter, or above about 60 MHz for the 12-30 MHz filter).

The recommendation:  If you are using this receiver on just a single HF band - as you would with a WebSDR - it is strongly suggested that it be preceded by a filter intended to pass just the frequencies of interest.

"Band pass" filtering in the RSP1a:

The RSP1a also includes what is called "band pass" filtering that must be used appropriately to minimize aliasing and to maximize dynamic range by rejecting signals outside the bandwidth of interest.  This filtering is implemented in hardware in the "tuner" chip (which precedes the A/D converter) and is actually a low-pass filter present on each of the I/Q channels - but since raw I/Q data can represent "negative" frequencies (with respect to the center) a low-pass filter of half the stated bandwidth will appear to be a band-pass filter.

There are two groups of filters - one being "narrow" and the other being "wide":
- "Narrow":  nominal bandwidths of 200, 300 and 600 kHz
- "Wide":  nominal bandwidths of 1.536, 5, 6, 7 and 8 MHz

Using the RSP1a with the PA3FWM WebSDR:

At the Northern Utah WebSDR, we have been able to make use of the wider-bandwidth capabilities of the RSP1a to provide receive bandwidth on the 16 bit data path up to 768 kHz - significantly wider than the previous 192 kHz limit of the more traditional softrock+sound card approach.  The obvious advantage is that rather than having to cover many of the bands in discrete segments (e.g. three "bands" to cover the U.S. 80 meter band, two "bands" each to cover the U.S. 40 meter and 20 meter bands, etc., with just 192 kHz being available) just one receiver per band may be used to cover the bands through 12 meters in their entirety:  10 meters, being 1.7 MHz wide, would still require several receivers for full-band coverage.

What this means is that not only is the user presented with a continuous tuning spectrum of each band, the "eight band" limit of the WebSDR software itself is less of a hindrance.  For example:  A WebSDR that had covered 160, 80, 40 and 20 - which would have required eight 192 kHz-wide receivers - now can do the same with just four receivers, permitting that same server to now cover four additional bands.

This proposition is discussed in more detail on this page:  Operating a WebSDR receiver at 384 or 768 kHz using the 16 bit signal path - link.

Observations on receiver performance:

What follows are various observations on receiver performance using the RSP1a operating at 768 kHz on a WebSDR system using version 3.07 of the SDRPlay API for Linux.

Raw I/Q format:

The raw I/Q receive signal data from the API is in the form of a 16 bit signed number - in other words, values ranging from -32768 to +32767.  This may come as a surprise in light of the fact that 14 bits are available from the A/D converter within the receiver, but keen observers will note from the specifications that the minimum sample rate of the RSP1a is actually 2 MSPS.  Although the API is a bit opaque, it would seem that for lower rates like 768, 384 and 192 kHz the raw sample rate across the USB is actually around 3072 kSPS, meaning that for every sample at 768 kHz there are four actual A/D samples.  According to oversampling theory, 4x oversampling will effectively yield one more bit of A/D resolution (read about oversampling here:  https://en.wikipedia.org/wiki/Oversampling ) - and since the decimation arithmetic produces fractional values, the apparent data can contain even more than the 15 bits that you might expect, which is why examination of the raw I/Q data will show no missing LSB values.

Lowest possible A/D reading:

With an "ideal" A/D converter, the "no signal" input condition should yield an A/D converter reading of zero.  In reality, this is almost never the case with higher-resolution A/D converters (at least those that are affordable!) as noise from the "system" (e.g. surrounding circuitry, the A/D converter itself) will inevitably "tickle" some of the lowest-order bits.

Considering that the A/D converter is, at heart, "officially" only 12 bits, what might we see if we remove as much signal from the input as possible?  To properly do this, it would be necessary to go onto the circuit board and disconnect/terminate the input of the converter - something that I'm not inclined to do - so instead, what if we minimize the input signal level as much as possible?

To do this, we must:
- Terminate the antenna input (e.g. with a 50 ohm load) rather than connect it to an antenna.
- Set the "gain reduction" to its maximum value of 59.
Doing both of these will remove as much signal as possible from the input of the A/D converter (within the bandwidth specified) - and having done this, the signed 16-bit value that I saw being output by the API was around +/-40, representing a bit more than six bits, at a tuned frequency of 7.2 MHz.
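
As a quick sanity check, that +/-40 reading can be expressed in bits and in dB relative to full scale - a two-line calculation in Python:

import math

peak = 40                             # observed "no signal" I/Q excursion
bits = math.log2(2 * peak)            # the -40..+40 span covers ~6.3 bits
dbfs = 20 * math.log10(peak / 32767)  # about -58 dB below full scale

print(f"{bits:.1f} bits, {dbfs:.1f} dBFS")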

Comment:  This minimum value may vary with the sampling rate and filter bandwidth.  After all, with a given noise power (dBm/Hz), a higher bandwidth will naturally intercept more noise energy.

With the RSP1a set in this state, I took a look at the noise spectra on the resulting waterfall and saw very little in the way of coherent noise, implying that at least a significant portion of the "tickling" of these bits is caused by uncorrelated noise.  This is important, as it means that signals that might be present in this noise floor will be mixed with what is essentially Gaussian (random, white) noise rather than discrete spectral components, reducing the likelihood of "birdies" appearing due to the limited quantization depth - a common problem with too-low signal levels on the input of an 8-bit receiver such as the RTL-SDR dongle.  It's worth noting that many higher-end, high-resolution A/D converters actually include circuitry to add noise to the input for this very reason - although it's unlikely that they would need to add six bits' worth!  Similarly, many 16 bit sound cards - including those of good quality - will often have several bits of LSB noise, and this noise is often not particularly Gaussian!

The source of these six LSBs of noise is hard to divine, however:  Is it from the A/D converter itself, or from the signal processing stages (amplifiers, attenuators, filters) that precede it?  Based on its apparent "Gaussian-ness", I'm guessing that a good chunk of it comes from the circuitry that precedes the A/D converter.

What about less "gain reduction"?

This is a good time to mention one of the peculiarities of the SDRPlay documentation:  It doesn't refer to gain, but rather "gain reduction" - a bit backwards from what one might expect, as one would normally call this "attenuation".  Apparently, the RSP1a is designed to have all amplification enabled and no added attenuation as its "normal" state - and to reduce signal appropriately, attenuation is added.  This philosophy has some interesting implications when the uninitiated attempt to configure an RSP1a on HF!

Considering that the vast majority of the frequency coverage of the RSP1a is at VHF and above, it's not surprising that it would be designed to operate effectively at such frequencies.  At VHF/UHF, the limiting factor of receiver sensitivity - given the lack of environmental noise from nearby electronic devices - would be the receiver system noise figure, which encompasses losses at the antenna, the feedline and the noise contribution of the receiver itself.  According to the RSP1a's specifications, the rated noise figure at, say, 450 MHz can be as low as about 4 dB which, while not "moonbounce-worthy" (where one needs system noise figures of well under 1 dB), really isn't too bad, and is on par with a reasonably good multi-mode amateur transceiver.

On HF, the situation is quite different:  The very nature of the ionosphere and the geomagnetic field makes that RF environment much noisier.  For example, if one has access to an "absolutely quiet" receive location (no man-made noise), one can expect to be able to easily hear the natural background noise with a receiver system noise figure of 15 dB at 30 MHz - and the lower the frequency, the noisier it gets:  At 15 meters, a 20 dB system noise figure will suffice, and at 80 meters even a 37 dB system noise figure will do.

Clearly, one need not attain the lowest-possible noise figure at HF to "hear everything there is to hear" - and to do so might imply the application of too much gain in the signal path, wasting the limited dynamic range of the A/D converter on noise energy below which there would never be any signals.  With this in mind, the designers of the RSP1a seem to have limited the absolute sensitivity at frequencies below 60 MHz - but the question remains:  If the "gain reduction" is reduced, how much noise - presumably from the amplification - appears at the A/D converter?

For this test, the following settings were used:
- The antenna input terminated, as before.
- "LNAstate" set to 0 (no front-end attenuation).
- "Gain reduction" set to its minimum value of 20.
Under these conditions, the minimum A/D value is around 2000-2500.  Again, visual inspection of the noise floor in the waterfall implies that the noise is uncorrelated, leading me to think that in this configuration much of the signal is from the intrinsic noise of the RF amplifier and, possibly, thermal noise of the attenuators/filters.  It was observed that with the bandwidth set to 600 kHz there was "edge" darkening on the waterfall due to roll-off of the noise amplitude, but when the bandwidth was switched to 1536 kHz this edge darkening was absent - further pointing to the noise source as being external to the A/D converter, preceding the mixer/converter chip, and originating in the RF amplifier.

This minimum value changes most dramatically with the "gain reduction" parameter, further showing that this minimum noise level is being established in the signal path prior to the A/D converter:  It will attain the value of about "40" when "gain reduction" is set to 59, regardless of the LNA setting, and hovers around 350-400 when the LNA value is set to 6 and "gain reduction" is set to 20.

The conclusion:

Taken together, these observations suggest that the "no signal" noise level at the A/D converter is established largely by the analog circuitry that precedes it - notably the RF amplifier - rather than by the converter itself, and that this level varies most strongly with the "gain reduction" setting.

Minimum discernible signal:

Many years ago, receiver manufacturers would tout sensitivity as a major selling point, but with the advent of "quieter" amplifying devices (solid state amplifiers, certain types of vacuum tubes) this became less important as it became almost trivial to have a receiver sensitivity that would exceed the noise floor of even the quietest of antenna systems - even poor ones.

Typically, "MDS" (Minimum Discernible Signal) is measured in a certain bandwidth, but to make this measurement easier to replicate by anyone else who wishes to do so we will use DL4YHF's "Spectrum Lab" program with its built-in SINAD test and setting the -6dB bandwidth of the WebSDR software to 2.80 kHz.  Audio was looped from the web browser to Spectrum Lab using "Virtual Audio Cable".

With these settings, the measured 12 dB SINAD sensitivity at 7.2 MHz under various configurations of the LNA - with "Gain Reduction" remaining constant - is as follows.  Note that these values apply to the RSP1a only:

Table 2:
"LNAstate" versus absolute receiver sensitivity and signal path attenuation, measured at 7.2 MHz.
See comments, below, about the "calculated MDS".

LNAstate value   Gain Reduction   Signal level for   Calculated MDS   Relative S-meter
(atten in dB)    value            12 dB SINAD                         reading
0 (0 dB)         30               -111.5 dBm         -128.0 dBm         0 dB
1 (6 dB)         30               -107.0 dBm         -123.5 dBm        -6.5 dB
2 (12 dB)        30               -101.0 dBm         -117.5 dBm       -13 dB
3 (18 dB)        30                -96.0 dBm         -112.5 dBm       -19 dB
4 (37 dB)        30                -87.5 dBm         -104.0 dBm       -37 dB
5 (42 dB)        30                -75.5 dBm          -92.0 dBm       -42 dB
6 (61 dB)        30                -65.0 dBm          -81.5 dBm       -61 dB

Important:    The "LNAstate" attenuation PRECEDES the LNA built into the RSP1a, so attenuation values applied there directly impact absolute sensitivity - see below.

Another test was done with LNAstate = 1, this time checking the SINAD and the relative S-meter reading while "Gain Reduction" was varied:

Table 3:
"Gain Reduction" versus absolute receiver sensitivity and signal path attenuation, measured at 7.2 MHz (LNAstate = 1).
See comments, below, about the "calculated MDS".

Gain Reduction   Signal level for   Calculated MDS   Relative S-meter
value            12 dB SINAD                         reading
20               -107.0 dBm         -123.5 dBm         0 dB
30               -107.0 dBm         -123.5 dBm       -10 dB
40               -106.7 dBm         -123.2 dBm       -20 dB
50               -106.1 dBm         -122.6 dBm       -30 dB
59               -103.1 dBm         -119.6 dBm       -39 dB

Comments:
The above data gives a bit of insight into the way the RSP1a's signal path works.  Looking at the upper table, we can see that the "LNAstate" value has the expected effect on the absolute signal level, as indicated by the relative S-meter reading.  According to the v3.07 API specifications, the values of 0-6 will provide 0, 6, 12, 18, 37, 42 and 61 dB of attenuation, respectively - and this is very close to what was measured.  In each case, we also measure the signal level required for a 12 dB SINAD - which is, in this case, a true reference of absolute receiver sensitivity.  In the lower table, we hold the "LNAstate" value at 1 but vary the "gain reduction" value, inserting up to 39 dB of additional "gain reduction" (a.k.a. attenuation) into the signal path.

If we change the "gain reduction" value between its minimum value of 20 to its maximum of 59 we see only a slight (4 dB) decrease in absolute sensitivity - while the S-meter difference clearly shows that we are changing the signal level.  In comparison, if we change the "LNAstate" value by about the same amount (from "0" to "4") the S-meter changes by about the same amount, but the actual receiver sensitivity changes from a reasonable (for HF use) 12 dB SINAD sensitivity of about -111.5 dBm (a bit below "S-2") to a terrible (!) -87.5 dBm (approximately S-6) - about 24 dB.  Comparing a roughly similar change in total gain when we compare an "LNAstate" of 3 and 6, we see that the S-meter changes by 42 dB and the 12 dB SINAD changes by 31 dB.
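
As a rough cross-check against the noise figure discussion earlier on this page, the "calculated MDS" can be converted to an implied system noise figure via the standard kTB relationship.  This sketch in Python assumes that the calculated MDS approximates the noise floor in the 2.80 kHz measurement bandwidth:

import math

def noise_figure_db(mds_dbm, bw_hz):
    # NF implied by an MDS, given thermal noise of -174 dBm/Hz at 290 K
    return mds_dbm + 174 - 10 * math.log10(bw_hz)

# Using the calculated MDS of -128.0 dBm at LNAstate 0, 2.80 kHz bandwidth:
print(round(noise_figure_db(-128.0, 2800), 1))  # -> ~11.5 dB

Under that assumption, the resulting figure of roughly 11.5 dB is comfortably below the 15 dB noise figure cited earlier as adequate at 30 MHz - consistent with the recommendation below that only the lower "LNAstate" values are usable on HF.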

What this tells us is something that isn't readily divined from the block diagram of the RSP1a:  The attenuation associated with the "LNAstate" value precedes the LNA in the signal path, while the "Gain Reduction" attenuation follows it.  This supposition is evidenced by the fact that changing the "gain reduction" parameter - which acts after the built-in LNA - barely has an effect on the actual sensitivity of the receiver, but "LNAstate" does!  One can also infer this by reading between the lines of the RSP1a's "General Specifications" document in the section where the gain, noise figure and "IIP3" (3rd order intercept point) are tabulated - and the fact that the IIP3 values shift by the same amount as the relative S-meter reading would seem to verify this.

Appropriately setting the receiver gain parameters:

Tried the "rsp_tcp" driver and found it to be terrible?  Here's why:

For anyone who has tried to use an RSP1a "out of the box" on a Linux computer, the result was likely that it performed very badly with the default gain settings found in the original "rsp_tcp" driver (see the Git from SDRPlay here) - and for good reason:  The "LNAstate" value - which is not configurable from the command line - is set to 4, and the "Gain Reduction" value (also not settable from the command line) is set to 40.  These settings result in a uselessly-deaf receiver, as one can infer from Table 2, above.  If you wish to use "rsp_tcp" for ANYTHING, make certain that you obtain a version/fork that allows configuration of these parameters.

If you get nothing else from this document, PLEASE NOTE THE FOLLOWING:

For almost ANY instance where HF reception is anticipated - with either no system RF signal gain or a moderate amount (10-15 dB) between the antenna and receiver - the ONLY usable values for "LNAstate" are likely to be 0 through 3, with 0 or 1 being appropriate for the higher HF bands (20-10 meters) and a value of 3 possibly being appropriate for the lowest HF bands (160 and 80 meters) in situations where the noise floor of the local receive environment is likely to be fairly high.


A useful tool for ANY operator of an SDR is the ability to check the "high water mark" of the raw A/D conversion:  Without this, it is very difficult to appropriately distribute gain preceding the A/D converter.  ALL SDR software should include the ability to see - as much as practical - the raw I/Q data to determine appropriate system gain distribution.

Fortunately, the author of the PA3FWM WebSDR software knew this.  Not always known by WebSDR operators is the ability to check the A/D values of their receivers:  This is done by going to the "status" web page of the WebSDR, which may be found at this URL:  http://websdr.ip.addr:8901/~~status where "websdr.ip.addr:8901" is the IP address/host name of the WebSDR in question.  This page will give you something that looks like this:
name     overruns readb.  blocks      timestamp       users     10sec_ADmin/max  ever_ADmin/max  source
80M 1 8 5657577 1636589970.412967 6 -9452 8767 -32613 32411 0,$plug:f_loop0_out
60M 0 20 3926416 1636589970.408832 2 -512 508 -512 508 5,!rtlsdr 127.0.0.2:65144
40M 1 2 6837680 1636589970.399954 24 -21440 20772 -32708 32693 0,$hw:26,0,0
The columns are as follows:
- name:  The name of the band/receiver.
- overruns:  A count of buffer overruns.
- readb., blocks:  Internal read/buffer statistics.
- timestamp:  The Unix time at which the status was generated.
- users:  The number of users currently connected to that receiver.
- 10sec_ADmin/max:  The minimum and maximum raw A/D (I/Q) values seen over the past 10 seconds.
- ever_ADmin/max:  The minimum and maximum raw A/D values seen since the server was started.
- source:  The device/source from which that receiver gets its signal.
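
If you wish to monitor these values automatically, the page is easy to scrape.  Below is a minimal sketch in Python that parses lines in the format shown above and reports how much headroom remains before the raw A/D values hit full scale (+/-32767 on the 16 bit path).  The URL is the placeholder from above - substitute your own server - and the column layout is assumed to match the example:

import math
import urllib.request

URL = "http://websdr.ip.addr:8901/~~status"  # substitute your own server

def headroom_db(ad_min, ad_max):
    # dB between the largest observed A/D excursion and full scale
    peak = max(abs(ad_min), abs(ad_max))
    return 20 * math.log10(32767 / peak) if peak else float("inf")

with urllib.request.urlopen(URL) as resp:
    for line in resp.read().decode().splitlines():
        fields = line.split()
        if len(fields) < 10 or fields[0] == "name":
            continue  # skip the header row and any malformed lines
        ever_min, ever_max = int(fields[8]), int(fields[9])
        print(f"{fields[0]}: {headroom_db(ever_min, ever_max):.1f} dB headroom")
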
When setting the gain parameters for the RSP1a, it's recommended that you do this under two different conditions for each band:
- During "quiet" conditions (e.g. daytime on the lower bands) - verifying that band noise is still exercising the lower-order bits of the A/D converter.
- During "busy" conditions (e.g. nighttime and contest weekends) - verifying that the strongest signals do not push the raw A/D values near full scale (+/-32767).

Conclusion:

While setting the gain of the receiver may seem tricky, it's quite manageable with the available tools - notably the "~~status" page from the WebSDR server.  This adjustment is an iterative process, and it's unlikely that the first attempt at setting things up will be the final configuration as observations are made over time during both low and high signal conditions.

About the "sdrplayalsa" program:

This is a driver that interfaces with the SDRPlay API, allowing configuration of the receiver in terms of frequency and sample rate.  It also allows the configuration of a built-in AGC that may be used to regulate the signal levels applied to the A/D converter, allowing operation under widely-varying signal conditions.  See the page "Getting the raw I/Q data from the receiver" - link - for more information.



More to follow.



Additional information:
 Back to the Northern Utah WebSDR landing page