Northern Utah WebSDR


Using "KA9Q-Radio" with the SDRPlay receivers


Preliminary

Important:

This document represents an effort on
my part to understand the operation of "ka9q-radio" and is not intended to be authoritative.

As such, this is a work in progress and will certainly contain many "blank spots" and errors.  What it is intended to do is to help the new user along and start to get the "feel" of how the pieces go together.

Please read EVERY document in the /docs directory of the "ka9q-radio" git - and refer back when you see something you don't understand!


For more information about ka9q-radio, go here:

Using KA9Q-Radio - link

This page has much more information about the internal workings of ka9q-radio and other examples of its use.




IMPORTANT - Observe gain settings for the RSP or else you may end up with a "deaf" receiver:


Be sure to properly set the "lna-state" and "gain-reduction" values in the configuration file.  Suggested settings:
  • For the lower HF bands (160-30 meters):  Set lna-state to a value of 2 and a gain-reduction value of 30-45, around 40 being a good starting place
  • For the upper HF bands (20-10 meters):  Set lna-state to a value of 1 and a gain-reduction value of 30-45, around 40 being a good value.
    • If you have a large amount of attenuation in your feedline (e.g. splitters, lossy cable) an lna-state value of 0 may be appropriate.
Note that a gain-reduction value below 30 does not result in improved performance and makes A/D converter overloads more likely.
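The rules of thumb above can be captured in a small helper.  This is only a sketch of the suggestions in this note - the function name and the band split are my own, and the numbers should be tuned for your own antenna and feedline losses:

```python
def suggested_rsp_gain(band_meters):
    """Starting-point RSP gain settings per the rules of thumb above.

    Hypothetical helper (not part of ka9q-radio): 160-30 meters gets
    lna-state 2, 20-10 meters gets lna-state 1, and 40 is the suggested
    starting gain-reduction in both cases.
    """
    if band_meters >= 30:                              # lower HF: 160-30 m
        return {"lna-state": 2, "gain-reduction": 40}
    return {"lna-state": 1, "gain-reduction": 40}      # upper HF: 20-10 m
```

For example, `suggested_rsp_gain(160)` returns the lower-HF settings, while `suggested_rsp_gain(20)` returns the upper-HF settings.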






What is "KA9Q-Radio" - and why is it different from other SDR programs/suites?


One of the advantages of SDRs is the capability of receiving multiple signals at the same time - but this is typically exploited only in a limited fashion.  The limit of this capability is a combination of the bandwidth of the acquisition device (e.g. how much spectrum the device is inhaling) and the processing capability of the host computer.  Usually it's the latter that has limited the usefulness of many wide-bandwidth SDRs:  It is typical for each "instance" of a receiver used by a user to have to process data from the high-bandwidth acquisition stream - which may be several megasamples per second.  Because each per-user instance requires so much processing, this can make a multi-user receiver system "un-scalable" - that is, each user requires a significant amount of CPU power.

In 2006, an article was published  1  that described what might be considered to be a mathematical "shortcut" when it comes to processing large amounts of data.  Without going into detail, the "traditional" method for producing a single virtual receivers is to crunch the full bandwidth data to yield - at least in an amateur radio application - only a narrow bandwidth - perhaps a few kHz for an SSB or FM signal or even a few 10s of kHz for a waterfall - and if multiple receivers are required, it's necessary to "re-crunch" the large amount of raw input data for each, individual receiver even through that mathematical operation for each receiver is expensive in CPU time and nearly identical.   A far more efficient method - potentially one that is many hundreds of times more efficient, depending on how much "economy of scale" was done - would be to do the "expensive" number crunching just once and then use that already-processed data to synthesize each, individual receiver - and it is this method, generally referred to "Overlap and Save" - that is used by KA9Q-Radio.
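To make the idea concrete, here is a toy, dependency-free sketch of overlap-save filtering (function names are mine).  A real implementation such as ka9q-radio does the circular-convolution step with FFTs via FFTW - and shares one forward FFT among a whole bank of such filters, which is where the "economy of scale" comes from - but the block structure is the same:

```python
def direct_convolve(x, h):
    """Brute-force linear convolution - the 'traditional' reference method."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def circular_convolve(block, h):
    """Length-N circular convolution - equivalent to IFFT(FFT(block)*FFT(h)),
    which is the step an FFT-based implementation performs."""
    N = len(block)
    out = [0.0] * N
    for j, hj in enumerate(h):
        for n in range(N):
            out[n] += block[(n - j) % N] * hj
    return out

def overlap_save(x, h, N=8):
    """Filter x with h block-by-block:  keep M-1 'saved' samples between
    blocks and discard the M-1 circularly-wrapped outputs of each block."""
    M = len(h)
    step = N - (M - 1)                  # new samples consumed per block
    buf = [0.0] * (M - 1) + list(x)     # 'saved' samples from a silent past
    while (len(buf) - (M - 1)) % step:  # zero-pad the final partial block
        buf.append(0.0)
    y = []
    for start in range(0, len(buf) - (M - 1), step):
        cc = circular_convolve(buf[start:start + N], h)
        y.extend(cc[M - 1:])            # first M-1 outputs are corrupted
    return y[:len(x)]
```

The payoff in a multi-receiver system is that the expensive transform of each input block is computed once, while each virtual receiver only multiplies by its own (small) filter response and inverse-transforms its own (narrow) output.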


As an example of the "former" method:  If the "csdr"  2  utility is used on, say, an RTL-SDR with 2 MHz of bandwidth, a Raspberry Pi4 is capable of handling only 4-8 simultaneous receivers before all available CPU cycles are used:  This is one of the reasons why the open-source "OpenWebRX" server isn't particularly scalable to a large number (e.g. dozens) of users.  Conversely, the PA3FWM WebSDR server (which is closed source) likely (unconfirmed!) uses the same techniques as KA9Q-Radio - which are noted in Footnote #1 - to allow hundreds of users on the same hardware platform as an OpenWebRX server that may only be able to handle a half-dozen or so.

Using the aforementioned "Overlap and Save" method in reference #1, a Raspberry Pi4 running ka9q-radio can decode every FM channel in the same 2 MHz bandwidth simultaneously - with plenty of processing power to spare!

KA9Q-radio is open-source and it may be found here:  https://github.com/ka9q/ka9q-radio/ - and the instructions for compiling it along with a list of dependencies may be found here:  https://github.com/ka9q/ka9q-radio/blob/main/INSTALL.txt

IMPORTANT - READ THIS BEFORE PROCEEDING:

NOTE:  You MUST have the SDRPlay API installed to compile and use an SDRPlay receiver!

The API is available from https://www.sdrplay.com/downloads/ - then select "RSP1A" and "Linux/x86 Ubuntu", click "Next", select "API", check the box on the right edge of the line for "API 3.07", and then click on the blue bar above this that says "Select one download from the list below and then click here".


As of the time of this writing (20230612) "sdrplayd.c" will not compile unless the following changes are made:

Hardware requirements:

This article describes using the SDRPlay RSP1a SDR, which can operate at sample rates up to a bit more than 10 Msps - but only to about 6.1 Msps with the optimal 14-bit analog-to-digital resolution.  What is described below should work with other SDRPlay receiver hardware such as the RSPduo, RSPdx and the older RSP1, RSP2 and RSP2pro, but only the RSP1a has been tried.

USB port usage limitations with SDRPlay hardware:

By default, the SDRPlay uses "isochronous" mode for high-rate data transfer on USB.  This mode, which "reserves" bandwidth, is quite efficient, but there are a few caveats in its use:
As noted elsewhere in this document, ka9q-radio will also work with other popular SDR hardware, but the discussion of the use of these other devices is not covered here although the aspects of configuration common throughout ka9q-radio will apply.

Additional installation instructions:


Also, be sure to read this file:  https://github.com/ka9q/ka9q-radio/blob/main/docs/notes.md as it contains information about configuring multicast and the local DNS needed to resolve the hostnames.

After installing and building ka9q-radio, run the following commands (sudo may be required):

mkdir /var/lib/ka9q-radio                          Note:  This may fail if it already exists
chown <username> /var/lib/ka9q-radio               Substitute the user name under which you are running "ka9q-radio"

It may be worth verifying that /var/lib/ka9q-radio/wisdom is "owned" by the user running "ka9q-radio"

Also make sure that this directory - and the wisdom file - belong to the same group under which you are running "ka9q-radio", using "chgrp".  If you see an error related to the wisdom file when starting "radiod", it probably has to do with access permissions on that file.

FFT "wisdom" file:

Once you have installed ka9q-radio, execute the following to optimize the operation of the FFTW3 algorithm.  The data that this produces - the "wisdom" file - is specific to each computer, and running this optimizes performance on the hardware that you are using.

time fftwf-wisdom -v -T 1 -o wisdom rof500000 cof36480 cob1920 cob1200 cob960 cob800 cob600 cob480 cob320 cob300 cob200 cob160

NOTE:  This may take many minutes - or even hours to run, depending on your computer hardware.

Once this is done, take the resulting file - "wisdom", as named by the "-o" option above - and place it in /etc/fftw - but back up the previous version that was there!

For more information about this, see:  https://github.com/ka9q/ka9q-radio/blob/main/docs/FFTW3.md

Also recommended:

It is recommended that you also install "Avahi" for local DNS name resolution of the multicast streams using the name rather than the IP address - one method being to do:  snap install avahi

After this is installed, start it by typing:   sudo systemctl start avahi-daemon.service - and then verify that it is running by typing:  
sudo systemctl status avahi-daemon.service

The use of this will be discussed later.

To Do:
 See if there is a way to install Avahi using "apt install" rather than Snap:  it is desirable to uninstall Snap, as it can operate in the background and "break" an already-working system - particularly after a reboot.


IMPORTANT - Back up your .conf files!


As of the time of this writing (June, 2023) the default configuration files WILL BE OVERWRITTEN every time you do an update/make of ka9q-radio.  Both the files in the home "~/ka9q-radio" directory and those in "/etc/radio" can be overwritten.

What this means is that if you modify the original configuration files (e.g. "/etc/radio/sdrplayd.conf", "/etc/radio/radiod@hf.conf" - and those in the "ka9q-radio" directory) you will LOSE those modifications when you do an update.

When you make changes to ANY configuration file, be sure to save a copy elsewhere, and be prepared to restore it after you do updates.

It is on the list of future updates to change this behavior.




The "magic" of "ka9q-radio":  One input/multiple outputs:

In the simple case, let us presume that you have a single SDR of some sort: KA9Q-radio supports several receivers, including the RTL-SDR, HackRF, AirspyHF, Funcube Dongle, SDRPlay devices and the RX-888 - and more may be added in the future as this is open-source.

Let us suppose that you wish to receive audio from, say, 23 frequencies simultaneously for the purposes of propagation monitoring:  all WSPR frequencies on the LF, MF and HF bands (14 frequencies - including two each on 60 and 80 meters), all WWV signals (2.5, 5, 10, 15, 20 and 25 MHz), and all CHU frequencies (3.330, 7.850 and 14.670 MHz).  Conventionally, one might configure 23 audio channels in ALSA/Pipewire/Pulse, and while this is possible, it gets cumbersome:  What if you also wanted to monitor the FT8 and FT4 channels as well - adding at least another 20 audio streams?

A usable work-around would be to convey these demodulated audio streams via multicast.  This means of propagating data via UDP dates back to at least the late 1980s and is widely used to convey video streams over LANs:  In this case, we could use this transport method to loop data to the same computer to which the receiver is connected - but we could also send this same data to one or more computers on the same LAN, each one taking the data that is relevant to its needs.

While this means of transport is elegant in its simplicity, it does have a few caveats:


Using KA9Q-radio:


IMPORTANT:  This document - as is KA9Q-radio - is a work in progress and will evolve.  I (the author of this document) am learning the various aspects of this utility as well and errors, misunderstandings and omissions are surely going to appear.  Please consider this document to be a first step in getting acquainted with this software.

Comment:  Various pieces of KA9Q-radio may be configured to start/operate as a service - but at the time of this writing, I could not get this to work reliably; it is a work in progress.

Example:  A multi-channel receiver using the SDRPlay RSP1a

The RSP1a is a fairly low-cost (approx. US$125) USB-interfaced SDR receiver that is capable of tuning from audio through the low microwave frequencies.  This SDR - like many others - uses a frequency converter to translate a passband of frequencies down to baseband, where low-pass filtering may be applied (effecting the "band pass" response) before being digitized.  The raw sample rate and bit depth of the RSP1a is as follows:
Sample rates lower than 2 Msps are not supported directly by the hardware's A/D converter, but are instead supported in software by decimation - that is, the A/D converter is actually run at some power-of-two multiple (2, 4, 8, 16, 32, etc.) of the desired output sample rate to get above 2 Msps, and the result is then filtered and down-sampled.  For example, if you needed an output rate of 192 kHz, it would run at 192 * 16 = 3.072 Msps internally and down-sample from there.  For our purposes, you do not need to configure this down-sampling as it is handled by the API, but it is worth knowing, as the USB interface and the host computer may be processing far more data than you might otherwise think.
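The power-of-two rule just described can be sketched as follows.  This is a hypothetical helper for illustration only - the SDRPlay API performs this calculation internally:

```python
def rsp_internal_rate(target_sps, adc_min_sps=2_000_000):
    """Return (decimation factor, internal A/D rate):  the smallest
    power-of-two multiple of the requested rate at or above 2 Msps."""
    factor = 1
    while target_sps * factor < adc_min_sps:
        factor *= 2
    return factor, target_sps * factor

# e.g. a requested 192 kHz output runs the A/D at 192k * 16 = 3.072 Msps,
# while a requested 600 kHz output runs it at 600k * 4 = 2.4 Msps.
```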

Configuring the RSP1a:

"Config" files are used extensively by ka9q-radio and this is no exception.  As an example, let us consider an absolute minimum configuration file that we will call sdrplayd.conf

[rsp1a_test]
description = RSP1a
serial = 2271031F98
samprate = 600000        ; default is 2M

lna-state = 3
if-att = 40
#if-agc = yes
#iface = eth0               ; force primary interface, avoid wifi
iface = lo               ; default loopback interface, avoid wifi
status = rsp1a-status.local
data = rsp1a-pcm.local
frequency = 5100000       ; locks tuner when manually set



In the file above we have defined the name of our receiver configuration as [rsp1a_test].   This is actually a section header with the name "rsp1a_test" and if we wish, we can have several such sections within our .conf file with different configurations/frequencies.  In other words, if we have more than one SDRPlay receiver - or we wish to have different frequencies and sample rates - we can create several such sections within the file.

The fields shown above are as follows:

A complete list of parameters available in the .conf file for use with the SDRPlay receivers is located near the end of this page.





Starting the receiver:

For this example we will place the sdrplayd.conf file described above in the "ka9q-radio" directory.

From the directory containing the compiled binaries, execute the following line:

 ./sdrplayd rsp1a_test -f ~/ka9q-radio/sdrplayd.conf &

This will start the "sdrplayd" binary using the "rsp1a_test" configuration contained within the file "sdrplayd.conf" - the "&" at the end will cause this to run in the background:  Use "killall sdrplayd" to stop the process.

If you get an error:

Pay close attention to the errors that you might get.  Typical causes of errors are:

Starting sdrplayd as a service:

The above shows "sdrplayd" running as a program.

Alternatively, "sdrplayd" can be run as a service - but with the default ".service" file, the configuration file must be located in /etc/radio.

It may be started as a service with the line:

sudo systemctl start sdrplayd@rsp1a_test.service

To determine success, do the following:  
sudo systemctl status sdrplayd@rsp1a_test.service - or you could also look at "top" or "htop" to verify that it is running.

To terminate the service, do:

sudo systemctl stop sdrplayd@rsp1a_test.service




Once sdrplayd is running:

With sdrplayd running, we need to set up our receivers, which are invoked using "radiod" along with a configuration.  "radiod" is the heart of ka9q-radio as it does the work of processing the massive amount of data coming in from our receiver hardware.  Even on a modest processor it is capable of simultaneously demodulating hundreds of individual receive channels in a mix of frequencies, bandwidths, sample rates and modes.

As with the receive hardware itself, a configuration file is used to set things up, and here we will use "radiod@sdrplay.conf".  (Notice that this is named "radiod@sdrplay.conf", not "sdrplayd.conf".)

Consider the following:

./radiod radiod@sdrplay.conf &


Note that the name of the above file is "radiod@sdrplay.conf" - not "sdrplayd.conf"

The above will invoke "radiod" using the configuration file "radiod@sdrplay.conf", and if we peer into this we'll see which frequencies are configured - and the various receive modes.  Multiple definitions of receive frequencies and modes may be included in various named sections.  For more detailed information, see:  https://github.com/ka9q/ka9q-radio/blob/main/docs/ka9q-radio.md


Our example "radiod@sdrplay.conf" file:

Our "radiod@sdrplay.conf" file used by "radiod" is used to define the virtual receiver(s) that we might want.  Let's take a look at a minimum configuration:

The first - and required - section is the "global" section which contains the following:

[global]
overlap = 5
blocktime = 50
input = rsp1a-status.local
samprate = 12000
mode = usb
status = hf.local
fft-threads = 4


Breaking this down:
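As I understand the blocking (verify against docs/ka9q-radio.md, which is authoritative), "blocktime" sets how many milliseconds of new input samples are gathered per processing block, and "overlap" sets what fraction of each FFT input block is repeated old data - e.g. overlap = 5 means 1/5 of the block.  A sketch of the arithmetic, using the 600 kHz front-end sample rate from the earlier sdrplayd.conf:

```python
def fft_block_params(samprate_sps, blocktime_ms, overlap):
    """New samples per block, and the FFT length when 1/overlap of each
    FFT input block is saved (old) data.  This is my reading of the
    ka9q-radio docs - check docs/ka9q-radio.md before relying on it."""
    new_per_block = samprate_sps * blocktime_ms // 1000
    fft_len = new_per_block * overlap // (overlap - 1)
    return new_per_block, fft_len

# 600 ksps front end, blocktime = 50 ms, overlap = 5:
# 30000 new samples per block, and a 37500-point forward FFT
# (of which 7500 points are repeated from the previous block).
```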

Let's take a look at the sub-sections located after the [global] section:

[WWV5]
# 5 MHz WWV
data = wwv5-pcm.local
mode = am
freq = "5000k"

[SWBC60]
# 60 meter shortwave broadcasters
data = swbc60-pcm.local
mode = am
freq = "5025k 5050k 5130k"
samprate = 12000

In the [WWV5] section we define an output multicast stream that contains the 5 MHz WWV frequency using AM; it will be carried on a multicast stream with the name "wwv5-pcm.local" and the "ssrc" (described below) will be 5000.  
Since a "default" sample rate of 12 kHz is defined in the [global] section, that is what will be used.  

In the [SWBC60] section we define three additional virtual receivers on 5025, 5050 and 5130 kHz using AM, and we are specifically defining a 12 kHz sample rate.  These three virtual receivers will have their audio carried on a multicast stream with the host name "swbc60-pcm.local".

In both sections we see that we can insert comments on lines beginning with "#" - and these can also be used to comment out an existing line - say, if we want to make configuration changes, but don't want to delete the old one.

As mentioned above, a single multicast stream can carry multiple audio channels, using the SSRC to identify the sub-stream belonging to a particular receiver.  By default, the SSRC will be the frequency with any non-numeric characters removed.  What this means is that within the "wwv5-pcm.local" multicast stream, SSRC 5000 carries the audio from the 5 MHz WWV signal, and within the "swbc60-pcm.local" stream, SSRC 5025 carries the audio from the 5025 kHz signal.  As defined in the [global] section, the default audio sample rate is 12 kHz for each of the receivers.
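The "non-numeric characters removed" rule for the default SSRC can be sketched as a one-liner - a hypothetical helper mirroring the stated rule, not code from ka9q-radio:

```python
def default_ssrc(freq_str):
    """Default SSRC per the rule above:  the frequency field with all
    non-digit characters (units, decimal points, etc.) stripped."""
    return int("".join(ch for ch in freq_str if ch.isdigit()))

# default_ssrc("5000k") -> 5000; default_ssrc("5025k") -> 5025
```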


Note:



"fftwf_export_wisdom_to_filename" errors when starting radiod


If you see "fftwf_export_wisdom_to_filename" errors produced by "radiod" when it is starting, that does not mean that it won't work properly, but it likely will not be as CPU-efficient as it could be, as the FFT "wisdom" needed to optimize the algorithm is missing.  To help resolve this - and potentially reduce CPU utilization - do the following:  sudo chown <username> /var/lib/ka9q-radio/wisdom - substituting for <username> the name of the user under which you are running ka9q-radio.




Getting audio output (to speakers):

Being able to "hear" the demodulated audio is a quick and easy way to verify that everything is working - even if this isn't likely to be the main purpose to which ka9q-radio would be put.  At this point it is recommended that you place a .wav file on your test system and then use "aplay" to test the speaker:  If your filename were "music.wav", simply do:  aplay music.wav - and if all goes well, you should hear it play.  If not, read the next section, below.

Having verified via "top" or "htop" that sdrplayd and radiod are running, you can test via a local speaker if you like.  Note that unless you need an analog audio output of some sort, it is not even necessary to have any audio playback devices on your system - but it's a nice tool to have.  If your computer has a sound card, connect a speaker to it and run "aplay -l":  You should see a list of available devices such as the following:

**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC662 rev3 Analog [ALC662 rev3 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

You may see other devices, particularly if you have an HDMI or similar monitor that can convey audio - but the above is typical of an analog audio output device on a motherboard.

What to do if you see "No audio device" when you try to play your local audio file (e.g. "music.wav")

If you do not see any available sound cards - but you know that one is present (a plug-in card, or on the motherboard) - it may be that you have been the victim of a quirk in recent versions of Linux (e.g. Ubuntu 22.04) where parts of the sound system seem to "go away" at random - likely after a reboot/update.  To repair this, try the procedure at the very top of this page:  http://www.sdrutah.org/info/high_rate_loopback_websdr.html to re-load/restart the audio devices once again.

If you get an error like "play WARN alsa: can't encode 0-bit Unknown or not applicable"

This is a vexing problem to many trying to use their sound cards on recent versions of Ubuntu, and it seems to be related to pulseaudio and/or pipewire.  If you get this error - and audio does not play - try disabling pulseaudio:
systemctl --user stop pulseaudio.socket
systemctl --user stop pulseaudio.service
After doing the above, try playing the audio file again:  It may work now, even if you still get the "can't encode 0-bit Unknown" error.

If you get an error related to pipewire and a report that no device is available - despite "aplay -l" showing devices, you may need to uninstall pipewire.  This is done using the following commands:
systemctl --user unmask pulseaudio
systemctl --user --now disable pipewire-media-session.service
systemctl --user --now disable pipewire pipewire-pulse
systemctl --user --now enable pulseaudio.service pulseaudio.socket
sudo apt remove pipewire-audio-client-libraries pipewire

Once you have a working audio path

Once you have verified that an audio device is present and will play the audio file that you have put on the computer, consider the following command:

monitor wwv5-pcm.local

We know from radiod's output to the screen when it started - and from the [WWV5] section of "radiod@sdrplay.conf" - that "wwv5-pcm.local" is the name of the multicast stream carrying the WWV receiver, and upon this invocation we will see a screen like this:

KA9Q Multicast Audio Monitor: wwv5-pcm.local
                                                                 ------- Activity -------- Play
  dB Pan     SSRC  Tone Notch ID                                 Total   Current      Idle Queue
  +0  25     5000             WWV                                    5         5             104

If all goes right, you should hear audio from the speaker containing WWV.  If you invoked "monitor" with the other multicast stream - "swbc60-pcm.local" - you would see three frequencies and hear all of them at the same time.  To control these, press the "h" key to get a list of options.  The most relevant for the current discussion are the up/down arrow keys:  you can move up/down between receivers to select the one on which the controls (keys) will operate.

When you first do this, it's recommended that you hit the "M" key (uppercase) to mute ALL receivers - and then use the up/down arrow to select which one(s) you wish to hear and then hit the "u" (lowercase) key to unmute that receiver.  You can then use the plus and minus keys to adjust the volume, left/right arrows to pan (move between speakers), etc.

To exit "monitor" hit "CTRL-C".




Getting a specific audio source from a stream

While it is possible to use other tools to extract audio from a multicast stream (To Do:  Discuss other methods in this or another document), the "pcmcat" tool allows you to do so.  Taking the example of the WWV receivers again, consider the following line:


./pcmcat wwv5-pcm.local -s 5000 | aplay -r 12000 -c 1 -f s16_le

If all goes well (e.g. sdrplayd and radiod are running) you will hear the 5 MHz WWV audio.

Similarly, if we wanted to hear a shortwave broadcaster on 5050 kHz we would use this line:

./pcmcat swbc60-pcm.local -s 5050 | aplay -r 12000 -c 1 -f s16_le

In the above we see the multicast stream name for the WWV receivers - and following the "-s" parameter we see the "ssrc" - in this case "5000", representing 5 MHz (or another number if we had used the "ssrc" parameter when we defined the receiver).  Following this we see that we have piped - via STDOUT - the audio to "aplay", specifying a sample rate of 12 kHz (-r 12000), a monaural source (-c 1) and the format of our audio (-f s16_le), which is 16-bit signed, little-endian.
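Because what comes out of "pcmcat" is just raw 16-bit signed little-endian mono samples, any program can consume it - not only "aplay" or SOX.  A minimal sketch of decoding such a buffer, using only the Python standard library (the function name is mine):

```python
import struct

def decode_pcm_s16le(raw: bytes):
    """Unpack raw 16-bit signed little-endian PCM into a list of ints."""
    count = len(raw) // 2
    return list(struct.unpack("<%dh" % count, raw[:count * 2]))

# One second of 12 kHz mono audio is 12000 samples = 24000 bytes.
```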

If desired, you could pipe the raw audio somewhere else - perhaps to a file - or use SOX to write it to a .wav file instead, as in this example:

 ./pcmcat wwv5-pcm.local -s 5000 | sox -t raw -r 12000 -b 16 -c 1 -L -e signed-integer - out.wav

This will record the 5 MHz WWV receiver, via "STDOUT" from pcmcat to the file "out.wav".

To make it record for 2 minutes, the following will work:

timeout 120  ./pcmcat wwv5-pcm.local -s 5000 | sox -t raw -r 12000 -b 16 -c 1 -L -e signed-integer - out.wav

Within ka9q-radio is another utility called "pcmrecord" that can record every virtual receiver within a group defined in the radiod.conf file simultaneously (e.g. all six WWV signals could be recorded at once.)  For more about "pcmrecord", see the page  
ka9q-radio command overview - Link.



Mode definitions

For more details about the "mode.conf" file and the parameters within it, see the page:  Configuration files in KA9Q-Radio - link.

In the configuration file for "radiod" (e.g. "radiod@hf.conf") we see the use of modes such as "am" and "usb" - but you might wonder how these are defined.  The answer lies in the file "modes.conf", where we see - in each individual section - how each mode is defined in terms of sample rate, actual method of demodulation, filter bandwidth, etc.

While many of the common "modes" are included in "modes.conf", you can define and add your own mode:  Perhaps you need an upper-sideband receiver centered on 1500 Hz that is 400 Hz wide for WSPR - you could do that!

As an example of how these are defined, consider the [am] section of "modes.conf":

[am]
demod = linear
samprate = 12000
low = -5000
high = 5000
recovery-rate = 50
hang-time = 0
envelope = yes

Now, consider the [cwu] (upper-sideband CW) section:

[cwu]
demod = linear
samprate = 12000
low =  -200
high = +200
shift = +500
hang-time = 0.2

Now, a "non-linear" mode:

[fm]
demod = fm
samprate = 24000
low =  -8000
high = +8000
deemph-tc = 0
deemph-gain = 0
threshold-extend = no ; don't interfere with packet, digital, etc

Contrary to common parlance, the mode amateurs use on VHF and UHF and call "FM" is really phase modulation - but "fm" is operationally identical to "pm" if, during transmit, the audio is pre-emphasized (boosted) at a rate of 6dB/octave and then de-emphasized (filtered) at the same rate on receive.  "True" FM is used for digital modulation such as that used for D-Star, C4FM, etc.
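As an illustration of the de-emphasis half of that arrangement, a single-pole low-pass filter rolls off at 6 dB/octave above its corner frequency, which is what turns a "true FM" detector output into the "PM"-style response amateurs expect.  This sketch is mine - not code from ka9q-radio - and the time constant is purely illustrative:

```python
def deemphasize(samples, samprate_sps, tau_s=530e-6):
    """Single-pole IIR low-pass:  -6 dB/octave above f = 1/(2*pi*tau).
    tau_s is an illustrative value, not a ka9q-radio default."""
    dt = 1.0 / samprate_sps
    alpha = dt / (tau_s + dt)      # smoothing coefficient
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)       # output follows input slowly: HF attenuated
        out.append(y)
    return out
```

Fed with demodulated FM audio, low frequencies pass essentially unchanged while high frequencies are attenuated, undoing the transmitter's 6 dB/octave pre-emphasis.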

Finally, "pm" - the receive mode that is appropriate for typical amateur "FM" operation on the VHF/UHF bands:

[pm]
demod = fm
samprate = 24000
low =  -8000
high = +8000
squelchtail = 0
threshold-extend = yes ; PM assumes voice mode, so enable this

For this mode, the "squelchtail" is specified in the number of "blocks" (defined in the main receiver definition - typically 20 milliseconds each), so a value of "0" means no squelch tail.  Here, "threshold-extend" is turned on (OK for voice - not recommended for any sort of data) and for +/- 5 kHz deviation, the same +/-8000 Hz (16 kHz) bandwidth is used.  It's worth noting that de-emphasis when using "demod = fm" is on by default, making it appropriate for the "pm" mode used by amateurs on the VHF and UHF bands.

For more information about the parameters found in "modes.conf" and other files refer to:




List of parameters used in the .conf file for "sdrplayd"

The number in parentheses after the parameter name is the default value.  Those with a value of (-1) are generally mandatory, except as noted.  Their values are typically specified as <name of parameter> = <value>, as in:

frequency = 5100000
lna-state = 2

Parameters specific to SDRPlay hardware:
Parameters related to the configuration of ka9q-radio (not related to specific radio hardware):

Those parameters with defaults of (null) or (-1) are generally required except as noted.





References:


1 - Mark Borgerding, “Turning Overlap-Save into a Multiband Mixing, Downsampling Filter Bank”, IEEE Signal Processing Magazine, March 2006. https://www.iro.umontreal.ca/~mignotte/IFT3205/Documents/TipsAndTricks/MultibandFilterbank.pdf

2 - The csdr tools by HA7ILM may be found here:  https://github.com/ha7ilm/csdr.  This represents a "toolbox" of signal processing engines that can do things like filter, decimate, shift, demodulate, convert formats, provide AGC and more.  These tools may be useful for additional filtering of signals.

3 - IGMP (Internet Group Management Protocol) is used to set up "local" groups of hosts.  In the context of this article, a "group" might be a number of hosts that require multicast data from a source on specific portions of the network, but not everywhere.  The ability to compartmentalize where multicast data is sent can prevent it from flooding to other devices on the network.  See the article:  https://en.wikipedia.org/wiki/Internet_Group_Management_Protocol  

4 - The PA3FWM WebSDR at the University of Twente in the Netherlands uses an A/D converter that streams raw data via an Ethernet interface to a computer that uses graphics-card processors to do the heavy lifting.  See these pages for more information:  http://websdr.ewi.utwente.nl:8901/  and http://www.pa3fwm.nl/projects/sdr .  P.T. de Boer, PA3FWM, provides a version of the WebSDR software that is similar to that operating at the University of Twente but does not utilize GPU cores and bespoke hardware, and is therefore more limited in bandwidth - but it is still extremely economical with CPU power in servicing many users, even on computers of limited processor power, allowing many times the number of users compared to OpenWebRX.  While I have no direct evidence that it does, I suspect that the PA3FWM WebSDR uses the technique described in Reference #1 to allow its very efficient use of CPU power to service many users simultaneously.



TO DO:




Additional information:
 Back to the Northern Utah WebSDR landing page