Northern Utah WebSDR
Duplicating receiver streams using ALSA and asoundrc
Why do this?
Other pages on this site discuss using the ALSA loopback facility to handle receiver "audio" - which may well be the raw I/Q from the receiver hardware. There it's demonstrated how, by using an "alternate" version of the ALSA loopback module (called "fastloop" - see this web page: making_fplay.html ), up to 768 kHz of bandwidth of raw I/Q from a receiver (such as the RSP1a by SDRPlay) can be passed through the "sound card" port. Doing this allows use of the 16 bit audio pathway afforded to sound cards rather than the higher-bandwidth - but only 8 bit - "rtl_sdr" pathway.
With this much bandwidth available, it might be useful to make multiple use of this I/Q data for purposes OTHER than the WebSDR. A few random examples:
- Multiple receiver systems using the same I/Q data.
A separate "receiver" slot for WSPR or FT8 reception, possibly using the "CSDR" code to process the higher bandwidth down to SSB-bandwidth audio - or narrower.
- Sending existing I/Q (receiver) data elsewhere.
Streaming of audio from the WebSDR server onto the network to a "skimmer".
- Spectral analysis/logging of an entire band.
Perhaps you want to make a "moving" image showing signals, noise and QRM over the course of the day: By "splitting" the signal, you can use an existing receiver.
- Recording of all signals within the passband for logging/analysis purposes.
With relatively cheap hard drive space and processing power, all signals on an entire amateur band could be "recorded"/logged.
- Monitoring receiver performance.
Occasionally, receiver hardware - or, more likely, their drivers - quit working, and having access to a local audio source derived from that receiver can be helpful in determining if that receiver has gone deaf, noisy, or has in some way malfunctioned.
What follows is one way to "split" the audio from an audio source
to several places, taking advantage of the capabilities of the
loopback and ALSA.
Using .asoundrc:
To do this, we must add a number of definitions to the ".asoundrc" file located in the "home" directory of the user that runs the receiver in question. Of course, you should make a back-up copy of any existing .asoundrc file before you modify anything.
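For example (the name of the backup copy below is arbitrary - any copy will do), this could be done from a shell prompt with:
cp ~/.asoundrc ~/.asoundrc.bak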
We'll assume that we've already set up the loopback - and for the examples below, we'll assume the use of "fastloop" - a version of "snd-aloop" modified to allow operation at 768 kHz as described here: high_rate_loopback_websdr.html
In these examples, loopback device #0 is called "flp0" (i.e. "fastloop zero").
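If you aren't sure what the loopback card is called on your system, the standard ALSA tools will list the sound cards and their playback devices so that the name can be confirmed:
cat /proc/asound/cards
aplay -l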
Splitting the output to multiple devices:
This entire operation will be done in several steps, so we'll put the following code in .asoundrc to create device "rxch0_split" (receiver channel zero split) using the "multi" directive as follows:
pcm.rxch0_split {
    # same output to multiple channels
    type multi;
    #
    # send to "input" side of loopbacks
    slaves.a.pcm "hw:flp0,0,0";
    slaves.a.channels 2;
    slaves.b.pcm "hw:flp0,0,1";
    slaves.b.channels 2;
    slaves.c.pcm "hw:flp0,0,2";
    slaves.c.channels 2;
    slaves.d.pcm "hw:flp0,0,3";
    slaves.d.channels 2;
    #
    # tell which channel goes where
    bindings.0 { slave a; channel 0; }
    bindings.1 { slave a; channel 1; }
    bindings.2 { slave b; channel 0; }
    bindings.3 { slave b; channel 1; }
    bindings.4 { slave c; channel 0; }
    bindings.5 { slave c; channel 1; }
    bindings.6 { slave d; channel 0; }
    bindings.7 { slave d; channel 1; }
}
In the above, we specify type "multi" and define several "slave" outputs, each pointing to a sub-device of loopback "flp0". As noted in the referenced web page, you can have up to eight loopback devices in a standard Linux system and each of those may have up to eight "sub-devices". In the example above, we define slaves "a" through "d" and assign each one to sub-devices 0-3, respectively. In other words, if you put audio into our device "rxch0_split", that same audio gets output on all four of these sub-devices.
It's not enough just to define where the audio gets duplicated, so the "bindings" statements farther down define where each of the two input channels ("left" and "right" being channel "0" and "1", respectively) goes on each of the slaves: With four two-channel destinations, we therefore need to define eight bindings. If you're wondering whether the I/Q channels can be "swapped" with the binding definitions - the answer is yes, they can!
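As a purely hypothetical illustration of that last point, reversing the channel numbers on the two bindings for one of the slaves in the "rxch0_split" definition above - say, slave "b" - would swap I and Q on that output only:
bindings.2 { slave b; channel 1; }
bindings.3 { slave b; channel 0; }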
Setting up an audio route:
Now that we have a way to duplicate audio to multiple outputs, we need to configure a route into it as follows:
pcm.rxch0_route {
    # set up route from input to output
    type route
    slave.pcm "rxch0_split"
    #
    # set mixer tables
    # Format: channel {binding level; binding level}
    # e.g. above, bindings 0 and 2 are channel 0 (left) for outputs a and b, respectively
    # as in: ttable { 0 { 0 1.0; 2 1.0 } }
    # ALTERNATIVE FORMAT:
    # ttable.channel.binding level
    ttable.0.0 1
    ttable.1.1 1
    ttable.0.2 1
    ttable.1.3 1
    ttable.0.4 1
    ttable.1.5 1
    ttable.0.6 1
    ttable.1.7 1
}
In this definition we specify the type "route" and, as its slave, the virtual device that we defined earlier (i.e. "rxch0_split").
Below this, we use the "ttable" entries to define, again, how much of what audio goes where. In each "ttable" statement, the first number is the input channel (0 or 1), the second is the binding defined in "rxch0_split", and the final value (0-1) defines how much of that content is to go there: A value of "1" (i.e. 1.0) is 100%. If you wanted to change this amplitude for mixing/fading, it could be done here.
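For example - purely hypothetically - if you wanted the copy going to slave "d" (bindings 6 and 7) to be at half amplitude, its two entries in "rxch0_route" could be changed to:
ttable.0.6 0.5
ttable.1.7 0.5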
Getting audio into the "splitter":
While one should, in theory, be able to use "rxch0_route" as the destination audio device for "aplay" (e.g. "aplay -D rxch0_route <filename.wav>"), that doesn't actually work: a "Channels count non specified" error is thrown by aplay despite the fact that the number of channels is explicitly specified on the command line for aplay (or fplay, for that matter!). To get around this we'll use the "plug" plug-in of ALSA and allow it to convert formats as necessary as shown below:
pcm.rx_ch0 {
    type plug
    slave {
        pcm "rxch0_route"
    }
}
As you might guess, this defines virtual audio device "rx_ch0" which, performing sample and format conversion as needed, feeds audio device "rxch0_route" and avoids the "Channels count non specified" error.
Invoking aplay or fplay or other audio source:
From this point, you may invoke your desired audio source as in:
aplay -D rx_ch0 <arguments>
and the output will be on the loopback devices defined above.
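As a concrete (hypothetical) example - the file name, sample rate and format here are placeholders - playing a two-channel, 16 bit recording into the splitter might look like this:
aplay -D rx_ch0 -f S16_LE -c 2 -r 192000 band_recording.wav
The same device name would be used with fplay when higher sample rates are needed.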
As noted in the earlier article about the fast loopback, it's strongly recommended that you create other "plug" virtual devices to go between the loopback output and the WebSDR input as these will:
- Automatically do sample-rate conversion.
You can run the acquisition device (the RSP1a, for example) at 768 kHz but configure the WebSDR for 384 or 192 kHz. Doing this will cause ALSA - rather than the SDRPlay driver - to do the down-sampling and low-pass filtering, and any aliasing that appears at the edges of the receiver's bandpass will be much better attenuated as ALSA's facilities for doing this are much better than those in the SDRPlay software/hardware! This is especially useful for bands like 30, 17 and 12 meters where one could run the RSP1a's sample rate at 384 kHz but configure the WebSDR for 192 kHz.
- Provide a "dummy" source for input data.
Configuring a virtual device in .asoundrc as the input for the
WebSDR means that it will exist whether or not the receiver is actually
running. This allows one to restart/modify the receiver
configuration separately from the WebSDR (e.g. you won't have to start and stop the WebSDR to restart a receiver)
as that data source will "always" be there. Of course, if the
receivers and their drivers aren't running, you'll get nothing at all (e.g. a completely black waterfall, no signals.)
Some examples of this are as follows:
pcm.f_loop0_out0 {
    type plug
    slave {
        pcm "hw:flp0,1,0"
        rate "unchanged"
        format "unchanged"
        channels "unchanged"
    }
}

pcm.f_loop0_out1 {
    type plug
    slave {
        pcm "hw:flp0,1,1"
        rate "unchanged"
        format "unchanged"
        channels "unchanged"
    }
}
In the above we can see that the loopback device "hw:flp0,1,0" (sub-device 0) is given the name "f_loop0_out0", and it is this name that we would use in the "websdr.cfg" file (e.g. "device $plug:f_loop0_out0") in our receiver/band definitions.
Similarly, we would use another device (e.g. "f_loop0_out1") if we wanted to send a copy of that same audio data somewhere else - say, via a network connection using netcat or similar (to another computer), to "csdr" for further processing, or to another program for recording/analysis.
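As one hypothetical sketch of that "somewhere else" case - the host name, port, sample rate and format below are placeholders rather than values from this system - the second copy could be captured and pushed across the network with standard tools:
arecord -D f_loop0_out1 -t raw -f S16_LE -c 2 -r 192000 | nc remote-host 7355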
For information on using "csdr", see this page: Using CSDR for auxiliary receivers on a WebSDR system - Link
How much CPU does it use?
Since we are moving the audio data unmodified, it uses negligible CPU power (each instance shows up as "0.0%" utilization) and minuscule RAM: Certainly, anything that you plan to do with this data - even shoveling it out the LAN port - is likely to take more processing power than replicating this data!
Different receiver types:
The above techniques may be used for "Sound Card" type receivers (e.g. the "Softrock" where analog audio is piped to the line-in port of a sound card), and receivers with a built-in sound card like the FiFiSDR.
This method can also work with other receivers - specifically the SDRPlay RSP1a in conjunction with the other utilities described on the page linking to this article (linked HERE), where the "sdrplayalsa" driver and "fplay" - a version of the Linux "aplay" modified to operate at rates of at least 768 kHz - may be used to provide such bandwidth to the WebSDR server using the 16 bit audio path rather than the 8 bit "rtl_tcp" path.
Conclusion:
In a WebSDR system with multiple receivers, you'll have to replicate the above code, taking special care to uniquely identify the virtual devices as they are replicated for the other channels. In the examples above, the "0" within the names indicates "channel 0" - and a reasonable convention would be to increment those numbers as necessary for the other channels.
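For example (a sketch of the naming convention only - the bodies of these definitions would mirror the channel-0 examples above, pointed at a second loopback card such as "flp1"), the devices for a second receiver channel might be named:
pcm.rxch1_split { ... }
pcm.rxch1_route { ... }
pcm.rx_ch1 { ... }
pcm.f_loop1_out0 { ... }
pcm.f_loop1_out1 { ... }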
I'm making no claims that I'm an expert with ALSA and .asoundrc - or any other part of the Linux sound system. It took several hours of
hair-pulling and web-searching to come up with what is shown above as
most "examples" of what I wanted to do were in the form of "This code doesn't work - why?"
rather than as understandable examples of how to do things. It's
very likely that the above code could be streamlined a bit, but I
decided to keep it as shown as it's at least somewhat decipherable when
coupled with the included comments.
Additional information:
- For general information about this WebSDR system -
including contact info - go to the about
page (link).
- For the latest news about this system and current issues,
visit the latest news
page (link).
- For more information about this server you may contact
Clint, KA7OEI using his callsign at ka7oei dot com.
- For more information about the WebSDR project in general -
including information about other WebSDR servers worldwide and
additional technical information - go to http://www.websdr.org