Audiophile network switches

I also believe that these switches, and for that matter Ethernet cables, can enhance sound quality, regardless of whether they are made from inexpensive materials, sold at a high price, or feature cheap or costly internals and extensive R&D.

However, proving that the sonic differences you hear are directly caused by what is measured at the input/output of the switch, filter chain, or streamer clock is always challenging.

There is no single “best audiophile switch.” Instead, there are many good options, and their effectiveness (or the perceived differences) can vary depending on your system setup and how the switch is integrated into your audio chain. Furthermore, the overall improvement will also depend on the entire network’s configuration and isolation.

To truly determine if a switch provides the improvement you seek, it’s essential to audition it yourself. Otherwise, you might spend a long time reading and discussing in audio forums—unless, of course, you enjoy doing that ;-).

I thought it was worth quoting your entire post, as you described a method far more likely to achieve better sound quality than anything achieved by letting measurements guide you. I would encourage anyone looking to understand how much harm their network is causing to simply disconnect it and listen. If you don’t hear a difference, just move on.

There is all manner of maintenance traffic going on:

802.3az
LLDP
Speed and Duplex Negotiation
MAC address and ARP table timers

If Antipodes is based on Linux (a good bet), you could get shell access, simply run tcpdump, and see stuff going on all the time on the wire.
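As a sketch of what that looks like (the interface name is an assumption, and tcpdump needs root privileges), filtering out the stream traffic itself leaves the background chatter visible:

```shell
# Show background control-plane traffic on eth0, excluding TCP streams.
# -n: skip name resolution; -e: print link-level headers, which exposes
# non-IP frames such as LLDP advertisements alongside ARP and mDNS chatter.
sudo tcpdump -i eth0 -n -e 'not tcp'
```

Even on an "idle" link you will typically see ARP requests, LLDP announcements, and multicast discovery traffic, all present whether or not anything is playing.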

The point of properly designed devices, and the combined system as a whole, is that you never know it’s going on. This also applies to even more demanding video. 8K EXR at 16 bit/60 fps is 70 GB/s.

Running these setups on either InfiniBand (NVIDIA proprietary) or RoCEv2 in multi-million-dollar installations, the $100,000 display is accurate in every regard. We don’t see the noise. And it’s 100% running single-mode OS1 fibre.

If you don’t publish the end result can you at least publish your validation environment and process? If it’s got any efficacy it will be reproducible.

Uptone Audio promised years and years ago to publish their measurements. The only thing they put out is a white paper that supports my approach of optics and faster speeds in clock-domain-bounded, non-real-time playback systems.

It is a fool’s errand to debate measurements with people who are not interested in buying the product. Ask Antipodes and I am sure they will say that they do not go down the route of published measurements for the same reason.

Just for completeness though, I don’t recollect you saying which Antipodes devices you own and use, and hence why you are here?

Just on this, what do you mean by “We don’t see the noise”? Are you measuring up into the GHz region (which you would need to do), or are you just looking at digital data transmission?

Maslow’s hammer goes something like this: “when all you have is a hammer, every problem looks like a nail.” Wikipedia describes this as a “cognitive bias that involves the over-reliance on a familiar tool.”

It has seemed to me that those who have the hardest time wrapping their head around what’s different about the networking requirements of high end audio systems are often those who work with networks professionally. Network cables and switches are the tools they’ve relied on to do a particular job. That can lead to a cognitive bias that can keep one from understanding that slightly different tools can benefit high end audio.

Certainly tcpdump can be an extremely valuable tool. I believe that it is oblivious though to what may be going on at layer one, the physical layer, as far as noise that may be traveling along with the data.

A similar cognitive bias can be found among many electricians. Ask one about installing a dedicated circuit with larger-gauge wire and the odds of being laughed at are very high. That high end audio systems perform better when current is supplied more quickly to meet instantaneous demands is something most have never even paused to consider. An electrician could spend their entire life running Romex and never encounter a situation where over-specifying the wire provides benefits a homeowner might appreciate. Not all electricians will laugh at this, though. Humility plays a huge role in how one responds. This is true for all of us, of course.

You didn’t notice any change in sound quality, which suggests that, in your setup, the network connection wasn’t affecting the audio. This contrasts with Nick’s setup, where network noise is impacting the audio equipment.

In highly sensitive audio setups, even minimal network noise can raise the noise floor, potentially masking fine details or adding a subtle haze that reduces clarity.

In less sensitive systems, with better shielding and noise rejection, the noise floor may already be low (relative to a less sensitive system), making any additional network noise negligible. This could explain why you didn’t notice any difference when disconnecting the Ethernet cable (or it highlights the subjective nature of sound quality perceptions).

This also depends on the track being listened to and the experience of the listener. When doing critical listening such as this I have a few known tracks that I know will reveal differences. A random selection of other music might well not reveal differences.

Moving on… :man_shrugging:

Yep, have Agilent right in the rack… a $150K scope. Given a high-enough-resolution instrument, you will always find something. For audio, we are well past the point where affordable gear has excised all the gremlins.

Also, when I see people start talking about “GHz” and the audio band, I already know the caliber of person I’m interacting with.

You have yourself a good one.

All I need is a money-back guarantee. I put WW, AQ, and Nordost in my system, and when I could perceive no difference from some Tripp Lite and Panduit patch cables, back they went.

I have a DX (since replaced by DIY TrueNAS). DAC is an RME ADI Pro now feeding a Genelec stack. I’ve a few other systems (one 2.2 and rebuilding my theater as Atmos 9.2.4).

I’m curious about how engineers engineer stuff when they don’t use any tools to check whether what they are doing has any efficacy. What I do know, as a former A/V and current Network Architect, is that I need tools of all varieties to design, do proof of concept, validate, and troubleshoot.

That’s a great quote. I’ll see your Maslow’s hammer and raise you Hitchens’s razor: “Claims presented without evidence are just as easily dismissed without evidence.”

Now one that I originated, and since we are talking about bias, which we all have, when it comes to bias-controlled listener evaluations: if you won’t trust your ears, why should I?

Claims about the audibility of Ethernet cables are being supported by evidence in the form of listening impressions. These claims can be evaluated by others simply by demoing the same product. When many others arrive at similar conclusions, that is evidence too.

Ah that explains it.

The plural of anecdote doesn’t = data

It only explains a preference. I have the 8361A’s. They are low distortion, accurate, image well.

To be fair I think that Kenny’s ‘Ah that explains it’ could have been directed at other gear listed such as the RME ADI Pro (which I have owned) or the superseded DX (which I have not heard).

Although you do say you no longer use the DX, in which case one must wonder about the motive for participating so vociferously in an Antipodes user forum when you no longer use any Antipodes devices.

Unless it is to save us from ourselves, in which case you will have to try harder to convince me to give up on my Oladra, because the Oladra has so far done a very good job of convincing me that an error-free data stream is not sufficient in itself for good sound quality.

Oh wow. Can that thing tell you what speaker you should buy?

I’m genuinely curious about a setup that is so obviously affected by network gear that it borders on gross configuration error.

Bottom line is I’m still an owner that outgrew the DX and pivoted to TrueNAS. The supposition that it negates what I’m getting at is a curious position to take in any way you look at it.

I come from broadcast and live sound, and I have since used those skills (precision time protocols, multicast, and spanning tree were all important in that space) to parlay a career in network engineering (much better pay).

I’m in the Midwest (Indianapolis) and am affably curious about a setup that exhibits such behavior, and about the contrivances that led to enabling those issues. If anything is somewhere close, I’m curious what the physical and logical paths are.

Network playback is a non-realtime system, buffered at every step of the way, with multiple clocks separated by clock-domain boundaries; ultimately the data ends up on some other, totally different, clocked bus (I2S, USB, AES/EBU, etc.).
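The decoupling described above can be sketched in a few lines of Python (a toy model, not any vendor’s implementation): the network side fills a buffer in irregular bursts, while the output side drains it at its own steady pace, so arrival timing never reaches the output clock.

```python
from collections import deque

def simulate_playback(packets, buffer_size=8):
    """Toy model: bursty packet arrival feeding a steadily drained buffer."""
    buf = deque()
    played = []
    for packet in packets:              # network side: bursty, packet-sized
        buf.extend(packet)
        while len(buf) > buffer_size:   # output side: drains at its own rate
            played.append(buf.popleft())
    played.extend(buf)                  # drain the remainder at end of stream
    return played

# Irregular arrival (bursts, an empty interval) still yields the samples
# in order, untouched by the arrival pattern.
samples = [[1, 2, 3], [], [4, 5, 6, 7, 8, 9, 10], [11]]
assert simulate_playback(samples) == list(range(1, 12))
```

The point of the sketch is that as long as the buffer never underruns, the output sees only its own clock; the network’s timing jitter is absorbed entirely by the buffer.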

Even if we just consider the L1 issues: UTP Ethernet is galvanically isolated, and something like the magnetics I’ve seen on common Intel NICs is south of -140 dB in the audio frequency range. It’s mind-blowing that a system exhibits demonstrable behavior with a cable plugged in or not while playback ensues. This shouldn’t be the case in a properly designed system. Yes, a total layer 1 problem.

Interesting 1:

Ethernet systems are different. They collect bits that tell the receiver all the data has arrived. Devices like the Bridge II cache the data and reclock it for identical pulse widths and amplitudes, reducing the JITTER, i.e. the random pulse-width variation generated by the incoming Ethernet stream.

So a recaching device listens to the awkward story told by one voice, but the electronics relistens to the story as told by a second voice, who in my example would be a professional storyteller! It never hears the first voice. ALL the first voice needs to do is get the whole story to the second voice, who is really the voice in charge. Ethernet resends data until it is accepted at the destination, so even poor cables work. True, network throughput is slower when data has to be re-sent, wasting bandwidth.

For the best performance, you want a cable with the best Shannon-law bandwidth. This means either type of system, reclocked or not, will get data more consistently. Reclocked systems won’t really see a benefit unless the original cable was pretty bad (loss of data); this is rare, as most links are SHORTER, which much improves the signal over the noise. This improves the bandwidth significantly.

You can go whole hog and use CAT7 ISTP. The 22 AWG wire, and shields that pretty much remove NEXT down to -100 dB, can’t be beat. The signal has no real noise to compete with. But connectors are expensive, as is the cable, and it is larger in size and more fragile as it uses a FOAM dielectric.

For UTP, look at cables like DT600E or 4800. These are the highest-Shannon-law-bandwidth UTP Ethernet cables Belden makes. Both are over twice the spec requirements for Ethernet.

The SOUND of Ethernet depends on the JITTER figures reconstructed in the D-to-A process. In my case, I run Ethernet over a wall wart to the Bridge II. It reconstructs and then reclocks all the data, so the external network isn’t “heard”. I detect zero skips or issues at all using Cat 6 to and from the wall warts.

Oh, don’t use 6A UNLESS you really have 10G. 6A is designed like CAT6 internally, not as good as 3600, 4800, or DT600E. The reason is that the special and unique designs to reduce EXTERNAL ALIEN NEXT impact the ability to reach optimum internal NEXT figures. Only 6A needs a balance of internal to external crosstalk properties.

6A will NOT be as economical, or even as good, on 10/100/1000 Ethernet. It isn’t made for that. CAT7, by using an ISTP design, can manage both technologies, as external and internal crosstalk are reduced by the shield over each pair. Overall shields around the group of 4 pairs, called FTP, are for external noise, but not internal noise shielding. And an outer FTP shield reduces internal NEXT performance, as the EM waves are coupled into the pairs more aggressively. But the external FTP shield reduces EXTERNAL or alien NEXT, called ANEXT. There is a balance of internal to external noise mitigation.

Shannon law depends on, as we read, noise reduction and signal level. Using shields ALSO INCREASES attenuation. This lowers the Shannon theoretical limit, since the signal is weaker due to the shield. However, if you lose 1 dB of signal but gain 3 dB better noise levels, it is still a 2 dB improvement in signal-to-noise. CAT7 uses 22 AWG with shields to offset the loss, where Cat 5, 5e, 6, and 6A use 24 or 23 AWG in UTP or FTP. CAT7 is a brute-force design on both attenuation and shielding. The closer a shield is to the pair, the higher the signal loss. Overall shields aren’t as bad as individual shields. Use bigger wire to compensate, and less lossy foam dielectrics, in CAT7.
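The trade-off above can be made concrete with the Shannon–Hartley formula, C = B · log2(1 + S/N), where S/N is a linear ratio (SNR_dB = 10 · log10(S/N)). The bandwidth and SNR numbers below are illustrative, not measurements of any particular cable:

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity in bits per second."""
    snr_linear = 10 ** (snr_db / 10)        # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# The "lose 1 dB of signal, gain 3 dB of noise" trade-off: net +2 dB SNR.
baseline = capacity_bps(100e6, 30)          # 100 MHz channel at 30 dB SNR
shielded = capacity_bps(100e6, 30 - 1 + 3)  # same channel at 32 dB SNR
assert shielded > baseline                  # net SNR gain raises capacity
```

This is just the arithmetic behind the paragraph above: any change that raises net SNR raises the theoretical capacity, even if it costs some absolute signal level.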

Most so-called audio-grade data cables won’t, again, give you any data to determine their capability. A value called ACR, or attenuation-to-crosstalk ratio, should extend as high in frequency as possible and stay above 0 dB (the point where noise and signal are equal) if a better Shannon-law bandwidth is to be achieved.

This is all real data on how it is tested. I have no magic theories with no supporting data to determine whether something is in any way an advantage. The signal needs to be larger than the noise. Anything that loses signal impacts the bandwidth, such as a poor impedance match to the load, called return loss. And so does anything that allows internal crosstalk noise. The ACR value combines all this into one trace, nice!
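The ACR figure described above is just the difference of two measured dB values at each frequency; a short sketch makes the "stay above 0 dB" criterion concrete (the sweep values below are invented for illustration, not real cable data):

```python
def acr_db(next_loss_db, attenuation_db):
    """ACR = near-end crosstalk loss minus attenuation, both in dB.
    Positive ACR means the signal is still above the crosstalk noise."""
    return next_loss_db - attenuation_db

# freq (MHz) -> (NEXT loss dB, attenuation dB); illustrative values only.
sweep = {100: (45.0, 20.0), 250: (38.0, 33.0), 500: (32.0, 46.0)}

# Usable bandwidth: the frequencies at which ACR stays above 0 dB.
usable = [f for f, (n, a) in sweep.items() if acr_db(n, a) > 0]
assert usable == [100, 250]   # here ACR has gone negative by 500 MHz
```

A cable with a higher ACR curve keeps signal above crosstalk to a higher frequency, which is exactly the Shannon-law bandwidth argument made above.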

So that’s how this really works. You can certainly go with the unproven theory that magic materials and designs are supposed to improve things, but with no adherence to maxing out the fundamentals, how can the cable be better? If a theory is valid, the design should also adhere to good practice to improve ACR; does it?

Best,
Galen Gareis

2:

Steve, Barrows from Sonore here. The answer is “maybe”. I prefer to use an optical Network connection (using Sonore products of course) as this eliminates the possibility of noise traveling on Ethernet cables to the audio system.

After some quick google-fu, I pulled these from the PS Audio and Roon forums. If you don’t know who they are, just search.

@Jinjuku So let’s come at this from a different direction. Do you accept that there are some aspects of data delivery to the DAC which affect sound quality, and if so, which?

Do you, for instance, hear what we all seem to hear here, in that Roon sounds different to Squeeze?

Do you hear differences in say different streamers from the Antipodes range? (This is after all an Antipodes Users Group).

If the answer to any of these is yes then can you explain the mechanism for what you are hearing please?