NHacker Next
Wi is Fi: Understanding Wi-Fi 4/5/6/6E/7/8 (802.11 n/ac/ax/be/bn) (wiisfi.com)
niobe 1 day ago [-]
An impressive attempt to summarise Wi-Fi, which is a very deep topic. However, I think the executive summary already missed the most critical thing about Wi-Fi:

only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.

It's a shared medium and it's not even half duplex, unlike the dedicated full duplex you would typically get with an ethernet cable to a switch port.

The fact that Wi-Fi achieves what it does with this limitation, and how it co-ordinates the dance of multiple unknown clients using the same medium - and in the presence of other RF technologies to boot - is indeed an incredible technology story, but this achilles heel is the single most defining thing about Wi-Fi performance.

throw0101a 24 hours ago [-]
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.

Not true with newer standards:

> Orthogonal Frequency-Division Multiple Access (OFDMA) is a multi-user wireless transmission technology that divides a single Wi-Fi or cellular channel into smaller subcarriers called Resource Units (RUs), allowing multiple devices to transmit data simultaneously.

[…]

> Instead of one device occupying the entire channel (as in OFDM), OFDMA allows parallel transmissions. As a result, network congestion decreases significantly.

* https://www.netcomlearning.com/blog/what-is-ofdma

* https://airheads.hpe.com/blogs/antar1/2020/10/19/why-is-ofdm...

> In addition, the 802.11ax standard defines the smallest subchannel as a resource unit (RU), which includes at least 26 subcarriers and uniquely identifies a user. The resources of the entire channel are divided into small RUs with fixed sizes. In this mode, user data is carried on each RU. Therefore, on the total time-frequency resources, multiple users may simultaneously send data in each time segment, as shown in the following figure.

* https://info.support.huawei.com/info-finder/encyclopedia/en/...

* https://blogs.cisco.com/networking/wi-fi-6-ofdma-resource-un...

With a 26-Tone RU Type, nine devices can operate simultaneously in even a 20 MHz channel (eighteen in 40 MHz, etc); see Figures 2 and 3:

* https://www.mpirical.com/knowledge-base/wi-fi-6-and-ofdma
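The RU arithmetic from the linked pages can be sketched in a few lines (the 26-tone RU counts per channel width are the figures cited above; 160 MHz is added for completeness):

```python
# 26-tone RU counts per channel width, per the 802.11ax figures cited above.
RU_26_TONE_COUNT = {20: 9, 40: 18, 80: 37, 160: 74}  # MHz -> RUs

def max_parallel_clients(channel_mhz: int) -> int:
    """Upper bound on clients transmitting at once using 26-tone RUs."""
    return RU_26_TONE_COUNT[channel_mhz]

def subcarriers_occupied(channel_mhz: int, tones_per_ru: int = 26) -> int:
    """Subcarriers taken up when every 26-tone RU is in use."""
    return max_parallel_clients(channel_mhz) * tones_per_ru

print(max_parallel_clients(20))   # 9 devices in a 20 MHz channel
print(max_parallel_clients(40))   # 18 in 40 MHz
print(subcarriers_occupied(20))   # 9 * 26 = 234 subcarriers
```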

timeinput 19 hours ago [-]
OFDMA just makes the channels smaller. Sure, there are now 10 transmitters on channel 5, but there's still only one transmitter on channel 5.1, one on 5.2, ... and each 'channel' has 1/10th the capacity of "channel 5".
9x39 1 day ago [-]
The multi-access story is improving, though.

OFDMA on wifi7/802.11be: https://blogs.cisco.com/networking/wi-fi-7-mru-ofdma-turning...

ksec 1 day ago [-]
It is not even switched on in some early versions of WiFi 7 routers and receivers.

As a general rule of thumb, the best version of WiFi x only comes with WiFi x+1. So for all the problems with OFDMA to be solved and ironed out, it will be WiFi 8; and for all the promises of Ultra-High Reliability, it will have to be WiFi 9.

WiFi is clearly moving closer to 4G and 5G with every version. I just hope it someday really is good enough when many people are using it at the same time.

throw0101a 24 hours ago [-]
> It is not even switched on in some early version of WiFi 7 router and receivers.

OFDMA was first used with Wifi 6:

* https://blogs.cisco.com/networking/wi-fi-6-ofdma-resource-un...

* https://www.litepoint.com/blog/wi-fi-6-ofdma/

niobe 1 day ago [-]
Yes, and before that MU-MIMO is also an improvement to the problem. Still only 1 transmitter at a time, but multiple receivers.
rayiner 1 day ago [-]
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.

That’s not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.

niobe 1 day ago [-]
This is a common misconception: you and your neighbour can configure the same channel, but you cannot successfully transmit at the same time on the same channel within range. Nor can you and your own AP successfully transmit at the same time on the same channel.

When you and your neighbour _appear_ to be transmitting at the same time, each adapter is actually spending most of its time waiting for a clear medium and for various backoff timers to expire before attempting to transmit.

"Appear as noise" is not defined for Wi-Fi adapters. There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted. They just wait for a retransmit. Senders ordinarily wait a certain time to receive an acknowledgement, and if they don't, the start the transmit wait cycle again. But they often then reduce the data rate to increase the odds of a successful transmission.

I'm glossing over some complexity here, because there's a sender and receiver to consider, and each has a different view of the RF environment, but the point holds whenever all transmitters and receivers (let's say the 2 APs, each with 1 client) are in audible range of each other. And this is most of the time. Note that "audible range" (where the signal is such that the medium is deemed busy by the adapter) is much larger than the "usable range" (where data can be transmitted at reasonable speeds). So transmitters create interference in a much larger area than they actually operate in.

That means your neighbour transmitting at 6Mbps to his AP will indeed degrade the performance of your client who wants to transmit at 600Mbps because your client has to wait ~100 times longer for a clear medium.
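The ~100x figure is just frame-airtime arithmetic. A back-of-envelope sketch (ignoring preambles, interframe spacing and acknowledgements, which only add more overhead):

```python
def airtime_us(rate_mbps: float, frame_bytes: int = 1500) -> float:
    """Microseconds the medium is busy sending one frame at a given rate."""
    return frame_bytes * 8 / rate_mbps  # bits / (bits per microsecond)

slow = airtime_us(6)    # 2000 us: a 1500-byte frame at 6 Mbps
fast = airtime_us(600)  # 20 us: the same frame at 600 Mbps
print(slow / fast)      # the slow sender holds the medium ~100x longer
```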

rayiner 1 day ago [-]
> There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted.

That's not correct. WiFi is "listen before talk." Radios listen to the channel, trying to decode preambles from other networks, before transmitting. In that process, they can detect other signals well below the threshold where they'll consider the medium in use (the CCA threshold). If you have an otherwise clean channel, the noise floor might be -95 dBm. Radios typically can decode preambles 3-4 dB above the noise floor. Conventionally, the WiFi standards set the CCA threshold at -82 dBm. So the radio can "hear" a lot of signals that won't cause it to trigger collision avoidance. More recent standards allow using a CCA threshold as high as -62 dBm under certain circumstances to facilitate spatial reuse: https://arista.my.site.com/AristaCommunity/s/article/Spatial....

Also, what the Wifi standards do is less aggressive than what radios could do. The CCA thresholds are set to facilitate orderly use of the spectrum--they're not physical limits. To receive a transmission, you just need sufficient signal-to-noise ratio. An adjacent network transmission raises the noise floor, but if your radio is close enough to your AP, you might still have sufficient SNR.
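A toy link-budget sketch of those thresholds, using the numbers above (-95 dBm noise floor, preambles decodable a few dB above it, -82 dBm CCA). Free-space path loss only, and the 20 dBm transmit power, 5500 MHz frequency and distances are made-up illustration values:

```python
import math

NOISE_FLOOR_DBM = -95.0
PREAMBLE_DECODE_DBM = NOISE_FLOOR_DBM + 4.0   # ~-91 dBm: preamble decodable
CCA_BUSY_DBM = -82.0                          # default "medium busy" level

def fspl_db(distance_m: float, freq_mhz: float = 5500.0) -> float:
    """Free-space path loss in dB (distance in metres, frequency in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def rx_power_dbm(tx_dbm: float, distance_m: float) -> float:
    return tx_dbm - fspl_db(distance_m)

def classify(rx_dbm: float) -> str:
    if rx_dbm >= CCA_BUSY_DBM:
        return "busy"     # radio defers: collision avoidance triggers
    if rx_dbm >= PREAMBLE_DECODE_DBM:
        return "audible"  # decodable preamble, but medium not "busy"
    return "noise"        # just raises the noise floor

for d in (5, 600, 2000):
    p = rx_power_dbm(20.0, d)  # hypothetical 20 dBm transmitter
    print(d, round(p, 1), classify(p))
```

Real walls attenuate far more than free space, which is why the "audible" zone in practice is hundreds of metres, not kilometres.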

snovv_crash 1 day ago [-]
Exactly. If you wanted to be a bad citizen you could get much better WiFi performance in shared environments with just a few small firmware tweaks.
estimator7292 1 day ago [-]
[dead]
venusenvy47 17 hours ago [-]
At my in-laws' house, they and all the neighbors have Comcast, with routers that don't allow configuration of the channels. And since Comcast doesn't know how to configure their routers properly, all the neighbors are sharing the same channels on both 2.4 and 5. It's fine if you are in the room near your own router, but it works poorly on the other side of the house, where I pick up neighbor signals at the same level as the desired one.
mallets 23 hours ago [-]
Only if the difference in signal power is high (>40 dB). It’s like saying collisions aren’t a problem in situations where no collision actually occurs.
rayiner 22 hours ago [-]
If I’m in the room with one of my APs, my closest neighbor is a hair under 40 dB lower. But I can see a dozen other networks on my street, which means the other signals are strong enough that my phone can decode the packets.

The point is that wireless networks can use not only the channel dimension, but the spatial dimension. That’s the basis of things like MIMO.

Onavo 1 day ago [-]
Well, the newer WiFi standards on 6Ghz support a lot more channels. Not a perfect workaround by any means, but it does significantly reduce congestion.
niobe 1 day ago [-]
Yes, that helps quite a lot in practice because in most places there's limited "frequency-domain" capacity (i.e. free channels) but plenty of "time-domain" capacity (i.e. free air-time). So even if you are sharing a channel with 4 other APs and their users, everybody may subjectively feel the network is fast. When chopping up the time domain into nanoseconds there's just a lot of idle time available, even if clients are pulling down files at 600Mbps.

But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded. It's like a linear hack to an exponential problem. It seems to work at first, but under very high load conditions performance still degrades ever faster until it falls off a cliff. Then there's all sorts of complex dynamic behaviour like the hidden node problem to add to this, but it all boils down to needing air-time and SNR.

rayiner 1 day ago [-]
> But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded.

You’re overlooking the spatial dimension: https://en.wikipedia.org/wiki/Spatial_multiplexing

xuki 1 day ago [-]
Yeah, 6Ghz doesn't have DFS channels, which remove a lot of usable channels at 5Ghz. Unfortunately it'll be a while until most devices support 6Ghz.
throw0101a 24 hours ago [-]
> Unfortunately it'll be a while until most devices support 6Ghz.

Per this May 2025 Juniper presentation, half of their deployed APs have 6 GHz enabled, and at least 20%—but as much as 50% depending on the environment—of clients have 6 GHz:

* https://www.youtube.com/watch?v=sV-3gA0OP9s

Corporate environments (where client hardware is more standardized) have higher 6 GHz adoption; BYOD environments (universities) have lower adoption.

So I'm not sure what you define "a while" as, but it's probably already the majority at most workplaces, and will be for personal stuff within a year or so.

KingMachiavelli 2 days ago [-]
I'd like to understand why the WiFi spec developed so slowly from G to N and finally to AC, but now it seems like a new version is released every other year, yet many of the features/extensions are poorly implemented or have nearly 0 real-world improvement.
niobe 1 day ago [-]
I would agree with that. G to N was perhaps the most critical move in Wi-Fi because it included MIMO. You can think of this as unwanted signal echoes and reflections being switched from a liability to a benefit. Heck, I _still_ run WiFi-4 networks and they perform very well. WiFi-5 was an incremental upgrade, with many experimental features that were barely used in practice.

802.11 is in general a vast swag of cool tricks, and when enough ideas are thrown at a wall, many do end up sticking, but for the most part the benefits are cumulative. MIMO being one major exception.

9x39 1 day ago [-]
I'm not a hardware guy, but my guess would be that evolution of radio transceiver tech in the cell space drives improvements downstream in wifi. Better transceivers can pull quality signals from what was noise generations past; it's not magic of course, but the speed transceivers can run over copper cable goes up similarly. 1Gbps was a fast cable a while ago, and now we're doing hundreds of gigabits commonly.

Another thing is that features like beamforming and higher QAM, let's say, are going to matter more in ideal scenarios where APs are in their sweet spot relative to clients, and you get to take advantage of high SNRs. Is that going to help when someone buys a Netgear Wifi 7 AP only to flip it upside down behind the couch in their apartment, in an environment where 2.4 and even 5 GHz are basically gone from all their neighbors' use? Still, faster data rates mean clients get on and off the air quicker overall, saving airtime and battery if applicable. So, I think there's mainstream and highly specialized features rolling out simultaneously.

Jach 1 day ago [-]
Speaking just on timelines (rather than actual underlying innovations or improvements), 802.11 was in 1997, the next revision in 1999, G in 2003, then a 6 year gap to N in 2009, a 4 year gap to AC in 2013, an 8 year gap to wifi 6 in 2021, wifi 7 in 2024 (though apparently buyer beware), and wifi 8 expected (according to the article) in 2028. Doesn't seem too rapid? The 8 year gap is the odd one out.

I think part of it is that if there isn't a regular and practiced process for bumping standards, then gaps between revisions can grow quite large and stagnation can set in; any significant improvements take longer to come to fruition than they would with regular revisions that are only modest most of the time. Looking at a few other things that come to mind: USB had an 8 year gap between 2 and 3 as well; PCIe had a 7 year gap between 3 and 4 (and while there was only a 3 year gap between the PCIe 5 and 6 specifications, it still took 3 more years (2025) for the first PCIe 6 devices, and I still can't buy a consumer-level PCIe 6 motherboard, but that's a separate mess); C++ had an 8 year gap between C++03 and C++11; Java had a 5 year gap between 6 and 7 (and another 3 years after 7 to get to Java 8). All of these things now have more rapid cycles.

dylan604 1 day ago [-]
Does any of it have to do with spectrum becoming available? After 2.4GHz and 5GHz, I have no idea what else the latest/future gens of WiFi are using. As some tech like 2G is no longer in operation, that spectrum was opened up. There are other frequencies that have become available where operating the older equipment that used to run there is now a big no-no - for example, a frequency range once used by old wireless microphone systems is now banned at many locations.

Just taking a swing at it, but I don't play that sport so probably a big whiff

ssl-3 1 day ago [-]
In regulatory regions where it is usable, Wifi 6 (802.11ax) added some 6GHz channels. Wifi 6e extended that to roughly the entire 6GHz band, for ~1GHz of contiguous RF bandwidth in that area alone.

The "old" cellular bands aren't generally open, at least in the States. We tend to use them for newer licensed stuff in cellular-land instead of the old licensed stuff we used to do. (Old modulation techniques die out and get replaced, but licensed RF bandwidth is still licensed RF bandwidth.)

throw0101a 24 hours ago [-]
> In regulatory regions where it is usable, Wifi 6 (802.11ax) added some 6GHz channels.

'Plain' Wifi 6 (non-E) had zero 6 GHz. If you think otherwise can you produce a citation?

Edit:

* https://en.wikipedia.org/wiki/List_of_WLAN_channels

ssl-3 23 hours ago [-]
You're right. 6GHz wasn't usable as a part of standardized wifi until 6e.

I'd like to choose option C: I thought otherwise, and I was wrong in thinking that. I'd like to submit my previous comment, just above, as a citation demonstrating the incorrect thought process. ;)

p_j_w 22 hours ago [-]
802.11ax introduced some changes that have real world effects: subcarrier spacing was decreased by a factor of 4. Symbol length was increased by the same factor, but that means receiver sensitivity requirements are looser for the same data rate. So, for the same signal strength you can get a higher data rate. I’ve personally seen this effect on my own phones and laptops: I get consistently higher throughput on ax networks than I do on ac.
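That factor of 4 falls straight out of OFDM basics: the useful symbol time is the reciprocal of the subcarrier spacing (guard intervals ignored in this sketch; 312.5 kHz and 78.125 kHz are the actual 802.11ac and 802.11ax spacings):

```python
def symbol_time_us(subcarrier_spacing_khz: float) -> float:
    """OFDM useful symbol duration in microseconds: 1 / subcarrier spacing."""
    return 1000.0 / subcarrier_spacing_khz

ac = symbol_time_us(312.5)    # 3.2 us symbols in 802.11ac
ax = symbol_time_us(78.125)   # 12.8 us symbols in 802.11ax: 4x longer
print(ac, ax, ax / ac)
```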
crims0n 1 day ago [-]
Surely some of that was down to need. When G was dominant from around 2004-2009 the theoretical maximum was 54 Mbps… most people were still on DSL or cable at the time, often capping out way below that.
Avamander 1 day ago [-]
It's all very proprietary and the tooling is ass, there's a lot of wasted effort creating and testing out the same stuff. Bluetooth is just as horrible for the same reasons.
anyfoo 1 day ago [-]
> Wi-Fi signal strength decreases at an exponential rate as you move further away from a router.

This is surprising to me. I'd have guessed it decreases quadratically (i.e. due to the inverse square law), not exponentially.

The paragraph below seems to contain an explanation, but I don't really understand it (namely because I don't know what that percentage "Coverage" column actually means, or what we mean with "the total distance at each QAM step").

niobe 1 day ago [-]
So that table is using distance as a proxy for signal to noise ratio. SNR is what really matters.

Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal to noise ratio to be received without error. Since SNR declines with distance you can have a rough idea at what distance from a transmitter you will be able to receive at what data rate.

However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates, because they are dynamically adjusting the data rate based on the perceived SNR.

Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one to be able to dominate the others or everyone get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.

But the sticker speed is what sells..
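The rate-vs-SNR coupling niobe describes can be sketched as a toy rate-selection table (the thresholds, rates and modulation labels below are illustrative placeholders, not the real 802.11 MCS table):

```python
# Toy table: (minimum SNR in dB, data rate in Mbps, modulation/coding label).
# The shape is what matters: denser constellations need more SNR.
RATE_TABLE = [
    (5,   7.2,  "BPSK 1/2"),
    (10,  14.4, "QPSK 1/2"),
    (17,  28.9, "16-QAM 1/2"),
    (25,  57.8, "64-QAM 2/3"),
    (31,  86.7, "256-QAM 3/4"),
]

def select_rate(snr_db: float):
    """Pick the fastest rate whose SNR requirement is met (else the slowest)."""
    best = RATE_TABLE[0]
    for entry in RATE_TABLE:
        if snr_db >= entry[0]:
            best = entry
    return best

print(select_rate(12))  # mid-range SNR lands on a mid-range rate
print(select_rate(35))  # the top tier needs plenty of SNR headroom
```

Since SNR declines with distance, mapping distance to SNR and then through a table like this is exactly how the article's "coverage at each QAM step" framing works.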

fireflash38 24 hours ago [-]
Do most clients do constant throughput or do they do bursts? Because speed does matter a lot if it's bursty (send 100MB to fill a buffer, then wait). The faster you fill whatever buffer, the faster you can let another client use the connection.

Correlated, but obviously bad code can really fuck with neighbors. And each client has an incentive to be greedy so users of that client get a better experience. So you fall back again to QOS for what you care about..

niobe 5 hours ago [-]
> The faster you fill whatever buffer, the faster you can let another client use the connection.

Basically this. The way we usually put it is that we want clients to "get on and off the channel as quickly as possible". That requires all clients in range of each other to be behaving (respecting the rules) and using fast enough data rates to minimise their consumption of precious air-time.

Under the hood though, it's a very granular frame-by-frame, almost nanosecond-by-nanosecond thing that leads to the overall throughput at a human timescale. To give you a sense, let me try to summarise the factors affecting throughput this way:

- Data Rate: the transmitting client can adjust the data rate up or down per frame if it wants. For example, a single TCP session on a 2.4GHz channel could in theory see data rates anywhere between 1Mbps and 450Mbps. But in practice most drivers I've seen adjust up or down incrementally. And in a healthy network, they usually hover around the top 25% of the mutually supported data rates (but they also spend very little time at the highest data rate, typically less than 1%). Also the AP could be using a different data rate to the client, and usually is. The rx and tx directions are effectively separate streams and the data rate is always chosen solely by the transmitter.

- Block Size: Similar to TCP windowing. Data can be sent in multi-frame 'bursts' before the transmitter requires an acknowledgement to send more. In the original Wi-Fi, every frame had to be acknowledged. Later standards introduced this idea of block acknowledgements.

- Re-transmits: Whenever acknowledgements are not received, the data has to be resent. Block size will be reduced, possibly to 1, so it will also take longer. Note that re-transmits are expected and very routine in Wi-Fi, whereas in TCP they are usually considered more of an exception (except on the internet). I've observed re-transmit rates of 20% in networks where no user is perceiving any sort of issue at all. So Wi-Fi is very robust to frame loss, up to a point, but even so, re-transmits do end up having a large impact on the aggregate throughput.

- Clear channel wait time: It's no exaggeration to say that transmitters spend most of their time _waiting_ to transmit. And a big chunk of that wait time is just waiting for the medium to be clear - the clear channel assessment. If the client thinks there is a transmission going on, it just has to kill time.

- Other wait times: Even when the channel seems clear, there are various requirements to do nothing before and after transmitting. For example, the inter-frame spacing interval and the random back-off interval. These are just the rules of play. In fact, congestion avoidance on Wi-Fi could be said to be entirely a matter of timing.

Note that this is a simplification and clearly I can't mention everything or cover all the nuances. But, in the way I've framed it here, the clear-channel wait time and the re-transmit rate do basically encapsulate the impact of intangibles I didn't mention, like congestion and noise/interference.

TL;DR: Wi-Fi transmissions are extremely lumpy at their native timescale, but they seem a lot smoother than many TCP transmissions at human timescales.
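A toy model tying the factors above together (the 100 us per-attempt wait, 20% re-transmit rate and 1500-byte frames are made-up round numbers, not standards values):

```python
def effective_throughput_mbps(phy_rate_mbps: float, wait_us: float = 100.0,
                              frame_bytes: int = 1500,
                              retransmit_rate: float = 0.2) -> float:
    """Delivered throughput after fixed per-attempt waits and re-transmits."""
    airtime_us = frame_bytes * 8 / phy_rate_mbps   # time on air per frame
    attempts = 1.0 / (1.0 - retransmit_rate)       # expected tries per frame
    total_us = attempts * (wait_us + airtime_us)
    return frame_bytes * 8 / total_us              # bits per microsecond = Mbps

# Fixed waits dominate at high PHY rates: a 600 Mbps PHY delivers a small
# fraction of its headline rate, while a 6 Mbps PHY loses far less.
print(effective_throughput_mbps(600))  # 80.0
print(effective_throughput_mbps(6))    # ~4.57
```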

> Correlated, but obviously bad code can really fuck with neighbors.

Also true. Bad code is usually exemplified in Wi-Fi by bad drivers (looking at you Broadcom). These will cause clients to "stick" to bad APs when they should roam, or pick the wrong channel/AP/band in the first place. Intel is generally very good.

> And each client has an incentive to be greedy so users of that client get a better experience.

Greed is good in the sense that clients want to transmit their data as soon and as fast as possible, and we want them to! But they have to respect the rules. Of course there's only a handful of chipset vendors, so they mostly do. But within that, there's still plenty of room for clients and APs to do things that are _sub-optimal_ even if they are Wi-Fi legal, as per the sticky client example I mentioned.

> So you fall back again to QOS for what you care about..

Wi-Fi does indeed have its own implementation of QoS which is of course a timing dance! But I think you're referring to QoS in higher layers like IP. So it's worth mentioning that this WiFi stuff is all happening at layers 1 & 2. All the congestion detection and re-transmissions and so on that may be happening in higher-layer protocols like TCP are happening _in addition_ to what is going on at the WiFi layers.

cortesoft 1 day ago [-]
There are a lot of people who are the only ones using their Wi-Fi, so they probably don't care about the performance for anyone else
niobe 1 day ago [-]
But this is the point. What your neighbours are doing greatly affects the performance of your network.

If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.

You might not notice this with only 2 clients. It might be the difference between an 80MBps and a 50MBps download, for example. But it decays exponentially with the number of clients.

nakedneuron 1 day ago [-]
Did you check out "Appendix I: Wi-Fi signal strength vs distance"? Cheers!
wonnage 1 day ago [-]
yeah, it's pretty common to refer to x^2 as exponential colloquially since there's A. an exponent B. a single term for all values (vs. quadratic, cubic, quartic...)

But you're technically correct!

anyfoo 1 day ago [-]
I'm actually not sure that they don't actually mean exponentially. There's something about not only increasing the distance, but potentially also the modulation (and thus the symbol rate) stepping down, which maybe in total causes the decline to be ~exponential? But it's not clear to me at all. That's why I ask, I have a hard time parsing it.

But then again, the sentence uses the term "signal strength", not "throughput", so that would suggest quadratically. But I guess "signal strength" could be meant colloquially and mean more than just the raw signal power received by the antenna, here.

It's all very fuzzy to me, as it stands.

BrandoElFollito 20 hours ago [-]
> it's pretty common to refer to x^2 as exponential colloquially since there's A. an exponent B. a single term for all values (vs. quadratic, cubic, quartic...)

Where is it pretty common? I have never heard that (outside of being a mistake)

amluto 1 day ago [-]
Do you also think that f(x) = x^1 is exponential? How about f(x) = x^0?
anyfoo 1 day ago [-]
Kind of irrelevant, because you could also ask "Do you also think that f(x) = x^1 is polynomial? How about f(x) = x^0?" The distinction was clearly between polynomial (specifically quadratic) and exponential, leaving those trivial cases out.
Jaxan 1 day ago [-]
No. These are polynomials (in x).
esafak 1 day ago [-]
https://en.wikipedia.org/wiki/Power_law

Because the variable is the base, not exponent.

anyfoo 1 day ago [-]
I know what "exponentially" means, I know what "quadratically" means (and how it's not exponentially), and I know the inverse square law. Hence my question why the article claims "signal strength" decreases exponentially, when the raw power received by an antenna definitely decreases quadratically, not exponentially. That's just physics. But there might be some convoluted thing about stepping down symbol rate which affects throughput (which I guess could be colloquially called "signal strength" if I squint really hard) that I don't understand here.
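For what it's worth, the two decay laws are easy to tell apart in dB terms; a short sketch (the 1 dB/metre exponential figure is an arbitrary illustration value):

```python
import math

def inverse_square_db(d_m: float, ref_m: float = 1.0) -> float:
    """dB lost relative to a reference distance under 1/d^2 (free-space) decay."""
    return 20 * math.log10(d_m / ref_m)

def exponential_db(d_m: float, db_per_m: float = 1.0) -> float:
    """dB lost under genuinely exponential decay (e.g. absorption in a medium)."""
    return db_per_m * d_m

# Inverse-square decay costs a fixed ~6 dB per doubling of distance,
# no matter how far out you already are:
print(inverse_square_db(20) - inverse_square_db(10))    # ~6.02 dB
print(inverse_square_db(200) - inverse_square_db(100))  # ~6.02 dB
# ...whereas exponential decay costs ever more dB per doubling:
print(exponential_db(20) - exponential_db(10))          # 10 dB
print(exponential_db(200) - exponential_db(100))        # 100 dB
```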
monk_grilla 1 day ago [-]
Anyone know of a similarly excellent resource for understanding wired networking? CAT specifications, how to pick high quality switches/routers etc.?
lake_trade 1 day ago [-]
Beej's guide will help you with understanding networking overall; I don't think it would help you choose switches/routers specifically.
NooneAtAll3 20 hours ago [-]
link?
skadge 14 hours ago [-]
Very interesting reference article -- I've learned a lot!

In case someone is interested, I got Claude to create a small Linux tool that analyzes the capabilities of your WiFi adapter + the current established Wifi link, and explains each of them based on wiisfi.com: https://gist.github.com/severin-lemaignan/d3f854f6111e8275ae...

mbesto 23 hours ago [-]
It seems the throughput has evolved with each spec but the reliability and distance haven't, unless I'm mistaken? This is a big problem in places where concrete is used to build homes (e.g. the tropics), as the improvements to Wifi are basically not relevant there.
myself248 22 hours ago [-]
And 802.11ah, in 900MHz which has some hope of penetrating such walls, is still very scarce and the hardware fairly expensive.

Otherwise you just have to run a wire through the wall and put an AP in the room. Your clients can still be wireless for the last few feet, which preserves the convenience of usage, just not the convenience of deployment.

mbesto 13 hours ago [-]
This is basically my strategy.
ipython 23 hours ago [-]
At some point, you just can’t beat physics. For best results you need multiple aps anyway.
Normal_gaussian 1 day ago [-]
Today I set up a NWA210BE (Zyxel) to replace a unifi 6+ AP; I bought it second hand and my key metrics were: 4x4 MIMO, available used/discounted, current gen, fully functional standalone mode.

The 4x4 makes all the difference. Sitting in my car, the 6+ would fight with my 4G for internet and cause maps to be super slow; now I'm off the property before it's unusable.

I had intended to put APs in multiple rooms, but there doesn't seem like much point now.

bjoli 1 day ago [-]
I was about to buy a pair of those, but then I saw the new mikrotik wifi 7 router (and probably upcoming access point) with thread radio.

Now every other brand is dead to me.

Normal_gaussian 22 hours ago [-]
That is really interesting! (https://mikrotik.com/product/hap_be3_media for others). It's coming in at a decent price as well.
ece 1 day ago [-]
What are the chances of a 3x3 or 4x4 router from them I wonder.
Marsymars 20 hours ago [-]
> The 4x4 makes all the difference.

I've stuck with my Eero Pro 6 because it has 4x4 at 5GHz and the Pro 6E and 7 trade that for 2x2 at both 5GHz and 6GHz. The Max 7 has 4x4 on both 5Ghz and 6Ghz, but for a 3-pack for my house, the current pricing on amazon.ca is $2300, compared to the $650 I paid for the Pro 6 3-pack. (And the Pro 6 seems to have notably lower power draw than the Max 7.)

bityard 1 day ago [-]
Interesting...

I have a Netgear WAX218, one of the last cheap business-class APs I could find that don't require a cloud service to manage. WAY better than the pro-sumer wifi routers I was running before in access point mode. I'll have to look into Zyxel offerings a bit more when I'm ready to replace my Netgear.

izacus 1 day ago [-]
What benefit does 4x4 give you? I haven't seen a single client station with more than 1x1 or 2x2. Do you have so many clients that transmit concurrently?
Normal_gaussian 22 hours ago [-]
From the article (which is long):

> But as a very significant bonus, the 'extra' antennas if there is a mismatch in MIMO levels between the client and router do not go unused, but are used for 'diversity' and 'beamforming', which extends range, and improves speed at range.

You don't get higher max speed, but you do get better performance in general. I hadn't expected it to be drastic - I had thought it would be more theoretical than practical - so I had been planning on adding some 2x2 APs in a few months; but I don't think I need to now.

The original drive was because I have 4x4 on my desktop which won't get wired in for at least a year now; and my homelab is wired in.
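A crude Shannon-style sketch of the quoted point: stream count is capped by the client's antenna count, but surplus AP antennas can still raise effective SNR via diversity/beamforming (the +3 dB per surplus antenna below is an arbitrary stand-in for that gain, not a measured figure):

```python
import math

def streams(ap_antennas: int, client_antennas: int) -> int:
    """Spatial multiplexing gives at most min(tx, rx) parallel streams."""
    return min(ap_antennas, client_antennas)

def capacity_mbps(ap_antennas: int, client_antennas: int,
                  bw_mhz: float = 80.0, snr_db: float = 25.0) -> float:
    """Idealised per-stream Shannon capacity, with a crude diversity bonus."""
    surplus = max(ap_antennas - client_antennas, 0)
    snr_linear = 10 ** ((snr_db + 3 * surplus) / 10)  # +3 dB per spare antenna
    return streams(ap_antennas, client_antennas) * bw_mhz * math.log2(1 + snr_linear)

print(round(capacity_mbps(2, 2)))  # 2x2 AP with a 2x2 client
print(round(capacity_mbps(4, 2)))  # 4x4 AP, same client: same streams, more SNR
```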

AdityaAnuragi 21 hours ago [-]
With Wi-Fi 6's OFDMA handling the hidden node problem better, does it actually make a noticeable difference in dense apartment buildings? Curious if anyone has measured this in practice
supertrope 20 hours ago [-]
Qualcomm did a test with extreme levels of traffic. As in 20 simultaneous video chat sessions. https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets...

For contention between nearby Wi-Fi networks the BSS coloring feature is more directly relevant.

Havoc 2 days ago [-]
Nice detailed article!

Finding it increasingly difficult to avoid bottlenecks though. Even with wifi 7 I still get 1.3 Gbps on my mac and 0.5 Gbps on my iphone. More than enough realistically, but upstream internet is 1.7 Gbps, so it's a tiny bit unfortunate.

Think I'm just going to wire the place with 10 gig fiber

>The speed advantages that Access Points have over mesh systems will become much more obvious with Wi-Fi 7.

From what I've read, mesh devices can generally detect when they've got wired backhaul, so they can stay in mesh mode for the clean handovers while not relying on wireless for actually moving data

anyfoo 2 days ago [-]
Due to boring circumstances outside of my control, I have to use WiFi for the most part, so I've got quite some experience with making it run optimally (or rather, as optimally as I managed to, not as optimally as I would like it to).

And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps. And still be on channels with little interference. (DFS helps if you're not near radar, which intentionally causes you to get kicked off those channels and lose connection entirely.) And even then you might have to mess about a lot with positioning, because of reflections and generally multipath propagation.

I'd say it's not worth the headache. I would love to lay down Ethernet cable, even if it were just cabling only suitable for 1 Gbps (though there's no good reason to; might as well do 10 Gbps).

But yeah, any mesh system worth its salt figures out the topology and absolutely favors wired links over WiFi for the backhaul. Anything else wouldn't make any sense at all; there is basically no situation where you'd prefer an RF channel over a wire, unless the wire is maybe made of wet string.

walrus01 1 days ago [-]
> And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps

If one considers that the higher speeds in 802.11ac and 802.11be require 256QAM modulation or better, this is completely expected (assuming the 5 GHz band of course, which doesn't go through material very well at all). If you've seen a live eyeball chart of a 256QAM or 1024QAM constellation on test equipment for clear-air microwave link purposes, and seen how quickly it can degrade or get fuzzy if there's anything in the way of the link, it becomes more readily apparent. MCS levels 8 and onwards here:

https://en.wikipedia.org/wiki/Wi-Fi_7

"Clean" eyeball example of 256QAM: https://www.everythingrf.com/community/what-is-256-qam-modul...

Examples of "fuzzy QAM" in 16QAM; the same principle applies to denser QAM:

https://www.researchgate.net/figure/Typical-eye-diagram-Symb...
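To make the "fuzzy QAM" point concrete, here's a small stdlib-only Python sketch (an illustration of the principle, not any radio's actual implementation): it sends random symbols from a normalized square QAM constellation through additive Gaussian noise and counts nearest-neighbour detection errors. At the same SNR, 256-QAM's tightly packed points mis-detect far more often than 16-QAM's.

```python
import math
import random

def qam_points(side):
    """Square QAM constellation (side**2 points), normalized to unit average energy."""
    levels = [2 * i - (side - 1) for i in range(side)]
    pts = [(i, q) for i in levels for q in levels]
    scale = math.sqrt(sum(i * i + q * q for i, q in pts) / len(pts))
    return [(i / scale, q / scale) for i, q in pts]

def symbol_error_rate(side, snr_db, n=5000, seed=1):
    """Fraction of symbols mis-detected after additive Gaussian noise."""
    rng = random.Random(seed)
    const = qam_points(side)
    sigma = math.sqrt(0.5 / 10 ** (snr_db / 10))  # noise std dev per I/Q dimension
    errors = 0
    for _ in range(n):
        tx = rng.choice(const)
        rx = (tx[0] + rng.gauss(0, sigma), tx[1] + rng.gauss(0, sigma))
        # The receiver picks whichever constellation point is closest.
        det = min(const, key=lambda p: (p[0] - rx[0]) ** 2 + (p[1] - rx[1]) ** 2)
        errors += det != tx
    return errors / n

# At 25 dB SNR, 16-QAM (side=4) is essentially error-free, while 256-QAM
# (side=16) already mis-detects a noticeable fraction of symbols.
ser_16 = symbol_error_rate(4, 25.0)
ser_256 = symbol_error_rate(16, 25.0)
```

This is why rate adaptation falls back to sparser constellations as soon as the link degrades: the decision regions shrink quadratically as QAM density grows.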

dnemmers 1 days ago [-]
Have you thought about using powerline devices? I've successfully used them in places where running my own cable wasn't a possibility and WiFi wasn't cutting it.

https://www.hp.com/us-en/shop/tech-takes/what-is-a-powerline...

martinald 21 hours ago [-]
Powerline is in my experience vastly worse than WiFi in nearly all cases. It's slow, suffers from bad jitter/interference (often worse than WiFi) and the chips run so hot (especially the last gen ones, AV2000 iirc - I believe they don't sell them any more because they overheat and fail, or at least 2/2 of the ones I had did this).

Even with many walls I was getting 300-400mbit/sec on WiFi vs 100mbit/sec on powerline.

izacus 17 hours ago [-]
Powerline in practice does not come anywhere near the performance of a good WiFi 7 setup. Even through walls, WiFi tends to be more stable and faster, outside of special conditions (e.g. multiple floors or thick walls).
Havoc 1 days ago [-]
Yeah, used to run a wifi bridge by necessity for years (yay rental). I could get stable 1gig over wifi6 which is why I'm a little puzzled that 7 isn't much better
kalleboo 1 days ago [-]
My house is built out of reinforced concrete, so wireless signals reach almost nowhere. I got Ethernet put into the living room and bedroom and put in 2.5 Gbps USB ethernet dongles on powered hubs, so when I plug into my phone/laptop to charge they get wired ethernet automatically.
walrus01 1 days ago [-]
how many spatial streams are you using (2x2, 3x3, etc) and are you using an 80 or 160 MHz channel?

If you have a set of full capability 802.11be clients you'll see the best performance with a 3x3 AP and 160 MHz channels.

Havoc 1 days ago [-]
160 MHz channel & 2 spatial streams.

It's unfortunately consumer-grade TP-Link gear, so while they have actually been pretty good... you don't get a lot of knobs to tweak.

Still need to try MLO at some stage, and they're currently acting as a bridge (i.e. wifi backhaul), so I think it might get better once I've laid fiber backhaul between them

walrus01 16 hours ago [-]
Would recommend one of the ubiquiti 3x3 802.11be APs for better performance, the U7 Pro or similar. The unifi controller software is a monolithic package you can install on a bare-bones debian system. Wifi backhaul is definitely also a problem, as using any kind of wifi-to-wifi extender setup will cut your throughput a lot; keep in mind that all 802.11-whatever is a half-duplex medium.
Havoc 15 hours ago [-]
>3x3

The ones I have seem to be 2x2...with backhaul on wifi the 1.3 I'm seeing is probably just 1 then.

Aware of ubiquiti popularity in homelab circles. It's not for me tbh.

I've been through a couple of iterations of wifi gear - including a mainland china no-english Xiaomi...that was an adventure.

At this stage I don't think I'm tearing it out and buying new. "slow" gigabit internet on mobile device is an annoyance to my perfectionist side but I'll survive. And with a bit of luck adding fiber backhaul gets me above internet speed.

jhoho 21 hours ago [-]
While this already is a very deep discussion, like most it does not cover the latency improvement that 6 GHz Wi-Fi brings. Afaik Wi-Fi 6E at 6 GHz has a latency of 5 ms or less, while Wi-Fi 6 comes in at about 10 ms at 5 GHz.
martinald 21 hours ago [-]
Not sure if that's really true - it's just that 6 GHz is a lot less congested in general (for now).
jhoho 20 hours ago [-]
I struggle to find good resources on this topic. The few I found indeed attribute the lower latency of more recent Wi-Fi generations to techniques like MLA and OFDMA, not necessarily the frequency (which makes sense, of course). The cudy link makes it look like channel congestion is a factor, too. https://www.rtings.com/router/learn/research/wireless-latenc... https://www.zbtwifi.com/blogs/frontier-technology-news/wi-fi... https://www.cudy.com/en-benelux/blogs/news/6-ghz-vs-5-ghz-th...
oofabz 1 days ago [-]
I recently got a Grandstream GWN7615 access point to add coverage on the other side of the house from the main router. It does not meet the minimum spec listed in this article but for more modest requirements I think it's an excellent value. You can get one for well under $100. It is WiFi 5, 3x3 MIMO, and you don't need any cloud account to manage it.
Scroll_Swe 15 hours ago [-]
WiFi is always a compromise. Wife factor maybe, but for now, cable.
Neywiny 2 days ago [-]
Good to see the subjective adjectives of the RF world are here too. Except they're not in the same order: in WiFi, "extremely high" (EHT, Wi-Fi 7) comes before "ultra high" (UHR, Wi-Fi 8), while in RF band names EHF comes after UHF.
Dylan16807 1 days ago [-]
I hate how they did this big rebrand to simplify things and then immediately ruined it with 6e and 7.

Okay, we have wifi 6, now we're adding 6GHz. How do you know if you have 6GHz? You check if it says 6...e. And is wifi 7 an upgrade to that? Lol who knows, depends on the individual device specs. Check if it says tri-band, that will tell you it supports 6GHz... OR that it can support two simultaneous networks on one of the other frequencies.

peter_d_sherman 10 hours ago [-]
I like this table in the page very much:

https://www.wiisfi.com/images/wifi-phy-table.jpg

Basically, it shows how a different encoding scheme/modulation is used based on distance (from 1024-QAM 5/6 down to BPSK 1/2) (https://en.wikipedia.org/wiki/Signal_modulation), which is interesting!

If we think about it, any WiFi adapter (in its most basic core functionality) is just the ability to send/receive radio at a specific frequency, and a modulation (send) and corresponding demodulation (receive) scheme on top of that.

If the modulation/demodulation can be handled by a DSP, FPGA or fast-enough CPU, then all that's really required to create a WiFi device is that plus the radio component. And then of course there are devices like "Cantennas" that could give increased range, although loss of omnidirectionality is the trade-off there...

Anyway, I never thought about the fact that different encoding methods are used relative to distance prior to reading this article (i.e., I learned something!) -- but it makes perfect sense now that I think about it!
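The rate impact of those MCS steps is easy to ballpark: a PHY rate is roughly (data subcarriers × bits per symbol × coding rate × spatial streams) / OFDM symbol duration. A minimal sketch using Wi-Fi 6 figures (980 data subcarriers in an 80 MHz channel, 12.8 µs symbol plus 0.8 µs guard interval; the specific MCS mappings are from the 802.11ax rate tables):

```python
def phy_rate_mbps(subcarriers, bits_per_symbol, coding_rate, streams,
                  symbol_us=12.8, gi_us=0.8):
    """Approximate PHY rate in Mbps: data bits per OFDM symbol / symbol time."""
    bits_per_ofdm_symbol = subcarriers * bits_per_symbol * coding_rate * streams
    return bits_per_ofdm_symbol / (symbol_us + gi_us)

# Wi-Fi 6, 80 MHz channel (980 data subcarriers), 2 spatial streams:
near = phy_rate_mbps(980, 10, 5 / 6, 2)  # MCS 11: 1024-QAM, rate 5/6 -> ~1201 Mbps
far = phy_rate_mbps(980, 1, 1 / 2, 2)    # MCS 0:  BPSK, rate 1/2    -> ~72 Mbps
```

The ~33x spread between the two ends of the table is entirely the modulation and coding rate; the channel width, subcarrier count, and symbol clock stay the same.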

saejox 1 days ago [-]
Most wifi speed issues are actually obstacle issues. One concrete wall and wifi 5/6/8 are all dead. Only 4 survives. 2.4 GHz is here to stay
abhinavk 1 days ago [-]
6(ax) works on 2.4GHz too.
WillPostForFood 1 days ago [-]
I was on top of G, started to lose track after N.
dbcooper 1 days ago [-]
Love this site for communicating the properties of WiFi protocols.
ibatindev 1 days ago [-]
Once again, IEEE 802.11ah (Wi-Fi HaLow) is completely forgotten. This one would be perfect for all the lights/sensors.
walrus01 1 days ago [-]
Latest-gen zigbee stuff and zwave 800 seems to have already thoroughly occupied that niche for a great deal of home and office automation equipment.
Avamander 1 days ago [-]
There aren't any usable chipsets with usable drivers for 802.11ah unfortunately.
ece 1 days ago [-]
Informative page, but the most common speeds with 2x2 MIMO probably were (in Mbps):

Far (QPSK), Wi-Fi 4 (20 MHz) / 5 (80 MHz) / 6 (160 MHz) / 7 (320 MHz): 28.8 / 130 / 288.23 / 576.47

Near (64QAM), Wi-Fi 4 (20 MHz) / 5 (80 MHz) / 6 (160 MHz) / 7 (320 MHz): 144.4 / 650 / 1,441.17 / 2,882.35

Not bad for throughput increases, though most of the increases come from more spectrum, and the reliability comes from more MIMO antennas/streams. I've had WiFi 4/5 2x2 routers and something tells me I won't see much more than what's listed above for 7. Buying a 4x4 does get you a generation of throughput in advance pretty much, if you need it.
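For anyone wanting to sanity-check the Wi-Fi 4 column: those figures fall out of the same OFDM rate arithmetic (assumed parameters: 52 data subcarriers at 20 MHz, 3.2 µs symbol plus 0.4 µs short guard interval, per the 802.11n rate tables):

```python
def n_rate_mbps(subcarriers, bits_per_symbol, coding_rate, streams,
                symbol_us=3.2, gi_us=0.4):
    """802.11n-style PHY rate: data bits per OFDM symbol / symbol duration."""
    return subcarriers * bits_per_symbol * coding_rate * streams / (symbol_us + gi_us)

# Wi-Fi 4, 20 MHz channel, 2x2, short guard interval:
near = n_rate_mbps(52, 6, 5 / 6, 2)  # MCS 15: 64-QAM rate 5/6 -> ~144.4 Mbps
far = n_rate_mbps(52, 2, 1 / 2, 2)   # MCS 9:  QPSK rate 1/2   -> ~28.9 Mbps
```

So the "144.4 near / ~28.8 far" pair in the table above is just the swing between the densest and a near-minimum modulation at a fixed channel width.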

blindriver 1 days ago [-]
One thing that wasn't mentioned is that the more APs you have, the worse off you are. That's because the choice of which AP to connect to is made client-side, and you have no control over it or visibility into it. So, no matter how you fiddle with it, your client may connect to the AP that is 40 feet away and on another floor rather than the one that is 10 feet away with a perfect line of sight. And you won't know why. This is the problem I had with my house, and I had to decrease the number of APs to get better reliability and performance.
supertrope 20 hours ago [-]
If a device can still hear a farther-away AP at, say, -62 dBm, it's not going to start searching. Searching has a cost in lower speeds and higher latency due to time spent tuning to other channels. It's only done if the current signal weakens. Decrease AP transmit power until each room only has one AP signal at -67 dBm or louder. https://support.apple.com/guide/deployment/wi-fi-roaming-sup... Intel Wi-Fi cards have a roaming aggressiveness setting.
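As an illustration of that sticky behaviour, a client's roaming decision boils down to logic like this (a hypothetical sketch; the -70 dBm trigger and 8 dB margin are assumptions loosely modeled on the thresholds Apple documents, not any vendor's actual code):

```python
def should_roam(current_rssi_dbm, candidate_rssi_dbm,
                scan_trigger_dbm=-70, margin_db=8):
    """Roam only if the current AP has weakened AND the candidate is clearly better."""
    if current_rssi_dbm > scan_trigger_dbm:
        return False  # current signal is strong enough; don't even scan
    return candidate_rssi_dbm >= current_rssi_dbm + margin_db

# A loud-enough current AP wins even when a closer AP would be louder still:
should_roam(-60, -45)  # False: above the scan trigger, client stays put
# Only once the current signal drops below the trigger does a clearly
# better candidate get picked:
should_roam(-75, -65)  # True: 10 dB better, beats the 8 dB margin
```

The hysteresis margin is what makes clients "sticky": a candidate that is only slightly better never wins, which avoids ping-ponging between APs but also explains the nearby-AP-gets-ignored complaints below.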
jauntywundrkind 1 days ago [-]
There's band steering. You absolutely do have control, if you opt to do so.

On openwrt, DAWN or usteer can both help your APs get sounding maps from clients and tell them which AP to join. Looking at the sounding maps is very fun data to see: highly recommend! The settings aren't the world's greatest but they are pretty good starts! https://github.com/berlin-open-wireless-lab/DAWN https://openwrt.org/docs/guide-user/network/wifi/dawn

Multiple APs are really nice because you can turn down the AP power, ideally, as you add more stations. Unfortunately I don't think you can tell a client to be quieter though; someone's laptop can be at 200mW tearing the hell out of the spectrum when everyone else is nicely conversing at 10-20mW.

toast0 1 days ago [-]
My experience with DAWN wasn't great. Some of my clients don't like the extensions you need, so I had to go back to no roaming extensions, just hoping clients make good decisions and tuning AP power levels to help.

Might try it again though, I'd love for it to work. And I was also dealing with some baseline wifi instability that I think firmware updates have resolved.

blindriver 19 hours ago [-]
Band steering doesn't work great. Neither does minimum RSSI. It's completely client-dependent and it's a headache. The best solution is to always minimize the number of APs you have, with as little overlap as possible, because of how unpredictable client behavior is. Like I said, I have a very bad problem where line-of-sight APs are ignored in favor of further-away APs, and no amount of fiddling is helping.
jauntywundrkind 16 hours ago [-]
This runs contrary imo to a lot of people's experience with for example Google Mesh, which is a product that I dare say works quite well for most people & most devices.

Agreed that signals like RSSI are device dependent. And open source software like DAWN is not the best at adjusting to this automatically. But in principle, most devices will give your AP a sounding map on request, and most clients will obey instruction to move to a different AP. Even really bad devices have generally worked ok for me at this.

The counter-advice of using the minimum number of APs leaves pretty large zones of bad reception, and still accepts the problem of roaming for many people. It's my hope that open source et al. get better and more competitive with what is clearly possible, especially given that we seem so well positioned to have control that could make good decisions here. To give up, when we have so much rich data and so many options, does not tempt me.

mc32 1 days ago [-]
From what I hear, Macs are stickier and Windows clients more promiscuous. So a Mac will stick with a farther-out AP when you have one near; on the other hand, a Windows client can bounce back and forth between APs, which can sometimes be a problem too.