anduni, you posted --- The fact is, your handset's output power is controlled by the cellular network, not by the phone. (So, why compare the phones? - Compare the NETWORKS!) Phone-x may have the CAPABILITY to output (slightly) more power than Phone-y, but in real-world use, neither one of them even comes close to its maximum capability. The phone that, on average, will "have" the highest SAR will be the one that has to power up the most to reach the network towers. Typically, that means the carriers who have the fewest towers constructed in your service area.
Clarify please.
Hello there,
Admittedly, I'm not terribly familiar with the Mexican states, but I hope that you and your family survived the recent Hurricane Patricia just fine!
From the news, looks like the impacts were not as bad as many feared.
As to your question - I can read this two ways, so let me try to answer both:
1) HOW the cell phone output power is controlled:
I did some quick searching online and found this link (maybe not the best?), but of the few I looked at, seems to be "understandable" by non-engineers.
The Inner Loop uplink power control discussion starts on Slide-10.
You just need to know that "UE" = "User Equipment" (in other words, a cell phone handset).
http://www.comlab.hut.fi/opetus/333/reports/Yin_Power_control_in_cellular_systems.pdf
Also, there are a few different power control schemes going on, but the one we're interested in is the "Inner Loop Power Control" because this is the one that is active the VAST majority of the time. There is a separate scheme for INITIAL network access, and even this is power-control limited - but it's a little more autonomous than the inner loop. Not by much, and it's only active for those first few seconds when you turn the phone on, and maybe when switching to roaming if you leave your network. The third control is the "Outer Loop", and this deals with balancing the whole network, and doesn't have the same effect on your handset's output power. Think of it as the slow trending, or "coarse tune", power control for the whole system. The handset output power is controlled via network commands sent to the handset 1500 times a second (for UMTS 3G networks).
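If it helps to see the idea in code, here's a minimal toy sketch of that inner loop - NOT a real UMTS stack, and the path loss, target level, and 24 dBm power cap are all illustrative assumptions I've picked. The point it shows: the network repeatedly commands the handset up or down by 1 dB, so a nearby phone settles at low power while a distant phone gets pinned at its maximum.

```python
# Toy sketch of inner-loop (closed-loop) power control. The base station
# compares the received level to a target and commands the handset to step
# its transmit power up or down by 1 dB. UMTS sends such commands 1500x/sec;
# all numbers below are illustrative assumptions, not real network values.

def inner_loop(tx_power_dbm, path_loss_db, target_dbm, steps=50, max_dbm=24.0):
    for _ in range(steps):
        received = tx_power_dbm - path_loss_db   # toy link model, no noise
        if received < target_dbm:
            tx_power_dbm = min(tx_power_dbm + 1.0, max_dbm)  # "power up" command
        else:
            tx_power_dbm -= 1.0                              # "power down" command
    return tx_power_dbm

near = inner_loop(0.0, path_loss_db=80.0, target_dbm=-75.0)   # settles near the ~5 dBm it needs
far  = inner_loop(0.0, path_loss_db=110.0, target_dbm=-75.0)  # pinned at the 24 dBm cap
print(near, far)
```

The distant handset in this sketch ends up transmitting at its maximum continuously - which is exactly the "highest average SAR" scenario from the quoted post.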
2) The other way to read your question is: What does carrier tower density have to do with it?
Typically, fewer towers mean the towers are spaced further apart to cover a given geographic area. Underlying this is population density, because the more traffic demand there is, the more towers you will need to service the traffic at reasonable speed and quality - there's only so much spectrum, you have to re-use it (either via codes, or discrete channels).
Anyway -- the further apart the towers are, the more likely you are to be distant from them.
Keep in mind, your phone is sometimes connecting to more than one tower at a time. (That's too complicated to get into here - but it explains the need for Outer Loop power control.)
Another consideration is the actual frequency. Here in the US, "Cellular" is in the 850 MHz band (Uplink = handset-to-tower: 824 to 849 MHz), and the Downlink (tower-to-handset) is 45 MHz above that. For PCS, it's UL: 1850-1910 MHz, with DL 80 MHz higher. There are other bands with different frequency splits.
The point is, the higher frequencies don't propagate as well as the lower ones do -- particularly for in-building coverage.
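You can see the trend with the standard free-space path loss formula (real-world and in-building losses are worse, but the frequency dependence shows up even in the ideal case - the distance and frequencies below are just example values):

```python
# Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c).
# At the same distance, 1900 MHz PCS loses about 7 dB more than 850 MHz
# cellular in free space - one reason higher frequencies "don't reach" as far.
import math

def fspl_db(distance_m, freq_hz):
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

loss_850  = fspl_db(1000, 850e6)   # 1 km at cellular band
loss_1900 = fspl_db(1000, 1900e6)  # 1 km at PCS band
print(round(loss_850, 1), round(loss_1900, 1), round(loss_1900 - loss_850, 1))
```

Note the difference is pure frequency: 20*log10(1900/850) is about 7 dB regardless of distance, and the handset has to make that up with extra transmit power.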
In the US, for "most" markets, Verizon and AT&T have the majority of the lower spectrum. The other (smaller) carriers tend to have the higher channels. That's not to say that Verizon and AT&T don't have higher channels too. They do. But Verizon and AT&T have huge market share, therefore the most customers, therefore the most towers, therefore the highest tower density - and - therefore, on average, the least need for high handset power in the uplink to reach those towers.
So, to recap: Your handset output power (uplink: handset-to-tower) is controlled by the network.
For a 2nd- or 3rd-tier network provider, the "network" is typically more distant from you, and typically using "non-optimal" higher frequencies, which causes your phone to transmit more power, on average, over time.
In the case of a 1st tier provider (Verizon and AT&T here in the US), the towers, on average, are closer and using frequencies that propagate better -- meaning that your handset can put out less power, on average, over time, as it communicates with those networks. And even when on the "less than ideal" higher frequencies used by these 1st tier carriers, they still tend to have more towers (which means they are statistically closer to you) so your handset still doesn't need to talk as far, and thereby uses less power as determined by the inner power control loop. BUT - these networks can be more crowded, and cost more. So, they're not necessarily ideal for every user.
This is a big-picture discussion.
There are places, even here in the US, where Verizon and AT&T are NOT the 1st tier carriers.
Doesn't really matter who is as far as the inner-loop power control goes, however.
Tower density is usually a function of traffic demand, which goes hand-in-hand with population density.
Your handset is going to transmit the most power when it is far from the nearest tower (network) resource in terms of downlink signal strength from that tower.
Could be 20 miles away or more.
Note also that the higher frequencies are not absorbed by the human body as well as the lower frequencies are, so you have to factor this in as well.
It turns out the FM Broadcast Band (i.e., 88-108 MHz) has the best absorption by humans because those wavelengths are close to the resonant length of adult human beings.
The wavelengths for Wi-Fi, cellular, PCS, AWS, and LTE (above 700 MHz) are all a small fraction by comparison, not resonant, and not nearly so well absorbed.
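The wavelength comparison is just c/f - a quick calculation (the specific band frequencies below are my example picks) shows FM wavelengths are on the order of metres, comparable to an adult's height, while the cellular-and-up bands are a fraction of that:

```python
# Wavelength = c / f. FM broadcast wavelengths come out at a few metres,
# roughly the scale of an adult human; cellular/PCS/Wi-Fi are much shorter.
c = 299_792_458.0  # speed of light, m/s

bands_hz = {"FM 98 MHz": 98e6, "Cellular 850 MHz": 850e6,
            "PCS 1900 MHz": 1900e6, "Wi-Fi 2.4 GHz": 2.4e9}
wavelengths_m = {name: c / f for name, f in bands_hz.items()}
for name, wl in wavelengths_m.items():
    print(f"{name}: wavelength {wl:.2f} m")
```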
Did this help, or did I just confuse the heck out of you!?
