Today I finally had okonomiyaki. There are only two places in Suzuka that serve it, and really, I have no clue why it hasn’t made the transition overseas. Quite a delicious dish, especially after an 11-hour workday.

Then three large Japanese beers. Ahhhh, what a joy that is, when paired with Peace cigarettes.

Then there’s coming home and finding a debate de facto ended because the other side has decided that victory can be had by shutting down the other side’s ability to reply. More and more I see how some people become dubious of the claims made by certain types of people on gnxp.

http://www.haloscan.com/comments/raldanash/1709626939114881876/?src=hsr#2520642

A wiser friend told me that my chain is being yanked and that I should perhaps let fools wallow in their supposed intellectual superiority. I say that as much as my chain is being yanked, I’m yanking back. Let me make it clear: I know the difference between the mathematical/statistical term “assume” and the logical term “assume”. I also know when statistics are being applied in a manner that’s logically faulty; in other words, when someone assigns a theoretical value to a statistical analysis that does not follow. That’s a statistical non sequitur, caused by bypassed or ignored information, whether for ideological reasons or through simple human error (let’s remain silent on the causes of the human error, eh). So for example we have this link:

http://www.gifted.uconn.edu/siegle/research/Normal/Interpret%20Raw%20Scores.html

Which explains the normal uneasiness about deriving z-scores from raw scores. Note that it’s within the context of standardized test spread. I don’t have any problem with that; that’s clear enough. So let’s look at things within the context of statistical analysis where the math is right but the base values assigned are completely off, for whatever reasons, in this case mostly scientific and economic:

http://www.cs.toronto.edu/~radford/mm-errata/errata.html
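
Just so the terms are pinned down, here’s a minimal Python sketch, with my own toy numbers rather than anything from the actual debate, of what the first link walks through: a raw score only becomes a z-score once you assume a population mean and standard deviation, and the z you get is only as honest as those assumed base values.

```python
# Toy illustration: the same raw score yields a different z-score,
# and even a different sign, depending on the norms you assume.

def z_score(raw, assumed_mean, assumed_sd):
    """Standardize a raw score against an assumed population."""
    return (raw - assumed_mean) / assumed_sd

raw = 98  # made-up raw test score

print(z_score(raw, assumed_mean=100, assumed_sd=15))  # ~ -0.13: "below average"
print(z_score(raw, assumed_mean=95, assumed_sd=10))   # ~ +0.30: "above average"
```

Same arithmetic, opposite verdict; the only thing that changed is the assumption.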

Let’s say this: the debate is about an analysis done on the purported predictable variability of IQ along a geographic North-South gradient, using such data as IQ tests and TIMSS scores. The hypothesis is that a pathological agent is responsible for the lower scores within a subset of a population. The opponent takes the position that his statistical analysis is sound, and that with his corrections he’s able to soundly rout me and the others who say otherwise.

Now let’s look at the limits of z-analysis within a highly controlled environment:

http://www.stanford.edu/~engler/sas-camera-ready.pdf

Can’t get more controlled than that, right? So looking at the original source data he’s drawing on to reach his conclusion, can we say that there are factors that simply can’t be controlled for, given the data set and the explicit claim made? Again, even small discrepancies can reverberate, as shown here:

http://www.swarthmore.edu/NatSci/wstromq1/stat11/11solutions5.htm

Extremely narrow claim set and data set, which can be verified very easily, yet still problematic.
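
To make the “reverberate” point concrete, here’s a rough sketch, again with made-up numbers, of how a small slip in an assumed standard deviation gets amplified once you move out into the tail, which is exactly where the grand claims tend to live:

```python
# Toy illustration: a small error in an assumed standard deviation
# reverberates once you move out into the tail of the distribution.

from math import erfc, sqrt

def tail_fraction(cutoff, mean, sd):
    """P(X > cutoff) for a normal distribution with the given mean and SD."""
    z = (cutoff - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

cutoff, mean = 130, 100
print(tail_fraction(cutoff, mean, sd=15))  # ~0.023 of the population above the cutoff
print(tail_fraction(cutoff, mean, sd=14))  # ~0.016 of the population above the cutoff
# A roughly 7% disagreement over the assumed SD shifts the estimated
# share above the cutoff by about 40%.
```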

In other words, when running a z-analysis, the larger the claims placed upon the data set, the more contingent factors come into play. If the claimant is ignorant of various key problems with his analysis, of the factors in winnowing the data set, and even of key issues in the gathering of the data itself, well, everything is problematized, especially when massive looming facts obstruct the analysis.
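
Here’s a crude simulation of what I mean by a contingent factor baked into the gathering of the data. The selection filter is entirely my invention for illustration: two groups drawn from identical distributions, except one group’s scores only make it into the data set by clearing a hurdle that’s correlated with the score itself.

```python
# Toy illustration: an uncontrolled selection effect in how the data
# were gathered manufactures a "significant" group difference between
# two identical underlying populations.

import random
from statistics import mean, stdev

random.seed(0)
N = 2000
pop_a = [random.gauss(100, 15) for _ in range(N)]
pop_b = [random.gauss(100, 15) for _ in range(N)]  # same distribution as A

# Group B's scores only enter the data set if the person clears an
# unrelated-looking hurdle that is in fact correlated with the score.
sample_a = pop_a
sample_b = [x for x in pop_b if x + random.gauss(0, 10) > 95]

diff = mean(sample_a) - mean(sample_b)
se = (stdev(sample_a) ** 2 / len(sample_a)
      + stdev(sample_b) ** 2 / len(sample_b)) ** 0.5
print(f"z on the group means: {diff / se:.1f}")
# The z lands far past any conventional cutoff (double digits in
# magnitude), making group B look wildly "ahead", purely as an
# artifact of how group B's data were collected.
```

The underlying populations don’t differ at all; the “significance” lives entirely in how the sample was gathered.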

When we “assume” in statistics, it’s with the hope that the person doing so is doing so competently within the data set. We can “assume” anything mathematically and gain a certain result if the numbers are cranked. If the base values “assumed” are flawed logically, well… If a guy who’s really drunk can poke holes in an argument bigger than the one the iceberg left in the Titanic, that says something about the claimant.
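
And since the “crank the numbers” line deserves a demonstration, here’s a toy one-sample z-test with invented figures, run twice on the same observed data. The arithmetic is flawless both times; the verdict comes entirely from which baseline you decided to “assume”.

```python
# Toy illustration: the same observed data, the same flawless arithmetic,
# and two opposite verdicts depending on the baseline you "assume".

from math import erfc, sqrt

def one_sample_z(sample_mean, n, sd, assumed_baseline):
    """One-sample z-test of an observed mean against an assumed baseline."""
    z = (sample_mean - assumed_baseline) / (sd / sqrt(n))
    p = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p

obs_mean, n, sd = 97.0, 400, 15.0  # invented survey figures

print(one_sample_z(obs_mean, n, sd, assumed_baseline=100.0))  # z = -4.0, p ~ 6e-5
print(one_sample_z(obs_mean, n, sd, assumed_baseline=97.5))   # z ~ -0.67, p ~ 0.5
```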

Stating that the opposition needs to have a perfect grasp of *all* the math at hand is pointless when we are not the ones making extraordinary claims. One popped hull plate is all it takes to sink a battleship of hubris and ego.