Errata for Statistics for Linguistics with R, 2nd ed.
=====================================================
p. 1, bullet point 3:
data, summarize
->
data, how to summarize
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 1, last line:
they can handle)
->
they can handle
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 2, paragraph starting with "Chapter 3":
i.e. procedures, in which several potential
->
i.e. procedures in which several potential
(thanks to Daria Bębeniec for pointing this out to me)
p. 2, lower third:
things, Ican only deal
->
things, I can only deal
(thanks to Laurence Anthony for pointing this out to me)
p. 10, Table 3:
CLAUSALLY MODIFIED
->
CLAUSALLY-MODIFIED
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 12, above Figure 3
correlated with independent dependent variables
->
correlated with independent and dependent variables
(thanks to Laurence Anthony for pointing this out to me)
p. 24, Table 6
ONJ
->
OBJ
p. 28, bullet point 2, line 2 from bottom:
etc. (Outliers
->
etc.) (Outliers
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 28, last par.:
, that other probability
->
, the other probability
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 33, last full par, line 2 from bottom:
where here there
->
where there
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 34, caption of Figure 4:
All probabilities of possible results
->
All possible results
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 36, caption of Figure 6:
All probabilities of possible results
->
All possible results
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 41, first bullet point
the standard normal distribution with z-scores (norm);
->
the standard normal distribution (norm) with z-scores;
(thanks to Laurence Anthony for pointing this out to me)
p. 46, mid of last par.:
Since there are two independent variables for each of the two levels
->
Since there are two levels for each of the two independent variables
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 49, last par., line 1:
do not follow 0
->
do not follow (5)
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 50, line 1:
items of Table 13
->
items in Table 13
(thanks to Laurence Anthony for pointing this out to me)
p. 53, line 2:
*in* front *of*
->
*in front of*
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 54, line 11:
website (see
->
website; see
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 54, line 7 from bottom:
Now that every subjects
->
Now that every subject
(thanks to Laurence Anthony for pointing this out to me)
p. 55, line 3:
(without double quotes, of course)
->
(without double quotes, of course))
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 55, mid:
and blue arrives indicate
->
and blue arrows indicate
(thanks to Laurence Anthony for pointing this out to me)
p. 55, above bullet points:
you enter them into and
->
you enter them into and
(thanks to Laurence Anthony for pointing this out to me)
p. 57, line 1 after 6.:
files with example files
->
files with examples and data sets
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 57, mid:
(for _s_tatistics _f_or _l_inguists _w_ith _R_)
->
(for _s_tatistics _f_or _l_inguistics _w_ith _R_)
(thanks to Laurence Anthony for pointing this out to me)
p. 58, line 1:
the console with
->
the console by
(thanks to Laurence Anthony for pointing this out to me)
p. 58, line 7:
high-lighting
->
highlighting
(thanks to Laurence Anthony for pointing this out to me)
p. 66, last line of code:
> numbers
->
> numbers1.and.numbers2
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 71, below Figure 12:
You get
->
You get (with <_inputfiles/02-3-2_vector2.txt>)
(thanks to Laurence Anthony for pointing this out to me)
p. 71, mid:
Now, how do you save vectors into files.
->
Now, how do you save vectors into files?
(thanks to Peter Hancox for pointing this out to me)
p. 74, second code box from the bottom
> which(x<=7) which elements of x are <= 7?¶
->
> which(x<=7) # which elements of x are <= 7?¶
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 74, second code box from the bottom
> which(x>8 | x<3) which elements of x are >8 or <3?¶
->
> which(x>8 | x<3) # which elements of x are >8 or <3?¶
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 75, last par.:
The output of %in% is a logical variable which says for each element of the vector before %in% whether it occurs in the vector after %in%.
->
The output of %in% is a logical variable which, for each element of the vector before %in%, says whether or not it occurs in the vector after %in%.
(thanks to Peter Hancox for this suggestion)
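The corrected wording can be illustrated with a minimal R snippet (hypothetical vectors, not from the book):

```r
x <- c("a", "b", "c", "d")  # the vector before %in%
y <- c("b", "d", "e")       # the vector after %in%
x %in% y  # for each element of x: does it occur in y?
# returns FALSE TRUE FALSE TRUE
```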
p. 79, beginning of 4:
we read in data frames
->
we read in data frames (to be discussed below)
(thanks to Laurence Anthony for pointing this out to me)
p. 79, 2nd to last line of text:
mostly the vector
->
usually the vector
(thanks to Peter Hancox for this suggestion)
p. 80, in the middle
- 0.992 < interval/level 1 ≤ 3.66;
- ? 3.66 < interval/level 1 ≤ 6.34;
- ? 6.34 < interval/level 1 ≤ 9.01.
->
- 0.992 < interval/level 1 ≤ 3.66;
- 3.66 < interval/level 2 ≤ 6.34;
- 6.34 < interval/level 3 ≤ 9.01.
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
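The corrected boundaries are the kind of output R's cut produces when splitting a range into three equal-width levels; a sketch with hypothetical data (the exact boundaries depend on the data, but note that cut widens the range slightly, which is why the lowest bound is 0.992 rather than 1):

```r
x <- 1:9                  # hypothetical interval/ratio-scaled vector
cut(x, breaks=3)          # three equal-width levels/intervals
levels(cut(x, breaks=3))  # one label per interval, lower bound exclusive
```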
p. 82, line 2:
chang the name
->
change the name
(thanks to Laurence Anthony for pointing this out to me)
p. 86, beginning of 5.2:
save it as a comma-separated text file
->
tab-delimited text file with the extension .csv
(thanks to Laurence Anthony for pointing this out to me)
p. 86, last full par.:
; then you choose tabs as field delimiter
->
); then you choose tabs as field delimiters
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 86, bullet point 1:
here, too;
->
here, too);
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 87, par. before last block of code:
with read.delim:, which
->
with read.delim, which
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 90, line 2 of second grey box of code
Class
->
CLASS
(thanks to Laurence Anthony for pointing this out to me)
p. 90, last line:
easy with vector
->
easy with vectors
(thanks to Peter Hancox for pointing this out to me)
p. 91, line 1 after 1st code block
b<-a[a$Class=="open",]; b
->
b<-a[a$CLASS=="open",]; b
(thanks to Peter Hancox for pointing this out to me)
p. 91, last two lines:
in a spreadsheet software
->
in a spreadsheet software program
(thanks to Laurence Anthony for pointing this out to me)
p. 92, right above the THINK BREAK
within Class according to
->
within CLASS according to
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 93, recommendations
to read in tab-delimited files
->
to read in comma-separated files
(thanks to Laurence Anthony for pointing this out to me)
p. 94, first block of code:
what to do if this logical expression evaluates to FALSE
->
what to do if this logical expression evaluates to TRUE
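The corrected description reflects how an if/else construct in R works: the first branch runs when the logical expression evaluates to TRUE. A minimal sketch with a hypothetical value:

```r
x <- 3                          # hypothetical value
if (x > 2) {                    # logical expression
   print("greater than 2")     # what to do if it evaluates to TRUE
} else {
   print("not greater than 2") # what to do if it evaluates to FALSE
}
# prints "greater than 2"
```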
p. 95, last block of code:
as often often as
->
as often as
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 100, block of code:
> peek< function (x, n=6) {¶
->
> peek<-function (x, n=6) {¶
(thanks to Joanna Zaleska for pointing this out to me)
p. 118, below third grey box of code:
10.000
->
10,000
(thanks to Steven Coates for pointing this out to me)
p. 118, below third grey box of code:
5.000
->
5,000
(thanks to Steven Coates for pointing this out to me)
p. 136, near the bottom
the levels of the first vector in the rows
->
the unique values/levels of the first vector/factor in the rows
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 139, code below Figure 31
fil.table
->
fill.table
(twice; thanks to Rik Vosters for pointing this out to me)
p. 153, par. 2:
colleages
->
colleagues
(thanks to Daria Bębeniec for pointing this out to me)
p. 162, line 5 of 4.1.1.1:
you must some know test like this
->
you must know some test like this
(thanks to Simone Ueberwasser for pointing this out to me)
p. 163, grey block of code:
xlab="Tense-Apect correlation"
->
xlab="Tense-Aspect correlation"
(thanks to Simone Ueberwasser for pointing this out to me)
p. 170, mid of last full par.:
0.00000125825
->
0.000001125825
(thanks to Susanne Flach for pointing this out to me)
p. 173, second bullet point:
interval/ratio-scaled: the
->
interval/ratio-scaled variable: the
(thanks to Daria Bębeniec for pointing this out to me)
p. 186, block of code:
sqrt(test.Peters$statistic/
sum(Peters.2001)*(min(dim(Peters.2001))-1))¶
->
sqrt(test.Peters$statistic/
(sum(Peters.2001)*(min(dim(Peters.2001))-1)))¶
(thanks to Alvin Chen for pointing this out to me)
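The added parentheses matter because, without them, the statistic is divided by sum(Peters.2001) first and the result is then multiplied by (min(dim(Peters.2001))-1), rather than dividing by the whole product. A sketch of the corrected computation (Cramer's V) with a hypothetical 2-by-2 table standing in for Peters.2001:

```r
Peters.2001 <- matrix(c(30, 10, 20, 40), ncol=2)  # hypothetical frequencies
test.Peters <- chisq.test(Peters.2001, correct=FALSE)
# Cramer's V: chi-squared divided by (n * (min(rows, cols) - 1))
sqrt(test.Peters$statistic /
   (sum(Peters.2001) * (min(dim(Peters.2001)) - 1)))
```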
p. 211, first grey block of code:
Dices <- read.delim(file.choose()
->
Dices <- read.delim(file.choose())
(thanks to Simone Ueberwasser for pointing this out to me)
p. 217, last grey block of code:
ylim=(c(0, 1000))
->
ylim=c(0, 1000)
p. 218-221: Given that the test for homogeneity of variances returned a non-significant result, it would have been didactically more consistent to apply t.test with the additional argument var.equal=TRUE. If the t-test is computed like that, the p-value changes from 0.01619 (with var.equal=FALSE) to 0.01611 (with var.equal=TRUE); the formula for the df changes to n_1+n_2-2.
(thanks to Yuliya Morozova for pointing this out to me)
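The alternative computation described here is just R's t.test with var.equal=TRUE, which uses the pooled-variance formula and n_1+n_2-2 degrees of freedom; a sketch with hypothetical data (not the book's):

```r
set.seed(1)                     # hypothetical data for illustration
g1 <- rnorm(20, mean=10, sd=2)
g2 <- rnorm(25, mean=12, sd=2)
t.test(g1, g2, var.equal=TRUE)  # pooled variance, df = n1 + n2 - 2 = 43
t.test(g1, g2)                  # default: Welch test (var.equal=FALSE)
```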
p. 221, recommendations box:
of this F-test, which
->
of this t-test, which
p. 229, second line from bottom
it may safer to compute
->
it may be safer to compute
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 248, line 12:
a small a pilot
->
a small pilot
(thanks to Alvin Chen for pointing this out to me)
p. 256, par. 2, line 2
advanatage
->
advantage
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 264, the first line of the paragraph right above the last code example
ecdf plots).
->
ecdf plots),
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 273, mid of last paragraph
trials is 0.95^2=0.857375.
->
trials is 0.95^3=0.857375.
(thanks to Earl Brown for pointing this out to me)
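The corrected exponent can be checked directly in R: the probability of no Type I error in three independent tests at alpha = 0.05 is

```r
0.95^3      # = 0.857375, matching the corrected value
1 - 0.95^3  # = 0.142625, chance of at least one Type I error
```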
p. 276, section heading of 5.2.4
2.4. A linear model with a two categorical predictors
->
2.4. A linear model with two categorical predictors
p. 278, the second item in the list at the top
(averaging across IMAGEABILITY)’
->
(averaging across IMAGEABILITY);
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 295, Procedure box
fewer than 95% of the model'2 absolute
->
fewer than 5% of the model's absolute
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 304, bottom
Since FAMILIARITY has more than 1 df, you use drop1 (or other functions, see the code file) to get one p-value, and FAMILIARITY is highly significant.
->
Since CONJ has more than 1 df, you use drop1 (or other functions, see the code file) to get one p-value, and CONJ is highly significant.
(thanks to Nina Julich for pointing this out to me)
p. 306, par. 3:
Fox and Weisberg (2011:239)
->
Fox and Weisberg (2011: 239)
(thanks to Daria Bębeniec for pointing this out to me)
p. 315, par. 3:
(but see Fox and Weisberg (2011: Ch. 6))
->
(but see Fox and Weisberg 2011: Ch. 6)
(thanks to Daria Bębeniec for pointing this out to me)
p. 319, line 2 of the paragraph below the last code box
(and the argument ".*\\." means ‘characters up to and including a period’)
->
(and the argument "^.*?\\." means ‘characters up to and including a period’)
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 320, last par., mid:
more problematic, though so
->
more problematic, though, so
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 322, last full par., line 1:
get again get
->
again get
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 323, line 2 before last code block:
take take
->
take
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 328, par. below bullet points:
popluation
->
population
p. 328, last par.:
her,e
->
here,
p. 330:
Errror: TEXT
->
Error: TEXT
p. 333, par. 1 of Section 5.4:
use as the above may have made you expect.
->
use as you may have expected.
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 333, line 3 from bottom:
system, they allow the
->
system, allowing the
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 335, par. 2, line 3:
chapter. Well
->
chapter? Well
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 335, bullet point 2, penultimate line:
do no converge
->
do not converge
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 335, bullet point 2, mid:
with a maximal random-effect structure
->
with a maximal random-effects structure
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 336, middle of the page (above the Recommendation(s) box):
that make tackling some of these questions more easily
->
that make tackling some of these questions easier
(thanks to Daria Bębeniec for pointing this out to me)
p. 339, Procedure box:
(Post-hoc
->
Post-hoc
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 339, bottom:
clusters with bar (and
->
clusters (and
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 342, bulleted list (three times):
for aa and bb:
->
for aa and bb
(thanks to Jae-Woong Choe and colleagues for pointing this out to me)
p. 343, middle paragraph:
all other measures for ratio-scaled are:
->
all other measures for ratio-scaled variables are
(thanks to Daria Bębeniec for pointing this out to me)
p. 348, line 5 below Figure 81:
and the latter very little substructure
->
and the latter has very little substructure
(thanks to Daria Bębeniec for pointing this out to me)
p. 348, line 5 from bottom:
The function cluster.stats from the library fpc offers
->
The function cluster.stats from the library fpc offers
p. 349, line 5:
sould
->
should
(thanks to Daria Bębeniec for pointing this out to me)
p. 355: remove the two Divjak & Gries references.
(thanks to Daria Bębeniec for pointing this out to me)
#########################
last updated 29 Aug 2017
STG