Tools and Methods for Open and Reproducible Research

The first week of the course covered the basic tools and requirements for the rest of the weeks. I practiced the basics of R by completing a DataCamp exercise called “R Short and Sweet”. In addition, I brushed up on Git and R Markdown by going through the first workshop, as detailed below.

RStudio Exercise 1

  1. I already had a GitHub account, so I simply forked “IODS-project” from Tuomo Nieminen’s GitHub.
  2. I modified the chapter1.Rmd file under IODS-project in RStudio. The link to my IODS-project repository is here.
  3. I opened the index.Rmd file and added my name. I committed the change and pushed it to my GitHub repository.
  4. I created a new R Markdown file with RStudio. I wrote a few sentences about the course and saved it as README.Rmd under IODS-project. The link to my course diary is here.
  5. I replaced the default theme of my course diary web page with the Time Machine theme.
  6. I uploaded all the changes to GitHub.
  7. I made sure that all the changes were visible.

Note 1: One thing that was not mentioned in the course but could be useful to include in the future: RStudio works hand-in-hand with GitHub, so R scripts and Rmd documents can be linked to GitHub directly. The parent directory of the course (cloned locally to the PC) can be imported as a GitHub project and authorized with one’s username and password.

keywords: github, linux, rstudio, rmarkdown


Regression and Model Validation

During the second week of this course, we delved deeper into R and statistics. We learned about regression models and the application of R in statistical modeling. The DataCamp exercises, along with the two embedded videos, provided a good background on the topics. Chapter three of “An Introduction to Statistical Learning with Applications in R” covered linear regression in depth.

RStudio Exercise 2

Data Wrangling

After going through the study materials, I attempted the RStudio exercise. The first part of the exercise was related to data wrangling, where a subset of the raw data (observations) was extracted into a new table. The R script used to create the table can be found here.

Analysis

The R script for this part is available here. The data used in this exercise comes from an international survey of approaches to learning conducted by Kimmo Vehkalahti. The survey was funded by Teachers’ Academy funding (2013-2015), and the data was collected from December 2014 to January 2015. The survey was conducted in Finland with the aim of understanding the relationship between learning approaches and students’ achievements in an introductory statistics course. A total of 183 individuals were included in the survey, and the students were assessed on three different studying approaches: surface, deep and strategic. Additional details about the survey can be found here. After preprocessing in the data wrangling step, we read the data into R and applied regression models.

lrn2014<-read.table("data/learning2014.csv")
str(lrn2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(lrn2014)
## [1] 166   7

The final table used for the analysis contains seven variables and 166 individuals (see above). Among the variables, gender is a factor, age and points are integers, whereas attitude, deep, stra and surf are numeric (floating-point) variables.

summary(lrn2014)
##  gender       age           attitude          deep            stra      
##  F:110   Min.   :17.00   Min.   :1.400   Min.   :1.583   Min.   :1.250  
##  M: 56   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333   1st Qu.:2.625  
##          Median :22.00   Median :3.200   Median :3.667   Median :3.188  
##          Mean   :25.51   Mean   :3.143   Mean   :3.680   Mean   :3.121  
##          3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083   3rd Qu.:3.625  
##          Max.   :55.00   Max.   :5.000   Max.   :4.917   Max.   :5.000  
##       surf           points     
##  Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.417   1st Qu.:19.00  
##  Median :2.833   Median :23.00  
##  Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :4.333   Max.   :33.00

The number of females (n=110) in this survey is almost twice the number of males (n=56). The ages of the students range from 17 up to 55 years.

library(GGally)  # provides ggpairs(); attaching GGally also brings in ggplot2 for aes()
plot_lrn2014 <- ggpairs(lrn2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
plot_lrn2014

The graphical overview of the data is shown above. The overall goal of the survey is to identify how the age of the students, their attitude towards learning and the three different learning approaches contribute to the final exam points. In general, attitude towards learning has the strongest relationship with the overall outcome (i.e. points scored), whereas the deep learning approach shows essentially no relationship with it.

The explanatory variables were selected based on their absolute correlation with exam points. The three explanatory variables chosen (the top correlated variables, also visible in the plot above) are the student’s attitude towards learning (attitude), learning strategy (stra) and surface learning approach (surf). The model of exam points on these three explanatory variables has a maximum residual of 10.9 and a median residual of 0.5; here, a residual is the difference between the observed value and the value predicted by the model. The model summary shows that attitude is a highly significant (Pr = 1.93e-08) predictor of a student’s exam points, whereas learning strategy and surface learning are not significant (Pr > 0.05).
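
As a side note, this selection step can be done directly in R. Below is a minimal sketch of my own showing how the candidate variables could be ranked by their absolute correlation with points (the column names follow the str() output above):

# correlations of the numeric variables with the exam points
num_vars <- lrn2014[, c("age", "attitude", "deep", "stra", "surf", "points")]
cor_with_points <- cor(num_vars)[, "points"]
# drop the trivial self-correlation and rank by absolute value
sort(abs(cor_with_points[names(cor_with_points) != "points"]), decreasing = TRUE)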

model<-lm(points ~ attitude + stra + surf, data = lrn2014)
summary(model)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = lrn2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

The summary of the model after removing the insignificant variables is shown below. The adjusted R-squared decreased slightly, from 0.1927 (earlier model) to 0.1856 (updated model), while the multiple R-squared dropped from 0.2074 to 0.1906. However, other criteria for model evaluation, such as the F-statistic (from 14.13 to 38.61) and the p-value (from 3.156e-08 to 4.119e-09), improved considerably. Thus, we can conclude that the R-squared value alone may not determine the quality of the model. In this particular case, the rather low R-squared could also be due to outliers in the data.
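
A hedged way to back up this comparison is a formal F-test between the two nested models; the sketch below refits the reduced model under a temporary name (it mirrors the model_sig fitted below) so that it can be compared against the full model fitted above:

# compare the full three-variable model with the attitude-only model using an F-test
model_reduced <- lm(points ~ attitude, data = lrn2014)
anova(model_reduced, model)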

model_sig<-lm(points ~ attitude, data = lrn2014)
summary(model_sig)
## 
## Call:
## lm(formula = points ~ attitude, data = lrn2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09
par(mfrow = c(2,2))
plot(model_sig, which = c(1,2,5))

The three diagnostic plots are generated above. The key assumptions of the linear model are linearity and normally distributed errors. Based on the plots, the errors appear to be approximately normally distributed (clearly seen in the Q-Q plot). Similarly, the residuals vs. fitted plot shows that the size of the errors does not depend on the fitted values (and hence on attitude, the only explanatory variable). Moreover, the residuals vs. leverage plot shows that even the two points towards the right have only a minor influence on the fit. No single outlier dominates the model, so the assumptions seem more or less valid.


Logistic Regression

One way to move on from linear regression is to consider settings where the dependent (target) variable is discrete. This opens a wide range of possibilities for modelling phenomena beyond the assumptions of continuity or normality.

Logistic regression is a powerful method that is well suited for predicting and classifying data by working with probabilities. It belongs to a large family of statistical models called Generalized Linear Models (GLM). An important special case that involves a binary target (taking only the values 0 or 1) is the most typical and popular form of logistic regression.

We will learn the concept of odds ratio (OR), which helps to understand and interpret the estimated coefficients of a logistic regression model. We also take a brief look at cross-validation, an important principle and technique for assessing the performance of a statistical model with another data set, for example by splitting the data into a training set and a testing set.
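
As a small aside (a toy numerical example of my own, not from the course material), the link between probability, odds and log-odds can be checked directly in R:

# a probability of 0.75 corresponds to odds of 3: the event is 3 times as likely as not
p <- 0.75
odds <- p / (1 - p)
log_odds <- log(odds)   # log-odds, the scale on which logistic regression coefficients live
exp(log_odds)           # exponentiating recovers the odds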

The slides and videos related to logistic regression can be found below.
Video: Logistic regression: probability and odds
Video: Logistic regression: Odds ratios
Video: Cross-validation: training and testing sets
Slides: Logistic regression

After going through the videos, we practiced the DataCamp exercises on logistic regression and started to work on the workshop (RStudio Exercise 3).

RStudio Exercise 3

Data Wrangling

The data for Exercise 3 was downloaded from the UCI Machine Learning Repository (link). The zipped file contained two tables, student-mat.csv and student-por.csv. In this data wrangling exercise, the main task was to join the two data sets and create a data frame for the logistic regression analysis. More detailed information about the data is given in the next section (Data Analysis) of this exercise. The R script associated with this exercise can be found here.
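
Although the authoritative code is in the linked script, a rough sketch of the joining step might look like the following; the choice of identifier columns is my assumption about which background variables identify a student in both tables:

library(dplyr)
math <- read.table("student-mat.csv", sep = ";", header = TRUE)
por  <- read.table("student-por.csv", sep = ";", header = TRUE)
# hypothetical set of identifier columns shared by both tables
join_cols <- c("school", "sex", "age", "address", "famsize", "Pstatus",
               "Medu", "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet")
# keep only students present in both data sets
math_por <- inner_join(math, por, by = join_cols, suffix = c(".math", ".por"))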

Data Analysis

The joined student alcohol consumption data created during the wrangling exercise was read into R.

alc<-read.table("data/alc.csv")
#head(alc)
colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

The data set in this exercise is a collection of information associated with student performance in two Portuguese high schools. Two subjects, Mathematics (mat) and Portuguese language (por), were chosen for the study. The findings from this study were published in the Proceedings of the 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008), held in April 2008 in Porto, Portugal (link). Altogether 33 attributes covering different aspects of student life were assessed. More detailed attribute information can be found here.

Here, the main goal of the analysis is to study how alcohol consumption is associated with other aspects of a student’s life. After going through the background information, it is a bit easier to identify interesting variables that could be related to alcohol consumption. Personally, I believe the following four variables are likely to be associated with alcohol consumption:

Weekly study time (studytime): In my opinion, if a student spends more time studying, he or she will have less time for alcohol consumption.

Going out with friends (goout): In general, students go out with friends for parties and get-togethers. Attending such parties and gatherings will lead to higher alcohol consumption compared to students who do not participate in such activities.

Number of school absences (absences): We can think of two links between alcohol consumption and school absences. The main one is that when a student consumes alcohol (especially in the evening), he or she will have less desire to go to school the next day (depending on the level of consumption). Another might be that a student is absent from class because he or she plans to drink alcoholic beverages.

Quality of family relationships (famrel): I think the quality of the family relationship also affects a student’s attitude towards alcohol consumption, and a student with a bad family relationship may consume more alcohol compared to one with a better family relationship.

In the following sections, we will see in detail how well my hypotheses are supported by the data. First, let’s summarise the subset of the table which includes the variables I have chosen.

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
## 
##     nasa
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
my_var<- c("studytime", "absences", "goout", "famrel", "high_use")

my_var_data <- select(alc, one_of(my_var))
str(my_var_data)
## 'data.frame':    382 obs. of  5 variables:
##  $ studytime: int  2 2 2 3 2 2 2 2 2 2 ...
##  $ absences : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ goout    : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ famrel   : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ high_use : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...

All my chosen variables have integer values, whereas the information about alcohol consumption is logical, i.e. TRUE or FALSE. Moreover, the str() function also reveals the dimensions of the selected data: 382 observations of five variables. After checking the data types, we can proceed with summarizing the table as follows:

summary(my_var_data)
##    studytime        absences        goout           famrel     
##  Min.   :1.000   Min.   : 0.0   Min.   :1.000   Min.   :1.000  
##  1st Qu.:1.000   1st Qu.: 1.0   1st Qu.:2.000   1st Qu.:4.000  
##  Median :2.000   Median : 3.0   Median :3.000   Median :4.000  
##  Mean   :2.037   Mean   : 4.5   Mean   :3.113   Mean   :3.937  
##  3rd Qu.:2.000   3rd Qu.: 6.0   3rd Qu.:4.000   3rd Qu.:5.000  
##  Max.   :4.000   Max.   :45.0   Max.   :5.000   Max.   :5.000  
##   high_use      
##  Mode :logical  
##  FALSE:268      
##  TRUE :114      
##                 
##                 
## 

The summary provides basic statistics about each variable (see the table above). If we pick a particular variable, absences (the number of school absences), we can see that some students were never absent (min = 0), whereas one or two students were absent up to 45 times (max = 45). Overall, the medians are more useful than the means for framing the hypotheses in terms of more vs. less (studytime, absences, goout) and good vs. bad (famrel). Using the medians as cut-offs, students whose weekly study time is above the median of 2, who go out with friends more than the median of 3, who have more than 3 absences, or whose family-relationship rating is above 4 fall into the upper groups, and vice versa.
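
To make this more-vs.-less framing concrete, the sketch below (my own addition, not part of the exercise) cross-tabulates high alcohol use against a median split of one of the variables:

# share of high users among students going out more than the median (3) vs. the rest
goout_above_median <- alc$goout > median(alc$goout)
round(prop.table(table(goout_above_median, alc$high_use), 1) * 100, 1)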

We can also have a graphical representation of each of the variables as bar charts (see below).

library(tidyr)
library(ggplot2)


gather(my_var_data) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

The summary tables below show the proportion of high alcohol use at each level of the selected variables.

t1 <- table("Study Time" = alc$studytime, "Alcohol Usage" = alc$high_use)
round(prop.table(t1, 1)*100, 1)
##           Alcohol Usage
## Study Time FALSE TRUE
##          1  58.0 42.0
##          2  69.2 30.8
##          3  86.7 13.3
##          4  85.2 14.8
t2 <- table("Going Out" = alc$goout, "Alcohol Usage" = alc$high_use)
round(prop.table(t2, 1)*100, 1)
##          Alcohol Usage
## Going Out FALSE TRUE
##         1  86.4 13.6
##         2  84.0 16.0
##         3  81.7 18.3
##         4  50.6 49.4
##         5  39.6 60.4
t3 <- table("Absences" = alc$absences, "Alcohol Usage" = alc$high_use)
round(prop.table(t3, 1)*100, 1)
##         Alcohol Usage
## Absences FALSE  TRUE
##       0   80.0  20.0
##       1   74.5  25.5
##       2   72.4  27.6
##       3   80.5  19.5
##       4   66.7  33.3
##       5   72.7  27.3
##       6   76.2  23.8
##       7   75.0  25.0
##       8   70.0  30.0
##       9   50.0  50.0
##       10  71.4  28.6
##       11  33.3  66.7
##       12  50.0  50.0
##       13  50.0  50.0
##       14  14.3  85.7
##       16   0.0 100.0
##       17   0.0 100.0
##       18  50.0  50.0
##       19   0.0 100.0
##       20 100.0   0.0
##       21  50.0  50.0
##       26   0.0 100.0
##       27   0.0 100.0
##       29   0.0 100.0
##       44   0.0 100.0
##       45 100.0   0.0
t4 <- table("Family Relationship" = alc$famrel, "Alcohol Usage" = alc$high_use)
round(prop.table(t4, 1)*100, 1)
##                    Alcohol Usage
## Family Relationship FALSE TRUE
##                   1  75.0 25.0
##                   2  52.6 47.4
##                   3  60.9 39.1
##                   4  71.4 28.6
##                   5  76.5 23.5

Box plots give a more condensed but also more descriptive picture of our variables, since we can see how each of the four variables relates to alcohol consumption. Let’s look in more detail at how the four variables I chose are associated with alcohol consumption among the students, using box plots.

library(ggpubr)
## Loading required package: magrittr
## 
## Attaching package: 'magrittr'
## The following object is masked from 'package:tidyr':
## 
##     extract
g1 <- ggplot(alc, aes(x = high_use, y = studytime, col = high_use))

p1=g1 + geom_boxplot() + xlab("Alcohol Consumption")+ ylab("Study Time") + ggtitle("Study hours and alcohol consumption")

g2 <- ggplot(alc, aes(x = high_use, y = absences, col = high_use))

p2=g2 + geom_boxplot() + xlab("Alcohol Consumption")+ ylab("Number of School Absences")  + ggtitle("School absences and alcohol consumption") 

g3 <- ggplot(alc, aes(x = high_use, y = goout, col = high_use))

p3=g3 + geom_boxplot() + xlab("Alcohol Consumption")+ ylab("Going Out With Friends")  + ggtitle("Going out with friends and alcohol consumption") 
g4 <- ggplot(alc, aes(x = high_use, y = famrel, col = high_use))

p4=g4 + geom_boxplot() + xlab("Alcohol Consumption")+ ylab("Quality Family Relationship")  + ggtitle("Family relationship and alcohol consumption") 

ggarrange(p1, p2, p3 , p4,  labels = c("A", "B", "C", "D"), ncol = 2, nrow = 2)

The four box plots above show how the chosen variables are associated with alcohol consumption. In each plot, the x-axis shows the two levels of alcohol consumption (TRUE for high consumption, FALSE for low consumption) and the y-axis shows the corresponding explanatory variable, i.e. one of the four variables I have chosen. All of the box plots suggest that my hypotheses about the selected variables are plausible. But are these observations statistically significant? I will address that with a series of models and validations in the following sections.

Logistic Regression

Now we will do logistic regression where alcohol consumption (high_use) is target variable and four variables (studytime, goout, absences, famrel) I selected are the predictors.

m<-glm(high_use ~ studytime + goout + absences + famrel, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ studytime + goout + absences + famrel, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8701  -0.7738  -0.5019   0.8042   2.5416  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.28606    0.70957  -1.812  0.06992 .  
## studytime   -0.55089    0.16789  -3.281  0.00103 ** 
## goout        0.75953    0.12041   6.308 2.82e-10 ***
## absences     0.06753    0.02175   3.104  0.00191 ** 
## famrel      -0.33699    0.13681  -2.463  0.01377 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 384.07  on 377  degrees of freedom
## AIC: 394.07
## 
## Number of Fisher Scoring iterations: 4

Among the four variables, going out with friends (goout) is most strongly associated (Pr = 2.82e-10) with alcohol consumption, whereas the quality of the family relationship (famrel) has a comparatively smaller effect. Nevertheless, all four variables are significantly associated with alcohol consumption. Of the four, weekly study time (studytime) and the quality of the family relationship (famrel) are inversely related to alcohol consumption: the more hours spent studying and the better the quality of the family relationship, the lower the alcohol consumption. On the other hand, the number of school absences and the frequency of going out with friends are positively associated with alcohol consumption, meaning that a student with more school absences who goes out frequently with friends is more likely to be a high alcohol user.

I will further delve into my model by evaluating it in terms of coefficients, odds ratio and confidence intervals.

Coef<-coef(m)
OR<-Coef %>% exp
CI<-confint(m) %>% exp
## Waiting for profiling to be done...
cbind(Coef, OR, CI)
##                    Coef        OR      2.5 %    97.5 %
## (Intercept) -1.28606058 0.2763573 0.06723596 1.0961732
## studytime   -0.55089391 0.5764343 0.41040872 0.7941804
## goout        0.75953025 2.1372720 1.69853389 2.7261579
## absences     0.06753071 1.0698631 1.02591583 1.1187950
## famrel      -0.33699130 0.7139151 0.54460646 0.9331198

In general, if the odds ratio is greater than 1, an increase in the explanatory variable increases the response probability p; if the odds ratio is less than 1, an increase in the explanatory variable decreases the response probability p; and if the odds ratio equals 1, the explanatory variable has no effect on the response. According to this, the frequency of going out and the number of school absences have a positive association with high alcohol use. On the other hand, study time and family relationship seem to be negatively associated with high alcohol use, because their odds ratios are smaller than one. With regard to the confidence intervals, the odds ratio for going out (goout) has the widest interval (1.70 to 2.73), while the number of absences has the narrowest (1.03 to 1.12). As none of the confidence intervals includes 1, we can claim that all of the explanatory variables have an effect on the odds of the outcome, i.e. high alcohol use.

Exploring predictive power of the model

To get insight into the predictive power of my model, I will compare the model’s predictions with the actual values.

#predict the probability of high alcohol use for each student
pred_prob <- predict(m, type = "response")

#add the predicted probabilities as a new column of the data
alc <- mutate(alc, probability = pred_prob)
#classify a student as a high user when the predicted probability exceeds 0.5
alc <- mutate(alc, prediction = probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   242   26
##    TRUE     65   49

Based on the 2x2 cross-tabulation, the model produced 26 false positives (high use predicted when the actual use was low) and 65 false negatives (low use predicted when the actual use was high). In other words, the prediction is wrong for a total of (65+26) 91 students. To be more precise, we can compute the overall proportion of wrong predictions made by the model.
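
The same cross-tabulation can be condensed into an overall accuracy; this small sketch of my own should reproduce the roughly 24% error computed with the loss function below:

# overall accuracy and error rate computed directly from the confusion matrix
cm <- table(high_use = alc$high_use, prediction = alc$prediction)
accuracy <- sum(diag(cm)) / sum(cm)   # (242 + 49) correct out of 382
1 - accuracy                          # proportion of wrong predictions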

#define the loss function: the mean proportion of wrong predictions
LF <- function(class, prob){
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
#compute the proportion of wrong predictions of the model
LF(alc$high_use, alc$probability)
## [1] 0.2382199

Now we can say that about 24% of the predictions made by my model are wrong.

Cross Validation

Here we will perform a 10-fold cross-validation of our model.

#load required library
library(boot)

CV<-cv.glm(data = alc, cost = LF, glmfit = m, K = 10 )
#finally look at the average number of wrong predictions
CV$delta[1]
## [1] 0.2382199

After performing the 10-fold cross-validation, the estimated prediction error of my model is about 24%. I can therefore claim that my model has somewhat better test-set performance (an error of about 24%) than the model we practiced with in the DataCamp exercise (about 26%).
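
As a possible extension (a sketch of my own with an arbitrarily chosen smaller model, not something required by the exercise), the same cross-validation could be repeated for a reduced model to compare estimated prediction errors:

# cross-validate a smaller model that keeps only the two strongest predictors
m2 <- glm(high_use ~ goout + absences, data = alc, family = "binomial")
CV2 <- cv.glm(data = alc, cost = LF, glmfit = m2, K = 10)
CV2$delta[1]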


Clustering and Classification

The list of materials and links related to clustering and classification can be found below.
course slides by Emma Kämäräinen
DataCamp exercise

RStudio Exercise 4

After solving the DataCamp exercise and going through the embedded links, I got a general overview of the topic. In the following sections, I will prepare a report based on the exercise instructions. Unlike in earlier weeks, the data wrangling exercise will be done after the data analysis part; in fact, it is part of the Dimensionality Reduction Techniques chapter. In the following sections, I will explore clustering and classification using the open Boston data set from the MASS package.

Data

First and foremost, it is important to get an overview of the data being analysed. As mentioned earlier, the Boston data comes from the MASS package.

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
data(Boston)
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14

The Boston data was collected to study the housing values in the suburbs of Boston. The table contains 506 observations for 14 different variables. The descriptions for each of the 14 variables are listed below.

Variable: Description
crim: per capita crime rate by town.
zn: proportion of residential land zoned for lots over 25,000 sq.ft.
indus: proportion of non-retail business acres per town.
chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
nox: nitrogen oxides concentration (parts per 10 million).
rm: average number of rooms per dwelling.
age: proportion of owner-occupied units built prior to 1940.
dis: weighted mean of distances to five Boston employment centres.
rad: index of accessibility to radial highways.
tax: full-value property-tax rate per $10,000.
ptratio: pupil-teacher ratio by town.
black: 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town.
lstat: lower status of the population (percent).
medv: median value of owner-occupied homes in $1000s.

Data Summary

Now, let’s look at the summary of the Boston data in the form of a table (instead of the default layout) using the pandoc.table() function of the pander package.

library(pander)
## 
## Attaching package: 'pander'
## The following object is masked from 'package:GGally':
## 
##     wrap
pandoc.table(summary(Boston), caption = "Summary of Boston data", split.table = 120)
## 
## -----------------------------------------------------------------------------------------------------------------------
##        crim               zn             indus            chas              nox              rm              age       
## ------------------ ---------------- --------------- ----------------- ---------------- --------------- ----------------
##  Min.  : 0.00632     Min.  : 0.00    Min.  : 0.46    Min.  :0.00000    Min.  :0.3850    Min.  :3.561     Min.  : 2.90  
## 
##  1st Qu.: 0.08204   1st Qu.: 0.00    1st Qu.: 5.19   1st Qu.:0.00000   1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02 
## 
##  Median : 0.25651   Median : 0.00    Median : 9.69   Median :0.00000   Median :0.5380   Median :6.208   Median : 77.50 
## 
##   Mean : 3.61352     Mean : 11.36     Mean :11.14     Mean :0.06917     Mean :0.5547     Mean :6.285     Mean : 68.57  
## 
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000   3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08 
## 
##  Max.  :88.97620    Max.  :100.00    Max.  :27.74    Max.  :1.00000    Max.  :0.8710    Max.  :8.780    Max.  :100.00  
## -----------------------------------------------------------------------------------------------------------------------
## 
## Table: Summary of Boston data (continued below)
## 
##  
## ------------------------------------------------------------------------------------------------------------------
##       dis              rad              tax           ptratio          black            lstat           medv      
## ---------------- ---------------- --------------- --------------- ---------------- --------------- ---------------
##  Min.  : 1.130    Min.  : 1.000    Min.  :187.0    Min.  :12.60     Min.  : 0.32    Min.  : 1.73    Min.  : 5.00  
## 
##  1st Qu.: 2.100   1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38   1st Qu.: 6.95   1st Qu.:17.02 
## 
##  Median : 3.207   Median : 5.000   Median :330.0   Median :19.05   Median :391.44   Median :11.36   Median :21.20 
## 
##   Mean : 3.795     Mean : 9.549     Mean :408.2     Mean :18.46     Mean :356.67     Mean :12.65     Mean :22.53  
## 
##  3rd Qu.: 5.188   3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23   3rd Qu.:16.95   3rd Qu.:25.00 
## 
##  Max.  :12.127    Max.  :24.000    Max.  :711.0    Max.  :22.00    Max.  :396.90    Max.  :37.97    Max.  :50.00  
## ------------------------------------------------------------------------------------------------------------------

After getting a statistical summary of the data, it’s worthwhile to see to what extent the variables are correlated. For that, we apply the cor() function to the Boston data.

library(corrplot)
## corrplot 0.84 loaded
library(dplyr)
corr_boston<-cor(Boston) %>% round(2)
pandoc.table(corr_boston, split.table = 120)
## 
## -------------------------------------------------------------------------------------------------------------------------------
##    &nbsp;      crim     zn     indus   chas     nox     rm      age     dis     rad     tax    ptratio   black   lstat   medv  
## ------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- --------- ------- ------- -------
##   **crim**       1     -0.2    0.41    -0.06   0.42    -0.22   0.35    -0.38   0.63    0.58     0.29     -0.39   0.46    -0.39 
## 
##    **zn**      -0.2      1     -0.53   -0.04   -0.52   0.31    -0.57   0.66    -0.31   -0.31    -0.39    0.18    -0.41   0.36  
## 
##   **indus**    0.41    -0.53     1     0.06    0.76    -0.39   0.64    -0.71    0.6    0.72     0.38     -0.36    0.6    -0.48 
## 
##   **chas**     -0.06   -0.04   0.06      1     0.09    0.09    0.09    -0.1    -0.01   -0.04    -0.12    0.05    -0.05   0.18  
## 
##    **nox**     0.42    -0.52   0.76    0.09      1     -0.3    0.73    -0.77   0.61    0.67     0.19     -0.38   0.59    -0.43 
## 
##    **rm**      -0.22   0.31    -0.39   0.09    -0.3      1     -0.24   0.21    -0.21   -0.29    -0.36    0.13    -0.61    0.7  
## 
##    **age**     0.35    -0.57   0.64    0.09    0.73    -0.24     1     -0.75   0.46    0.51     0.26     -0.27    0.6    -0.38 
## 
##    **dis**     -0.38   0.66    -0.71   -0.1    -0.77   0.21    -0.75     1     -0.49   -0.53    -0.23    0.29    -0.5    0.25  
## 
##    **rad**     0.63    -0.31    0.6    -0.01   0.61    -0.21   0.46    -0.49     1     0.91     0.46     -0.44   0.49    -0.38 
## 
##    **tax**     0.58    -0.31   0.72    -0.04   0.67    -0.29   0.51    -0.53   0.91      1      0.46     -0.44   0.54    -0.47 
## 
##  **ptratio**   0.29    -0.39   0.38    -0.12   0.19    -0.36   0.26    -0.23   0.46    0.46       1      -0.18   0.37    -0.51 
## 
##   **black**    -0.39   0.18    -0.36   0.05    -0.38   0.13    -0.27   0.29    -0.44   -0.44    -0.18      1     -0.37   0.33  
## 
##   **lstat**    0.46    -0.41    0.6    -0.05   0.59    -0.61    0.6    -0.5    0.49    0.54     0.37     -0.37     1     -0.74 
## 
##   **medv**     -0.39   0.36    -0.48   0.18    -0.43    0.7    -0.38   0.25    -0.38   -0.47    -0.51    0.33    -0.74     1   
## -------------------------------------------------------------------------------------------------------------------------------

The table above shows the correlation matrix of all the variables. A bird’s-eye view of the matrix shows that tax (full-value property-tax rate) and rad (index of accessibility to radial highways) are the most strongly positively correlated variables (0.91), whereas dis (weighted mean of distances to five Boston employment centres) and nox (nitrogen oxides concentration) are the most strongly negatively correlated (-0.77). Moreover, chas (Charles River dummy variable) and rad are the least correlated pair.

The same information can be presented graphically. This time we will make a correlogram, a graphical representation of the correlation matrix. The corrplot() function of the corrplot package will be used to visualize the correlations between all the variables of the Boston data set.

corrplot(corr_boston, method = "circle", tl.col = "black", cl.pos="b", tl.pos = "d", type = "upper" , tl.cex = 0.9 )

The above graph gives a much quicker impression of which variables are more correlated with each other. Positive correlations are displayed in blue and negative correlations in red, with the colour intensity and circle size proportional to the correlation coefficient. The same relationships described above from the correlation table can be seen here as circles of different sizes (strength of the correlation) and different colours (whether the correlation is positive or negative).

Data Standardization

Scaling the data is useful for linear discriminant analysis. The scale() function will be used to scale the whole data set. Here, a scaled value is obtained by subtracting the column mean from the corresponding column and dividing the difference by the column’s standard deviation, i.e. scaled(x) = (x - mean(x)) / sd(x).
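
As a quick sanity check (my own sketch, not part of the exercise), the formula above can be verified manually for one column:

# manual standardization of the crim column should match the scale() result
manual_crim <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(manual_crim, as.numeric(scale(Boston)[, "crim"]))   # expected to return TRUE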

boston_scaled<-scale(Boston)
pandoc.table(summary(boston_scaled), caption = "Summary of  Scaled Boston data", split.table = 120)
## 
## --------------------------------------------------------------------------------------------------------------
##        crim                 zn               indus             chas               nox               rm        
## ------------------- ------------------ ----------------- ----------------- ----------------- -----------------
##  Min.  :-0.419367    Min.  :-0.48724    Min.  :-1.5563    Min.  :-0.2723    Min.  :-1.4644    Min.  :-3.8764  
## 
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681 
## 
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441   Median :-0.1084 
## 
##   Mean : 0.000000     Mean : 0.00000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000  
## 
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823 
## 
##  Max.  : 9.924110    Max.  : 3.80047    Max.  : 2.4202    Max.  : 3.6648    Max.  : 2.7296    Max.  : 3.5515  
## --------------------------------------------------------------------------------------------------------------
## 
## Table: Summary of  Scaled Boston data (continued below)
## 
##  
## -----------------------------------------------------------------------------------------------------------
##        age               dis               rad               tax             ptratio            black      
## ----------------- ----------------- ----------------- ----------------- ----------------- -----------------
##  Min.  :-2.3331    Min.  :-1.2658    Min.  :-0.9819    Min.  :-1.3127    Min.  :-2.7047    Min.  :-3.9033  
## 
##  1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049 
## 
##  Median : 0.3171   Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808 
## 
##   Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000     Mean : 0.0000  
## 
##  3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332 
## 
##  Max.  : 1.1164    Max.  : 3.9566    Max.  : 1.6596    Max.  : 1.7964    Max.  : 1.6372    Max.  : 0.4406  
## -----------------------------------------------------------------------------------------------------------
## 
## Table: Table continues below
## 
##  
## -----------------------------------
##       lstat             medv       
## ----------------- -----------------
##  Min.  :-1.5296    Min.  :-1.9063  
## 
##  1st Qu.:-0.7986   1st Qu.:-0.5989 
## 
##  Median :-0.1811   Median :-0.1449 
## 
##   Mean : 0.0000     Mean : 0.0000  
## 
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683 
## 
##  Max.  : 3.5453    Max.  : 2.9865  
## -----------------------------------
#corr_bostons<-cor(boston_scaled) %>% round(2)
#pandoc.table(corr_bostons, split.table = 120)

We can make some important observations on the summary of the scaled data, which differs from the summary of the unscaled Boston data. Most importantly, all the means have become zero, and the other statistics (minimum, maximum, median and the 1st and 3rd quartiles) have also changed for all variables.

Next, we will create a quantile vector of the crime rate by applying the quantile() function to the scaled Boston data frame. The resulting categories are given meaningful labels describing the crime rate: low, med_low, med_high and high. Lastly, we will replace the original crim variable with the newly created categorical crime variable and build the required data frame.

boston_scaled<- data.frame(boston_scaled)
qvc<-quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = qvc, label = c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled<-data.frame(boston_scaled, crime)
#table(boston_scaled$crime)

After creating the customized data set in the earlier steps, we will now divide it into training and test sets, where 80% of the data goes to the training set and 20% is used as the test set.

#library(MASS)
n<-nrow(boston_scaled)
ind <- sample(n, size = n*0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]

Now that we have divided the data set into training and test sets, we can fit a linear discriminant analysis on the training set, where the crime rate category is predicted from all the other variables.

Linear Discriminant Analysis

lda.fit <- lda(crime ~ ., data = train)
#add biplot arrows to an lda
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col=classes)
lda.arrows(lda.fit, myscale = 2)

# target classes as numeric
#classes <- as.numeric(train$crime)

# plot the lda results
#plot(lda.fit, dimen = 2, col = classes, pch = classes)

Based on the biplot, it can be seen that the rad variable alone acts as a strong separator of the high crime rate class in the Boston data. The remaining 12 variables are associated with the low, medium-low and medium-high crime rate classes; their grouping is fuzzy, and it is difficult to say whether any single one of them separates those classes.

Class Prediction

crime_cat<-test$crime
test<-dplyr::select(test, -crime)
lda.pred<-predict(lda.fit, newdata = test)
table(correct = crime_cat, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       12      13        0    0
##   med_low    1      18        5    0
##   med_high   0       7       16    2
##   high       0       0        1   27

I tried to grasp the concept of the above matrix, also referred to as a confusion matrix, by going through this blog. Every time the matrix is regenerated, the numbers of correct and predicted cases for each class (low, med_low, med_high, high) change. This is expected because of the random split into training and test sets. However, I also observed that the predictions for the high class fluctuated much less than those for the other classes.
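
If a stable confusion matrix is preferred, the randomness can be pinned down by setting the seed before the sampling step; below is a minimal sketch of how the earlier train/test split could be made reproducible (the seed value is arbitrary):

# fixing the seed makes the 80/20 split, and hence the confusion matrix, reproducible
set.seed(123)
ind <- sample(nrow(boston_scaled), size = nrow(boston_scaled) * 0.8)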

K-means Clustering

In order to practice K-means clustering, we will reload the Boston data, scale the data and calculate the distances between the observations.

data(Boston)
boston_scaled1<-as.data.frame(scale(Boston))
dist_eu<-dist(boston_scaled1)
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
#head(boston_scaled1)

We will use the scaled Boston data to perform k-means clustering. It is not always trivial to know beforehand how many clusters describe the data. We can start with an arbitrary number of clusters (perhaps guided by the data summary or graphical summaries), but there are also several methods for identifying a suitable number of clusters. This part is more or less inspired by this R-bloggers post and this Stack Overflow question.

First we start with an arbitrary number of clusters. Let’s use k = 6 and apply k-means to the data.

#let us apply k-means with k = 6 clusters
kmm = kmeans(boston_scaled1, 6, nstart = 50, iter.max = 15) #iter.max = 15 to ensure the algorithm converges and nstart = 50 so that at least 50 random starting sets are tried

The elbow method is one of the well-known techniques for estimating the optimal number of clusters.

#Elbow Method for finding the optimal number of clusters
library(ggplot2)
set.seed(1234)
# Compute and plot wss for k = 2 to k = 15.
k.max <- 15
data <- boston_scaled1
wss <- sapply(1:k.max, 
              function(k){kmeans(data, k)$tot.withinss})
#wss
qplot(1:k.max, wss, geom = c("point", "line"), span = 0.2,
     xlab="Number of clusters K",
     ylab="Total within-clusters sum of squares")
## Warning: Ignoring unknown parameters: span

## Warning: Ignoring unknown parameters: span

The elbow plot suggests that we may not see more than two clear clusters, but it is always good to confirm such a conclusion with another method, since there is no shortage of methods for this kind of analysis. Therefore, we will additionally use the NbClust package.

library(NbClust)
nb <- NbClust(boston_scaled1, diss=NULL, distance = "euclidean", 
              min.nc=2, max.nc=5, method = "kmeans", 
              index = "all", alphaBeale = 0.1)

## *** : The Hubert index is a graphical method of determining the number of clusters.
##                 In the plot of Hubert index, we seek a significant knee that corresponds to a 
##                 significant increase of the value of the measure i.e the significant peak in Hubert
##                 index second differences plot. 
## 

## *** : The D index is a graphical method of determining the number of clusters. 
##                 In the plot of D index, we seek a significant knee (the significant peak in Dindex
##                 second differences plot) that corresponds to a significant increase of the value of
##                 the measure. 
##  
## ******************************************************************* 
## * Among all indices:                                                
## * 12 proposed 2 as the best number of clusters 
## * 6 proposed 3 as the best number of clusters 
## * 3 proposed 4 as the best number of clusters 
## * 3 proposed 5 as the best number of clusters 
## 
##                    ***** Conclusion *****                            
##  
## * According to the majority rule, the best number of clusters is  2 
##  
##  
## *******************************************************************
#hist(nb$Best.nc[1,], breaks = max(na.omit(nb$Best.nc[1,])))

Now it is much clearer that the data is described best by two clusters. With that, we run the k-means algorithm again.

#apply k-means with the chosen number of clusters, k = 2
km_final = kmeans(boston_scaled1, centers = 2)
pairs(boston_scaled1[3:9], col=km_final$cluster)

The clusters in the above plot are divided into two groups, represented by two colours, red and black. Some variable pairs are better separated than others. One notable observation concerns the chas variable: in all the pairs it forms, the two clusters overlap almost completely. On the other hand, the clusters are well separated along the rad variable.

More LDA

In the following section, we will use an arbitrary number of clusters (k = 6) and perform LDA. We follow the basic steps of scaling and distance calculation. Finally, we will see what the biplot looks like for the whole data set when the observations are grouped into six clusters.

boston_scaled2<-as.data.frame(scale(Boston))
#head(boston_scaled2)
set.seed(1234)
km_bs2<-kmeans(dist_eu, centers = 6)
#head(km_bs2)
myclust<-data.frame(km_bs2$cluster)
boston_scaled2$clust<-km_bs2$cluster
#head(boston_scaled2)
lda.fit_bs2<-lda(clust~., data = boston_scaled2 )
lda.fit_bs2
## Call:
## lda(clust ~ ., data = boston_scaled2)
## 
## Prior probabilities of groups:
##          1          2          3          4          5          6 
## 0.10079051 0.19960474 0.09486166 0.20553360 0.12845850 0.27075099 
## 
## Group means:
##         crim          zn        indus       chas         nox          rm
## 1 -0.4149170  2.55535505 -1.228758914 -0.1951310 -1.21919439  0.78676843
## 2  0.3880377 -0.48724019  1.165421314 -0.2723291  0.98659851 -0.28553884
## 3 -0.3613809 -0.09419977 -0.474086929  1.5321752 -0.12487357  1.27068222
## 4 -0.3580718 -0.46023584 -0.003188584 -0.2723291 -0.09478548 -0.35414265
## 5  1.4172264 -0.48724019  1.069802298  0.4545202  1.34622349 -0.73713928
## 6 -0.4055840  0.02149547 -0.740804469 -0.2723291 -0.79649957  0.09099544
##          age        dis        rad        tax    ptratio       black
## 1 -1.4488239  1.7464736 -0.7048880 -0.5692695 -0.8353442  0.34924852
## 2  0.7651453 -0.7898745  1.1388129  1.2431405  0.6932747  0.04498348
## 3  0.2307707 -0.3386056 -0.4961654 -0.7220694 -1.1226766  0.32813467
## 4  0.4093998 -0.2612071 -0.5865335 -0.4342609  0.2608189  0.19191309
## 5  0.8557425 -0.9615698  1.2885597  1.2934457  0.4142248 -1.68787016
## 6 -0.8223904  0.7053125 -0.5694290 -0.7355910 -0.2013102  0.37698635
##        lstat       medv
## 1 -0.9773530  0.8760790
## 2  0.6734731 -0.5987824
## 3 -0.6138415  1.4407282
## 4  0.1508360 -0.2838601
## 5  1.1961180 -0.8078336
## 6 -0.5996059  0.2092896
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3         LD4         LD5
## crim     0.04811996 -0.28556378 -0.55488255  0.49400398  0.05329096
## zn      -0.13738829 -1.83004313  0.34546140 -0.26802062 -0.87758918
## indus    0.74925386 -0.10015651  0.61607026 -0.42031079  0.25109137
## chas     0.13287282 -0.13228082 -0.94523359 -0.16829634  0.04786106
## nox      1.21764057 -0.81216848 -0.12506389  0.27633410  0.13213424
## rm      -0.12060003 -0.04058521 -0.02502279 -0.75468374  0.21331834
## age      0.17397462  0.34382124 -0.07430813 -0.37956005 -0.95205471
## dis     -0.36273454 -0.54652248  0.11546588  0.26210162  0.59195828
## rad      0.61453519  0.40958433  0.29006265 -0.40963042  1.56473994
## tax      0.75124298 -1.03741454  0.22707980 -0.17126395 -0.61781814
## ptratio  0.36217649 -0.18603253  0.30060517  0.16017164 -0.53729844
## black   -0.27542772  0.27016025  0.77143821 -0.87012879  0.23445845
## lstat    0.48988940 -0.40861927 -0.53017288 -0.23295699 -0.06758426
## medv     0.22977036 -0.57759705 -0.86635437 -0.06977308 -0.10361245
## 
## Proportion of trace:
##    LD1    LD2    LD3    LD4    LD5 
## 0.7285 0.1498 0.0750 0.0298 0.0168
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
plot(lda.fit_bs2, dimen = 2)
lda.arrows(lda.fit_bs2, myscale = 3)

I must admit that the number of clusters I chose was larger than needed; I believe three or four clusters would be enough to group the whole data set. According to the biplot, the three most influential variables are zn, nox and tax.

Better ways to visualize LDA

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = 'scatter3d', mode = 'markers', color = train$crime)
#Second 3D plot where colors are defined by clusters of k-means
#k-means_matpro<-kmeans(matrix_product, )
#head(train)
#train$cl<-myclust
#boston_scaled2$cl<-myclust
#head(boston_scaled2)
#head(train)
#rownames(train)
#rownames(boston_scaled2)
train$cl <- boston_scaled2$clust[match(rownames(train), rownames(boston_scaled2))]
#head(train)
#nrow(train)

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = "scatter3d", mode="markers", color = train$cl)

According to my observations, the colouring based on the k-means clusters turned out to be more informative than the one based on the crime classes.

Additional links (also included in the course slides)
Blog post by Jason Brownlee on LDA
R-bloggers post on LDA
R-bloggers post on K-means Clustering in R


Dimensionality Reduction Techniques

In this chapter, we will practice dimensionality reduction techniques using the “human” data, which originates from the United Nations Development Programme (UNDP). Additional information about the data can be found here.

RStudio Exercise 5

Data Exploration

First we will load the data into R and get an overview of it.

human<-read.table("data/human.csv")
#head(human)
dim(human)
## [1] 155   8
str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
colnames(human)
## [1] "Edu2.FM"   "Labo.FM"   "Edu.Exp"   "Life.Exp"  "GNI"       "Mat.Mor"  
## [7] "Ado.Birth" "Parli.F"

The subset of the data used in this exercise contains eight variables and 155 observations. Of the eight variables, GNI and Mat.Mor are integer variables and the remaining six are numeric. The following table shows what information each of these variables stores.

Variable: Description
Edu2.FM: ratio of females to males with at least secondary education
Labo.FM: ratio of females to males in the labour force
Edu.Exp: expected years of schooling
Life.Exp: life expectancy at birth
GNI: gross national income per capita
Mat.Mor: maternal mortality ratio
Ado.Birth: adolescent birth rate
Parli.F: percentage of female representatives in parliament

Data Summary

As in earlier exercises, we will proceed with the data summary. Let’s first take a look at the summary of the data.

library(pander)
pandoc.table(summary(human), caption = "Summary of Human data", split.table = 80)
## 
## -----------------------------------------------------------------
##     Edu2.FM          Labo.FM          Edu.Exp        Life.Exp    
## ---------------- ---------------- --------------- ---------------
##  Min.  :0.1717    Min.  :0.1857    Min.  : 5.40    Min.  :49.00  
## 
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30 
## 
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20 
## 
##   Mean :0.8529     Mean :0.7074     Mean :13.18     Mean :71.65  
## 
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25 
## 
##  Max.  :1.4967    Max.  :1.0380    Max.  :20.20    Max.  :83.50  
## -----------------------------------------------------------------
## 
## Table: Summary of Human data (continued below)
## 
##  
## ------------------------------------------------------------------
##       GNI            Mat.Mor         Ado.Birth         Parli.F    
## ---------------- ---------------- ---------------- ---------------
##   Min.  : 581      Min.  : 1.0      Min.  : 0.60    Min.  : 0.00  
## 
##  1st Qu.: 4198    1st Qu.: 11.5    1st Qu.: 12.65   1st Qu.:12.40 
## 
##  Median : 12040   Median : 49.0    Median : 33.60   Median :19.30 
## 
##   Mean : 17628     Mean : 149.1     Mean : 47.16     Mean :20.91  
## 
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95 
## 
##  Max.  :123124    Max.  :1100.0    Max.  :204.80    Max.  :57.50  
## ------------------------------------------------------------------
library(GGally)
library(ggplot2)
ggpairs(human, mapping = aes(alpha = 0.3), lower = list(combo = wrap("facethist")))

The data summary and the pairs plot show some interesting relationships between the variables. For instance, the adolescent birth rate (Ado.Birth) is positively correlated (0.759) with the maternal mortality ratio but negatively correlated (-0.857) with life expectancy at birth (Life.Exp). Similarly, the ratio of females to males with at least secondary education (Edu2.FM) and the expected years of schooling (Edu.Exp) are both positively correlated with life expectancy at birth (Life.Exp). On the other hand, there is very little correlation between the ratio of females to males in the labour force (Labo.FM) and Edu.Exp or GNI.
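
The exact figures behind these statements can also be printed as a plain correlation matrix (same idea as with the Boston data earlier; this line is my addition):

# numeric correlation matrix of the human data, rounded to two decimals
round(cor(human), 2)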

Principal Component Analysis

In the following section, we will summarize the principal components and make principal component analysis (PCA) biplots. First, PCA is performed on the non-standardized data, followed by the standardized data.

pca_human<-prcomp(human)
sum_pca_human<-summary(pca_human)
sum_pca_human
## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6    PC7
## Standard deviation     1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00 0.000 0.000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00 1.000 1.000 1.0000
##                           PC8
## Standard deviation     0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion  1.0000
sum_pca_human_var<-sum_pca_human$sdev^2
sum_pca_human_var
## [1] 3.438860e+08 3.441836e+04 6.343853e+02 1.312035e+02 1.418457e+01
## [6] 2.452081e+00 3.655943e-02 2.531638e-02
pca_pr <- round(100*sum_pca_human$importance[2, ], digits = 1)
pc_lab<-paste0(names(pca_pr), " (", pca_pr, "%)")

biplot(pca_human, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], main = "PCA plot of non-scaled human data")
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

#biplot(pca_human, choices = 1:2, cex = c(1, 1), col = c("grey40", "deeppink2"),sub = "PC1 & PC2 with non-standardised dataset")

The PCA biplot above does not give a meaningful picture of the data: the single variable GNI dominates the first principal component and carries almost all of the weight. This happens because GNI has a far larger variance than the other variables, as the quick check below makes explicit.
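
A short sketch showing the raw variances that drive this behaviour:

# Variance of each variable on its original scale; GNI is several orders of
# magnitude larger than the rest, so it dominates the unscaled PCA
sapply(human, var)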

Data Standardization

Next, we will scale the variables in the human data and compute principal components and plot the results.

pca_human_s<-prcomp(human, scale. = TRUE)
sum_pca_human_s<-summary(pca_human_s)
pca_pr_s <- round(100*sum_pca_human_s$importance[2, ], digits = 1)
pc_lab<-paste0(names(pca_pr_s), " (", pca_pr_s, "%)")

sum_pca_human_var_s<-sum_pca_human_s$sdev^2
sum_pca_human_var_s
## [1] 4.2883701 1.2989625 0.7657100 0.6066276 0.4381862 0.2876242 0.2106805
## [8] 0.1038390
biplot(pca_human_s, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], main = "PCA plot of scaled human data")
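
Before interpreting the scaled biplot, the two fits can be put side by side to see how scaling redistributes the explained variance; a sketch reusing the percentage vectors computed above:

# Proportion of variance (%) explained by each component, before vs. after scaling
rbind(unscaled = pca_pr, scaled = pca_pr_s)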

Here, after standardization, the biplot looks very different and so do the results. The results change because PCA is sensitive to the scale of the original features: it treats features with larger variance as more important than those with smaller variance. In the non-scaled plot we saw that the variable with the largest values, GNI, had by far the biggest influence. After scaling, the variables contribute on comparable scales, and the first principal component (PC1) explains about 53% of the variation, compared with essentially 100% when the data were not scaled (see the side-by-side comparison above).

Personal Interpretation of the Biplot

My personal interpretation of the first two principal component dimensions based on the biplot drawn after PCA on the standardized human data is as follows:

  1. Correlation between variables: A smaller angle between two arrows indicates a stronger correlation between the corresponding variables. With this in mind, we can see that four variables, namely Edu.Exp, Life.Exp, GNI and Edu2.FM, are correlated with each other, of which GNI and Edu2.FM show the strongest correlation, judging from the small angle between their arrows. Similarly, Parli.F and Labo.FM are correlated, and so are Mat.Mor and Ado.Birth. Furthermore, the arrows for Life.Exp and Ado.Birth point in nearly opposite directions, which reflects the strong negative correlation between these two variables noted earlier.

  2. Correlation between variables and principal components: Here, the assumption is that the smaller the angle between a variable's arrow and a principal component axis, the more positively correlated the variable is with that component. By this reading, Parli.F and Labo.FM are positively correlated with PC1 (i.e. they contribute mainly to the direction of PC1), whereas the other variables are positively correlated with PC2 and point along that axis. In addition, among the variables loading on PC2, Life.Exp, Edu2.FM, GNI and Ado.Birth carry comparatively more weight than the others; the loadings can be cross-checked numerically, as sketched below.
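
The angle-based readings above can be cross-checked against the loadings of the scaled PCA; a short sketch:

# Loadings on the first two components; larger absolute values indicate
# variables that pull more strongly along PC1 or PC2
round(pca_human_s$rotation[, 1:2], digits = 2)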

Multiple Correspondence Analysis

We will use the tea data from the FactoMineR package to practice multiple correspondence analysis (MCA). The data set contains 300 observations and 36 variables.

library(FactoMineR)
data("tea")
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300  36
summary(tea)
##          breakfast           tea.time          evening          lunch    
##  breakfast    :144   Not.tea time:131   evening    :103   lunch    : 44  
##  Not.breakfast:156   tea time    :169   Not.evening:197   Not.lunch:256  
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##         dinner           always          home           work    
##  dinner    : 21   always    :103   home    :291   Not.work:213  
##  Not.dinner:279   Not.always:197   Not.home:  9   work    : 87  
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##         tearoom           friends          resto          pub     
##  Not.tearoom:242   friends    :196   Not.resto:221   Not.pub:237  
##  tearoom    : 58   Not.friends:104   resto    : 79   pub    : 63  
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##         Tea         How           sugar                     how     
##  black    : 74   alone:195   No.sugar:155   tea bag           :170  
##  Earl Grey:193   lemon: 33   sugar   :145   tea bag+unpackaged: 94  
##  green    : 33   milk : 63                  unpackaged        : 36  
##                  other:  9                                          
##                                                                     
##                                                                     
##                                                                     
##                   where                 price          age        sex    
##  chain store         :192   p_branded      : 95   Min.   :15.00   F:178  
##  chain store+tea shop: 78   p_cheap        :  7   1st Qu.:23.00   M:122  
##  tea shop            : 30   p_private label: 21   Median :32.00          
##                             p_unknown      : 12   Mean   :37.05          
##                             p_upscale      : 53   3rd Qu.:48.00          
##                             p_variable     :112   Max.   :90.00          
##                                                                          
##            SPC               Sport       age_Q          frequency  
##  employee    :59   Not.sportsman:121   15-24:92   1/day      : 95  
##  middle      :40   sportsman    :179   25-34:69   1 to 2/week: 44  
##  non-worker  :64                       35-44:40   +2/day     :127  
##  other worker:20                       45-59:61   3 to 6/week: 34  
##  senior      :35                       +60  :38                    
##  student     :70                                                   
##  workman     :12                                                   
##              escape.exoticism           spirituality        healthy   
##  escape-exoticism    :142     Not.spirituality:206   healthy    :210  
##  Not.escape-exoticism:158     spirituality    : 94   Not.healthy: 90  
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##          diuretic             friendliness            iron.absorption
##  diuretic    :174   friendliness    :242   iron absorption    : 31   
##  Not.diuretic:126   Not.friendliness: 58   Not.iron absorption:269   
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##          feminine             sophisticated        slimming  
##  feminine    :129   Not.sophisticated: 85   No.slimming:255  
##  Not.feminine:171   sophisticated    :215   slimming   : 45  
##                                                              
##                                                              
##                                                              
##                                                              
##                                                              
##         exciting          relaxing              effect.on.health
##  exciting   :116   No.relaxing:113   effect on health   : 66    
##  No.exciting:184   relaxing   :187   No.effect on health:234    
##                                                                 
##                                                                 
##                                                                 
##                                                                 
## 


library(tidyr)
library(dplyr)
keep<- c("breakfast","tea.time","friends","frequency","Tea","sugar","sex","sophisticated")
my_tea <- dplyr::select(tea, one_of(keep))
gather(my_tea) %>% ggplot(aes(value)) + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8)) + facet_wrap("key", scales = "free")
## Warning: attributes are not identical across measure variables;
## they will be dropped

mca_tea <- MCA(my_tea, graph=FALSE)
summary(mca_tea, nbelements=Inf, nbind=5)
## 
## Call:
## MCA(X = my_tea, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.213   0.189   0.159   0.136   0.131   0.118
## % of var.             15.481  13.717  11.556   9.865   9.518   8.606
## Cumulative % of var.  15.481  29.198  40.754  50.619  60.137  68.743
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.112   0.093   0.091   0.072   0.061
## % of var.              8.150   6.766   6.644   5.254   4.444
## Cumulative % of var.  76.893  83.658  90.302  95.556 100.000
## 
## Individuals (the 5 first)
##                      Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                 |  0.359  0.202  0.071 |  1.116  2.201  0.686 | -0.040
## 2                 | -0.198  0.061  0.023 |  0.845  1.261  0.419 |  0.349
## 3                 | -0.484  0.367  0.226 | -0.243  0.105  0.057 | -0.211
## 4                 |  0.779  0.951  0.499 |  0.345  0.210  0.098 | -0.071
## 5                 | -0.065  0.007  0.003 |  0.816  1.176  0.480 | -0.026
##                      ctr   cos2  
## 1                  0.003  0.001 |
## 2                  0.255  0.071 |
## 3                  0.094  0.043 |
## 4                  0.011  0.004 |
## 5                  0.001  0.000 |
## 
## Categories
##                       Dim.1     ctr    cos2  v.test     Dim.2     ctr
## breakfast         |  -0.545   8.384   0.275  -9.060 |   0.576  10.563
## Not.breakfast     |   0.503   7.739   0.275   9.060 |  -0.532   9.750
## Not.tea time      |   0.663  11.263   0.340  10.090 |   0.345   3.447
## tea time          |  -0.514   8.730   0.340 -10.090 |  -0.268   2.672
## friends           |  -0.115   0.504   0.025  -2.721 |  -0.375   6.083
## Not.friends       |   0.216   0.950   0.025   2.721 |   0.706  11.465
## 1/day             |   0.296   1.631   0.041   3.487 |   0.609   7.774
## 1 to 2/week       |   1.072   9.899   0.198   7.686 |  -1.161  13.109
## +2/day            |  -0.727  13.148   0.388 -10.775 |   0.105   0.308
## 3 to 6/week       |   0.502   1.674   0.032   3.100 |  -0.589   2.607
## black             |  -0.394   2.246   0.051  -3.896 |   0.301   1.477
## Earl Grey         |   0.030   0.034   0.002   0.701 |  -0.174   1.295
## green             |   0.707   3.224   0.062   4.295 |   0.345   0.869
## No.sugar          |  -0.467   6.621   0.233  -8.352 |  -0.031   0.033
## sugar             |   0.499   7.078   0.233   8.352 |   0.033   0.035
## F                 |  -0.443   6.832   0.286  -9.249 |  -0.357   5.014
## M                 |   0.646   9.969   0.286   9.249 |   0.521   7.315
## Not.sophisticated |  -0.056   0.052   0.001  -0.606 |   0.786  11.599
## sophisticated     |   0.022   0.020   0.001   0.606 |  -0.311   4.586
##                      cos2  v.test     Dim.3     ctr    cos2  v.test  
## breakfast           0.306   9.573 |  -0.244   2.256   0.055  -4.060 |
## Not.breakfast       0.306  -9.573 |   0.226   2.082   0.055   4.060 |
## Not.tea time        0.092   5.254 |   0.157   0.844   0.019   2.386 |
## tea time            0.092  -5.254 |  -0.121   0.654   0.019  -2.386 |
## friends             0.265  -8.898 |  -0.294   4.448   0.163  -6.983 |
## Not.friends         0.265   8.898 |   0.554   8.382   0.163   6.983 |
## 1/day               0.172   7.164 |  -0.206   1.058   0.020  -2.426 |
## 1 to 2/week         0.232  -8.325 |   0.110   0.139   0.002   0.787 |
## +2/day              0.008   1.552 |   0.021   0.015   0.000   0.312 |
## 3 to 6/week         0.044  -3.642 |   0.355   1.123   0.016   2.194 |
## black               0.030   2.974 |   0.821  13.085   0.221   8.125 |
## Earl Grey           0.055  -4.047 |  -0.535  14.485   0.516 -12.424 |
## green               0.015   2.098 |   1.287  14.344   0.205   7.827 |
## No.sugar            0.001  -0.552 |   0.568  13.095   0.344  10.148 |
## sugar               0.001   0.552 |  -0.607  13.998   0.344 -10.148 |
## F                   0.186  -7.458 |   0.027   0.035   0.001   0.569 |
## M                   0.186   7.458 |  -0.040   0.051   0.001  -0.569 |
## Not.sophisticated   0.244   8.545 |  -0.564   7.100   0.126  -6.136 |
## sophisticated       0.244  -8.545 |   0.223   2.807   0.126   6.136 |
## 
## Categorical variables (eta2)
##                     Dim.1 Dim.2 Dim.3  
## breakfast         | 0.275 0.306 0.055 |
## tea.time          | 0.340 0.092 0.019 |
## friends           | 0.025 0.265 0.163 |
## frequency         | 0.449 0.359 0.030 |
## Tea               | 0.094 0.055 0.533 |
## sugar             | 0.233 0.001 0.344 |
## sex               | 0.286 0.186 0.001 |
## sophisticated     | 0.001 0.244 0.126 |
plot(mca_tea, invisible = c("ind"), habillage = "quali", sub = "MCA of tea dataset")

In general, the MCA factor map places mutually similar categories close together and dissimilar categories far apart. For example, tea time and friends fall near each other, as do Not.friends and Not.tea time, suggesting that having tea at tea time and having tea with friends tend to go together. The plot also places the female respondents (F) closer to the friends and tea time categories than the males (M), hinting that tea drinking is a more social activity for the women in this sample. Likewise, No.sugar lies closer to F and sugar closer to M, so the women here tend to take their tea without sugar more often than the men do.
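
The groupings read off the factor map can also be checked numerically from the MCA output; a minimal sketch using the mca_tea object fitted above:

# Category coordinates on the first two MCA dimensions; categories with similar
# coordinates sit close together in the factor map
round(mca_tea$var$coord[, 1:2], digits = 2)
# Categories contributing most to the first dimension
head(sort(mca_tea$var$contrib[, 1], decreasing = TRUE))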