The video that accompanies this notebook is available at https://ucdavis.box.com/v/sts-205-notebook-5.
In this notebook we will be using hierarchical clustering to group the State of the Union Addresses into an arbitrary number of groups based on similarity in word usage.
Start by loading packages (and installing the tm package if you haven’t already done so), sourcing your functions, and building the sotu data frame.
library(tidyverse)
Registered S3 methods overwritten by 'dbplyr':
method from
print.tbl_lazy
print.tbl_sql
── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
✓ ggplot2 3.3.3 ✓ purrr 0.3.4
✓ tibble 3.0.6 ✓ dplyr 1.0.4
✓ tidyr 1.1.2 ✓ stringr 1.4.0
✓ readr 1.4.0 ✓ forcats 0.5.0
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
x dplyr::filter() masks stats::filter()
x dplyr::lag() masks stats::lag()
library(tidytext)
#install.packages("tm")
library(tm)
Loading required package: NLP
Attaching package: ‘NLP’
The following object is masked from ‘package:ggplot2’:
annotate
source("functions.r")
sotu <- make_sotu()
── Column specification ────────────────────────────────────────────────────────
cols(
year = col_double(),
pres = col_character(),
use_last = col_logical()
)
To get a sense of how document clustering works, we are going to start by identifying the number of times the words “united” and “america” were used in each SOTU address.
united_america <- sotu_tokenize_words() %>% filter(gram %in% c("united", "america")) %>%
group_by(year) %>% count(gram) %>% spread(gram, n) %>%
mutate(america = replace_na(america, 0), united = replace_na(united, 0))
Once we have done that, we can plot all of the addresses in two dimensions, according to the frequency with which they use each of these two words.
ggplot(united_america, aes(x = united, y = america)) + geom_point()
You can see that some addresses use “united” much more than “america” and some use “america” much more than “united”. We can use geom_label(), specifying label = year, instead of geom_point() to get a sense of the time dimension.
ggplot(united_america, aes(x = united, y = america, label = year)) + geom_label()
Now we can calculate the Euclidean distance between each pair of documents (along the united and america dimensions) and hierarchically cluster them to identify any number of clusters of documents. We begin by converting the data frame from wide to long and then into a document-term matrix with the cast_dtm() function.
ua_dtm <- united_america %>% gather("word", "n", -year) %>% cast_dtm(year, word, n)
After that we can use the dist() function to calculate the Euclidean distance between each pair of addresses and the hclust() function to cluster them. Finally, we can use the plot() function to visualize the resulting dendrogram.
ua_cluster <- ua_dtm %>% dist() %>% hclust()
plot(ua_cluster)
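If you want to see exactly what dist() is computing here, you can check a single pairwise distance by hand: in two dimensions, the Euclidean distance between two addresses is just the square root of the sum of the squared differences in their counts of “united” and “america”. A quick sketch, using an arbitrary pair of years:
#Check one entry of the distance matrix by hand (the pair of years is arbitrary)
a <- united_america %>% filter(year == 1900)
b <- united_america %>% filter(year == 2000)
sqrt((a$united - b$united)^2 + (a$america - b$america)^2)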
The dendrogram shows how the SOTU addresses would be clustered into any number of groups, from 2 to 233. We can use the cutree() function to see which address would be in which cluster, for any given number of clusters. For example, we can select ten clusters, and then graph the addresses by frequency of “united” and “america”, with color corresponding to cluster.
cutree(ua_cluster, 10)
1790 1790.5 1791 1792 1793 1794 1795 1796 1797 1798 1799
1 1 2 1 2 2 1 2 2 2 1
1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810
1 1 1 1 1 1 1 1 1 1 2
1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821
2 1 2 1 1 2 2 2 3 2 3
1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832
2 2 2 2 4 3 3 2 3 2 2
1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843
4 4 4 3 3 5 2 2 3 3 3
1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854
3 6 7 5 4 3 2 4 3 3 4
1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865
6 5 4 5 3 4 1 3 3 2 4
1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876
3 2 3 4 5 3 3 4 3 5 4
1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887
3 2 4 3 2 2 2 3 5 3 1
1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898
2 3 3 5 4 4 4 5 3 4 7
1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909
6 5 3 1 5 3 4 4 4 2 4
1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920
4 5 7 1 1 1 1 1 1 1 1
1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931
1 1 1 1 1 1 1 2 2 1 1
1932 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943
1 1 1 2 1 2 1 2 1 2 8
1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1953.5
1 2 7 1 2 1 1 1 1 3 2
1954 1955 1956 1957 1958 1959 1960 1961 1961.5 1962 1963
2 2 8 2 1 1 2 2 1 8 1
1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974
1 1 1 1 1 1 9 10 10 1 10
1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985
1 10 1 1 1 8 6 1 10 10 1
1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996
10 10 10 10 10 8 1 1 1 10 10
1997 1998 1999 2000 2001 2001.5 2002 2003 2004 2005 2006
9 9 9 9 1 8 9 10 9 1 9
2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017
10 9 10 10 10 9 10 9 9 10 10
2018 2019
10 8
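Rather than reading through the whole vector of assignments, you can tabulate it to see how many addresses fall in each cluster:
#Count the number of addresses in each of the ten clusters
table(cutree(ua_cluster, 10))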
united_america$cluster <- factor(cutree(ua_cluster, 10))
head(united_america)
ggplot(united_america, aes(x = united, y = america, color = cluster, label = year)) + geom_label()
As you can see, the colors are grouped together, because we are clustering the addresses on the same two dimensions we are using to graph them. We can write a for loop to graph any number of clusters.
for(i in 1:20) {
united_america$cluster <- factor(cutree(ua_cluster, i))
print(ggplot(united_america, aes(x = united, y = america, color = cluster, label = year)) + geom_label())
}
For this rather trivial clustering exercise, we grouped documents solely on the basis of the frequency of the words “united” and “America”. But we can take into account any number of words (or other features) when clustering.
Here we will cluster using the 1000 most frequent words across the whole corpus. Now we are clustering by distance in 1000-dimensional space.
#Identify the thousand most frequent words
top_thousand <- sotu_tokenize_words() %>% count(gram) %>% top_n(1000)
Selecting by n
#Make a document-term matrix of the thousand most frequent words
sotu_words_dtm <- sotu_tokenize_words() %>% filter(gram %in% top_thousand$gram) %>%
group_by(year) %>% count(gram) %>% cast_dtm(year, gram, n)
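Before computing distances, it can help to check the shape of the matrix we are about to cluster on: each row is an address and each column is one of the top words (because of ties, top_n() can leave us with slightly more than 1000 columns).
#Rows are addresses, columns are top words
dim(sotu_words_dtm)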
Once we have the distance matrix, we can use multidimensional scaling (cmdscale()) to plot the corpus in two dimensions and see what it looks like. This is similar to the kind of plot produced with principal components analysis.
sotu_words_dist <- dist(sotu_words_dtm)
plot(cmdscale(sotu_words_dist, k = 2))
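As an aside, classical multidimensional scaling on Euclidean distances gives the same configuration (up to reflections of the axes) as the first two principal components of the document-term matrix, so a sketch like the following should produce an equivalent picture:
#The first two principal component scores should match the MDS layout up to sign flips
plot(prcomp(as.matrix(sotu_words_dtm))$x[, 1:2])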
As you can see, most of the addresses are very tightly clustered together, but two are very different from the others. Now we can cluster and plot the dendrogram.
sotu_words_cluster <- hclust(sotu_words_dist)
plot(sotu_words_cluster)
As you can see, two addresses (1946 and 1981) are very different from the others. These are the two we saw in the bottom right corner of the two-dimensional plot.
Now let’s write a function to color the labels on our two-dimensional plot according to any number of clusters from our hierarchical clustering.
plot_cluster <- function(nclust) {
data.frame(cmdscale(sotu_words_dist, k = 2)) %>%
mutate(cluster = cutree(sotu_words_cluster, nclust), year = sotu$year) %>%
ggplot(aes(x = X1, y = X2, color = factor(cluster), label = year)) + geom_label()
}
for(i in 2:20) {
print(plot_cluster(i))
}
The scatterplot and the dendrogram provide more or less the same information, showing the distance between any two addresses and the clusters they are assigned to at any level of the hierarchy. But they don’t tell us anything about the contents of the clusters. The code below defines a function that, for any number of clusters, identifies the top ten words of each cluster by tf-idf. As you will see, instead of calculating tf-idf manually the way we did in Notebook 3, we are using the bind_tf_idf() function from the tidytext package.
#Function to identify the 10 most uniquely characteristic words of each cluster
cluster_words <- function(nclust) {
#Add a column to sotu indicating which cluster an address is in (for nclust clusters)
sotu %>% mutate(cluster = cutree(sotu_words_cluster, nclust)) %>%
#Unnest tokens and keep only words in the top thousand
unnest_tokens(gram, text) %>% filter(gram %in% top_thousand$gram) %>%
#Calculate tf-idf for each word by cluster
group_by(cluster) %>% count(gram) %>% bind_tf_idf(gram, cluster, n) %>%
#Collapse list of words for each cluster into a single string
group_by(cluster) %>% top_n(10, tf_idf) %>% summarize(words = str_c(gram, collapse = ", "))
}
cluster_words(3)
cluster_words(50)
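If you want to convince yourself that bind_tf_idf() matches the manual calculation from Notebook 3, here is a rough sketch for three clusters: tf is each word’s share of its cluster’s words, and idf is the natural log of the number of clusters divided by the number of clusters containing the word. (The object name manual_tf_idf is just for illustration.)
#Recompute tf-idf by hand for three clusters; the tf, idf, and tf_idf columns
#should match what bind_tf_idf() produces inside cluster_words()
manual_tf_idf <- sotu %>% mutate(cluster = cutree(sotu_words_cluster, 3)) %>%
unnest_tokens(gram, text) %>% filter(gram %in% top_thousand$gram) %>%
group_by(cluster) %>% count(gram) %>% mutate(tf = n / sum(n)) %>%
group_by(gram) %>% mutate(idf = log(3 / n_distinct(cluster))) %>%
ungroup() %>% mutate(tf_idf = tf * idf)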
Now for any number of clusters, we can plot the clusters and see which words are distinctive of each one.
plot_cluster(4)
cluster_words(4)
plot_cluster(20)
cluster_words(20)
We don’t have to cluster on words. We can also use bigrams, trigrams, or any other unit of analysis. This time, let’s use the top 100 bigrams as our clustering features.
#Find top 100 bigrams
top_hundred <- sotu_tokenize_bigrams() %>% count(gram) %>% top_n(100)
Selecting by n
#Make document term matrix from top 100 bigrams
sotu_bigrams_dtm <- sotu_tokenize_bigrams() %>% filter(gram %in% top_hundred$gram) %>%
group_by(year) %>% count(gram) %>% cast_dtm(year, gram, n)
#Calculate Euclidean distances
sotu_bigrams_dist <- dist(sotu_bigrams_dtm)
#Plot on two dimensions
plot(cmdscale(sotu_bigrams_dist, k = 2))
#Cluster hierarchically
sotu_bigrams_cluster <- hclust(sotu_bigrams_dist)
#Plot dendrogram
plot(sotu_bigrams_cluster)
Now let’s write a function that, for any number of clusters, plots the clusters and shows the most distinctive bigrams of each one.
plot_and_view <- function(nclust) {
graph <- data.frame(cmdscale(sotu_bigrams_dist, k = 2)) %>%
mutate(cluster = cutree(sotu_bigrams_cluster, nclust), year = sotu$year) %>%
ggplot(aes(x = X1, y = X2, color = factor(cluster), label = year)) + geom_label()
dataframe <- sotu %>% mutate(cluster = cutree(sotu_bigrams_cluster, nclust)) %>%
unnest_tokens(gram, text, token = "ngrams", n = 2) %>% filter(gram %in% top_hundred$gram) %>%
group_by(cluster) %>% count(gram) %>% bind_tf_idf(gram, cluster, n) %>%
group_by(cluster) %>% top_n(10, tf_idf) %>% summarize(bigrams = str_c(gram, collapse = ", "))
print(graph)
return(dataframe)
}
plot_and_view(10)
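For example, to cluster on trigrams you could tokenize directly with unnest_tokens() (this sketch assumes functions.r does not already provide a trigram helper):
#Repeat the pipeline with the top 100 trigrams
top_trigrams <- sotu %>% unnest_tokens(gram, text, token = "ngrams", n = 3) %>%
count(gram) %>% top_n(100)
sotu_trigrams_dtm <- sotu %>% unnest_tokens(gram, text, token = "ngrams", n = 3) %>%
filter(gram %in% top_trigrams$gram) %>%
group_by(year) %>% count(gram) %>% cast_dtm(year, gram, n)
sotu_trigrams_cluster <- sotu_trigrams_dtm %>% dist() %>% hclust()
plot(sotu_trigrams_cluster)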
Clustering is a way of exploring your corpus. Different numbers of clusters and different clustering criteria will be more or less salient for different corpora. Remember that clustering is not magic: documents are grouped by the criteria you specify, so clusters can only reveal patterns along the dimensions you chose to measure. Also remember that two documents can use almost exactly the same words and still have very different meanings.