Question
Write a bash script to calculate the frequency of each word in a text file words.txt.
For simplicity's sake, you may assume:
words.txt contains only lowercase characters and space ' ' characters.
Each word must consist of lowercase characters only.
Words are separated by one or more whitespace characters.
For example, assume that words.txt has the following content:
the day is sunny the the
the sunny is is
Your script should output the following, sorted by descending frequency:
the 4
is 3
sunny 2
day 1
Note:
Don't worry about handling ties; it is guaranteed that each word's frequency count is unique.
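To try the commands below, the sample file from the problem statement can be created first (a minimal sketch for testing only; it assumes bash and a here-document):
# Create the sample words.txt with the content from the problem statement.
cat > words.txt << 'EOF'
the day is sunny the the
the sunny is is
EOF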
My Solution
Using an associative array in awk is a common approach:
awk '{for(i=1;i<=NF;i++){arr[$i]+=1}} END{for(i in arr){print i,arr[i] | "sort -nr -k2"}}' words.txt
If using a pipeline instead:
awk '{for(i=1;i<=NF;i++){arr[$i]+=1}} END{for(i in arr){print i,arr[i]}}' words.txt | sort -nr -k2
sort -n: compare numerically, -r: in reverse (descending) order, -k2: sort by the second column
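As a quick illustration of those sort flags (the unsorted order below is arbitrary, since awk's for-in loop does not guarantee any particular order):
# Feed unsorted "word count" pairs to sort to see the effect of -n (numeric),
# -r (reverse/descending) and -k2 (use the second field as the key).
printf 'day 1\nthe 4\nsunny 2\nis 3\n' | sort -nr -k2
# the 4
# is 3
# sunny 2
# day 1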
Other's Solution
cat words.txt | tr -s ' ' '\n' | sort | uniq -c | sort -rn | awk '{print $2" "$1}'
tr -s ' ' '\n': translate each space into a newline, with -s squeezing runs of spaces into a single one
uniq -c: prefix each distinct line with the count of its consecutive repeats (which is why the input is sorted first)
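A rough sketch of the intermediate output at each stage on the sample file (the leading whitespace printed by uniq -c is approximate):
# After splitting into one word per line, sorting, and counting duplicates:
tr -s ' ' '\n' < words.txt | sort | uniq -c
#   1 day
#   3 is
#   2 sunny
#   4 the
# After sorting numerically in descending order by the leading count:
tr -s ' ' '\n' < words.txt | sort | uniq -c | sort -rn
#   4 the
#   3 is
#   2 sunny
#   1 day
# The final awk '{print $2" "$1}' just swaps the columns back to "word count" order.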