
Data Cleaning on the Command Line for Beginner Data Scientists

Image by Author

 

Introduction

 
When you think about data cleaning, you probably think of spinning up a Jupyter Notebook. But here's something that might surprise you: some of the most powerful data cleaning can happen right in your terminal, using tools that are already installed on your system.

In this article, you'll learn how to use basic command-line utilities to clean, transform, and explore data files. No installations are required; just your terminal and some CSV files.

Before we get started cleaning data on the command line, let's talk about why this matters:

  • Command-line tools are easy to use, fast, and efficient, especially for large files.
  • These tools are built into Linux/macOS and available on Windows.
  • They're great for getting a first look at data before loading and analyzing it with Python.
  • It's easy to chain commands together in scripts and use those scripts for automation.

Now, let's start coding!

Note: You can find all the commands in this Bash script on GitHub. To get the most out of this article, I encourage you to open your terminal and code along.

 

Setting Up Sample Data

 
Let's create a messy CSV file to work with. This simulates real-world data issues you'll likely run into.

cat > messy_data.csv << 'EOF'
name,age,salary,department,email
John Lee,32,50000,Engineering,john@example.com
Jane Smith,28,55000,Marketing,jane@example.com
   Bob Davis    ,35,60000,Engineering,bob@example.com
Alice Williams,29,,Marketing,alice@example.com
Charlie Brown,45,70000,Sales,charlie@example.com
Dave Wilson,31,52000,Engineering,
Emma Davis,,58000,Marketing,emma@example.com
Frank Miller,38,65000,Sales,frank@example.com
John Lee,32,50000,Engineering,john@example.com
Grace Lee,27,51000,Engineering,grace@example.com
EOF

 

This dataset has several common issues: leading and trailing whitespace, missing values, and duplicate rows. It's great for learning!

 

1. Exploring Your Data with head, tail, and wc

 
Before cleaning your data, you need to understand what you're working with. Let's start with the basics.

# See the first 5 rows (including the header)
head -n 5 messy_data.csv

# See the last 3 rows
tail -n 3 messy_data.csv

# Count total rows (including the header)
wc -l messy_data.csv

 

Here's what's happening:

  • head -n 5 shows the first 5 lines, giving you a quick preview.
  • tail -n 3 shows the last 3 lines (useful for checking whether the data is complete).
  • wc -l counts lines; subtract 1 for the header to get your record count.

Output:

name,age,salary,department,email
John Lee,32,50000,Engineering,john@example.com
Jane Smith,28,55000,Marketing,jane@example.com
   Bob Davis    ,35,60000,Engineering,bob@example.com
Alice Williams,29,,Marketing,alice@example.com
Frank Miller,38,65000,Sales,frank@example.com
John Lee,32,50000,Engineering,john@example.com
Grace Lee,27,51000,Engineering,grace@example.com
11 messy_data.csv
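
If you'd rather not do the subtraction yourself, you can skip the header before counting. A small variation on the commands above:

# Count data rows only (skip the header with tail)
tail -n +2 messy_data.csv | wc -l

For our file this prints 10, the number of actual records.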

 

2. Viewing Specific Columns with cut

 
You don't always need to see all the columns. Let's extract only the names and departments.

cut -d',' -f1,4 messy_data.csv

 

Breaking it down:

  • cut is a tool for extracting sections from each line.
  • -d',' sets the delimiter to a comma (for CSV files).
  • -f1,4 selects fields (columns) 1 and 4.
  • You can also use ranges: -f1-3 for columns 1 through 3 (see the sketch after the output).

Here is the output:

name,department
John Lee,Engineering
Jane Smith,Marketing
   Bob Davis    ,Engineering
Alice Williams,Marketing
Charlie Brown,Sales
Dave Wilson,Engineering
Emma Davis,Marketing
Frank Miller,Sales
John Lee,Engineering
Grace Lee,Engineering
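
As mentioned in the breakdown above, ranges work too. A quick sketch that pulls the first three columns and previews them:

# Columns 1 through 3 (name, age, salary) for the first three lines
cut -d',' -f1-3 messy_data.csv | head -n 3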

 

3. Removing Duplicate Rows with sort and uniq

 
Notice that “John Lee” appears twice in our dataset. Let's fix that.

# Save the header first
head -n 1 messy_data.csv > cleaned_data.csv

# Remove duplicates from the data (excluding the header)
tail -n +2 messy_data.csv | sort | uniq >> cleaned_data.csv

 

Here's what each command does: head -n 1 grabs just the header row, and tail -n +2 gets everything from line 2 onward (skipping the header). Then sort orders the lines; this matters because uniq only removes adjacent duplicate lines, so the data must be sorted first. Finally, >> appends to the file (as opposed to >, which overwrites it).
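
If you want to inspect the duplicates before removing them, uniq's -d flag prints only the repeated lines:

# Show rows that appear more than once
tail -n +2 messy_data.csv | sort | uniq -d

On this dataset, it prints the duplicated John Lee row.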

 

4. Searching and Filtering with grep

 
Let's now do some searching and filtering. Want to find all engineers or filter out rows with missing data? grep is handy for all such tasks.

# Find all engineers
grep "Engineering" messy_data.csv

# Find rows with empty fields (two consecutive commas)
grep ",," messy_data.csv

# Exclude rows with missing data
grep -v ",," messy_data.csv > no_missing.csv

 

Here, grep "pattern" searches for lines containing that pattern, and grep -v inverts the match (shows lines that do NOT match). This is a quick way to filter out incomplete records, provided the missing value shows up as a double comma (,,).
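
One caveat: a missing value in the last column shows up as a trailing comma rather than a double comma, so Dave Wilson's row slips past the ",," pattern. A sketch that counts rows with any empty field, using grep's -E (extended regex) and -c (count) flags:

# Count rows with ",," anywhere or a "," at the end of the line
grep -cE ",,|,$" messy_data.csv

This prints 3 here: Alice, Emma, and Dave each have one empty field.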

 

5. Trimming Whitespace with sed

 
See how the record for “Bob Davis” has extra spaces? Let's clean that up.

sed 's/^[ \t]*//; s/[ \t]*$//' messy_data.csv > trimmed_data.csv

 

Now let's understand the command: sed is a stream editor for text transformation, and s/pattern/replacement/ is its substitution syntax. ^[ \t]* matches spaces/tabs at the beginning of a line, while [ \t]*$ matches spaces/tabs at the end of a line. The semicolon separates the two operations (trim the line start, then trim the line end).
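
Note that these two expressions only trim the edges of each line, so the spaces after "Bob Davis" (which sit before a comma, mid-line) survive. A sketch of a third substitution that also trims whitespace around every delimiter:

# Trim line edges, then whitespace on either side of each comma
sed 's/^[ \t]*//; s/[ \t]*$//; s/[ \t]*,[ \t]*/,/g' messy_data.csv > trimmed_data.csv

This is safe here because none of our fields should contain whitespace next to a comma; check before using it on other datasets.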

 

6. Replacing Values with sed

 
Sometimes you need to standardize values or fix typos. Let's replace all occurrences of “Engineering” with “Tech”.

# Replace all "Engineering" with "Tech"
sed 's/Engineering/Tech/g' messy_data.csv

 

Next, let's fill empty email fields (denoted by a comma at the end of the line) with a default email value.

# Replace empty email fields with "no-email@example.com"
sed 's/,$/,no-email@example.com/' messy_data.csv

 

Run the above commands and observe the output; I've excluded it here to avoid repetition.

Breaking it down:

  • The g flag means "global": replace all occurrences on each line.
  • ,$ matches a comma at the end of a line (indicating an empty last field).
  • You can chain multiple replacements with ; between them, as shown below.
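
For example, here is a sketch that applies both of this section's substitutions in a single pass:

# Standardize the department name and fill empty emails in one command
sed 's/Engineering/Tech/g; s/,$/,no-email@example.com/' messy_data.csv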

 

7. Counting and Summarizing with awk

 
awk is super useful for field-based operations. Let's do some basic analysis.

# Count records by department
tail -n +2 messy_data.csv | cut -d',' -f4 | sort | uniq -c

# Calculate the average age (excluding the header and empty values)
tail -n +2 messy_data.csv | awk -F',' '{if ($2) {sum += $2; count++}} END {print "Average age:", sum/count}'

 

In this awk command, -F',' sets the field separator to a comma, and $2 refers to the second field (age). The condition if ($2) ensures only non-empty values are processed, while sum += $2 accumulates the total. Finally, the END block runs after all lines have been read, calculating and printing the average age.

Output:

      5 Engineering
      3 Marketing
      2 Sales
Average age: 33
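
The same pattern extends to grouped summaries using awk's associative arrays. A sketch computing the average salary (field 3) per department (field 4), skipping rows where the salary is empty:

# Average salary per department
tail -n +2 messy_data.csv | awk -F',' '$3 != "" {sum[$4] += $3; n[$4]++} END {for (d in sum) print d, sum[d]/n[d]}'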

 

8. Combining Commands with Pipes

 
You get much more useful processing when you chain these command-line tools together.

# Get unique departments, sorted alphabetically
tail -n +2 messy_data.csv | cut -d',' -f4 | sort | uniq

# Find engineers with salary > 55000
tail -n +2 messy_data.csv | grep "Engineering" | awk -F',' '$3 > 55000' | cut -d',' -f1,3

# Count employees per department, sorted by count
tail -n +2 messy_data.csv | cut -d',' -f4 | sort | uniq -c | sort -rn

 

Here, each | passes the output of one command as input to the next, letting you build complex data transformations step by step. The last step sorts by count in reverse numerical order (-rn).

This outputs:

Engineering
Marketing
Sales
   Bob Davis    ,60000
      5 Engineering
      3 Marketing
      2 Sales
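
Another handy pipeline sorts records by a numeric column with sort's -t (field separator) and -k (sort key) options. A sketch that finds the three highest salaries:

# Top 3 earners: sort by salary (field 3), highest first
tail -n +2 messy_data.csv | sort -t',' -k3 -rn | head -n 3 | cut -d',' -f1,3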

 

9. Converting Data Formats

 
Sometimes you need to work with different delimiters. Here, we use a tab as the separator instead of a comma.

# Convert CSV to TSV (tab-separated)
sed 's/,/\t/g' messy_data.csv > data.tsv

# Add a new column with a fixed value
awk -F',' 'BEGIN{OFS=","} {print $0, "2024"}' messy_data.csv > data_with_year.csv

 

In this awk command, BEGIN{OFS=","} sets the output field separator to a comma. $0 represents the entire input line, and print $0, "2024" appends "2024" as a new column to each output line.
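
If your sed doesn't understand the \t escape (some BSD versions don't), awk can handle the CSV-to-TSV conversion too. A sketch using the $1=$1 idiom, which forces awk to rebuild the line with the new output separator:

# Reassigning a field makes awk rejoin $0 using OFS
awk -F',' 'BEGIN{OFS="\t"} {$1=$1; print}' messy_data.csv > data.tsv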

 

10. A Complete Cleaning Pipeline

 
Let's put it all together into one handy command that cleans our messy data:

# Save the header
head -n 1 messy_data.csv > final_clean.csv

# Clean the data: trim whitespace, drop rows with missing values, remove duplicates
tail -n +2 messy_data.csv |
  sed 's/^[ \t]*//; s/[ \t]*$//' |
  grep -v ",," |
  sort |
  uniq >> final_clean.csv

echo "Cleaning complete! Check final_clean.csv"

 

This pipeline first saves the header to preserve the column names, then skips it while processing the data rows. It trims leading and trailing whitespace from each line, removes any rows containing empty fields (specifically, double commas), sorts the data, and eliminates duplicate entries. Finally, it appends the cleaned rows to the output file.
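
As a quick sanity check, you can count the lines in the result. From our sample data we expect 8: the header plus 7 rows (10 originals, minus Alice and Emma for their double commas, minus the duplicate John Lee; note that Dave Wilson's trailing empty email is not caught by the ",," pattern):

wc -l final_clean.csv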

 

Conclusion

 
Command-line data cleaning is a powerful yet underrated skill for data scientists. These tools are fast and reliable. While you'll still use Python for complex analysis, mastering these basics will make you more efficient and give you options when Python isn't ideal.

The best part is that these skills transfer to data engineering, DevOps, and system administration roles. Learning to manipulate data on the command line makes you a more versatile developer.

Start practicing with your own datasets, and you'll be surprised how often you reach for these tools instead of spinning up a Python notebook. Happy data cleaning!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


