PowerShell's built-in tools for dealing with text files tend to be very slow.
For example, Import-Csv (for CSV files) and Get-Content can both take many
minutes on a large file that Linux grep would chew through in a few seconds.
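If you want to check these timings on your own files, Measure-Command is handy (a sketch; substitute your own file name):

```powershell
# time how long a pipeline takes end-to-end;
# Out-Null discards the output so only the read cost is measured
Measure-Command { Get-Content .\bigfile.csv | Out-Null } |
    Select-Object TotalSeconds
```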

Get-Content has a "-ReadCount" option, which can speed it up significantly,
but unfortunately this buggers its return type (despite the help page saying
that it does not do this).

When you use -ReadCount to speed up operations on large files, Get-Content
sends arrays of up to ReadCount lines at a time down the pipeline, rather than
one line at a time. This is often not what you want. To get back to
line-by-line processing, you need to wrap these in *two* nested
ForEach-Object loops, e.g.:

# count the number of lines in a file
$count = 0
gc -ReadCount 2048 .\bigfile.csv | %{ $_ | %{
        $count++
    }
}
$count

("%" is the built-in alias for "ForEach-Object", so "%{}" is short for
"ForEach-Object {}". The outer loop receives one array per block; the inner
loop iterates line by line within each block.)
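As an aside, if all you need is the line count, you can skip the inner loop
entirely: each block Get-Content emits is a string array, so summing each
block's Count property gives the total (a sketch, assuming PowerShell 3+):

```powershell
# count lines by summing the size of each block,
# avoiding the per-line pipeline overhead
$count = 0
gc -ReadCount 2048 .\bigfile.csv | %{ $count += $_.Count }
$count
```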

The nested loops are slow, but not as slow as Import-Csv. A slightly faster
way is to use a .NET StreamReader:

# count the number of lines in a file
$count = 0
# StreamReader resolves relative paths against the process's working
# directory, not PowerShell's current location, so resolve the path first
$reader = New-Object IO.StreamReader (Resolve-Path '.\bigfile.csv').Path
while(($line = $reader.ReadLine()) -ne $null) {
    $count++
}
$reader.Close()
$count

This still takes about four times as long as grep or wc, but at least it gets
somewhere, and it's easier to reason about than the nested loops.
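Another option worth trying (a sketch, not benchmarked here) is PowerShell's
switch statement with the -File parameter, which also streams the file line by
line without Get-Content's per-line object overhead:

```powershell
# count the number of lines in a file;
# -File makes switch read the file one line at a time,
# and the default branch matches every line
$count = 0
switch -File .\bigfile.csv { default { $count++ } }
$count
```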