Java Stream Methods and Unix Pipeline Commands: A Dictionary
While preparing my class notes on functional programming in Java
I was struck by the neat correspondence between many Java Stream
methods and Unix commands.
I decided to organize the most common of these in dictionary form,
mapping each to the other.
I’d very much welcome comments regarding common patterns that I’ve missed.
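As a taste of the kind of pairing such a dictionary records, here is an illustrative pipeline of my own (not an excerpt from the dictionary itself), with each stage's rough Java Stream counterpart noted alongside:

```shell
# Unix pipeline stage                     # Approximate Java Stream equivalent
printf '%s\n' pear apple apple cherry |   # Stream.of("pear", "apple", "apple", "cherry")
  grep -v cherry |                        # .filter(s -> !s.equals("cherry"))
  sort |                                  # .sorted()
  uniq |                                  # .distinct()
  head -2                                 # .limit(2)
# prints: apple, then pear, one per line
```

The analogy is not exact (Unix pipes carry byte streams and run stages concurrently, while Java Streams carry typed objects), but the stage-by-stage correspondence is striking.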
Continue reading "Java Stream Methods and Unix Pipeline Commands: A Dictionary"
Last modified: Thursday, December 6, 2018 9:42 pm
Debugging had to be discovered!
I start my Communications of the ACM article titled
Modern debugging techniques: The art of finding a needle in a haystack
(accessible from this page without a paywall)
with the following remarkable quote.
“As soon as we started programming, […] we found to our surprise that
it wasn’t as easy to get programs right as we had thought it would be.
[…] Debugging had to be discovered.
I can remember the exact instant […] when I realized that a large part of
my life from then on was going to be spent in finding mistakes
in my own programs.”
A Google search for this phrase
returns close to 3000 results, but most of them cryptically
attribute it as
“Maurice Wilkes, discovers debugging, 1949”.
For a scholarly article I knew I had to do better than that.
Continue reading "Debugging had to be discovered!"
Last modified: Friday, November 16, 2018 5:38 pm
How I slashed a SQL query runtime from 380 hours to 12 with two Unix commands
I was trying to run a simple join query
on MariaDB (MySQL) and its performance was horrendous.
Here’s how I cut down the query’s run time from over
380 hours to under 12 hours by executing part of it
with two simple Unix commands.
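The general technique looks like the following sketch (the file names and data here are hypothetical stand-ins, not the actual commands from the post): dump each table ordered by the join key, then let the Unix join command merge the two sorted files in a single linear pass.

```shell
# Hypothetical demo data; in practice these would be huge table dumps
# in tab-separated form, with the join key in field 1.
printf '1\twidget\n2\tgadget\n' > orders.tsv     # key<TAB>item
printf '1\tAlice\n3\tCarol\n'   > customers.tsv  # key<TAB>name

# Sort each file on the join key (field 1).
sort -k1,1 -o orders.tsv    orders.tsv
sort -k1,1 -o customers.tsv customers.tsv

# join merges the two sorted files on field 1 in one linear pass,
# sidestepping the database's costly join execution.
join -t "$(printf '\t')" orders.tsv customers.tsv
# prints: 1<TAB>widget<TAB>Alice
```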
Continue reading "How I slashed a SQL query runtime from 380 hours to 12 with two Unix commands"
Last modified: Sunday, August 5, 2018 8:20 pm
How to Perform Set Operations on Terabyte Files
The Unix sort command can efficiently handle files of arbitrary size
(think of terabytes).
It does this
by loading into main memory as much of the data as will fit (say 16GB),
sorting each such chunk efficiently using an O(N log N) algorithm,
and then merging the sorted chunks at a linear, O(N), cost.
If the number of sorted chunks exceeds the number of file descriptors
that the merge operation can simultaneously keep open
(typically more than 1000),
then sort recursively merges intermediate merged files.
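With GNU sort you can steer this process explicitly; the flags below are GNU coreutils options (other sort implementations may lack some of them), shown here on a toy file where they are harmless:

```shell
# Create a demo input; with a real multi-terabyte file the flags below matter.
printf '%s\n' banana apple cherry > fruit.txt

sort -S 64M \        # in-memory buffer per chunk (use e.g. -S 16G for huge files)
     -T /tmp \       # directory holding the intermediate merge files
     --parallel=2 \  # threads used for the in-memory sorting phase
     -o fruit.sorted fruit.txt

cat fruit.sorted     # apple, banana, cherry
```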
Once you have at hand sorted files with unique elements,
you can perform set operations on them
in linear, O(N), time.
Here is how to do it.
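In outline (a sketch of the standard tools; the full post covers the details), union, intersection, and difference all fall out of single merge passes over the sorted inputs:

```shell
# Two sorted files with unique lines; sort -u produces exactly this form.
printf '%s\n' a b c > set_a
printf '%s\n' b c d > set_b

sort -m -u set_a set_b   # union: merge-only pass (no re-sort), duplicates dropped
comm -12 set_a set_b     # intersection: lines appearing in both files
comm -23 set_a set_b     # difference: lines only in set_a
comm -13 set_a set_b     # difference: lines only in set_b
```

Because both inputs are already sorted, each command advances through the files once, so the cost stays linear no matter how large they are.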
Continue reading "How to Perform Set Operations on Terabyte Files"
Last modified: Tuesday, April 3, 2018 8:44 pm
The Shoemaker’s Children Go Barefoot
Earlier today I submitted the camera-ready version of a
technical briefing on
mining Git repositories,
which Georgios Gousios
and I will be presenting at the
2018 International Conference on Software Engineering.
I was struck by the complexity and inefficiency of the administrative process.
Continue reading "The Shoemaker’s Children Go Barefoot"
Last modified: Tuesday, February 13, 2018 10:09 am