I've been at my current job for a little over a couple of months now. At first it was super rough - the language barrier has been non-trivial to work through, though manageable thanks to Google-sensei; I was working in PHP despite my best efforts to escape that world for the past several years; and the processes at my current company are kind of garbage.
The good news, for my sanity as well as this blog post, is that last one: the processes are terrible, but not (completely) irreconcilably so! Insert applause here. So I'm gonna talk about how I made it a little less stab-me-in-the-face-and-punch-my-eyes-out and a little more touch-me-touch-me.
Though not by that much, unfortunately. Here's a case study on the case for automation.
In the Established Process we'll be looking at here, the basic flow of development is:
* Yeah, it's annoying having to download a CSV export of the test data and then fiddle with it to make it a bunch of insert statements. Just kidding, it's like five vim commands! ez
Now, by no means is this fundamentally what I'd call a bad process. The spec always acts as the Single Source of Truth for the feature, the concept of testing is present, and the process of testing is also fine I think (says the man who has never had to do integration testing in any form before this).
Where it starts to break down in our implementation, however:
So that's a lot of silly points. And sadly, as a developer with little say in the matter (I'm junior to my entire team by several years minimum, Japan cares about that sort of thing, and I generally get the response しょうがない - "it can't be helped" - when I try to bring it up), I can't change much.
But I can at least improve what goes on on my machine! Since that's pretty much development and the integration testing, and I don't have many complaints about my development as long as I have vim and a directory with permissions for my plugins, that leaves testing!
To look at the testing process as I do it in slightly more detail:
If you've done a bit of sysadmin stuff, you might note that all of these are...pretty simple. Simple enough they could be scripted, perhaps.
Which is exactly what I did, one by one.
For this section I'll just go through each of the above steps I've automated, provide a similar snippet to what I used to automate it, and provide any additional notes that might help.
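One quick note before the steps: everything below dumps its output somewhere under /tmp/evidence, so a tiny setup snippet (the directory names are just the ones that appear later in this post) saves fumbling with mkdir mid-test:

#!/bin/sh
# Create the evidence folders used by the steps below.
mkdir -p /tmp/evidence/configs /tmp/evidence/log_captures /tmp/evidence/DB_dumps /tmp/evidence/SQL /tmp/evidence/Explain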
mysql db_name < add_test_AAA_table.sql
Basically I just have a bunch of SQL scripts in a directory off of ~, run together with a simple shell script. The SQL scripts themselves are nothing too fancy, but writing the table create statements is a little bothersome.
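For completeness, here's a minimal sketch of that runner script, assuming the scripts live somewhere like ~/test_sql (the directory name is made up):

#!/bin/sh
# Run every test data setup script against the test DB.
for script in ~/test_sql/*.sql; do
    mysql db_name < "$script"
done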
mysqldump db_name > db_pre_dump.sql
Nothing complicated.
find /var/log/path/to/logs/parent -name "*$(get_log_date).log" -delete
The logs live in a directory somewhere under /var/log that I have forgotten, but each category of log also has its own directory somewhere in there, hence the name pattern and using find in the first place. get_log_date is just a script that's an invocation of date with the format matching our log pattern. Other than that, pretty simple.
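For reference, get_log_date could be as little as the following, assuming the log filenames end in a YYYYMMDD-style date (swap the format string for whatever your logs actually use):

#!/bin/sh
# Print today's date in the same format the log filenames use.
date +%Y%m%d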
cp /path/to/app/config/env.yml /tmp/evidence/configs/
Again, easy. Nothing tricky.
find /var/log/path/to/logs/parent -name "*$(get_log_date).log" -exec cp {} /tmp/evidence/log_captures \;
This is basically the "log clear" command from before, but copying to our evidence log folder instead of deleting.
mysqldump db_name > db_post_dump.sql
Nothing fancy here. The diff is similarly simple:
diff db_pre_dump.sql db_post_dump.sql > db_diff.txt
cp db_diff.txt db_pre_dump.sql db_post_dump.sql /tmp/evidence/DB_dumps
Again, nothing too fancy.
For grabbing the SQL queries out of the logs, we just need to look at the app logs. They're all marked by "SQL:" followed by the query, and each log line is tab-separated:
grep "SQL:" /var/log/path/to/logs/app/app.$(get_log_date).log | cut -d"\t" -f4 | cut -d":" -f2
It may or may not be the 4th column in the first cut, but yenno. (Tab is cut's default delimiter, so the first cut needs no -d, and the trailing - on the second cut keeps queries that themselves contain colons intact.) This command gets the query itself, which in practice I output to something like /tmp/evidence/SQL/SQL.txt as well as capture to a variable.
If you perhaps need to filter them such that you ignore some queries on a certain table or something, you can do a second grep (probably right after the first one) that would look like grep -v "ignore_table" or similar.
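Put together, the capture step might look something like this (the paths, the column number, and the ignored table are all stand-ins):

# Grab today's queries, drop the ones we don't care about,
# and keep both a variable copy and an evidence file copy.
SQL_QUERY=$(grep "SQL:" /var/log/path/to/logs/app/app.$(get_log_date).log \
    | grep -v "ignore_table" \
    | cut -f4 | cut -d":" -f2-)
echo "$SQL_QUERY" > /tmp/evidence/SQL/SQL.txt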
For getting the explain plans, you need to take the captured variable and use that in a command-line call to MySQL:
mysql db_name -t -e "EXPLAIN ${SQL_QUERY}"
This call as well as the output then get sent to a file like /tmp/evidence/Explain/Explain.txt. The -t makes it output in table format regardless of whether it's a standalone call on the shell (where it defaults to table format) or part of a sequence of commands/redirected output (where it looks less pretty).
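Since the capture usually grabs more than one query, here's a sketch of looping over them one per line (same assumed paths as above):

# Record each query alongside its explain plan in the evidence file.
echo "$SQL_QUERY" | while read -r query; do
    echo "EXPLAIN $query" >> /tmp/evidence/Explain/Explain.txt
    mysql db_name -t -e "EXPLAIN $query" >> /tmp/evidence/Explain/Explain.txt
done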
The result of this is that where I once had 11 steps for each execution path being tested, all of which are highly prone to human error and accelerate my acquisition of carpal tunnel, there are now 5 steps:
And two of them change, at most, once per function under test. There's not much I care to figure out for the screenshot capture, but the source copying could be made less manual with some playing around with the git status -s output, and executing the function could probably be done via CLI or Selenium in the web browser. But I am lazy, and this feels "good enough" to me for the moment at least.
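If I ever stop being lazy, that git status -s idea might look roughly like this (the destination folder is made up, and renamed files would need more care):

# Copy each changed or untracked file git reports into the evidence
# folder, preserving directory structure (--parents is GNU cp).
mkdir -p /tmp/evidence/src
git status -s | awk '{print $2}' | while read -r file; do
    cp --parents "$file" /tmp/evidence/src/
done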
A lot of long-time developers probably don't need this post, as doing something manually thrice is about twice too many, as I've heard from some of my peers. But if you get stuck doing a lot of manual, repetitive tasks, you should definitely think about the possibility of automation, for several reasons. For one, it saves time and brain cycles. You'll also probably learn something about the pieces of your toolkit - two months ago I basically knew nothing about the MySQL command-line options besides the basic mysql -u username -p db_name, and now I know the basics of how to integrate it into a manual testing flow.
But most importantly, it means you have the time to read all those StackOverflow community questions you don't have time to read but read anyway.