Monday, October 12, 2015

Good Design is Simple

I work for a software company; a third of our staff are engineers and developers, and we face real-world problems that require a lot of strategic work and planning. Lately I’ve been doing a lot of research on development methodologies, project management processes and workflow proficiency. There is a lot out there, most of it subjective and opinionated, and you're left to determine what's best for your situation. But there is one thing that can be applied to almost anything, and that is...
Good design is simple. You hear this from math to painting. In math it means that a shorter proof tends to be a better one. Where axioms are concerned, especially, less is more. It means much the same thing in programming. For architects and designers it means that beauty should depend on a few carefully chosen structural elements rather than a profusion of superficial ornament. (Ornament is not in itself bad, only when it's camouflage on insipid form.) Similarly, in painting, a still life of a few carefully observed and solidly modelled objects will tend to be more interesting than a stretch of flashy but mindlessly repetitive painting of, say, a lace collar. In writing it means: say what you mean and say it briefly.
It seems strange to have to emphasize simplicity. You'd think simple would be the default. Ornate is more work. But something seems to come over people when they try to be creative. Beginning writers adopt a pompous tone that doesn't sound anything like the way they speak. Designers trying to be artistic resort to swooshes and curlicues. Painters discover that they're expressionists. It's all evasion. Underneath the long words or the "expressive" brush strokes, there is not much going on, and that's frightening.
When you're forced to be simple, you're forced to face the real problem. When you can't deliver ornament, you have to deliver substance.
The above was written by Paul Graham and pulled from his site.

Thursday, October 8, 2015

myisamchk --sort-index and --analyze happy together?

Why use myisamchk

myisamchk is a tool used to check, repair, and optimize MyISAM tables. My company uses MyISAM tables to quickly update a large shared read-only reference database. Normal dynamic data is kept in InnoDB tables and the MyISAM reference data is joined in queries. However, it’s worth noting that there are well-documented issues, here and here, with mixing MyISAM and InnoDB tables together.

Cardinality is Key

The workflow for building the tables is:

  1. Pull data from many sources (third party)
  2. Compile data (app)
  3. Build tables (lots of inserts)
  4. Optimize (analyze and sort indexes)
  5. Push to production (read only)

From this point we’re going to focus on step 4, optimizing the table data.

Why Optimize?

After a large amount of data is inserted into a table (step 3 above), it is crucial to refresh the table indexes and give MySQL the best possible chance of choosing a good query execution plan. To do this, you must run OPTIMIZE TABLE or ANALYZE TABLE, or use the myisamchk tool when it is safe to do so.
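For reference, the in-server route looks something like this (the database and table names here are placeholders of my own, not the actual reference schema):

[root@host]# mysql -e "ANALYZE TABLE refdb.lookup"
[root@host]# mysql -e "OPTIMIZE TABLE refdb.lookup"

myisamchk, used below, works on the .MYI files directly, so it should only be run when mysqld does not have the tables open (for example after the server is stopped, or after the tables are flushed and locked).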

In this case the tables are being optimized outside of MySQL, so the myisamchk command line comes in handy here. The command being run to do this work:

[root@host]# myisamchk -vvv --analyze --sort-index test.MYI

When executed, this is the only output displayed:

- Sorting index for MyISAM-table 'test.MYI'

At first this output looks normal, but anyone who has used myisamchk knows that the --analyze option outputs much more, as shown here:

[root@host]# myisamchk -vvv --analyze test.MYI
Checking MyISAM file: test.MYI
Data records:   60771   Deleted blocks:       0
- check file-size
- check record delete-chain
No recordlinks
- check key delete-chain
block_size 1024:
- check index reference
- check data record references index: 1
- check data record references index: 2
- check data record references index: 3

The Problem / Bug

If the --sort-index option is used with the --analyze option, --analyze is silently ignored; the program gives no error or warning that the option will be skipped, and only the index sort is performed.

From the myisamchk source code, see line 961 and then line 1054:

if (param->testflag & (T_REP_ANY | T_SORT_RECORDS | T_SORT_INDEX))
else if ((param->testflag & T_CHECK) || !(param->testflag & T_AUTO_INC))

If param->testflag has T_SORT_RECORDS or T_SORT_INDEX set, the first block runs, so the else if block that handles T_CHECK (--analyze) is never executed.

Possible Fixes

  1. Update the documentation to note that the --sort-index and --analyze options cannot be run together.
  2. Update myisamchk to ignore one of the options and print a warning that they cannot be run together.
  3. Update myisamchk to allow --sort-index and --analyze to be executed together.
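Until one of those fixes happens, a simple workaround (my own, not from any official documentation) is to run the two operations as separate invocations, sorting the index first and then analyzing:

[root@host]# myisamchk -vvv --sort-index test.MYI
[root@host]# myisamchk -vvv --analyze test.MYI

Each pass operates on the same .MYI index file, so running them back to back costs little more than the combined command would, and this way each option actually takes effect.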

Friday, August 10, 2012

Privacy Policies are Worthless

I'm going to go off topic here and talk about a privacy experience I had.  I've chosen to keep the name of the company anonymous.

Recently my company changed payroll services and all pay stubs are now stored electronically. No more paper mail, whoopee!  Each employee was sent their temporary login credentials via snail mail, and I somehow misplaced mine.  So I did what any other person would do: e-mailed the company asking how to reset my username and password. To my shock, it was way too easy.

As you can see, my initial e-mail was very basic, simply asking how to obtain my login credentials.  I figured the response would be something like, 1) go to this site and do these six steps, or 2) call this number and we'll mail you a new login.  To my surprise I received a reply back within 20 minutes containing my login credentials; I was not asked to verify my name, mailing address or the last four digits of my SSN.  I replied back with my concerns, and today I received a response from the payroll company's president:

I'm glad he apologized and recognized the security risk as I did, and I'm sure they will correct the issue internally.

However - this just shows that information may be encrypted, then put on an encrypted disk, in a data center with locked cages, multiple keycard passes and gates, in a bunker under a mountain that's monitored by hundreds of people.  But that doesn't mean the human sitting at the help desk, answering e-mails with access to that highly protected information, knows how to handle it.

I find this almost hilarious - it's never the computer that says, "Oops! I left my tape backup in the car unencrypted and the car was stolen!"  It is us humans who make the mistakes, and it always seems to take one bad breach of protected information before things change...why are we reactive rather than proactive?

To top it off, I found this pocket-size book to help me remember my secret passwords at the Hallmark checkout line while buying a birthday card.  I love the fact that it says, "a confidential handbook" and "keep in a secure place"; it's like whoever gets their hands on it will see those words and not read it!

Wednesday, April 11, 2012

MySQL 2012 Conference Keynote

I just came from the 2012 MySQL Conference keynote inspired.

Peter Zaitsev, Co-founder and CEO of Percona, kicked off the keynote giving a "state of the union" of MySQL and how the 2012 MySQL Conference almost didn't happen after the acquisition of MySQL by Oracle and O'Reilly dropping sponsorship.  Read more here.

Baron Schwartz, Chief Performance Architect, then followed with a less technical and more personal presentation of his own role in the MySQL community and how he got there: leaving an "Office Space"-like job programming VB6 and ASP to work for a smaller startup company using open source software.  Baron encouraged the attendees to be inspired and to work within the community to solve everyday problems by building open source software.

Mårten Mickos, CEO of Eucalyptus Systems and previously CEO of MySQL AB, discussed the history of database servers and his perspective on where MySQL is going and its role in the cloud.

Brian Aker, Fellow at HP, previously the CTO of Data Differential, creator of Drizzle, a Sun Microsystems Distinguished Engineer, and the former Director of Architecture for MySQL, then gave an overview of “Servicing Databases for the Cloud” and announced HP's Open Cloud running OpenStack.  When Brian speaks you want to listen; his views and opinions typically become reality and rules.

Monday, April 9, 2012

2012 MySQL Conference

I'm writing this blog post from a wifi-enabled Boeing 737 30,000 feet above our wonderful planet - oh how far we have come - filled with excitement for the rest of the week, as I have a free ticket to the MySQL 2012 Conference presented by Percona.

A few weeks ago I was notified by Baron Schwartz that I was one of the Percona ticket winners!  My company - thank you Doug - was gracious enough to put me on a plane and allow me to spend the week in sunny California sharpening my MySQL DBA skills.  Unfortunately for my wife, I'm gone and she has to deal with our two kids all by her lonely self - ok the kids are really dogs but still - sorry Laura.

This morning I spent about an hour going over the conference schedule and tutorials and I do have to say, what a show.  If Percona had a motion picture trailer to promote the conference, it would be a blockbuster showing of movie stars, effects and promises of huge explosions, drama and romance!  I have high expectations and I know the tutorials and conference will be a huge success just because of the individuals involved.  Percona and the MySQL community have put a lot of effort into planning and promoting the event, so thank you Percona and MySQL community!

A few events I'm planning on attending:

Tuesday Tutorials: 
  • InnoDB and XtraDB Architecture and Performance Optimization
    I believe the more I understand about the inner workings of InnoDB, the better DBA I will be.  I do hope InnoDB's global kernel mutex locking is truly fixed in 5.6.
  • Linux and H/W Optimization for MySQL
    To me you have to have a solid foundation to start with, and H/W is key to better performance; I hope to learn a little bit about SSDs here.
  • BoF: Percona XtraDB Cluster
    Groundbreaking? Maybe. I'm skeptical about XtraDB Cluster because it falls short just like a lot of the other clustering solutions for MySQL - for example, InnoDB only, no MEMORY temporary table support.  I could be wrong - but I haven't found a solution that is just plug-and-play for MySQL, meaning I don't have to change my app to make it work.  I hope to learn more here and get some insight into the roadmap.
Wednesday Day 1 Conference:
  • MySQL Plugins - why should I bother?
    I've heard of them and I use them, but I have no idea how plugins could be used further.  I have my own ideas on what they can do; let's see how easy it is to build my own plugin...I see a blog post here.
  • Getting InnoDB Compression ready for Facebook
    I use compression in my own application for HIPAA audit logs and it's been great; however, I know there are issues regarding performance and I have not moved the main application data yet - maybe FB has some tricks up their sleeve.
  • Diagnosing Intermittent Performance Problems
    We have all been there - those Zabbix or Nagios pages that report high load or too many threads for a brief period of time.  Baron gives great talks about collecting, aggregating, visualizing and processing data to diagnose server problems.  Looking forward to this one.
I will be trying to blog the tutorials and talks, but no promises.

Monday, February 27, 2012

Problems with CentOS CFQ Default IO Scheduler

Don't get burned by the RedHat/CentOS default I/O scheduler, CFQ.  A few weeks ago this exact thing happened to me.  My company is starting to standardize on CentOS as our default install for new servers; in the past we always built custom Linux kernels and packages, and one of the defaults for us was to use the deadline scheduler.  However, this approach did not fit well with what the rest of the community was using - we found ourselves compiling packages that were readily available in repositories such as yum.

Before we put a server into production, a set of benchmarks is run, typically sysbench fileio and OLTP.  The baseline benchmark results were outstanding and showed no bottlenecks for any of the test workloads within our thread count range.  However, once the server was put into production it started to stall at times.  I switched back to a tried and true slave server and the problems disappeared.
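The benchmarks were roughly of this shape (the file size, thread count and time limits here are illustrative, not the actual values we used):

[root@host]# sysbench --test=fileio --file-total-size=16G prepare
[root@host]# sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrw --num-threads=16 --max-time=300 --max-requests=0 run
[root@host]# sysbench --test=oltp --mysql-user=root --mysql-db=sbtest --oltp-table-size=1000000 prepare
[root@host]# sysbench --test=oltp --mysql-user=root --mysql-db=sbtest --num-threads=16 --max-time=300 --max-requests=0 run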

I was perplexed - what was going on here?  At first the issue appeared to be related to the well known InnoDB Global Kernel Mutex issue in MySQL 5.0, 5.1 and 5.5, but as I started looking into our Cacti graphing I noticed that the InnoDB I/O Pending stats on the new server (db3) were much higher than on our tried and true server (db1).
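If you don't have Cacti graphing these counters, they correspond to the "Pending" lines in the FILE I/O section of SHOW ENGINE INNODB STATUS, so you can spot-check them from the shell:

[root@host]# mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -i pending

Consistently non-zero pending reads and writes there are a good hint that the I/O subsystem is not keeping up.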

Here is the Cacti graph on MySQL InnoDB I/O Pending:

During the same peak time, but on a different day, db1 had less pending I/O than db3.  Something must be different between the two servers, but what?  The best way I know of to get server config info is to run pt-summary on each server and then compare the results.  If you are not familiar with pt-summary, you're missing out!  Percona's pt-summary made the problem obvious: db3 was running the default CentOS CFQ I/O scheduler!  After making the switch to the deadline scheduler, the server's performance has been stable.
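pt-summary reads the scheduler straight out of sysfs, and you can check it yourself the same way (sda is a placeholder for whatever device backs your data directory; the scheduler shown in square brackets is the one in use, and the exact list varies by kernel):

[root@host]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]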

There are a plethora of blog posts on why CFQ is bad for MySQL workloads, and here are a few I found that convinced me this was the issue:

But why did this happen in the first place?

When I initially benchmarked the server, I explicitly set the I/O scheduler to deadline - it's just something in my benchmark script that happens automatically.  As a new user of CentOS, I wasn't aware the default scheduler was CFQ.  When the server was rebooted, the I/O scheduler was switched back to the default CFQ scheduler...BURN!
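The runtime switch in the benchmark script is presumably nothing more than this (again, the device name is a placeholder):

[root@host]# echo deadline > /sys/block/sda/queue/scheduler

That change takes effect immediately but lives only in memory, which is exactly why a reboot quietly put CFQ back.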


If you are running CentOS for a dedicated MySQL server, be sure to set the default I/O scheduler to deadline or noop in your /boot/grub/grub.conf kernel parameters.  Simply add the elevator parameter to the end of the kernel line.
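Something along these lines (the kernel version and root device are placeholders, not from my actual config):

kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root elevator=deadline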


Tuesday, January 24, 2012

The Never Ending Query

Last night, I got a page from Zabbix warning me of a thread count threshold hit. I was cooking dinner; I left the stovetop and walked over into my home office. I run innotop pretty much all of the time on our master server.

Here is a small screenshot of what I saw:

These three queries had been running for over 5 hours, and I would be willing to bet that these queries would NEVER finish.  But why?
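For anyone wanting the same check without innotop, the processlist reports each statement's runtime in seconds, so something like this (a generic sketch, not the exact command I used that night) lists anything that has been running longer than five hours:

[root@host]# mysql -e "SHOW FULL PROCESSLIST" | awk -F'\t' 'NR == 1 || $6 > 18000'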

More to come.