Friday, August 10, 2012

Privacy Policies are Worthless

I'm going to go off topic here and talk about a privacy experience I had.  I've chosen to keep the name of the company anonymous.

Recently my company changed payroll services, and all pay stubs are now stored electronically. No more paper mail, whoopee!  Each employee was sent temporary login credentials via snail mail, and I somehow misplaced mine.  So I did what any other person would do: I e-mailed the company asking how to reset my username and password. To my shock, it was way too easy.



As you can see, my initial e-mail was very basic, simply asking how to obtain my login credentials.  I figured the response would be something like 1) go to this site and follow these six steps, or 2) call this number and we'll mail you a new login.  To my surprise, I received a reply within 20 minutes containing my login credentials. I was not asked to verify my name, mailing address, or the last four digits of my SSN.  I replied with my concerns, and today I received a response from the payroll company's president:


I'm glad he apologized and recognized the security risk as I did, and I'm sure they will correct the issue internally.

However, this just shows that information may be encrypted, then put on an encrypted disk, in a data center with locked cages, multiple keycard passes and gates, in a bunker under a mountain that's monitored by hundreds of people.  None of that matters if the human sitting at the help desk, answering e-mails with access to that highly protected information, doesn't know how to handle it.

I find this almost hilarious: it's never the computer that says, "Oops! I left my tape backup in the car unencrypted and the car was stolen!"  It is us humans who make the mistakes, and it always seems to take one bad breach of protected information before things change. Why are we reactive rather than proactive?

To top it off, I found this pocket-size book for remembering your secret passwords in the Hallmark checkout line while buying a birthday card.   I love the fact that it says "a confidential handbook" and "keep in a secure place", as if whoever gets their hands on it will see those words and not read it!


Wednesday, April 11, 2012

MySQL 2012 Conference Keynote

I just came away from the perconalive.com 2012 MySQL Conference keynote inspired.


Peter Zaitsev, Co-founder and CEO of Percona, kicked off the keynote with a "state of the union" of MySQL and how the 2012 MySQL Conference almost didn't happen after the acquisition of MySQL by Oracle and O'Reilly dropping sponsorship.  Read more here.


Baron Schwartz, Chief Performance Architect, then followed with a less technical and more personal presentation about his own role in the MySQL community and how he got there: leaving an "Office Space"-like job programming VB6 and ASP to work for a smaller startup company using open source software.  Baron encouraged attendees to be inspired and to work within the community to solve everyday problems by building open source software.


Mårten Mickos, CEO of Eucalyptus Systems and previously CEO of MySQL AB, discussed the history of database servers, his perspective on where MySQL is going, and its role in the cloud.


Brian Aker, Fellow at HP, previously the CTO of Data Differential, creator of Drizzle, a Sun Microsystems Distinguished Engineer, and the former Director of Architecture for MySQL, then gave an overview of “Servicing Databases for the Cloud” and announced HP's Open Cloud running OpenStack.  When Brian speaks you want to listen; his views and opinions typically become reality.

Monday, April 9, 2012

2012 MySQL Conference

I'm writing this blog post from a wifi-enabled Boeing 737 at 30,000 feet above our wonderful planet (oh how far we have come), filled with excitement for the rest of the week, as I have a free ticket to the MySQL 2012 Conference presented by Percona.

A few weeks ago I was notified by Baron Schwartz that I was one of the Percona ticket winners!  My company (thank you, Doug) was gracious enough to put me on a plane and allow me to spend the week in sunny California sharpening my MySQL DBA skills.  Unfortunately for my wife, I'm gone and she has to deal with our two kids all by her lonesome. OK, the kids are really dogs, but still. Sorry, Laura.

This morning I spent about an hour going over the conference schedule and tutorials, and I have to say, what a show.  If Percona made a motion picture trailer to promote the conference, it would be a blockbuster full of movie stars, effects, and promises of huge explosions, drama and romance!  I have high expectations, and I know the tutorials and conference will be a huge success just because of the individuals involved.  Percona and the MySQL community have put a lot of effort into planning and promoting the event, so thank you Percona and MySQL community!

A few events I'm planning on attending:

Tuesday Tutorials: 
  • Innodb and XtraDB Architecture and Performance Optimization
    I believe the more I understand about the inner workings of InnoDB, the better DBA I will be.  I do hope InnoDB's global kernel mutex locking is truly fixed in 5.6.
  • Linux and H/W Optimization for MySQL
    To me, you have to start with a solid foundation, and hardware is key to better performance; I hope to learn a little bit about SSDs here.
  • BoF: Percona XtraDB Cluster
    Groundbreaking? Maybe. I'm skeptical about XtraDB Cluster because it falls short just like a lot of the other clustering solutions for MySQL.  For example, InnoDB only, and no MEMORY temporary table support.  I could be wrong, but I haven't found a solution that is just plug-and-play for MySQL, meaning I don't have to change my app to make it work.  I hope to learn more here and get some insight into the roadmap.
Wednesday Day 1 Conference:
  • MySQL Plugins - why should I bother?
    I've heard of them and I use them, but I have no idea how plugins could be used further.  I have my own ideas on what they can do; let's see how easy it is to build my own plugin...I see a blog post here.
  • Getting InnoDB Compression ready for Facebook
    I use InnoDB compression in my own application for HIPAA audit logs and it's been great; however, I know there are performance concerns and I have not moved to main application data yet. Maybe FB has some tricks up their sleeve. (See the sketch after this list for the kind of compressed table I mean.)
  • Diagnosing Intermittent Performance Problems
    We have all been there: those Zabbix or Nagios pages that report high load or too many threads for a brief period of time.  Baron gives great talks about collecting, aggregating, visualizing, and processing data to diagnose server problems.  Looking forward to this one.
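Since I mentioned compressed audit log tables above, here is a minimal sketch of what such a table looks like. The table and column names are made up, and it assumes innodb_file_per_table is enabled and the Barracuda file format is in use:

-- prerequisites for InnoDB compression (assumed settings for this sketch):
--   innodb_file_per_table = 1
--   innodb_file_format    = Barracuda
CREATE TABLE audit_log_compressed (
  id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  logged_at DATETIME NOT NULL,
  user_id   INT UNSIGNED NOT NULL,
  detail    TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;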
I will be trying to blog the tutorials and talks, but no promises.

Monday, February 27, 2012

Problems with CentOS CFQ Default IO Scheduler

Don't get burned by RedHat/CentOS's default I/O scheduler, CFQ.  A few weeks ago this exact thing happened to me.  My company is starting to standardize on CentOS as our default install for new servers.  In the past we always built custom Linux kernels and packages, and one of our defaults was to use the deadline scheduler.  However, that approach did not fit well with what the rest of the community was using; we found ourselves compiling packages that were readily available in repositories such as yum.

Before we put a server into production, a set of benchmarks is run, typically sysbench fileio and OLTP.  The baseline benchmark results were outstanding and showed no bottlenecks for any of the test workloads within our thread count range.  However, once the server was put into production, it started to stall at times.  I switched back to a tried and true slave server and the problems disappeared.

I was perplexed: what was going on here?  At first the issue appeared to be related to the well-known InnoDB global kernel mutex issue in MySQL 5.0, 5.1 and 5.5, but as I started looking into our Cacti graphing I noticed that the InnoDB I/O pending stats on the new server (db3) were much higher than on our tried and true server (db1).

Here is the Cacti graph on MySQL InnoDB I/O Pending:


During the same peak time, but on a different day, db1 had less pending I/O than db3.  Something must be different between the two servers, but what?  The best way I know of to get server config info is to run pt-summary on each server and then compare the results.  If you are not familiar with pt-summary, you're missing out!  Percona's pt-summary made the problem obvious: db3 was running the default CentOS CFQ I/O scheduler!  After switching to the deadline scheduler, the server's performance has been stable.
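For reference, here is the quick check I now do by hand (pt-summary reports the same thing); "sda" is just an example device name, adjust for your disks:

# the active scheduler is shown in [brackets], e.g. noop anticipatory deadline [cfq]
cat /sys/block/sda/queue/scheduler
# switch at runtime - this change does not survive a reboot (see the conclusion below)
echo deadline > /sys/block/sda/queue/scheduler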

There are a plethora of blog posts on why CFQ is bad for MySQL workloads, and here are a few I found that convinced me this was the issue:


But why did this happen in the first place?

When I initially benchmarked the server, I explicitly set the I/O scheduler to deadline; it's just something my benchmark script does automatically.  As a new user of CentOS, I wasn't aware the default scheduler was CFQ, so when the server was rebooted the I/O scheduler switched back to the default CFQ scheduler...BURN!

Conclusion

If you are running CentOS on a dedicated MySQL server, be sure to set the default I/O scheduler to deadline or noop in your /boot/grub/grub.conf kernel parameters.  Simply add the following to the end of the kernel line.

elevator=deadline
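For example, a CentOS 6 grub.conf stanza would end up looking something like this (the kernel version and root device shown are illustrative, yours will differ):

title CentOS
    root (hd0,0)
    kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/VolGroup/lv_root elevator=deadline
    initrd /initramfs-2.6.32-220.el6.x86_64.img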

Tuesday, January 24, 2012

The Never Ending Query

Last night I got a page from Zabbix warning me that a thread count threshold had been hit. I was cooking dinner, left the stovetop, and walked over into my home office. I run innotop pretty much all of the time on our master server.

Here is a small screenshot of what I saw:



These three queries had been running for over 5 hours, and I would be willing to bet that they would NEVER finish.  But why?
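If you don't keep innotop open, a query like the following against INFORMATION_SCHEMA (available in 5.1 and later) will show the same thing; the 18000-second cutoff is just 5 hours expressed in seconds:

SELECT id, user, db, time, state, LEFT(info, 80) AS query
FROM information_schema.PROCESSLIST
WHERE command = 'Query' AND time > 18000
ORDER BY time DESC;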

More to come.

Tuesday, January 10, 2012

OOM - Who gets thrown off the plane first?

Just today, one of our production servers exhausted all of its 48G of memory, and the OOM killer threw the largest offender, mysqld at 38G, off the plane.

So how do we avoid this? How can we tell OOM that mysqld is the pilot of the plane, and that if you throw it off we'll crash for sure?

I've researched this once before (here and here), and Linux has an option in /proc to adjust the OOM condition level.

Users and system administrators have often asked for ways to control the behavior of the OOM killer. To facilitate control, the /proc/<pid>/oom_adj knob was introduced to save important processes in the system from being killed, and to define an order of processes to be killed. The possible values of oom_adj range from -17 to +15. The higher the score, the more likely the associated process is to be killed by the OOM-killer. If oom_adj is set to -17, the process is not considered for OOM-killing.

So, we want OOM to ignore mysqld by setting the value to -17.
echo -17 > /proc/`pgrep ^mysqld$`/oom_adj

Searching for "oom_adj" I ran across this from kernel.org, which suggests that sometime in 2012 or later the oom_adj option will be removed in favor of oom_score_adj.
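On kernels that have already made that switch, the equivalent setting would look like this (oom_score_adj ranges from -1000 to +1000, and -1000 exempts the process entirely):

echo -1000 > /proc/`pgrep ^mysqld$`/oom_score_adj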

The easiest fix to my problem with OOM is to create a cron job that runs every 5 minutes, checks the oom_adj value of the mysqld process, and sets it to -17. I believe Percona may add a daemon to do this for us in the future.
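A minimal sketch of such a cron job, written as an /etc/cron.d entry (the file name and exact command here are mine, adjust to taste):

# /etc/cron.d/mysqld-oom - re-apply oom_adj for mysqld every 5 minutes
*/5 * * * * root for pid in `pgrep ^mysqld$`; do echo -17 > /proc/$pid/oom_adj; done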

Wednesday, January 4, 2012

Selecting fewer columns yields faster queries

While writing about ORDER BY and LIMIT in my last post, I started to notice an increase in speed when the number of columns requested was limited to just a few. So I decided to run some tests to see if requesting only a few columns yields faster results. My gut tells me it's a no-brainer and the answer is yes, but let's run a few tests just to show how much faster.
mysql> flush status;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT p.pat_id,p.revision_number,p.gurantor_id,p.extern_id1,p.first_name,p.first_name_phn,p.last_name,p.last_name_phn,p.middle_name,p.preferred_first_name,p.title,p.degree,p.suffix,p.address1,p.address2,p.address3,p.county,p.city,p.state,p.zip_code,p.country,p.home_phone,p.work_phone,p.cell_phone,p.fax_number,p.alternate_phone,p.email,p.race,p.sex,p.ssn,p.sin,p.marital_status,p.spouse_name,p.spouse_birthdate,p.emergency_contact,p.emergency_phone,p.employment_status,p.employer_uid,p.employer_name,p.employer_addr1,p.employer_addr2,p.employer_city,p.employer_state,p.employer_zipcode,p.employer_country,p.interface,p.birth_date,p.death_indicator,p.death_date,p.chart_online,p.active,p.is_patient,p.is_tmp, p.signature_date,p.create_date,p.edit_date,p.revised_by, pp.name, pp.partition, pm.mrnumber, IF(pl.user_id IS NULL OR pul.user_id IS NOT NULL, 0, 1) AS locked 
    -> FROM patient_mrns AS pm 
    -> LEFT JOIN patient_partitions AS pp ON pm.partition=pp.partition 
    -> LEFT JOIN patients AS p ON p.pat_id=pm.pat_id 
    -> LEFT JOIN patient_restrictions AS pl ON p.pat_id=pl.pat_id 
    -> LEFT JOIN patient_restrictions AS pul ON p.pat_id=pul.pat_id AND pul.user_id=8554 
    -> WHERE pp.echart_opts != 'H' AND pm.mrnumber LIKE '1%' AND pp.echart_opts!='H' 
    -> GROUP BY pm.partition,pm.mrnumber ORDER BY p.last_name,p.first_name,pp.part_order ASC LIMIT 10;

...

10 rows in set (3.17 sec)

mysql> show status like 'Handler%';
+----------------------------+--------+
| Variable_name              | Value  |
+----------------------------+--------+
| Handler_commit             | 1      |
| Handler_delete             | 0      |
| Handler_discover           | 0      |
| Handler_prepare            | 0      |
| Handler_read_first         | 1      |
| Handler_read_key           | 575855 |
| Handler_read_next          | 760721 |
| Handler_read_prev          | 0      |
| Handler_read_rnd           | 10     |
| Handler_read_rnd_next      | 255326 |
| Handler_rollback           | 0      |
| Handler_savepoint          | 0      |
| Handler_savepoint_rollback | 0      |
| Handler_update             | 0      |
| Handler_write              | 255325 |
+----------------------------+--------+
15 rows in set (0.00 sec)

mysql> show status like 'Created_tmp_%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1     |
| Created_tmp_files       | 3     |
| Created_tmp_tables      | 1     |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> show status like 'Sort_%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 4     |
| Sort_range        | 0     |
| Sort_rows         | 10    |
| Sort_scan         | 1     |
+-------------------+-------+
4 rows in set (0.00 sec)
Now with just one selected column:
mysql> flush status;
Query OK, 0 rows affected (0.00 sec)
 
mysql> SELECT p.pat_id
    -> FROM patient_mrns AS pm 
    -> LEFT JOIN patient_partitions AS pp ON pm.partition=pp.partition 
    -> LEFT JOIN patients AS p ON p.pat_id=pm.pat_id 
    -> LEFT JOIN patient_restrictions AS pl ON p.pat_id=pl.pat_id 
    -> LEFT JOIN patient_restrictions AS pul ON p.pat_id=pul.pat_id AND pul.user_id=8554 
    -> WHERE pp.echart_opts != 'H' AND pm.mrnumber LIKE '1%' AND pp.echart_opts!='H' 
    -> GROUP BY pm.partition,pm.mrnumber ORDER BY p.last_name,p.first_name,pp.part_order ASC LIMIT 10;

...

10 rows in set (1.91 sec)

mysql> 
mysql> show status like 'Handler%';
+----------------------------+--------+
| Variable_name              | Value  |
+----------------------------+--------+
| Handler_commit             | 1      |
| Handler_delete             | 0      |
| Handler_discover           | 0      |
| Handler_prepare            | 0      |
| Handler_read_first         | 1      |
| Handler_read_key           | 575852 |
| Handler_read_next          | 760721 |
| Handler_read_prev          | 0      |
| Handler_read_rnd           | 10     |
| Handler_read_rnd_next      | 192145 |
| Handler_rollback           | 0      |
| Handler_savepoint          | 0      |
| Handler_savepoint_rollback | 0      |
| Handler_update             | 0      |
| Handler_write              | 192144 |
+----------------------------+--------+
15 rows in set (0.00 sec)

mysql> show status like 'Created_tmp_%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 3     |
| Created_tmp_tables      | 1     |
+-------------------------+-------+
3 rows in set (0.00 sec)

mysql> show status like 'Sort_%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 4     |
| Sort_range        | 0     |
| Sort_rows         | 10    |
| Sort_scan         | 1     |
+-------------------+-------+
4 rows in set (0.00 sec)

Conclusion

  • Query 1: 10 rows in set (3.17 sec)
  • Query 2: 10 rows in set (1.91 sec)
That's 1.26 seconds (roughly 1.66x) faster simply by reducing the number of columns in the result.

You can also see that on the second query Created_tmp_disk_tables=0 and Handler_write=192144, which is (255325 - 192144) = 63181 fewer writes than the first query.

So, the next time you decide to return all the columns from several tables while doing GROUP BY and ORDER BY, think twice. If your schema is in third normal form, one way to fix this is with a derived table that returns only a unique or primary key (and, if you can, a LIMIT). This keeps the tmp table used for sorting very small. Then join that back to the tables with the columns needed for the result using the primary key.
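As a rough sketch of that rewrite against the query above (the patient_restrictions joins and the locked column are dropped here for brevity, so treat this as the shape of the fix rather than a drop-in replacement):

SELECT p.*    -- or the explicit column list you actually need
FROM (
    SELECT pm.pat_id
    FROM patient_mrns AS pm
    LEFT JOIN patient_partitions AS pp ON pm.partition=pp.partition
    LEFT JOIN patients AS p2 ON p2.pat_id=pm.pat_id
    WHERE pp.echart_opts != 'H' AND pm.mrnumber LIKE '1%'
    GROUP BY pm.partition, pm.mrnumber
    ORDER BY p2.last_name, p2.first_name, pp.part_order
    LIMIT 10
) AS ids
LEFT JOIN patients AS p ON p.pat_id=ids.pat_id;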

MySQL LIMIT slows down query with ORDER BY

Over the last few months a slow query has been showing up, and I couldn't explain why it was so slow. Just today I finally had some time to investigate the query. First, all of my servers run the Percona build with full slow query logging turned on; this is nice because you get details about why the query showed up in the slow query log. Before we look at the query, let me tell you what it's for. This query drives doctor name autocomplete within an X mile radius of a given zip code. Typically we have both last and first name; however, the cardinality of just the first couple of characters is low, and we start searching as soon as the user has typed at least two characters. This doesn't do well when your last name is 'Smith'.
Here is the query:
# Time: 120104  9:51:19
# User@Host: xxx[xxx] @  [xxxx]
# Thread_id: 4896448  Schema: xxx  Last_errno: 0  Killed: 0
# Query_time: 9.399905  Lock_time: 0.000081  Rows_sent: 1  Rows_examined: 2845320  Rows_affected: 0  Rows_read: 2
# Bytes_sent: 3987  Tmp_tables: 1  Tmp_disk_tables: 0  Tmp_table_sizes: 127800
# InnoDB_trx_id: D2B50B88
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
# Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0
#   InnoDB_IO_r_ops: 4  InnoDB_IO_r_bytes: 65536  InnoDB_IO_r_wait: 0.015564
#   InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
#   InnoDB_pages_distinct: 38054

SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id ORDER BY z.distance ASC, u.last_name ASC, u.first_name, u.user_id ASC LIMIT 50;
Let's look at the query handler stats:
mysql> flush status;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT u.user_id,u.revision_number,u.universal_id,u.dea_number,u.nat_pro_id,u.license_number,u.tax_id_number,u.status,u.username,u.alias,u.pin,u.password,u.pincode,u.realm,u.organization,u.last_name,u.last_name_phone,u.first_name,u.first_name_phone,u.middle_name,u.title,u.suffix,u.degree,u.address1,u.address2,u.city,u.state,u.zip,u.tz_id,u.home_phone,u.work_phone,u.cell_phone,u.cellco_id,u.fax_number,u.pager,u.email,u.preferred_alert_method,u.edited_by,u.edit_date,u.create_date,u.pwd_expire,u.login_expire,u.login_failcount,u.login_lastfailure,u.security_role_id,u.preferred_route,u.route_comment,u.preferred_route_detail,u.sig_storage_id,u.pic_storage_id,u.securityquestion,u.securityanswer,u.termschecked,u.terms_version,u.terms_dt,u.user_role_id,u.spi FROM users u INNER JOIN rxdb_v1.zip_100 z ON left(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'G%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id ORDER BY z.distance ASC, u.last_name ASC, u.first_name, u.user_id ASC LIMIT 50;

...

10 rows in set (10.94 sec)

mysql>  show status like 'Handler%';
+----------------------------+---------+
| Variable_name              | Value   |
+----------------------------+---------+
| Handler_commit             | 1       |
| Handler_delete             | 0       |
| Handler_discover           | 0       |
| Handler_prepare            | 0       |
| Handler_read_first         | 1       |
| Handler_read_key           | 623     |
| Handler_read_next          | 2845328 |
| Handler_read_prev          | 0       |
| Handler_read_rnd           | 10      |
| Handler_read_rnd_next      | 11      |
| Handler_rollback           | 0       |
| Handler_savepoint          | 0       |
| Handler_savepoint_rollback | 0       |
| Handler_update             | 0       |
| Handler_write              | 10      |
+----------------------------+---------+
15 rows in set (0.00 sec)
  • Handler_read_key is 623, which means 623 reads of a row based on an index key were initiated.
  • Handler_read_next is 2845328, which means 2845328 rows were examined while scanning along the index, ouch!
  • Handler_read_rnd=10 means there were 10 rows read directly by position for sorting.
  • Handler_read_rnd_next=11 means MySQL had to read the next row in the data file 11 times; a high value here is an indication of poor indexing.
Why so many rows? Let's run EXPLAIN on the query:
mysql> EXPLAIN SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id ORDER BY z.distance ASC, u.last_name ASC, u.first_name, u.user_id ASC LIMIT 50;
+----+-------------+-------+--------+------------------------------------------------------+---------+---------+------------+---------+----------------------------------------------+
| id | select_type | table | type   | possible_keys                                        | key     | key_len | ref        | rows    | Extra                                        |
+----+-------------+-------+--------+------------------------------------------------------+---------+---------+------------+---------+----------------------------------------------+
|  1 | SIMPLE      | u     | index  | idx5,idx4,first_name_idx,status_name,idx2,status_fax | PRIMARY | 4       | NULL       | 2881512 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | z     | eq_ref | PRIMARY,distidx                                      | PRIMARY | 14      | const,func |       1 | Using where                                  |
+----+-------------+-------+--------+------------------------------------------------------+---------+---------+------------+---------+----------------------------------------------+
2 rows in set (0.00 sec)

The users table has a plethora of indexes on it, 30 to be exact, to satisfy most if not all of the name autocomplete queries. However, why is this particular query using the users table's primary key, the auto increment column named user_id? I can't help but think this query is doomed from the start, especially when it's examining almost 3 million rows; looking at the Extra column, we have "Using where; Using temporary; Using filesort". MySQL is doing a lot of extra work just to return 1 row in the result.

How do we proceed? Personally, I start by making the query as simple as possible by removing the GROUP BY, ORDER BY, and LIMIT clauses. (I can always hit ctrl+c to stop a large query result.)

Let's start by removing the ORDER BY:
mysql> SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id LIMIT 50;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (2.78 sec)
Much better, we're only examining 6125 rows; however, the query still takes 2.78 sec to complete. It's better than 9 sec, but not good enough yet. Let's remove the GROUP BY:
mysql> SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 LIMIT 50;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.05 sec)
Interesting, more rows examined but the query returned in 5/100 of a sec...what gives? Let's run the query with the GROUP BY again, but without the LIMIT:
mysql> EXPLAIN SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id;
+----+-------------+-------+--------+------------------------------------------------------+---------+---------+------------+-------+----------------------------------------------+
| id | select_type | table | type   | possible_keys                                        | key     | key_len | ref        | rows  | Extra                                        |
+----+-------------+-------+--------+------------------------------------------------------+---------+---------+------------+-------+----------------------------------------------+
|  1 | SIMPLE      | u     | range  | idx5,idx4,first_name_idx,status_name,idx2,status_fax | idx5    | 204     | NULL       | 22344 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | z     | eq_ref | PRIMARY,distidx                                      | PRIMARY | 14      | const,func |     1 | Using where; Using index                     |
+----+-------------+-------+--------+------------------------------------------------------+---------+---------+------------+-------+----------------------------------------------+
2 rows in set (0.00 sec)

mysql> SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.05 sec)
Same result. Is the GROUP BY + LIMIT combination the problem? Let's now add the ORDER BY but exclude the LIMIT clause:
mysql> SELECT 1 FROM users u INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804' WHERE u.first_name LIKE 'Lisa%' AND last_name LIKE 'Gu%' AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') AND u.status!=0 GROUP BY u.user_id ORDER BY z.distance ASC, u.last_name ASC, u.first_name, u.user_id ASC;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.05 sec)
Same result! It must be the LIMIT.

So this is very interesting: LIMIT is causing some bad performance problems here, but why? I searched around, and Peter Zaitsev blogged about ORDER BY ... LIMIT Performance Optimization.

Rather than using a UNION like Peter suggests, let's rewrite this query as a derived table. What I'll do first is get a list of users based on the autocomplete search criteria. Note: we want to make the derived table as small as possible to reduce the amount of memory needed for an in-memory tmp table, so I select only the user_id and zip code. Then I'll join to zip_100 to limit the result, and finally left join to the users table again to get all of the column data.

The new and improved query:
mysql> flush status;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT u.user_id,u.revision_number,u.universal_id,u.dea_number,u.nat_pro_id,u.license_number,u.tax_id_number,u.status,u.username,u.alias,u.pin,u.password,u.pincode,u.realm,u.organization,u.last_name,u.last_name_phone,u.first_name,u.first_name_phone,u.middle_name,u.title,u.suffix,u.degree,u.address1,u.address2,u.city,u.state,u.zip,u.tz_id,u.home_phone,u.work_phone,u.cell_phone,u.cellco_id,u.fax_number,u.pager,u.email,u.preferred_alert_method,u.edited_by,u.edit_date,u.create_date,u.pwd_expire,u.login_expire,u.login_failcount,u.login_lastfailure,u.security_role_id,u.preferred_route,u.route_comment,u.preferred_route_detail,u.sig_storage_id,u.pic_storage_id,u.securityquestion,u.securityanswer,u.termschecked,u.terms_version,u.terms_dt,u.user_role_id,u.spi 
FROM
(SELECT u.user_id,LEFT(u.zip,5) AS zip FROM users u 
 WHERE 
 u.first_name LIKE 'Lisa%'
 AND last_name LIKE 'G%' 
 AND u.realm IN ('Fax-MD/Clinics','Fax-Pharmacy (All)','Fax-Pharmacy (Local)','Fax-Hospitals','Fax-Locations','Fax-Dentists','Fax-Schools') 
 AND u.status!=0
 ORDER BY u.last_name, u.first_name
) AS ud
INNER JOIN rxdb_v1.zip_100 z ON z.toz=ud.zip AND z.fromz='46804'
LEFT JOIN users u ON u.user_id=ud.user_id
GROUP BY u.user_id ORDER BY z.distance LIMIT 50;

...

10 rows in set (0.07 sec)

mysql>  show status like 'Handler%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Handler_commit             | 1     |
| Handler_delete             | 0     |
| Handler_discover           | 0     |
| Handler_prepare            | 0     |
| Handler_read_first         | 0     |
| Handler_read_key           | 636   |
| Handler_read_next          | 14909 |
| Handler_read_prev          | 0     |
| Handler_read_rnd           | 10    |
| Handler_read_rnd_next      | 592   |
| Handler_rollback           | 0     |
| Handler_savepoint          | 0     |
| Handler_savepoint_rollback | 0     |
| Handler_update             | 0     |
| Handler_write              | 590   |
+----------------------------+-------+
15 rows in set (0.00 sec)

Much better, only 7/100 of a sec.

Conclusion

In my case, the fix in the application was to simply remove the GROUP BY. However, there are cases where the GROUP BY is needed, and then the only real fix is to 1) use the derived-table (subselect) rewrite above, or 2) change the application to do two queries.
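For completeness, option 2 would look roughly like this: a first query that returns only the keys, and a second query issued by the application for the full rows (the realm filter is trimmed for brevity and the IN list values in the second query are made up):

-- Query 1: get just the ids, keeping the sort/tmp work tiny
SELECT u.user_id
FROM users u
INNER JOIN rxdb_v1.zip_100 z ON LEFT(u.zip,5)=z.toz AND z.fromz='46804'
WHERE u.first_name LIKE 'Lisa%' AND u.last_name LIKE 'G%' AND u.status!=0
ORDER BY z.distance, u.last_name, u.first_name
LIMIT 50;

-- Query 2: fetch the full column list for only those ids
SELECT * FROM users WHERE user_id IN (1001, 1002, 1003);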