Recompile Dovecot with Vpopmail on Debian

Here's a script I use to quickly recompile dovecot with vpopmail support on Debian 6.0. I'm using the backports repository to get the 2.1 version of dovecot.

I run the following script every time an update brings in a new version of the dovecot packages.

  #!/bin/bash

  # use an absolute path so the find at the end still works after we cd around
  BDIR=$PWD/dovecot.$(date +%Y-%m-%d_%H-%M-%S)

  mkdir $BDIR
  cd $BDIR

  sudo apt-get source dovecot-core
  sudo apt-get build-dep dovecot-core
  dpkg-source -x *.dsc

  cd $(find ./ -type d |grep dovecot|head -1)

  # add --with-vpopmail to the configure flags in debian/rules
  sed -r -e 's/with-sqlite \\/with-sqlite \\\n\t\t--with-vpopmail \\/' debian/rules > debian/rules.mod
  cat debian/rules.mod > debian/rules

  DEB_BUILD_OPTIONS="--with-vpopmail" fakeroot debian/rules binary

  # install every resulting package except the debug one
  find $BDIR -type f -name "*.deb"|grep -v 'dbg' |xargs dpkg -i

The last line installs all the resulting dovecot packages except the debug one. Change it if you don't want them all.
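The sed substitution is the key step of the script: it injects --with-vpopmail into the configure flags in debian/rules. Here's a minimal sketch of what it does; the sample rules fragment below is made up, not the real Debian file:

```shell
# made-up fragment resembling the configure flags in debian/rules
printf '\t\t--with-sqlite \\\n' > rules.sample

# the same substitution the script runs: append --with-vpopmail
# on a new continuation line right after --with-sqlite
sed -r -e 's/with-sqlite \\/with-sqlite \\\n\t\t--with-vpopmail \\/' rules.sample > rules.mod

cat rules.mod
```

The output is the original continuation line followed by a new `--with-vpopmail \` line, which is exactly what ends up in debian/rules.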

grub2 notes and tricks

Grub2 configuration is in /etc/default/grub ( on Debian at least ).

Set GRUB_DEFAULT=saved in /etc/default/grub if you want grub to boot the saved entry ( the one set with grub-set-default or grub-reboot ).

update-grub - detects kernels automatically and writes boot entries to /boot/grub/grub.cfg

grub-set-default - set the default entry

grub-reboot - set the boot entry for the next reboot only ( for testing new kernels or other boot stuff especially when you're working remotely )

First boot entry is number 0
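The entry numbers these commands take come from the order of menuentry lines in grub.cfg. A quick way to list entries with their numbers; the grub.cfg fragment below is a made-up sample:

```shell
# made-up grub.cfg fragment with two boot entries
cat > grub.cfg.sample <<'EOF'
menuentry 'Debian GNU/Linux' { linux /vmlinuz }
menuentry 'Debian GNU/Linux (recovery mode)' { linux /vmlinuz single }
EOF

# list each entry with its index; the first one is number 0,
# which is what grub-set-default and grub-reboot expect
entries=$(awk -F"'" '/^menuentry /{print n++": "$2}' grub.cfg.sample)
echo "$entries"
```

On a real system you'd run the awk against /boot/grub/grub.cfg instead of the sample file.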

Howto check MySQL replication consistency

If you want to be sure the data on the slave is the same as the data on the master ( yes, sometimes it can happen to be different ) you can use pt-table-checksum ( part of percona-toolkit ) to compute checksums for the data in the tables and then compare the checksums from the master with the ones on the slave(s).

Some options

Use with replication

pt-table-checksum can be used to compare any two databases/tables, but if you want to compare everything on the master and its slaves you can use the
--replicate option to connect only to the master and compute checksums. The checksums will then be computed on the slaves too by replicating the checksum statements.

Detecting slaves

If you have slave hosts running on non-standard ports use the option --recursion-method=hosts to tell pt-table-checksum how to detect the slaves. Tell the slaves to report their hostname and port with report-host and report-port in my.cnf . This will make them show up in "show slave hosts" issued on the master.
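The my.cnf bit on each slave would look like this ( hostname and port are example values ):

```ini
[mysqld]
report-host = slave1.example.com
report-port = 3307
```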

If the "hosts" method doesn't work, try with --recursion-method=dsn=t=dbname.dsns_table . Create a table with the name "dsns_table" and the following structure in the "dbname" database:

CREATE TABLE `dsns_table` (
 `id` int(11) NOT NULL AUTO_INCREMENT,
 `dsn` varchar(255) NOT NULL,
 PRIMARY KEY (`id`)
)

And put the DSNs for accessing the slaves in the dsn field.
Example:

insert into dsns_table values (NULL,'h=1.1.1.2,u=root,P=3306,p=slavepassword');

For non innodb plugin

--lock-wait-time is required if you're using a version of mysql without the innodb plugin.

Database for storing checksums

pt-table-checksum stores the checksums in mysql, so you have to create a database to hold the checksums table. I named mine "mk" since the tool was called mk-table-checksum before it became part of percona toolkit.

Use the --create-replicate-table option to create the checksums table if it doesn't already exist.

Example usage

pt-table-checksum --recursion-method=hosts --lock-wait-time=50 --defaults-file=/home/mihai/mysql.pass -u root --create-replicate-table --replicate-check --replicate mk.checksums 127.0.0.1

Example output

            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-13T18:25:13      0      0      361       1       0   0.019 mydb.accounts
09-13T18:25:13      0      0       91       1       0   0.079 mydb.announcements

XtreemFS server on MacOSX

Some time ago I tried to get the XtreemFS server to work on MacOSX (Lion).

I had to patch it a bit to make it compile and run. So here's the patch if you want to give it a try:

XtreemFS server MacOSX patch-0.1 (2.95 kB)

I wanted to use it to have a synchronous replicated filesystem over a WAN, but in the end I gave up this idea and switched to unison.

Btw. I also tried to get GlusterFS to work on MacOSX Lion and partially succeeded. You can see my changes on github.

Which email client for Linux?

I've been a thunderbird user since firefox was named firebird. I was happy with it: I like the way you can easily search over all accounts, how you can archive messages in folders by year by simply hitting the "a" key, and I like the threads and even the "gmail" conversations extension even if I don't use it.

One problem with thunderbird is that it became unusable when I added an old account which had accumulated over 170k messages. I know 170k messages is not realistic and no one should have so many messages in their inbox, but still, this made me look for something better.

First let's see what I'm looking for in an email client:

  1. Easy global search like thunderbird
  2. Archive folders and 1-key-hit archiving like thunderbird :)
  3. Easy to mark messages as Spam/Junk with a single key hit; it would be great to be able to mark multiple messages with a single key ( thunderbird doesn't have this )
  4. Message threads
  5. And last but not least, it should be able to handle a lot of messages in a folder; it doesn't have to be super fast but at least it shouldn't lock up

Here's what I tested so far:

Evolution

This was a nice surprise.
Pro: Evolution downloaded 87k message headers in just a few seconds and had no problems with selecting all and moving around.
Cons:

  • No archives
  • It has global search but it's a bit harder to access
  • Harder junk marking ( shift+ctrl+j instead of just j )
  • Always asks if I want to accept a certificate that's not issued for the exact domain of my email server ( at startup )
I could accept all the cons except for the lack of archiving.
Does anyone know of a plugin that does archiving like thunderbird ? Please let me know!

Kmail

Becomes almost unusable with a big inbox ( 87k ), hard to select all messages.

Pro: message list groups messages by year ( but this still doesn't beat yearly archives )
Cons:

  • very slow with a lot of messages, takes a lot of time to fetch message headers
  • adding an account doesn't show it immediately in the accounts list, unless you restart kmail
  • no archives
  • no keyboard shortcut for junk/spam marking ( seriously ??? )
  • global search harder to access

sylpheed

Pro: It's fast and lightweight, but that also means it lacks what I want.
Cons:

  • No archiving
  • No keyboard shortcut for marking junk mail
  • No global search

Conclusion: there's no client that meets all my needs, so I'm going to have to stay with Thunderbird and just use evolution when I want to open a big folder.

What email client are you using ?

MacOSX command line tricks

Here's a list of MacOSX commands I had to search for all over the internet because I needed to use them lately, and I'm sure I'm going to forget them since I'm not a big OSX user. So here they are for when I'll need them again ... :)

In linux when you want to know which ports are open and what applications listen on those ports you use netstat -lnp
In MacOSX you get the listening ports with this:

  lsof -i |grep LISTEN

Want to see what system calls an application is making? On Linux you'd use strace; on MacOSX that's:

  dtruss

Here's how to install an application that comes packaged in a .dmg:

  hdiutil attach App.dmg
  installer -package /Volumes/App/App.pkg -target /Volumes/MainDisk

uninstall a package

  lsbom -fls /private/var/db/receipts/package.name.bom|tr '\n' '\0' |xargs -0 rm
  rm /private/var/db/receipts/package.name.*

pidof required by some mysql scripts

  #!/bin/sh
  # print the pids of all processes whose command name matches $1
  ps axc|awk "{if (\$5==\"$1\") print \$1}";
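To sanity-check the ps/awk pipeline that script uses, you can run it directly against a throwaway process ( this assumes a ps that accepts the BSD-style axc options ):

```shell
# start a throwaway process we can look up by name
sleep 30 &
expected=$!

# the same lookup the script performs: with "ps axc" column 5 is the
# bare command name and column 1 the pid
pids=$(ps axc | awk '{if ($5=="sleep") print $1}')

echo "$pids"

kill "$expected"
```

The pid of the backgrounded sleep should be among the printed pids.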

remount with noatime

  mount -u -o noatime /dev/disk0s2 /Volumes/HD2

More to come ...

Better FIX for Inspiron N7110 touchpad

Seth Forshee created a kernel patch and now the ALPS touchpad on this laptop ( and probably others ) is recognized as a touchpad instead of falling back to a psmouse.

So now you can use the Touchpad tab in the "Mouse and touchpad settings" (gnome) to control the "click to tap", scrolling and other features, and you don't have to use the patched syndaemon from my previous post.

To install this fix on ubuntu just download this deb package, install and reboot. ( tested on Ubuntu 11.10 x86_64 ).

If you want to know all the details go through the comments on this bug report #545307

Fix inspiron N7110 ALPS Touchpad in Ubuntu

I recently purchased a new dell inspiron N7110. The laptop is great and Ubuntu 11.04 works quite well, but there is one important problem.

The problem with most touchpads on laptops is that you'll often touch them accidentally while typing; this gets recorded as a tap/click, the typing cursor might move to another location, and you might end up typing in a whole different place.

With synaptics touchpads or ALPS touchpads ( this is what the N7110 has - ALPS Glidepoint ) you can use syndaemon, a program that runs in the background, monitors the keyboard and disables the touchpad while you type. But this program only works for touchpads which are recognized as synaptics or ALPS touchpads. The touchpad on the N7110 was recognized as a simple mouse, and Xorg loaded the evdev driver instead of synaptics.

So I thought that maybe I could modify syndaemon to make it work with mice too. And I did. I noticed a lot of other people have the same problem, so this could be useful even if you have a different touchpad that's also recognized as a mouse.
Download the patch for syndaemon here:
syndaemon mouse support patch- (1.95 kB)

To apply the patch:

  apt-get install xorg-dev
  mkdir synaptics && cd synaptics

  # install the synaptics driver source package
  apt-get source xserver-xorg-input-synaptics

  # cd to the code directory, your version might be different depending on when you do this
  cd xserver-xorg-input-synaptics-1.3.99+git20110116.0e27ce3abe/tools

  # apply the patch
  patch < syndaemon-mouse.diff

  # go to the source main folder
  cd ..

  # configure, compile and install; by default it goes in /usr/local so it will not override the system installed syndaemon
  ./configure && make && make install

The patch adds a new option to syndaemon to tell it to disable the mouse instead of a touchpad; without this the program would just exit when it can't find the touchpad.

  # run syndaemon with the -s option, this enables mouse support
  /usr/local/bin/syndaemon -i 1 -K -d -s

Don't forget to start it every time you start X.

Of course this is more like a quick hack than a real fix. A real fix would make Xorg or the kernel ( not sure exactly where the problem really is ) recognize this touchpad as a touchpad, not as a mouse.

How to restore mysql replication

Something went wrong and your mysql replication broke. I'm talking here about problems with the sql thread, not connection problems.

The sql thread shows you an error; what do you do to fix it and resume replication?

Here are 3 ways to fix it. Each has advantages and disadvantages, so pick the one that fits your problem best.

1. Skip over the problem

You can try to just skip over the statement that broke the replication by changing the position in the log file.

There are two ways to do this:

a) you can skip gradually

  STOP SLAVE;
  SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
  START SLAVE;
  SHOW SLAVE STATUS \G

That would skip the next statement, but you can set the counter higher to skip more than one.
Do it until the slave status shows the SQL thread is running.

b) skip to the current position

Use this if the first method keeps showing other statements that break replication and you don't have time to gradually skip statements.

First go on the master and type: show master status to find the current bin log file and the current position within the file.

Then go on the slave, stop it with "STOP SLAVE" and change the file name and position. Something like:

  STOP SLAVE;
  CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.001958', MASTER_LOG_POS=52562937;
  START SLAVE;

But do that with your own file name and position taken from the master.

Check the replication status with "SHOW SLAVE STATUS".
If the results are good ( both Slave_IO_Running and Slave_SQL_Running are Yes ) then replication is running again, otherwise try the next methods.

At this point you have a working replication, but the data on the slave is probably not the same as on the master since you skipped a few sql statements.

To fix that you can use Maatkit ( mk-table-checksum and mk-table-sync )

2. Full Dump and Restore

Connect to the master, dump everything in a sql file, copy it to the replication slave and load it into mysql.

Use --master-data so the replication position is set in the dump file and the slave will know where to start.

Use --disable-keys so the slave will not try to build indexes after each insert and will only build them at the end of the import.

Use --add-locks to surround each table dump with lock table/unlock table - this makes the inserts faster on the slave.

Problem:
--master-data will put a read lock on the tables, so operations on your master will block waiting for the dump to finish. On large databases this can take a long time, which is unacceptable.

Possible fix:
If you have innodb tables add --single-transaction so a single global lock will be used only for a short time at the beginning of the dump.

The problem is not so big if you can take filesystem snapshots on the master, like the ones created by lvm.
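Putting those options together, the whole procedure is roughly the following sketch. Host names and credentials are placeholders, and the comments note where each command runs:

```shell
# on the master: dump everything; --master-data records the binlog position,
# --single-transaction avoids holding the read lock for the whole dump (InnoDB)
mysqldump -u root -p --master-data --single-transaction --disable-keys --add-locks \
    --all-databases > full_dump.sql

# copy the dump over to the slave
scp full_dump.sql slave:/tmp/full_dump.sql

# on the slave: load the dump, then resume replication;
# --master-data already set the binlog coordinates in the dump
mysql -u root -p < /tmp/full_dump.sql
mysql -u root -p -e 'START SLAVE'
```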

3. Inconsistent Full Dump

This is just another fix for the locking problem at #2. Dump the data just like before but without using --master-data. This means no locks, so the master can still work normally.
But because you don't use --master-data you will have to set the position on the slave yourself.
On the master type:

  show master status \G

Take the file name and position and use them in the CHANGE MASTER statement on the slave ( after you load the dump file ). Something like:

  CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.001958', MASTER_LOG_POS=52562937;

Of course all of this will create an inconsistent slave, but you can fix that easily with Maatkit.

If you know other methods I'd love to hear about them. Let me know in the comments.

Wammu backup to CSV for Gmail

This is a modification to the wammu2csv.pl script that you can still find in google cache if you look for it.

The problem with the original script was that wammu ( version 0.35 anyway ) seems to generate backup files encoded in UTF-16, and the regular expressions in that script will not work unless the content is first converted from UTF-16.

Another benefit of the modified script is that now you don't have to convert the backup file from DOS to UNIX anymore.
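You can see the encoding problem with iconv; here's a minimal sketch with made-up sample data ( the "Name = ..." line just imitates a backup entry ):

```shell
# simulate a wammu backup line: UTF-16 encoded, with DOS line endings
printf 'Name = John\r\n' | iconv -f UTF-8 -t UTF-16 > backup.sample

# plain-text tools and regexes can't match UTF-16 directly; convert it
# back to UTF-8 and strip the DOS \r first ( the modified script does
# the equivalent of this internally )
iconv -f UTF-16 -t UTF-8 backup.sample | tr -d '\r' > backup.utf8

cat backup.utf8   # prints: Name = John
```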

To use this script:

  1. Connect to your phone using wammu
  2. Retrieve contacts from the phone and save them in a file using the backup function
  3. download Wammu2CSV-0.1 (4.74 kB) and run:
    ./wammu2csv.pl <your-backup-file> > <your-csv-file>

This will generate a CSV file that you can import into Gmail.