Squid Statistics and Info Web Interface


I have been looking for a Squid statistics web interface for a long time, and I finally found one. It was fairly easy to install and configure.

The app is called SARG, and it can be downloaded freely from http://sarg.sourceforge.net/sarg.php

Depending on your architecture, the installation command is

rpm -Uvh http://pkgs.repoforge.org/sarg/sarg-2.3.1-1.el6.rft.x86_64.rpm

Before using it, you may want to configure sarg's report output:

vi /etc/sarg/sarg.conf

and change the output_dir setting (/var/www/html/squid/OUT-ONE) to a directory served by your web server.

To use it, run the following in a terminal, or add it to cron to run daily:

sarg

This command will generate the HTML stats pages in the specified output_dir.
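To automate the report generation, a crontab entry along these lines will regenerate the reports nightly (the 1 AM schedule and the /usr/bin/sarg path are assumptions; adjust them to your system):

```shell
# Hypothetical crontab line: regenerate SARG reports daily at 1 AM
0 1 * * * /usr/bin/sarg
```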

That’s all folks.

Screenshot of SARG-generated files:


How-to: Local Backup with rsync


A backup using the rsync command can be very useful because it:

Takes less time to replicate

rsync replicates the whole content between the source and the destination directory only once. Subsequent runs transfer only the changed blocks or bytes, which makes them very fast.

Uses less bandwidth

rsync can compress data block by block at the sending end and decompress it at the receiving end.

Is secure

rsync can run over the SSH protocol, which allows the data to be encrypted in transit.

The script broken down

I will create a script that backs up my current web directory, /data, to a disk mounted at /backup on the same server. Broken down, the core of the script is:

rsync -azvu --progress /data /backup

where the options are:

  • z is for compress mode
  • v is for verbose mode
  • a is archive mode: preserve symbolic links, permissions and timestamps, and recurse into directories
  • u (update) is to skip files that are newer at the destination
  • --progress is to show the progress during transfer

An extract of the transfer start is:

sending incremental file list
data/
data/log/
data/log/gulshan.beejan.log
8713 100%    0.00kB/s    0:00:00 (xfer#1, to-check=1004/1010)
data/lost+found/

ending with:

sent 284122142 bytes  received 230478 bytes  1914832.46 bytes/sec
total size is 379003084  speedup is 1.33
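As a sanity check, the speedup figure rsync prints is simply the total size divided by the bytes actually sent and received. For the run above:

```shell
# speedup = total size / (sent + received)
awk 'BEGIN { printf "%.2f\n", 379003084 / (284122142 + 230478) }'
# prints 1.33
```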

Now, if I perform a second transfer, it completes much faster. Here’s an extract of that run:

[root@sdb backup]# rsync -avzu --progress /data /backup
sending incremental file list

sent 278200 bytes  received 1778 bytes  50905.09 bytes/sec
total size is 379003084  speedup is 1353.69

The script

Let’s convert that into a nice and clean script, shall we?

Create a script file:

vi /scripts/localrsync.sh

Paste the following:

#!/bin/bash

# declare variables
SOURCE_DIR=/data
DESTINATION_DIR=/backup

# rsync command; make sure the user running this script has the required
# permissions on both the source and destination directories.
# We don't need --progress or -v since this runs unattended.
rsync -azu "$SOURCE_DIR" "$DESTINATION_DIR"

Make the script executable:

chmod 777 /scripts/localrsync.sh

Test the file

sh /scripts/localrsync.sh

[root@sdb ~]# sh /scripts/localrsync.sh
sending incremental file list

sent 278200 bytes  received 1778 bytes  62217.33 bytes/sec
total size is 379003084  speedup is 1353.69
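Because the script calls rsync without a trailing slash on the source, the copy ends up in /backup/data. A quick spot check (a sketch, assuming GNU diffutils is available) is to diff the two trees; no output means they match:

```shell
# Compare the live tree against the backup copy; silence means success.
diff -r /data /backup/data
```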

Add it to crontab,

 crontab -e

Add this line at the end to run it daily at 2 AM (the minute field must be 0, otherwise cron fires every minute of that hour):

0    2    *    *    *    sh /scripts/localrsync.sh



How-to: Null Route IP in Linux


It does get quite annoying when you are trying to work (most of my work involves a lot of online research) and someone (Mr. X) on your network is dragging everything down with his movie downloads. Our network is secured by ClearOS, which in my opinion is a really good solution. But it does not help when Mr. X is using IDM via an SSH tunnel, which is not easy to detect.

So today, after waiting five minutes for my OWN web site to load, I decided to null route the SSH tunnel’s IP on the gateway.

[root@clearos ~]# cat /etc/redhat-release
CentOS release 5.4 (Final)

ClearOS runs on CentOS 5.4 32-bit, so the following command, run as root, does the job:

route add -host that_ip_that_i_hate gw 127.0.0.1 lo

Use netstat -rn to verify that the route is in place.

To delete the route:

route del -host that_ip_that_i_hate
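On newer systems with iproute2, the same effect can be achieved with a dedicated blackhole route instead of pointing at the loopback. This is a sketch to be run as root, and 203.0.113.10 is a placeholder address:

```shell
# Drop all traffic to the offending IP (placeholder) via a blackhole route
ip route add blackhole 203.0.113.10

# Verify it is in place
ip route show | grep blackhole

# Remove it again later
ip route del blackhole 203.0.113.10
```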

VOILA! Mr.X is gone and my post is here!


Linux: Deleting old logs


We all know how annoying it can get to have old, unused log files piling up on a production server. It would be great to have a script perform the cleanup from a cronjob so that we can relax as lazy system engineers 🙂

The script to delete files older than 15 days would be as follows:

#!/bin/bash

# File is saved as root, in /scripts/delete_old_logs.sh
# File needs to be executable to run from cron

DAYS=15
PATH_TO_LOGS=/var/log

# Delete regular files older than $DAYS days
find "$PATH_TO_LOGS" -type f -mtime +"$DAYS" -exec rm -f {} \;
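Before letting cron loose with rm, it is worth previewing what would match. The same find expression with -print instead of -exec rm is a harmless dry run:

```shell
# List (without deleting) regular files under /var/log older than 15 days
find /var/log -type f -mtime +15 -print
```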

Next, we can schedule it to run daily at 11 PM:

crontab -e

add this line at the end (minute field 0, so it fires only once at 11 PM):

0    23    *    *    *    /scripts/delete_old_logs.sh


How to Install & Configure FTP Server on CentOS 6.2


I recently upgraded the server to CentOS 6.2 x64, which involved backing everything up to another server, a clean re-install with Cherokee’s latest version, and some other configuration. Among those was the FTP setup, which is handled by vsftpd.

The installation is quite straightforward; run the following in a terminal as root:

yum install vsftpd

You will need to modify a few parameters in your configuration file to get it running smoothly.

vi /etc/vsftpd/vsftpd.conf


Set the following:

anonymous_enable=NO

uncomment the following:

ascii_upload_enable=YES

ascii_download_enable=YES

chroot_local_user=YES

chroot_list_enable=YES

chroot_list_file=/etc/vsftpd/chrootlist

ls_recurse_enable=YES


With this setup, each user’s home directory becomes their FTP root, which is exactly what we want.

You also need to create the FTP chroot list in /etc/vsftpd/chrootlist:

vi /etc/vsftpd/chrootlist

Add the FTP user(s) you created to this file, one per line, then start the service:

service vsftpd start
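To have vsftpd come back after a reboot (CentOS 6 still uses SysV init scripts), enable it with chkconfig as root:

```shell
# Enable vsftpd at boot and confirm its runlevels
chkconfig vsftpd on
chkconfig --list vsftpd
```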
