Monday, September 27, 2010

Faster Internet Browsing Through a Local DNS Cache

A local DNS cache can speed up browsing because repeated DNS lookups are answered locally instead of being sent to a remote server every time. Your internet connection will not get any faster, but browsing will feel faster: a typical web page triggers quite a few DNS requests, and the local cache answers them with a query time of almost 0.

To see how fast your current domain name servers (DNS) are, open a terminal and execute the command below.
# dig yahoo.com

You should get something like this:
*************************************************************************
; <<>> DiG 9.6.1-P1 <<>> yahoo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42045
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;yahoo.com. IN A

;; ANSWER SECTION:
yahoo.com. 20142 IN A 69.147.114.224
yahoo.com. 20142 IN A 209.131.36.159
yahoo.com. 20142 IN A 209.191.93.53

;; Query time: 50 msec

;; SERVER: 208.67.220.220#53(208.67.220.220)
;; WHEN: Wed Dec 9 13:21:48 2009
;; MSG SIZE rcvd: 75

*************************************************************************

Notice the "Query time" line. It's usually somewhere near 50 msec (it depends on your domain name servers).

Run this one more time. If the query time decreases to less than 5 msec, your internet service provider's DNS already uses some caching method and you do not need to follow this how-to. If the response time stays almost the same and you are using a cable (broadband) internet connection, you can use this guide to cache DNS for faster internet browsing.
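To compare timings before and after, it helps to pull just the query time out of dig's output. A small sketch of my own (the awk one-liner is not part of the original how-to), shown against a canned sample line so it runs even without network access; pipe a real `dig yahoo.com` into the same awk command to measure your own resolver:

```shell
# Extract the "Query time" value from dig output. The sample line below
# stands in for real dig output.
sample=';; Query time: 50 msec'
echo "$sample" | awk -F': ' '/Query time/ {print $2}'
```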

Now, let's start the practical.

Manually configuring the local DNS cache
1. Install dnsmasq:
yum install dnsmasq


2. Edit the dnsmasq configuration file:
vim /etc/dnsmasq.conf

3.
Now search for "listen-address" (it's on line 90 on my Ubuntu Karmic installation), remove the "#" character in front of "listen-address", and add "127.0.0.1" after the "=" (all without the quotes). This is how the "listen-address" line should look after editing it:

listen-address=127.0.0.1


4. You can also edit the cache size if you want. Search for "#cache-size=150" in the same file (it's on line 432 on my Ubuntu Karmic installation), remove the "#" character in front of the line (this uncomments it), and replace "150" with the size you want for your DNS cache. This is how the line should look after editing it:

cache-size=500


Note :- "500" can be any number you want.

5. Edit the "/etc/resolv.conf" file so that the first line points to the local cache, keeping your ISP's DNS servers as fallbacks:

nameserver 127.0.0.1
nameserver ISP_DNS1
nameserver ISP_DNS2


6. Finally, run "service network restart" & "service dnsmasq restart".
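Steps 3 and 4 can also be done non-interactively with sed. A sketch, demonstrated here on a scratch copy of the two stock commented-out lines so the live config is untouched; to apply it for real, back up /etc/dnsmasq.conf and point sed at it:

```shell
# Demonstrate the two edits on a throwaway copy of the relevant lines.
printf '#listen-address=\n#cache-size=150\n' > /tmp/dnsmasq.test
sed -i 's|^#listen-address=.*|listen-address=127.0.0.1|' /tmp/dnsmasq.test
sed -i 's|^#cache-size=150|cache-size=500|' /tmp/dnsmasq.test
cat /tmp/dnsmasq.test
```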

7. Testing
To see the performance improvement, open a terminal and type:


dig yahoo.com
************************************************************
; <<>> DiG 9.6.1-P2 <<>> yahoo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57501
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;yahoo.com. IN A

;; ANSWER SECTION:
yahoo.com. 20982 IN A 209.131.36.159
yahoo.com. 20982 IN A 69.147.114.224
yahoo.com. 20982 IN A 209.191.93.53

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Dec 9 14:43:41 2009
;; MSG SIZE rcvd: 75
************************************************************
A 0 msec query time, because the domain is now answered from the local cache.

That's it.

Tuesday, September 21, 2010

Limit CPU Usage Per Process in Linux

This practical has been tested successfully on Fedora 11 i386 & CentOS 5.4 only.

Download the "cpulimit" source tarball first.

wget 'http://downloads.sourceforge.net/cpulimit/cpulimit-1.1.tar.gz'

Extract it, go inside the directory, compile, and install:
tar -zxvf cpulimit-1.1.tar.gz

cd cpulimit-1.1

make

cp cpulimit /usr/local/sbin/
rm -rf cpulimit*

Commands to run cpulimit:


To limit CPU usage of the process called firefox to 30%, enter:
# cpulimit -e firefox -l 30

To limit CPU usage of the process to 30% by using its PID, enter:

# cpulimit -p 1313 -l 30
To find out the PID of the process, use any of the following:

# ps aux | less


# ps aux | grep firefox


# pgrep -u nnv php-cgi


# pgrep lighttpd


You can also use absolute path name of the executable, enter:

# cpulimit -P /opt/firefox/firefox -l 30
Where,

* -p : Process PID.
* -e : Process name.
* -l : percentage of CPU allowed from 0 to 100.
* -P: absolute path name of the executable program file.
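The three targeting flags can be wrapped in a small helper. This is a hypothetical dry-run function of my own (not part of cpulimit): it only prints the command line it would run, so you can sanity-check the flags before executing anything as root.

```shell
# Build (but do not run) a cpulimit command line from a target and a limit.
# Absolute paths get -P, plain numbers get -p, anything else is a name (-e).
build_cpulimit_cmd() {
  target=$1
  limit=$2
  case $target in
    /*)          echo "cpulimit -P $target -l $limit" ;;  # absolute path
    *[!0-9]*|'') echo "cpulimit -e $target -l $limit" ;;  # process name
    *)           echo "cpulimit -p $target -l $limit" ;;  # numeric PID
  esac
}

build_cpulimit_cmd firefox 30   # -> cpulimit -e firefox -l 30
build_cpulimit_cmd 1313 30      # -> cpulimit -p 1313 -l 30
```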

Thanks,
Nishith N.Vyas.
Call : +91 9879597301

Root Filesystem Definition.

The root filesystem is the filesystem that is contained on the same partition on which the root directory is located, and it is the filesystem on which all the other filesystems are mounted (i.e., logically attached to the system) as the system is booted up (i.e., started up).

A partition is a logically independent section of a hard disk drive (HDD). A filesystem is a hierarchy of directories (also referred to as a directory tree) that is used to organize files on a computer system. On Linux and other Unix-like operating systems, the directories start with the root directory, which contains a series of subdirectories, each of which, in turn, contains further subdirectories, etc. A variant of this definition is the part of the entire hierarchy of directories (i.e., of the directory tree) that is located on a single partition or disk.

The exact contents of the root filesystem will vary according to the computer, but they will include the files that are necessary for booting the system and for bringing it up to such a state that the other filesystems can be mounted as well as tools for fixing a broken system and for recovering lost files from backups. The contents will include the root directory together with a minimal set of subdirectories and files including /boot, /dev, /etc, /bin, /sbin and sometimes /tmp (for temporary files).

Only the root filesystem is available when a system is brought up in single user mode. Single user mode is a way of booting a damaged system that has very limited capabilities so that repairs can be made to it. After repairs have been completed, the other filesystems that are located on different partitions or on different media can then be mounted on (i.e., attached to) the root filesystem in order to restore full system functionality. The directories on which they are mounted are called mount points.

The root filesystem should generally be small, because it contains critical files and a small, infrequently modified filesystem has a better chance of not becoming corrupted. A corrupted root filesystem will generally mean that the system becomes unbootable (i.e., unstartable) from the HDD, and must be booted by special means (e.g., from a boot floppy).

A filesystem can be mounted anywhere in the directory tree; it does not necessarily need to be mounted on the root filesystem. For example, it is possible (and very common) to have one filesystem mounted at a mount point on the root filesystem, and another filesystem mounted at a mount point contained in that filesystem.
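One way to see mount points in action: every mounted filesystem has its own device ID, so a file's stat device number tells you which filesystem it lives on. A small sketch (assuming a Linux system where /proc is mounted, as it normally is):

```shell
# Compare the device IDs of / and /proc; different IDs mean /proc is a
# separate filesystem mounted on a mount point of the root filesystem.
root_dev=$(stat -c %d /)
proc_dev=$(stat -c %d /proc)
if [ "$root_dev" != "$proc_dev" ]; then
  echo "/proc is a separate filesystem mounted on the root filesystem"
fi
```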

Saturday, May 15, 2010

"rsync" configuration in Linux/Unix

What can "rsync" do?

"rsync" can perform differential uploads and downloads (synchronization) of files across the network, transferring only data that has changed. The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection.

Hands-On Practical :-

Recommended : Install "rsync" on both Linux/Unix machines. (RedHat/Fedora only)
yum install rsync


Note: Always use rsync over ssh.
Since rsync does not provide any security while transferring data, it is recommended that you use rsync over ssh. This allows a secure remote connection. Now let us see some examples of rsync.

Task 1: Copy a file from the local computer to a remote server. Copy "/data/office.tar.gz" to the remote server "192.168.1.1":

$ rsync -v -e ssh /data/office.tar.gz rohit@192.168.1.1:/home/nishith

Task 2: Copy file from a remote server to a local computer

Copy file "/home/nishith/data.txt" from a remote server "192.168.1.1" to a local computer "/tmp" directory:
$ rsync -v -e ssh nishith@192.168.1.1:/home/nishith/data.txt /tmp

Enter the password when prompted.

Task: Synchronize a local directory with a remote directory

$ rsync -r -a -v -e "ssh -l nishith" --delete 192.168.1.1:/home/nishith/ /data

Task: Synchronize a remote directory with a local directory

$ rsync -r -a -v -e "ssh -l nishith" --delete /data 192.168.1.1:/home/nishith/

Task: Synchronize a local directory with a remote rsync server

$ rsync -r -a -v --delete rsync://192.168.1.1/data /home/nishith/

Common "rsync" command options:

  • --delete : delete files on the receiver that don't exist on the sender
  • -v : Verbose (try -vv for more detailed information)
  • -e "ssh options" : specify the ssh as remote shell
  • -a : archive mode
  • -r : recurse into directories
  • -z : compress file data
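The "transfer only what changed" idea can be illustrated with plain coreutils. This is a conceptual sketch of my own, not rsync's actual delta algorithm (which compares rolling checksums of file blocks, not whole files):

```shell
# Build a source tree and a mirror, change one source file, then list
# which files a differential sync would need to transfer.
src=$(mktemp -d)
dst=$(mktemp -d)
echo one > "$src/a"
echo two > "$src/b"
cp "$src"/a "$src"/b "$dst"/
echo changed > "$src/b"          # only b differs now
for f in "$src"/*; do
  name=$(basename "$f")
  cmp -s "$f" "$dst/$name" || echo "would transfer: $name"
done
```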



Use "Iptables" for ssh & http connections per IP Address.

Allow at most 3 SSH connections per client host:
/sbin/iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 3 -j REJECT

Allow at most 20 HTTP connections per IP (MaxClients is set to 60 in httpd.conf):
/sbin/iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset




Mount a remote folder through SSH


Syntax :
$ sshfs user@server:/path/to/folder /mount/point

Example :
$ sshfs nishith@192.168.1.21:/home/nishith /data/

You can mount a remote directory locally via SSH! You'll first need to install the programs below, however:

  • FUSE, which allows filesystems to be implemented in userspace programs. (yum install fuse)
  • sshfs client that uses FUSE
  • sftp (secure ftp - comes with OpenSSH, and is on your system already) to access the remote host.

And that's it, now you can use sshfs to mount remote directories via SSH.

To Unmount, use:

fusermount -u "mount point"
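Optionally, an /etc/fstab entry lets the SSH mount be brought up with a plain `mount /data`. The line below is an illustration using the example's names, and assumes your sshfs package provides the fuse.sshfs mount type:

```
nishith@192.168.1.21:/home/nishith  /data  fuse.sshfs  defaults,_netdev  0 0
```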


Regards,
Nishith Vyas

Monday, May 3, 2010

Login as a root from GUI RHEL6 BETA

WARNING :- It is not at all good to log in as root from the GUI. It's DANGEROUS. But if someone wants to know how to log in as root from the GUI, follow the instructions below.

In RHEL6 Beta you cannot log in as root from the GUI. By default, only normal users are allowed in GUI mode.

Follow these steps and you will be able to log in as root from the GUI on RHEL6 Beta.

The main configuration files are in "/etc/pam.d/".

Have a look at the steps below.

Open your Terminal from Applications -> System Tools -> Terminal

Now log in as root from your terminal.

Step 1 :- [user@localhost]$ su - root
Password:

Step 2:- Now go to your /etc/pam.d/ directory.

[root@localhost]# cd /etc/pam.d/

Then first take a backup of the gdm file:

cp gdm gdm.bkp (always take a backup; if anything goes wrong you can restore from the original file)

Step 3 :- Now Open gdm file in your favorite editor. I am using vi as my editor.

[root@localhost]# vi gdm

Find and comment out (or remove) this line in your gdm file:
auth required pam_succeed_if.so user != root quiet

Step 4 :- Save & Exit From that File.

Step 5 :- There is an additional file that you need to edit: "gdm-password".

First take a backup of the gdm-password file:

cp gdm-password gdm-password.bkp (always take a backup; if anything goes wrong you can restore from the original file)

Now open the "gdm-password" file in your favorite editor. I am using vi as my editor.

[root@localhost]# vi gdm-password

Find and comment out (or remove) this line in your gdm-password file:
auth required pam_succeed_if.so user != root quiet



Step 6 :- Save & exit from the file. Now log out and try to log in as the root user. You should now be able to log in as root from the GUI in RHEL6 Beta.
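As an aside, the edit in steps 3 and 5 can be scripted with sed. A sketch, demonstrated on a sample file so nothing under /etc/pam.d/ is touched; for real use, run the same sed expression (after the backups above) against gdm and gdm-password:

```shell
# Comment out the root-blocking line in a sample PAM file.
cat > /tmp/gdm.sample <<'EOF'
auth       required      pam_succeed_if.so user != root quiet
auth       substack      password-auth
EOF
sed -i '/pam_succeed_if\.so user != root quiet/s/^/#/' /tmp/gdm.sample
cat /tmp/gdm.sample
```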

Thanks,

RHEL6 Beta Edition.

Thursday, March 25, 2010

Mirroring entire directory with "mirrordir" command

Note :- "mirrordir" setup has been tested on CentOS / RHEL 5x versions only.

Introduction :

mirrordir copies files that are different between the directories control and mirror to the directory mirror. Files whose modification times or sizes differ are copied. File permissions, ownerships, modification times, access times (only if --access-times is used), sticky bits, and device types are duplicated. Symlinks are duplicated without any translation. Symlink modification and access times (of the symlink itself, not the file it points to) are not preserved. Hard linked files are merely copied.

mirrordir command supports strong stream cipher encryption and Diffie-Hellman key exchanges with several possible key sizes.

Hand's On Exercise :

yum install mirrordir

Create a new directory named "nvbkup" to hold the mirror:
cd /
mkdir nvbkup

Now, mirror the existing data into the "nvbkup" directory. For example, we will take a backup of the "nishith" directory.

mirrordir -v nishith nvbkup
Output is as shown below.

mirrordir: ---verbose--- copying file: /nvbkup/file5
mirrordir: ---verbose--- copying file: /nvbkup/file1
mirrordir: ---verbose--- copying file: /nvbkup/file2
mirrordir: ---verbose--- copying file: /nvbkup/file3
mirrordir: ---verbose--- copying file: /nvbkup/file4
mirrordir: ---verbose--- all hardlinks located
mirrordir: Total mirrored: 5kB

If you rerun the "mirrordir" command, only the updated files are copied.

mirrordir -v nishith nvbkup
Output is as shown below.

mirrordir: ---verbose--- copying file: /nvbkup/file6
mirrordir: ---verbose--- copying file: /nvbkup/file1
mirrordir: ---verbose--- all hardlinks located
mirrordir: Total mirrored: 5kB

For More Information, "man mirrordir"

You can use a "cron" job to automate the backup of your data directories.
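For example, a crontab entry along these lines would refresh the mirror every night at 2:30 (the paths and schedule here are illustrative; check `which mirrordir` for the real binary path on your system):

```
30 2 * * * /usr/sbin/mirrordir /home/nishith /backup >> /var/log/mirrordir.log 2>&1
```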




Tuesday, February 9, 2010

HOWTO- Apache "httpd" authentication in Linux


Create a directory under the "apache" document root, which is "/var/www/html":

mkdir /var/www/html/nishith
cd /var/www/html/nishith

Create simple "index.html" page.

Now, open the "/etc/httpd/conf/httpd.conf" file and add/modify the following lines in the <Directory> section for the document root:

Options Indexes Includes
AllowOverride AuthConfig
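For context, this is roughly how the two directives sit inside a <Directory> block in httpd.conf (the block for /var/www/html usually already exists; only these two lines need adding or changing):

```apache
<Directory "/var/www/html">
    Options Indexes Includes
    AllowOverride AuthConfig
</Directory>
```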

Create a ".htaccess" file at "/var/www/html/nishith/.htaccess" and add the lines below.
AuthType Basic
AuthName "My Private Page"
AuthUserFile /etc/httpd/conf/htpasswd
require valid-user


Now, create/add a new user allowed to access the "/var/www/html/nishith" page:

htpasswd -mc /etc/httpd/conf/htpasswd nishith
New password:
Re-type new password:
Adding password for user nishith

View the content of "htpasswd" file
cat /etc/httpd/conf/htpasswd
nishith:$apr1$akwCX...$c3uo.k4oHIQNzlSEDQYMh0

Note:-
To add more users, use only -m. The -c option always (re)creates the file, so once the file exists do not use -c (or -cm) again.

htpasswd -m /etc/httpd/conf/htpasswd alex
New password:
Re-type new password:
Adding password for user alex

cat /etc/httpd/conf/htpasswd
nishith:$apr1$akwCX...$c3uo.k4oHIQNzlSEDQYMh0
alex:$apr1$70g94/..$m8QyD4gQisd265nLW7pbR0

Finally, access your webpage in your browser by typing,
http://ip address/nishith (from remote pc)

OR

http://localhost/nishith ( from local pc only)

That's it.

Increase "swap" space in Linux

How to increase "swap" memory in Linux.

1) Create a new hard drive partition. I will use "/dev/sda" for this practical.

To verify your hard drive identity, use "fdisk -l"

2) Follow these steps.

- fdisk /dev/sda
Press "n" for a new partition.
At the "First cylinder" prompt: press Enter.
At the "Last cylinder" prompt: type "+500M" (adding 500MB as swap space).

- Press "t" to assign the "swap" id to the newly created partition. RedHat uses "82" as the swap id.
- Press "w" to write the changes to disk & exit.

3) Use "partprobe" command

4) mkswap /dev/sda6 : Format the newly created partition, /dev/sda6, as swap. (Do not run mkfs.ext3 on it; a swap partition does not hold a filesystem.)

5) swapon /dev/sda6 : Enable the "swap" partition.

6) Use the "top" or "swapon -s" command to check the total swap size.

7) To make the "swap" entry permanent across reboots, add a new line to "/etc/fstab":
/dev/sda6 swap swap defaults 0 0

save & exit (:wq)

8) Reboot Linux & check.
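To confirm the kernel sees the swap space, you can also read SwapTotal straight from /proc/meminfo (works on any Linux system):

```shell
# Print the total swap known to the kernel, in kB.
awk '/^SwapTotal/ {print $1, $2, $3}' /proc/meminfo
```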

That's it

Software RAID in Linux (RHEL,CentOS)

This RAID Practical is tested on RHEL & CentOS 5x

Remember :-
For RAID 5, minimum 3 physical partitions.
For RAID 0, minimum 2 physical partitions.(Disk data striping across both drives)
For RAID 1, minimum 2 physical partitions.(Disk Mirroring)

* Use the fdisk /dev/sd* OR fdisk /dev/hd* command to create as many partitions as you need, more than two for RAID 5. (sd*/hd* means sda, sdb, hda, hdb, ...)

(sd : SATA/SCSI Drive ; hd : IDE Drive)

* To check your drive identity, use "fdisk -l" & check the partition identity.

I am taking "sda" as physical drive & creating 4 partitions. So, the command will be

fdisk /dev/sda


(Note : Create 4 partitions of 250MB each, for practice purposes only; at least 3 are necessary for RAID 5 data redundancy. In the real world, you can assign whatever size is available on your server/desktop.)


Press "n" for new partition.
Press "t" to change the partition id.

Select the partition you want to assign the "RAID" id to.

Press "L" to select from the available list. In our case, select "fd"

Press "w" to write changes to disk & exit

* use "partprobe" command

Now, to create RAID 5,

mdadm -C /dev/md0 -a yes -l 5 -n 4 /dev/sda{2,3,4,5}

( -l means the RAID level, which is 5 here)
( -n means the number of member devices, here 4 partitions)
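A quick sanity check on capacity: RAID 5 gives (n - 1) times the smallest member, since one member's worth of space holds parity. With the example's four 250MB partitions:

```shell
# Usable RAID 5 capacity = (number of members - 1) * member size.
n=4            # member partitions
size_mb=250    # size of each partition in MB
usable=$(( (n - 1) * size_mb ))
echo "usable capacity: ${usable} MB"   # one member's worth goes to parity
```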


mkfs.ext3 /dev/md0 (To make file system)

mdadm --detail /dev/md0 (To check whether RAID has created or not)

Create a directory, e.g. "data", on the "/" partition and mount /dev/md0 on it. The commands are:
mkdir /data
mount /dev/md0 /data


Finally, to mount the RAID device permanently across Linux reboots, make the following entry in /etc/fstab:

/dev/md0 /data ext3 defaults 0 0

That's it.

Friday, February 5, 2010

"User Quota" in Linux

Please Note:
I have tested this practical on RedHat/CentOS 5x & Fedora 7 linux platform.

This article will show you how to create "User Quota" in Linux.


1) First, create a user named "eric" & give it a password. (You can use any name you like.)
useradd eric
passwd eric

2) Open the "/etc/fstab" file to enable user quota on your system:
vim /etc/fstab

3) Add the "usrquota" option to the /home line, for example:
/dev/sda6 /home ext3 defaults,usrquota 0 0
Save & Exit (:wq)

4) Then, use the command below to activate "User Quota":
mount -o remount /home

5) Now, follow all command step by step given below.

- quotacheck -cvu /home
- cd to the "/home/" directory & you'll find the "aquota.user" file. If it is not found, create it manually:
touch aquota.user
- quotaon /home
- edquota -u eric (in our case, the name is "eric")
This opens a file in which all the numeric entries are "0".

Now, please understand the quota file given below.

Filesystem   blocks   soft   hard   inodes   soft   hard
/dev/sda6    0        0      0      0        0      0

Note : The first pair of "soft & hard" columns (after blocks) restricts the quota by disk space.
For example, the user can't store more than 100KB.

The second pair of "soft & hard" columns (after inodes) restricts the quota by number of files.
For example, the user can't create more than 70 files.

Practically, it can be implemented as given below:
/dev/sda6    0    30    100    0    50    70

100 = max disk space, 100KB only (counted in 1KB blocks).
70 = max number of files, 70 only.

Note :
If the user exceeds the soft limit, the Linux quota system will send a warning message to the "eric" user.

Finally, use "mount -a" command.

Login with the "eric" user & try to create a file in its home directory, which is "/home/eric":

dd if=/dev/zero of=/home/eric/example.txt bs=100 count=70
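A side note on sizing the test: dd writes bs x count bytes, so bs=100 count=70 produces only 7,000 bytes, well under a 100KB limit. To actually trip the example's limits you would need a larger write, e.g. bs=1024 count=200. The arithmetic, demonstrated in /tmp (point of= at /home/eric/... to test the quota for real):

```shell
# bs=1024 count=200 writes 204800 bytes (200 KB), which would exceed a
# 100 KB hard limit when written by the quota-restricted user.
dd if=/dev/zero of=/tmp/quota-demo.bin bs=1024 count=200 2>/dev/null
stat -c %s /tmp/quota-demo.bin   # 204800
```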


To check the quota of "eric" user; use
repquota /home

That's it.


Thursday, February 4, 2010

Squid Guard Configuration.

1. Unpack the source
tar xvzf squidGuard-1.2.1.tar.gz

2. Compiling

Let's assume it is squidGuard-1.2.1 we are trying to install:
cd squidGuard-1.2.1
./configure
make

If no errors occurred, squidGuard is now installed in /usr/local/. There are a couple of options you can use when running ./configure. For example:

Installing in a different location


./configure --prefix=/some/other/directory
If BerkeleyDB is not installed in /usr/local/BerkeleyDB:

./configure --with-db=/directory/of/BerkeleyDB/installation
When installed from source, BerkeleyDB will be located in /usr/local/BerkeleyDB.x.y, with x.y denoting the version number.
Annotation: Make sure that the shared library of your BerkeleyDB installation is known by your system (check /etc/ld.so.conf, add your BerkeleyDB library path if it is not already there and run ldconfig).

See all ./configure options
./configure --help

3. Installing
make install


4. Installing the blacklists

Download the “Black List” file from http://www.squidguard.org/blacklists.html

Copy your blacklists into the desired blacklist directory (default: /usr/local/squidGuard/db) and unpack them. In the commands below we assume that the default location is used. Make sure that you have the proper permissions to write to that directory.

cp /path/to/your/blacklist.tar.gz /usr/local/squidGuard/db
cd /usr/local/squidGuard/db
gzip -d blacklist.tar.gz
tar xfv blacklist.tar

Now the blacklists should be ready to use.
Congratulations!
You have just completed the installation of squidGuard. The next step is to configure the software according to your needs. After this you should verify your installation before you finally modify your squid configuration to work with squidGuard.

Basic Configuration of squidGuard

Once squidGuard is successfully installed, you will want to configure the software according to your needs. A sample configuration has been installed in the default directory /usr/local/squidGuard (or whatever directory you pointed your installation to). Below you will find three examples of basic squidGuard configuration.

1. Most simple configuration: one category, one rule for all

CONFIG FILE FOR SQUIDGUARD
***************************************************************************
dbhome /usr/local/squidGuard/db
logdir /usr/local/squidGuard/logs
dest porn {
domainlist porn/domains
urllist porn/urls
}

acl {
default {
pass !porn all
redirect http://localhost/block.html
}
}
***************************************************************************
Always make sure that the very first line of your squidGuard.conf is not empty! The entries have the following meaning:

dbhome = Location of the blacklists
logdir = Location of the logfiles
dest = Definition of a category to block. You can enter the domain and URL file along with a regular expression list (more about regular expressions later on).
acl = The actual blocking definition. In our example only the default is displayed. You can have more than one acl in place. The category porn you defined in dest is blocked by the expression !porn. You have to add the identifier all after the blocklist, or your users will not be able to surf at all. The redirect directive is mandatory! You must tell squidGuard which page to display instead of the blocked one.


2. Choosing more than one category to block

First you define your categories. Just like you did above for porn. For example:

Defining three categories for blocking

dest adv {
domainlist adv/domains
urllist adv/urls
}

dest porn {
domainlist porn/domains
urllist porn/urls
}

dest warez {
domainlist warez/domains
urllist warez/urls
}

Now your acl looks like this:
acl {
default {
pass !adv !porn !warez all
redirect http://localhost/block.html
}
}

3. White listing

Sometimes there is a demand to allow specific URLs and domains although they are part of the blocklists for a good reason. In this case you want to whitelist these domains and URLs.
Defining a whitelist
dest white {
domainlist white/domains
urllist white/urls
}
acl {
default {
pass white !adv !porn !warez all
redirect http://localhost/block.html
}
}

In this example we assumed that your whitelists are located in a directory called white within the blacklist directory you specified with dbhome.

Make sure that your white identifier is the first in the row of the pass directive. It must not have an exclamation mark in front (otherwise all entries belonging to white would be blocked, too).

4. Initializing the blacklists

Before you start up squidGuard you should initialize the blacklists, i.e. convert them from text files to db files. Using the db format will speed up checking and blocking. The initialization is performed by the following commands:

* squidGuard -C all
* chown -R squid /usr/local/squidGuard/db/*

The second command ensures that your Squid is able to access the blacklists. Replace "squid" with the user your Squid actually runs as.
Depending on the size of your blacklists and the power of your computer this may take a while. If anything is running fine you should see something like the following output in your logfile:

2006-01-29 12:16:14 [31977] squidGuard 1.2.0p2 started (1138533256.959)
2006-01-29 12:16:14 [31977] db update done
2006-01-29 12:16:14 [31977] squidGuard stopped (1138533374.571)

If you look into the directories holding the files domains and urls, you will see that additional files have been created: domains.db and urls.db. These new files must not be empty! Only the files you specified to block or whitelist in your squidGuard.conf are converted.
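A quick way to spot failed conversions is to look for empty .db files. A sketch, demonstrated on a scratch directory; for the real check, point find at your dbhome (e.g. /usr/local/squidGuard/db):

```shell
# Any .db file with zero bytes indicates a failed blacklist conversion.
db=$(mktemp -d)
touch "$db/domains.db"           # empty -> would be reported
printf 'data' > "$db/urls.db"    # non-empty -> fine
find "$db" -name '*.db' -empty
```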

Verification of your squidGuard Configuration

Now that you have installed and configured squidGuard, you should check a couple of things before going online.

1. Permissions
Ensure that the blacklist and db files belong to your squid user. If squid cannot access (or modify) them blocking will not work.

2. SquidGuard dry-run

To verify that your configuration is working run the following command (changed to reflect your configuration):
Dry-run squidGuard
echo "http://www.example.com 10.0.0.1/ - - GET" | squidGuard -c /tmp/test.cfg -d

3. If the redirector works properly you should see the redirection URL for the blocked site. For sites that are not part of your blacklists, the output should end with:
2007-03-25 16:18:05 [30042] squidGuard ready for requests (1174832285.085)
2007-03-25 16:18:05 [30042] squidGuard stopped (1174832285.089)

4. Some remarks about the different entries of the echoed line:

* The first entry is the URL you want to test.
* The second entry is the client IP address. If you configured access control based on IP addresses, make sure to test allowed and disallowed IP addresses to ensure proper working.
* In the third entry (the first - ) you can specify a username. This is only of importance if you have access control based on user names. Make sure to check different names with different access to verify your configuration.

Finalizing the installation by configuring squid
If everything is working properly add the following line to your squid.conf (assuming that your squidGuard is installed in /usr/local; make sure to change the paths to match your installation accordingly):

url_rewrite_program /usr/local/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf




Wednesday, February 3, 2010

Who Created Linux

In 1991, Linus Torvalds was studying UNIX at university, where he was using a small educational operating system called Minix (a limited version of UNIX intended for the academic environment). However, Minix had its limitations, and Linus felt he could create something better. Therefore, he developed his own replacement for Minix, known as Linux. Linux was open source right from the start.

Linux is a kernel developed by Linus. The kernel was bundled with system utilities and libraries from the GNU project to create a usable operating system. Sometimes people refer to Linux as GNU/Linux because it has system utilities and libraries from the GNU project. Linus Torvalds is credited for creating the Linux Kernel, not the entire Linux operating system[1].

Linux distribution = Linux kernel + GNU system utilities and libraries + Installation scripts + Management utilities etc.

Please note that Linux is now packaged for different uses in Linux distributions, which contain the sometimes modified kernel along with a variety of other software packages tailored to different requirements such as:

1. Server
2. Desktop
3. Workstation
4. Routers
5. Various embedded devices
6. Mobile phones
You can use Linux as a server operating system or as a standalone operating system on your PC. As a server operating system it provides different services/network resources to clients. A server operating system must be:

* Stable
* Robust
* Secure
* High performance

Linux offers all of the above characteristics plus it is free and open source. It is an excellent operating system for:

* Desktop computer
* Web server
* Software development workstation
* Network monitoring workstation
* Workgroup server
* Killer network services such as DHCP, Firewall, Router, FTP, SSH, Mail, Proxy, Proxy Cache server etc.