Another advantage of rcp is that you can use a .rhosts file, which allows you to perform authentication based on IP addresses. Although this makes your computer vulnerable to IP spoofing—an attack that happens when one computer sends out IP packets that claim to be from another—the risk of password sniffing is considerably greater. There is only one widely publicized case in which IP spoofing was used to break into a computer, while there are literally thousands of recorded instances in which password sniffers were used by crackers to break into systems. Furthermore, you can configure your network's routers to automatically reject incoming IP packets that claim to be from your internal network, greatly improving your site's resistance to IP spoofing attacks. (Of course, this doesn't help you if the web server is out on the Internet.)
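As a sketch (the hostnames and paths here are hypothetical, not taken from this text), the .rhosts file in the home directory of the account that owns the web documents would name the development host and user that are trusted to copy files:
devhost.company.com webmaster
With that entry in place, a page can be pushed from the development host without typing a password:
% rcp index.html www.company.com:/usr/local/etc/httpd/htdocs/index.html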
Using a distributed file system such as NFS to provide content to your web server is an intriguing idea. You can have the web server mount the NFS file system read-only. The NFS server should likewise export the file system read-only, and it should only export the file system that contains web server files. The advantage of this system is that it gives you an easy way to update the web server's content without actually logging in to the web server. Another advantage is that you can have multiple web servers access the same NFS file system.
The primary disadvantage of using a read-only NFS file system to provide files to your web server is that there are significant performance penalties using NFS. This may not be an issue with new generations of web servers that read the entire document directory into memory and then serve the web documents out of a cache. The speed of NFS is also not a factor for web pages that are programmatically generated: the overhead of the CGI scripts far outweighs the overhead of NFS.
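The exact export syntax varies from one NFS implementation to another; as a rough sketch (hostnames and paths hypothetical), a read-only export of just the document tree might be a single line in a BSD-style /etc/exports:
/usr/local/etc/httpd/htdocs -ro webserver.company.com
The web server would then mount the tree read-only:
# mount -o ro nfsserver.company.com:/usr/local/etc/httpd/htdocs /usr/local/etc/httpd/htdocs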
Transferring the files using physical media is very attractive. No network-capable services are required, and thus none are vulnerable. On the downside, such transfers require physical access to both the server and the development system for each installed change.
Providing for NetBIOS (SMB) traffic to NT-based web servers will let you take advantage of web tools that depend on shares. The trick is to make sure that the necessary ports (137/tcp, 138/udp, and 139/tcp) are invisible to anyone else on the Internet. You can ensure this with address filtering and appropriate checking, or by conducting traffic within a VPN tunnel. The danger with NetBIOS export is that you may expose more than you intended: printing, access to default shares, and other logon and system registry information become visible, too.
Whether or not you plan to connect to a remote NT-based web server with NetBIOS, there are a few
precautions you should take before wheeling the web server out past the moat:
• Disable guest logins altogether. Guest logins are enabled by default on NT Workstation, and may be enabled by an administrator on the server version. Likewise, toss out any extra logins that you don't absolutely need.
• Disable administrative logins from the network, if possible. If you must administer the server remotely, then create a substitute for the "Administrator" account, giving it the same permissions, but choosing an unlikely name.77
77 Because the remote machine may not be available to participate in WINS (and it certainly won't be answering broadcasts), you may need
13.6 Physical Security
Physical security is almost everything that happens before you (or an attacker) start typing commands on the keyboard. It's the alarm system that calls the police department when a late-night thief tries to break into your building. It's the key lock on the computer's power supply that makes it harder for unauthorized people to turn the machine off. And it's the surge protector that keeps a computer from being damaged by power surges.
Assuring the physical security of a web site is similar to assuring the physical security of any other computer at your location. As with other security measures, you must defend your computer against accidents and intentional attacks. You must defend your computer against both insiders and outsiders.
It is beyond the scope of this chapter to show you how to develop a comprehensive physical security plan. Nevertheless, you may find the following recommendations helpful:
• Create a physical security plan, detailing what you are protecting and what you are protecting it against. Make a complete inventory.
• Make sure that there is adequate protection against fire, smoke, explosions, humidity, and dust.
• Protect against earthquake, storms, and other natural disasters.
• Protect against electrical noise and lightning.
• Protect against vibration.
• Provide adequate ventilation.
• Keep food and drink away from mission-critical computers.
• Restrict physical access to your computers.
• Physically secure your computers so that they cannot be stolen or vandalized. Mark them with indelible inventory control markings.
• Protect your network cables against destruction and eavesdropping.
• Create a list of standard operating procedures for your site. These procedures should include telephone numbers and account numbers for all of your vendors; service contract information; and contact information for your most critical employees. This information should be printed out and made available in two separate locations. Do not have your online copy as your only copy.
For a much more comprehensive list, replete with explanations, we suggest that you consult one of the comprehensive guides to computer security listed in Appendix E.
Chapter 14 Controlling Access to Your Web Server
Organizations run web servers because they are an easy way to distribute information to people on the Internet. But sometimes you don't want to distribute your information to everybody. Why not?
• You might have information on your web server that is intended only for employees of your organization.
• You might have an electronic publication that contains general-interest articles that are free, and detailed technical articles that are only available to customers who have paid a monthly subscription fee.
• You might have confidential technical information that is only for customers who have signed nondisclosure agreements.
• You might have a web-based interface to your order-entry system: you can save money by letting your nationwide sales force access the web site using local Internet service providers, rather than having every person make long-distance calls every day, but you need a way of prohibiting unauthorized access.
All of these scenarios have different access control requirements. Fortunately, today's web servers have a variety of ways to restrict access to information.
14.1 Access Control Strategies
There are a variety of techniques that are being employed today to control access to web-based information:
• Restricting access by using URLs that are "secret" (hidden) and unpublished
• Restricting access to a particular group of computers based on those computers' Internet addresses
• Restricting access to a particular group of users based on their identity
Most web servers can use these techniques to restrict access to HTML pages, CGI scripts, and API-invoking files. These techniques can be used alone or in combination. You can also add additional access control mechanisms to your own CGI and API programs.
http://simson.vineyard.net/sonia and put the photographs inside. Then he sent the name of the URL to his father, his in-laws, and a few other networked friends.
Hidden URLs are about as secure as a key underneath your door mat. Nobody can access the data unless they know that the key is there. Likewise, with hidden URLs, anybody who knows the URL's location has full access to the information that it contains. Furthermore, this information is transitive. You might tell John about the URL, and John might tell Eileen, and Eileen might post it to a mailing list of her thousand closest friends. Somebody might put a link to the URL on another web page.
Another possible form of disclosure comes from web "spiders"—programs that sweep through all the pages on a web server, adding keywords from each page to a central database. The Lycos and AltaVista servers78 are two well-known (and useful) index servers of this kind. The disclosure comes about if there is any link to your "secret" page anywhere on a page indexed by the spider. If the automated search follows the link, it will add the URL for your page, along with identifying index entries, to its database. Thereafter, someone searching for the page might be able to find it through the index service. We've found lots of interesting and "hidden" pages by searching with keywords such as secret, confidential, proprietary, and so forth.
In general, you should avoid using secret URLs if you really care about maintaining the confidential nature of your page.
If you are a user on an Internet service provider, using a hidden URL gives you a simple way to get limited access control for your information. However, if you want true password protection, you might try creating a .htaccess file (described in a later section) and seeing what happens.
Instead of specifying computers by IP address, most web servers also allow you to restrict access on the basis of DNS domains. For example, your company may have the domain company.com, and you may configure your web server so that any computer that has a name of the form *.company.com can access your web server. Specifying client access based on DNS domain names has the advantage that you can change your IP addresses without also having to change your web server's configuration file. (Of course, you will have to change your DNS server's configuration files, but you would have to change those anyway.)
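Using the <Limit> syntax described later in this chapter, such a domain-based restriction might look roughly like this (the directory path is illustrative):
<Directory /usr/local/etc/httpd/htdocs/internal>
<Limit GET POST>
order deny,allow
deny from all
allow from .company.com
</Limit>
</Directory>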
Although the standard Domain Name System protocol is subject to spoofing, security can be dramatically increased by the use of public key encryption as specified in the DNSSEC protocol (described in Chapter 11). Implementations of DNSSEC are now available from a variety of sources, including ftp://ftp.tis.com/. To improve the overall security of the Internet's Domain Name System, DNSSEC should be deployed as rapidly as possible.
Host-based restrictions are largely transparent to users. If a user is working from a host that is authorized and she clicks on a URL that points to a restricted directory, she sees the directory. If the user is working from a host that is not authorized and she clicks on the URL that points to a restricted directory, the user sees a standard message that indicates that the information may not be viewed. A typical message is shown in Figure 14.1.
Figure 14.1 Access denied
Host-based addressing is not foolproof. IP spoofing can be used to transmit IP packets that appear to come from a different computer from the one they actually do come from. This is more of a risk for CGI scripts than for HTML files. The reason why has to do with the nature of the IP spoofing attack. When an attacker sends out packets with a forged IP "from" address, the reply packets go to the forged address, and not to the attacker. With HTML files, all an attacker can do is request that the HTML file be sent to another location. But with CGI scripts, an attacker using IP spoofing might actually manage to get a program to run with a chosen set of arguments.
Host-based addressing that is based on DNS names requires that you have a secure DNS server. Otherwise, an attacker could simply add his own computer to your DNS domain, and thereby gain access to the confidential files on your web server.
14.1.2.1 Firewalls
You can also implement host-based restrictions using a firewall to block incoming HTTP connections to particular web servers that should only be used by people inside your organization. Such a network is illustrated in Figure 14.2.
Figure 14.2 Using a firewall to implement host-based restrictions; access to the internal web
server is blocked by the firewall
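On a packet-filtering firewall or screening router, the rule amounts to dropping inbound connections to the internal server's HTTP port; as a sketch (the address is hypothetical and the syntax depends on your equipment):
access-list 102 deny tcp any host 192.0.2.20 eq 80
access-list 102 permit ip any any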
14.1.3 Identity-Based Access Controls
Restricting access to your web server based on usernames is one of the most effective ways of controlling access. Each user is given a username and a password. The username identifies the person who wishes to access the web server, and the password authenticates the person.
When a user attempts to reference an access-controlled part of a web site, the web server requires the web browser to provide a username and password. The web browser recognizes this request and displays a password prompt, such as the one shown in Figure 14.3.
Figure 14.3 Prompt for user's password
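Behind the scenes this is the HTTP Basic authentication exchange. Roughly (the path and realm name here are illustrative), the server challenges the first request and the browser retries with credentials:
GET /internal/report.html HTTP/1.0

HTTP/1.0 401 Unauthorized
WWW-Authenticate: Basic realm="Web Solutions"

GET /internal/report.html HTTP/1.0
Authorization: Basic <base64 encoding of username:password>
Because the credentials are merely base64-encoded rather than encrypted, anyone who can sniff the connection can read them unless the session is protected with SSL.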
Because passwords are easily shared or forgotten, many organizations are looking for alternatives to them. One technique is to use a public key certificate. Another approach is to give authorized users a physical token, such as a smart card, which they must have to gain access. Most of these systems merely require that the users enter their normal username and a different form of password. For example, users of the Security Dynamics SecurID card enter a password that is displayed on their smart cards; the password changes every minute.
One of the advantages of user-based access controls over host-based controls is that authorized users can access your web server from anywhere on the Internet. A sales force that is based around the country or around the world can use Internet service providers to access the corporate web site, rather than placing long-distance calls to the home office. Or you might have a sales person click into your company's web site from a high-speed network connection while visiting a client.
User-based access can also be implemented through the use of "cookies" (see Chapter 5).
14.2 Implementing Access Controls with <Limit> Blocks
One of the most common ways to restrict access to web-based information is to protect it using usernames and passwords. Although different servers support many different ways of password-protecting web information, one of the most common techniques is with the <Limit> server configuration directive.
The <Limit> directive made its debut with the NCSA web server. Using <Limit>, you can control which files on your web server can be accessed and by whom. The NCSA server gives you two locations where you can place your access control information:
• You can place the restrictions for any given directory (and all of its subdirectories) in a special file located in that directory. Normally, the name of this file is .htaccess, although you can change the name in the server's configuration file.
• Alternatively, you can place all of the access control restrictions in a single configuration file. In the NCSA web server, this configuration file is called access.conf. The Apache server allows you to place access control information in the server's single httpd.conf file.
Whether you choose to use many access files or a single file is up to you. It is certainly more convenient to have a file in each directory. It also makes it easier to move directories within your web server, as you do not need to update the master access control file. Furthermore, you do not need to restart your server whenever you make a change to the access control list—the server will notice that there is a new .htaccess file and behave appropriately.
On the other hand, having an access file in each directory means that there are more files that you need to check to see whether or not the directories are protected. There is also a bug in some versions of the NCSA and Apache web servers that allows the access file to be fetched directly; although this doesn't ruin your system's security, it gives an attacker information that might be used to find other holes.
Here is a simple file that restricts access to registered users whose usernames appear in the file
/ws/adm/users:
AuthType Basic
AuthName Web Solutions
AuthUserFile /ws/adm/users
<Limit GET POST>
require valid-user
</Limit>
As you can see, the file consists of two parts. At the beginning of the file is a set of commands that allow you to specify the authorization parameters for the given directory. The second half of the file contains a <Limit> ... </Limit> block containing security parameters that are enforced for the HTTP GET and POST commands.
The .htaccess file can be placed directly in the directory on the web server that you wish to protect. For example, if your web server is named www.ex.com and has a document root of /usr/local/etc/httpd/htdocs, placing this file at /usr/local/etc/httpd/htdocs/internal/.htaccess would restrict all information prefixed by the URL http://www.ex.com/internal/ so that it could only be accessed by authorized users.
Alternatively, the access restrictions described in the .htaccess file can be placed in the configuration file of some kinds of web servers. In this case, the commands would be enclosed within a pair of <Directory directoryname> and </Directory> tags. The directoryname parameter should be the full directory name and not the directory within the web server's document root. For example:
<Directory /usr/local/etc/httpd/htdocs/internal>
AuthType Basic
AuthName Web Solutions
AuthUserFile /ws/adm/users
<Limit GET POST>
require valid-user
</Limit>
</Directory>
The format of the user account files (/ws/adm/users in the above example) is similar to the UNIX password file, but only contains usernames and encrypted passwords. It is described in detail below.
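For example, a file with two users might look something like this (the usernames and crypt-style password strings are purely illustrative):
sascha:xZ9MkpQr1TbNc
wendy:aB3nWqLu8yTfE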
14.2.1 Commands Before the <Limit> </Limit> Directive
The following commands can be placed before the <Limit> ... </Limit> block of most web servers:
AllowOverride what
Specifies which directives can be overridden with directory-based access files. This command is only used for access information placed in system-wide configuration files such as conf/access.conf or conf/httpd.conf.
AuthName name
Sets the name of the Authorization Realm for the directory. The name of the realm is displayed by the web browser when it asks for a username and password. It is also used by the web browser to cache usernames and passwords.
AuthRealm realm
Sets the name of the Authorization Realm for the directory; this command is used by older web servers instead of AuthName.
AuthType type
Specifies the type of authentication used by the server. Most web servers only support "basic", which is standard usernames and passwords.
Limit methods to limit
Begins a section that lists the limitations on the directory. For more information on the Limit section, see the next section.
Options opt1 opt2 opt3
The Options command turns individual options on or off within a particular directory. The available options are listed in the following table.
Option                 Meaning
ExecCGI                Allows CGI scripts to be executed within this directory.
FollowSymLinks         Allows the web server to follow symbolic links within this directory.
Includes               Allows server-side include files.
Indexes                Allows automatic indexing of the directory if an index file (such as index.html) is not present.
IncludesNoExec         Allows server-side includes, but disables CGI scripts in the includes.
SymLinksIfOwnerMatch   Allows symbolic links to be followed only if the target file or directory is owned by the same user as the link itself.
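For example, a directory that should allow automatic indexes and non-executing server-side includes, but never run CGI scripts, might use a line such as this (a sketch, not from the original examples):
Options Indexes IncludesNoExec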
14.2.2 Commands Within the <Limit> </Limit> Block
The <Limit> directive is the heart of the NCSA access control system. It is used to specify the actual hosts and/or users that are to be allowed or denied access to the directory.
The format of the <Limit> directive is straightforward:
<Limit HTTP commands>
directives
</Limit>
Normally, you will want to limit both GET and POST commands.
The following directives may be present within a <Limit> block:
order options
Specifies the order in which allow and deny statements are evaluated. Specify "order deny,allow" to cause the deny entries to be evaluated first; servers that match both the "deny" and "allow" lists are allowed.
Specify "allow,deny" to check the allow entries first; servers that match both are denied.
Specify "mutual-failure" to cause hosts on the allow list to be allowed, those on the deny list to be denied, and all others to be denied.
allow from host1 host2 ...
Specifies hosts that are allowed access.
deny from host1 host2 ...
Specifies hosts that are denied access.
require user user1 user2 user3
Only the specified users user1, user2, and user3 are granted access.
require group group1 group2 ...
Any user who is in one of the specified groups may be granted access.
require valid-user
Any user who is listed in the AuthUserFile will be granted access.
Hosts in the allow and deny statements may be any of the following:
• A domain name, such as .vineyard.net (note the leading "." character)
• A fully qualified host name, such as nc.vineyard.net
• An IP address, such as 204.17.195.100
• A partial IP address, such as 204.17.195, which matches any host on that subnet
• The keyword "all", which matches all hosts
14.2.3 <Limit> Examples
If you wish to restrict access to a directory's files to everyone on the subnet 204.17.195., you could add the
following lines to your access.conf file:
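One possible form, using the directives described above (the enclosing <Directory> path is illustrative):
<Directory /usr/local/etc/httpd/htdocs/internal>
<Limit GET POST>
order deny,allow
deny from all
allow from 204.17.195
</Limit>
</Directory>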
If you then wanted to allow only the authenticated users wendy and sascha to access the files, and only when
they are on subnet 204.17.195, you could add these lines:
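A sketch combining the host restriction with user authentication (the AuthName and user file follow the earlier examples):
<Directory /usr/local/etc/httpd/htdocs/internal>
AuthType Basic
AuthName Web Solutions
AuthUserFile /ws/adm/users
<Limit GET POST>
order deny,allow
deny from all
allow from 204.17.195
require user wendy sascha
</Limit>
</Directory>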
If you wish to allow the users wendy and sascha to access the files from anywhere on the Internet, provided that they type the correct username and password, try this:
AuthType Basic
AuthName Web Solutions
AuthUserFile /ws/adm/users
<Limit GET POST>
require user sascha wendy
</Limit>
If you wish to allow any registered user to access files on your system in a given directory, place this
.htaccess file in that directory:
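Such a file is simply the require valid-user form shown at the start of this section:
AuthType Basic
AuthName Web Solutions
AuthUserFile /ws/adm/users
<Limit GET POST>
require valid-user
</Limit>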
14.2.4 Manually Setting Up Web Users and Passwords
To use authenticated users, you will need to create a password file. You can do this with the htpasswd program, using the "-c" option to create the file. For example:
# ./htpasswd -c /usr/local/etc/httpd/pw/auth sascha
Adding password for sascha
New password:deus333
Re-type new password:deus333
#
You can add additional users and passwords with the htpasswd program. When you add additional users, do not use the "-c" option, or you will erase all of the users who are currently in the file:
# ./htpasswd /usr/local/etc/httpd/pw/auth wendy
Adding password for wendy
Using Digital Certificates for User Management
Instead of using a username and password to authenticate a user, you can use a digital certificate that is stored on the user's hard disk.
To make use of digital certificates, a web site user must first create a public key and a secret key. The public key is then signed by a certification authority, which returns to the user a certificate that consists of the user's public key, a distinguished name (DN), and the certification authority's signature. When the user attempts to contact your web site, your web server gives the user's web browser a random number challenge. The user's web browser then signs this random number with the user's secret key. The browser then sends to the web server the signed random number, the user's public key, and the user's certificate.
Unfortunately, whereas the .htaccess file is somewhat standardized between web servers, the use of digital certificates is not. For example, Netscape's NSAPI allows programmers to grab a web browser's SSL certificate and make access control decisions based upon its contents, while the Apache SSL server uses a completely different system. Therefore, if you wish to use digital certificates to authenticate your web site's users, you must read your web server's documentation.
For further information on digital certificates, see the chapters in Part III.
14.3 A Simple User Management System
In this example, we will present a simple web-based user account management system. This system consists of the following parts:
• A user authorization file that lists the authorized users. In this example, the file is kept in /etc/users.simple.
• A directory that contains documents that the authorized users are allowed to access. In this example, the directory is /usr/local/etc/httpd/htdocs/simple. The matching URL for the directory is http://www.ex.com/simple.
• A directory that contains the CGI scripts that are used to manage the user accounts. In this example, the directory is /usr/local/etc/httpd/cgi-bin/simple. The matching URL for this directory is http://www.ex.com/cgi-bin/simple.
• A script that adds new users to the system. It can only be run by the user administrator.
• A script that allows users to change their passwords.
One problem with simple password-based authentication on many web servers is that the password file must be readable by the web server's effective UID. Most site administrators have solved this problem by making the password file world-readable, which obviously leads to problems if anyone other than the system administrator has or can get access to the computer. A better approach is to set the file permissions on the password file so that it can only be read by the web server's user or group, as we do here.
The next section contains step-by-step instructions for setting up this system on a computer running the UNIX operating system with the NCSA or Apache web server. Small changes are necessary for having these scripts run on Windows NT.
This simple user management system is presented for demonstration purposes only. If you need a real system for a production web server, please refer to Lincoln Stein's passwd system, located at ftp://www.genome.wi.mit.edu/ftp/pub/software/www/passwd.
14.3.1 The newuser Script
1 Create a UNIX user who will be the "owner" of the file /etc/users.simple. In our example, the user will be simple. It has an entry in /etc/passwd that looks like this:
simple:*:13:13:Simple User Account Management:/:nologin
The user does not need a password or a login shell, because the account will never be logged into. It exists only so that it can have ownership of the file /etc/users.simple and so that two Perl scripts can be SUID simple.
2 Create the user authorization file /etc/users.simple by manually using the htpasswd program with the "-c" option. Then set the owner of the password file to the simple user, the group to http, and the file mode to 640, so that the web server can read the contents of the file but other users cannot. For example:
# htpasswd -c /etc/users.simple admin
Adding password for admin
New password:
Re-type new password:
# chown simple /etc/users.simple
# chgrp http /etc/users.simple
# chmod 640 /etc/users.simple
# ls -l /etc/users.simple
-rw-r-----  1 simple  http  20 Sep 27 01:54 /etc/users.simple
# cat /etc/users.simple
admin:w6UczI6b6C2Bg
#
3 Create the directory for the CGI scripts:
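For example (a sketch, using the directory names listed earlier and creating the matching document directory at the same time):
# mkdir /usr/local/etc/httpd/cgi-bin/simple
# mkdir /usr/local/etc/httpd/htdocs/simple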
4 Create a .htaccess file and place it in both the CGI scripts directory and in the documents directory.
Here is what the file should contain:
AuthType Basic
AuthName Simple Demonstration
AuthUserFile /etc/users.simple
<Limit GET POST>
require valid-user
</Limit>
5 Place the CGI script newuser (see Example 14.1) in the directory and make sure that it is SUID simple:
# chown simple newuser
# chmod 4755 newuser
# ls -l newuser
-rwsr-xr-x 1 simple www 1582 Sep 27 02:23 newuser*
6 Now try to run the CGI script by running the URL http://server/simple/newuser. You should first be prompted to type a password (Figure 14.4).
Figure 14.4 Network password prompt (Internet Explorer)
Type the same password that you provided above. Now you will see the user creation form (Figure 14.5).
Figure 14.5 Add new user form (Internet Explorer)
7 Enter the username test with a password of your choosing. Click "create."
8 Now check in the file /etc/users.simple. You'll see that a new user has been created:
admin:w6UczI6b6C2Bg
test:PbbKQn0Yh6jlk
$userfile = "/etc/users.simple";
$adminuser= "admin";
$htpasswd = "/usr/local/etc/httpd/support/htpasswd";
require "cgi-lib.pl";
{ $ENV{'PATH'} = "/usr/bin:/bin";
$ENV{'IFS'} = ' ';
$| = 1; # turn off buffering print &PrintHeader,"<title>Add new users</title>\n";
$tuser = $ENV{'REMOTE_USER'};
$tuser =~ /([\w]+)/i;
$user = $1;
if($user ne $adminuser){
print "<pre>\n";
open(PASS,"|$htpasswd $userfile $newuser") print PASS "$newpass1\n$newpass1\n";
close(PASS);
exit(0);
} # Otherwise, display a form
$myurl = &MyURL;
print <<XX;
<hr>
<form method="post" action="$myurl">
Create a new user.<p>
Enter username:
<input type="text" size=8 name="newuser" ><br>
Enter password:
<input type="password" size=8 name="newpass1"><br>
Enter password again:
<input type="password" size=8 name="newpass2"><br>
<input type=submit value="create"> or <input type=reset value="clear">
</form>
XX print "</pre>\n";
exit(0);
}
Example 14.2 Script for Letting Users Change Their Own Passwords
#!/usr/local/bin/perl -T
#
# This script lets users change their passwords
#
$userfile = "/etc/users.simple";
$htpasswd = "/usr/local/etc/httpd/support/htpasswd";
require "cgi-lib.pl";
{
    $ENV{'PATH'} = "/usr/bin:/bin";
    $ENV{'IFS'} = " ";
    $| = 1;                              # turn off buffering
    print &PrintHeader,"<title>Change password</title>\n";
    # The web server has already authenticated the user; extract and
    # untaint the username that it passed to us.
    $tuser = $ENV{'REMOTE_USER'};
    $tuser =~ /([\w]+)/i;
    $user = $1;
    # Read the form variables; &ReadParse from cgi-lib.pl fills %in.
    &ReadParse;
    $newpass1 = $in{'newpass1'};
    $newpass2 = $in{'newpass2'};
    # If two matching passwords were supplied, change the password.
    if($newpass1 && $newpass1 eq $newpass2){
        print "<h1>Changing password for $user </h1>\n";
        print "<pre>\n";
        open(PASS,"|$htpasswd $userfile $user") || die "cannot run $htpasswd";
        print PASS "$newpass1\n$newpass1\n";
        close(PASS);
        print "</pre>\n";
        exit(0);
    }
    # Otherwise, display a form
    $myurl = &MyURL;
    print <<XX;
<hr>
<form method="post" action="$myurl">
Change password for <b>$user</b>.<p>
Enter password:
<input type="password" size=8 name="newpass1"><br>
Enter password again:
<input type="password" size=8 name="newpass2"><br>
<input type=submit value="create"> or <input type=reset value="clear">
</form>
XX
    print "</pre> \n";
    exit(0);
}
Chapter 15 Secure CGI/API Programming
Web servers are fine programs, but innovative applications delivered over the World Wide Web require that servers be extended with custom-built programs. Unfortunately, these programs can have flaws that allow attackers to compromise your system.
The Common Gateway Interface (CGI) was the first and remains the most popular means of extending web servers. CGI programs run as subtasks of the web server; arguments are supplied in environment variables and to the program's standard input; results are returned on the program's standard output. CGI programs have been written that perform database queries and display the results; that allow people to perform complex financial calculations; and that allow web users to "chat" with others on the Internet. Indeed, practically every innovative use of the World Wide Web, from WWW search engines to web pages that let you track the status of overnight packages, was originally written using the CGI interface.
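As a minimal sketch of the interface (not an example drawn from any particular server), a CGI program reads its environment variables, reads any POST data from standard input, and writes an HTTP header followed by its output to standard output:
#!/usr/local/bin/perl
# A minimal CGI program: report a few request details back to the browser.
print "Content-type: text/plain\r\n\r\n";
print "Hello from the CGI interface.\n";
print "You requested: $ENV{'SCRIPT_NAME'}\n";
if ($ENV{'REQUEST_METHOD'} eq "POST") {
    read(STDIN, $postdata, $ENV{'CONTENT_LENGTH'});   # POST body arrives on stdin
    print "You posted ", length($postdata), " bytes of form data.\n";
}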
A new way to extend web servers is by using proprietary Application Programmer Interfaces (APIs). APIs are a faster way to interface custom programs to web servers because they do not require that a new process be started for each web interaction. Instead, the web server process itself runs application code within its own address space that is invoked through a documented interface.
This chapter focuses on programming techniques that you can use to make CGI and API programs more secure.
15.1 The Danger of Extensibility
Largely as a result of their power, the CGI and API interfaces can completely compromise the security of your web server and the host on which it is running. That's because any program can be run through these interfaces. This can include programs that have security problems, programs that give outsiders access to your computer, and even programs that change or erase critical files from your system.
Two techniques may be used to limit the damage that can be done by CGI and API programs:
• The programs themselves should be designed and inspected to ensure that they can perform only the desired functions.
• The programs should be run in a restricted environment. If these programs can be subverted by an attacker to do something unexpected, the damage that they can do will be limited.
On operating systems that allow for multiple users running at multiple authorization levels, web servers are normally run under a restricted account, usually the nobody or httpd user. Programs that are spawned from the web server, either through CGI or API interfaces, are then run as the same restricted user.
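With the NCSA and Apache servers, this restricted account is selected with the User and Group directives in the server configuration file; for example:
User nobody
Group nogroup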
Unfortunately, other operating systems do not have this same notion of restricted users. On Windows 3.1, Windows 95, and the Macintosh operating systems, there is no easy way to have the operating system restrict the reach of a CGI program.
15.1.1 Programs That Should Not Be CGIs
Interpreters, shells, scripting engines, and other extensible programs should never appear in a cgi-bin directory, nor should they be located elsewhere on a computer where they might be invoked by a request to the web server process. Programs that are installed in this way allow attackers to run any program they wish on your computer.
For example, on Windows-based systems the Perl executable PERL.EXE should never appear in the cgi-bin directory. Unfortunately, many Windows-based web servers have been configured this way because it makes it easier to set up Perl scripts on these systems.
It is easy to probe a computer to see if it has been improperly configured. To make matters worse, web search engines can be used to find vulnerable machines automatically.
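For instance, an attacker who suspects that PERL.EXE is sitting in cgi-bin can simply request it with arguments of his choosing (the hostname here is hypothetical); if a version banner comes back, the server will just as happily run any Perl code the attacker supplies next:
http://www.example.com/cgi-bin/perl.exe?-v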