Friday, October 1, 2010

Advanced Audio Coding (AAC) aacplus

In the last year, the mobile world has developed rapidly. Applications such as audio/video streaming have become more attractive and desirable. Moreover, 3G networks are about to enter the market. Cellular networks are not as fast, as good, or as stable as fixed networks. In addition, people do not expect the quality of streaming audio to be as good as the original, for instance CD-quality audio. So what is being sought is a way to deliver acceptable audio quality at a (very) efficient bit rate or bandwidth.

Spectral Band Replication: the decoder tries to reconstruct the missing bandwidth based on information stored by the encoder.
AAC: super low bit rate, but still acceptable. AAC is standardized in MPEG-2 and MPEG-4. The AAC standard has been around for a long time, since 1997. However, due to various limitations, this standard did not spread as fast as its competitors.
AAC has also expanded into three popular variants: AAC, AACPlus v1, and AACPlus v2. Sometimes different names are used, such as LC-AAC, eAAC, eAAC+, HE-AAC, and others. However, there are really only three main variants of the AAC format.



Parametric Stereo: saves bit rate by transmitting the audio signal in monaural form.
Spectral Band Replication (SBR): duplication of the audio bandwidth. Most existing audio formats tend to reduce the audio bandwidth when the bit rate becomes too small. For example, at a bit rate of 128 Kbps the MP3 format can provide bandwidth up to 20 kHz. If the bit rate is halved (64 Kbps), the bandwidth it can produce drops to only about 10 kHz; audio frequencies above 10 kHz are simply cut off, in other words not stored. AAC with the SBR mechanism enables a wider bandwidth at a small bit rate. According to the specification, SBR can maintain a bandwidth of more than 15 kHz at low bit rates.
How can this be done? The idea is simple. The encoder stores only part of the data (approximately half of the bandwidth) plus whatever extra information is needed to reconstruct the rest. The missing bandwidth is recreated by the receiver (decoder): a special algorithm rebuilds the full frequency range based on analysis of the data that was transmitted. This method is certainly more efficient than having the encoder store the full bandwidth.
Three variants: the addition of the Spectral Band Replication (SBR) and Parametric Stereo (PS) algorithms determines which type of AAC format is generated.
SBR is very effective at low to medium bit rates. Besides AAC (AACPlus v1), the SBR technique is also used in the MP3 format, where it is known as MP3PRO.
Parametric stereo: more frugal with a monaural signal. Parametric stereo (PS) is the standard for AACPlus v2. PS is optimized for very low bit rates (16-40 Kbps only). According to the specification, with PS, good audio quality can be enjoyed at a bit rate of only 24 Kbps.
Courtesy : http://chip.co.id/articles/mag/tag/tes-teknologi-advanced-audio-coding-aac/

Sunday, September 19, 2010

Create a passive network tap for your home network

http://www.enigmacurry.com/category/diy/
http://thnetos.wordpress.com/2008/02/22/create-a-passive-network-tap-for-your-home-network/
http://hackaday.com/2008/09/14/passive-networking-tap/

Tuesday, September 14, 2010

Determining signal strength

The wireless standard 802.11b operates in the 2.4-2.485GHz (gigahertz) radio frequency (RF) band; RF is measured in decibels (dB). Wireless cards often come with client software that displays signal strength in dB or dBm (a variant of dB that provides an exact correlation to the power of the radio signal in watts).

Note: The minimum power sensitivity on most 802.11b clients is -96dBm (very low). If your software displays "Signal/Noise" or "SNR" in dBm, you can convert this to dB by subtracting the minimum power sensitivity, -96dBm, from the number displayed as "SNR" or "Signal/Noise". For example, if the Signal/Noise is -47dBm, you would convert this to dB as follows:
-47dBm - (-96dBm) = 49dB

If your wireless card software is indicating that the signal-to-noise ratio (SNR) is greater than 10dB, you are getting the maximum available bandwidth, or 11Mbps (megabits per second). An SNR higher than 10dB won't increase the amount of bandwidth beyond this maximum (in the example above, with an SNR of 49dB, the bandwidth is still 11Mbps, the maximum available rate). When the SNR drops below 10dB, however, the maximum data rate drops:
SNR     Maximum data rate
8dB     5.5Mbps
6dB     2Mbps
4dB     1Mbps
Even though the maximum data rate goes down, the connection will still be maintained as long as you have an SNR of 4dB or greater.
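The conversion and the rate table above can be combined into a small helper. A minimal sketch in POSIX sh, assuming the -96dBm minimum power sensitivity quoted above:

```shell
#!/bin/sh
# snr_db SIGNAL_DBM: convert a dBm "Signal/Noise" reading to an SNR in dB
# by subtracting the -96dBm minimum power sensitivity quoted above.
snr_db() {
    echo $(( $1 - -96 ))
}

# max_rate SNR_DB: look up the maximum 802.11b data rate from the table above.
max_rate() {
    if   [ "$1" -ge 10 ]; then echo "11Mbps"
    elif [ "$1" -ge 8  ]; then echo "5.5Mbps"
    elif [ "$1" -ge 6  ]; then echo "2Mbps"
    elif [ "$1" -ge 4  ]; then echo "1Mbps"
    else echo "no connection"
    fi
}

snr=$(snr_db -47)    # the example above: -47dBm - (-96dBm) = 49dB
echo "SNR = ${snr}dB, maximum data rate = $(max_rate "$snr")"
```

The thresholds mirror the table exactly: anything at 10dB or above gets the full 11Mbps, and below 4dB the connection is lost.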

Possible problems

Other factors can affect the quality of your wireless connection. The list below is incomplete, but it may offer some explanation for poor performance that occurs even when the signal strength is good:
  • Multipath: In general, an RF signal grows wider as it is transmitted farther. As it spreads, the RF signal will meet objects in its path that will interfere with the signal in various ways (e.g., by reflecting it). When the signal is reflected by an object (e.g., a metal object) while moving toward a receiver, multiple wave fronts are created, one for each reflection point. This can result in a large number of waves, depending on how many reflecting surfaces the original signal encounters. Many of these reflected waves are still moving toward the receiver, creating a condition known as multipath. As the number of reflective surfaces increases, the signal deteriorates.
  • Near/far: Near/far is a problem that can happen when multiple wireless users have devices that are very near an access point, much closer than a user who's on the radio signal boundary. The farthest device cannot be heard over the traffic from the devices closer to the access point. The only solution is to move the more distant device closer to the access point.
  • Hidden node: When you turn on an 802.11b-capable laptop or handheld device, it immediately scans the airwaves for access points. It quickly evaluates the signal strength of the available access points, and the number of users per access point. Based on this, the device will choose the access point with the strongest RF signal and the fewest users. In hidden node situations, at least one client (node) is unable to "hear" one or more of the other clients connected to the same access point. Usually this is because of some physical obstruction between it and other users. As a result, there can be problems in the way the clients share the available bandwidth, causing data "collisions", or bit errors. When a bit error occurs, the clients need to re-transmit the data. These collisions can result in significantly degraded data transmission rates in the wireless network.
  • Dynamic Rate Shifting (DRS): The terms Adaptive (or Automatic) Rate Selection (ARS) and Dynamic Rate Shifting (DRS) denote how bandwidth is adjusted dynamically by wireless clients. This adjustment in speed occurs as distance increases between the client and the access point (or possibly if interference increases). As the distance grows greater, the signal strength will decrease to a point where the current data rate cannot be maintained. As the signal strength drops, the client will drop its data rate to the next lower specified data rate, for example, from 11Mbps to 5.5Mbps, or from 2Mbps to 1Mbps.

Throughput

Throughput is a measure of the speed of your wireless connection. Defined as the amount of data transmitted in a given time period, throughput is based on many factors. Three important factors are described below:
  • Interference from another radio frequency source: IU uses the 802.11b wireless standard, which operates in a frequency range that is unlicensed, meaning the Federal Communications Commission allows anyone to use it. Unfortunately for 802.11b users, microwave ovens and 2.4GHz cordless phones operate in the same frequency band. The radio signal emanating from such devices can severely degrade or completely destroy an 802.11b signal.
  • Security: 802.11b networks, if not secure, are susceptible to hacking and data theft by unauthorized users. Any sensible network owner will use some form of security. This adds more overhead to the wireless data packets; this overhead (bits of data) uses valuable bandwidth, but is absolutely necessary. Although 802.11b allows for 11Mbps maximum throughput, a user will typically get only about 5.5-6Mbps of data. IU uses a virtual private network (VPN) for encryption and authentication. Another method is wired equivalent privacy (WEP), but it is much less secure than VPN.
  • Distance: Greater distances between the transmitter (access point) and receiver (client) will cause the throughput to decrease because of an increase in the number of errors (bit error rate). 802.11b recognizes these bit errors and requires that the bits be retransmitted. 802.11b is configured to make discrete jumps to specified data rates (11, 5.5, 2, and 1Mbps). If 11Mbps cannot be maintained because of bit errors and degrading signal strength, then the device will drop to 5.5Mbps, then to 2Mbps, and then to 1Mbps, with eventual loss of the connection. Remember, too, that because of overhead, you'll get only about half of the available bandwidth.

What is Signal to Noise Ratio ?

Signal to noise ratio is a specification that measures the level of the audio signal compared to the level of noise present in the signal. Signal to noise ratio specifications are common in many components, including amplifiers, phonograph players, CD/DVD players, tape decks and others. Noise is heard as hiss, as in a tape deck, or simply as the general electronic background noise found in all components.
How is it Expressed?

As the name suggests, signal to noise ratio is a comparison or ratio of the amount of signal to the amount of noise and is expressed in decibels. Signal to noise ratio is abbreviated S/N Ratio and higher numbers mean a better specification. A component with a signal to noise ratio of 100dB means that the level of the audio signal is 100dB higher than the level of the noise and is a better specification than a component with a S/N ratio of 90dB.
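The decibel scale is logarithmic, so each 10dB step is another factor of ten in power. A hedged sketch of the mapping from a power ratio to dB (using awk for the floating-point math, since plain sh has none):

```shell
#!/bin/sh
# sn_ratio_db SIGNAL NOISE: S/N ratio in dB for two power levels.
# For power quantities the ratio is 10*log10(signal/noise), so a 100dB
# S/N ratio means the signal power is 10^10 times the noise power.
sn_ratio_db() {
    awk -v s="$1" -v n="$2" 'BEGIN { printf "%.0f\n", 10 * log(s / n) / log(10) }'
}

sn_ratio_db 100 1    # a signal 100x the noise power -> 20 (dB)
```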
Why is it Important?

Unfortunately, all components add some level of noise to an audio signal, but it should be kept as low as possible. Analog components, such as amplifiers, tape decks and phonograph players, generally have a lower signal to noise ratio than digital components, such as CD and DVD players, but the goal is still to keep noise as low as possible. As an example, the signal to noise ratio for a tape deck or phonograph player is typically about 60dB-70dB, while it is common for a CD player to have a S/N Ratio of 100dB or higher. S/N Ratio is important, but should not be used as the only specification to measure the sound quality of a component.

Courtesy : http://stereos.about.com/od/faqs/f/SNratio.htm

Saturday, September 11, 2010

Traffic shaping with tc

Sometimes it is necessary to limit traffic bandwidth from and to a container. You can do it using the ordinary tc tool.
Contents

* 1 Packet routes
* 2 Limiting outgoing bandwidth
* 3 Limiting incoming bandwidth
* 4 Limiting CT to HN talks
* 5 Limiting packets per second rate from container
* 6 An alternate approch using HTB
* 7 External links

Packet routes


First of all, a few words about how packets travel from and to a container. Suppose we have Hardware Node (HN) with a container (CT) on it, and this container talks to some Remote Host (RH). HN has one "real" network interface eth0 and, thanks to OpenVZ, there is also "virtual" network interface venet0. Inside the container we have interface venet0:0.

venet0:0 venet0 eth0
CT >------------->-------------> HN >--------->--------> RH

venet0:0 venet0 eth0
CT <-------------<-------------< HN <---------<--------< RH

Limiting outgoing bandwidth

We can limit container outgoing bandwidth by setting the tc filter on eth0.

DEV=eth0
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip src X.X.X.X flowid 1:1
tc qdisc add dev $DEV parent 1:1 sfq perturb 10

X.X.X.X is the IP address of the container.
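After installing the filter, it is worth verifying that traffic actually matches it. The -s (statistics) option of tc shows per-class byte counters and drops; these commands only read state, though the exact output format varies between iproute2 versions:

```shell
# Show the installed qdisc, class and filter with counters; the class
# counters should grow as the container sends, and drops/overlimits
# appear once the 256kbit rate is exceeded.
DEV=eth0
tc -s qdisc show dev $DEV
tc -s class show dev $DEV
tc -s filter show dev $DEV
```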

Limiting incoming bandwidth

This can be done by setting the tc filter on venet0:

DEV=venet0
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip dst X.X.X.X flowid 1:1
tc qdisc add dev $DEV parent 1:1 sfq perturb 10

Note that X.X.X.X is the IP address of the container.

Limiting CT to HN talks

As you can see, the two filters above do not limit container-to-HN traffic; that is, a container can still send as much traffic to the Hardware Node as it wishes. To impose such a limit from the HN, it is necessary to use tc police on venet0:

DEV=venet0
tc filter add dev $DEV parent 1: protocol ip prio 20 u32 match u32 1 0x0000 police rate 2kbit buffer 10k drop flowid :1

Limiting packets per second rate from container


To prevent DoS attacks from the container, you can limit the packets-per-second rate using iptables.

DEV=eth0
iptables -I FORWARD 1 -o $DEV -s X.X.X.X -m limit --limit 200/sec -j ACCEPT
iptables -I FORWARD 2 -o $DEV -s X.X.X.X -j DROP

Here X.X.X.X is the IP address of the container.
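To check that the limit works, look at the rule counters; a quick sketch (output layout depends on the iptables version):

```shell
# List the FORWARD chain with packet/byte counters; the counter on the
# DROP rule shows how many packets exceeded the 200/sec limit.
iptables -L FORWARD -v -n --line-numbers
```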
An alternate approach using HTB

For details refer to the HTB Home Page


#!/bin/sh
#
# Incoming traffic control
#
CT_IP1=$1
CT_IP2=$2
DEV=venet0
#
tc qdisc del dev $DEV root
#
tc qdisc add dev $DEV root handle 1: htb default 10
#
tc class add dev $DEV parent 1: classid 1:1 htb rate 100mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 10mbit ceil 10mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 20mbit ceil 20mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 30mbit ceil 30mbit burst 15k
#
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
#
if [ ! -z "$CT_IP1" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip dst "$CT_IP1" flowid 1:20
fi
if [ ! -z "$CT_IP2" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip dst "$CT_IP2" flowid 1:30
fi
#
echo;echo "tc configuration for $DEV:"
tc qdisc show dev $DEV
tc class show dev $DEV
tc filter show dev $DEV
#
# Outgoing traffic control
#
DEV=eth0
#
tc qdisc del dev $DEV root
#
tc qdisc add dev $DEV root handle 1: htb default 10
#
tc class add dev $DEV parent 1: classid 1:1 htb rate 100mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 10mbit ceil 10mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 20mbit ceil 20mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 30mbit ceil 30mbit burst 15k
#
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
#
if [ ! -z "$CT_IP1" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip src "$CT_IP1" flowid 1:20
fi
if [ ! -z "$CT_IP2" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip src "$CT_IP2" flowid 1:30
fi
#
echo;echo "tc configuration for $DEV:"
tc qdisc show dev $DEV
tc class show dev $DEV
tc filter show dev $DEV

Courtesy : http://wiki.openvz.org/Traffic_shaping_with_tc

Secure OpenSSH Config Reference

OpenSSH is a set of utilities to allow you to connect to a remote machine through an encrypted tunnel. You can use it as a terminal connection or to tunnel any data through a VPN interface.

OpenSSH is a FREE version of the SSH suite of network connectivity tools that increasing numbers of people on the Internet are coming to rely on. Many users of telnet, rlogin, ftp, and other such programs might not realize that their password is transmitted across the Internet unencrypted, but it is. OpenSSH encrypts all traffic (including passwords) to effectively eliminate eavesdropping, connection hijacking, and other network-level attacks. (OpenSSH FAQ)

Most operating systems come with one version or another of OpenSSH. You may want to make sure you have the latest version on your machine. Check the OpenSSH site for the latest source code. You can also check whether the package maintainers of your OS release provide a premade package for you to install. The directives and options listed in the following config files apply to the latest official OpenSSH release.

SECURITY NOTE: Notice that we have specified the "Ciphers" for the client and server config files. It is important to use only the Advanced Encryption Standard (AES) with stateful-decryption counter (CTR) mode. AES in CBC mode is vulnerable to the Plaintext Recovery Attack Against SSH. AES is the strongest encryption available in openssl, and all others are too weak to trust. We are also specifying the "MACs", or Hash-based Message Authentication Codes, to use. Again, we want the strongest security model available.

Client side ssh config options (/etc/ssh/ssh_config)

This config covers the client-side options. You can specify directives here and the client will negotiate them with the server; they will take effect only if the server allows them.

#######################################################
### Calomel.org CLIENT /etc/ssh/ssh_config
#######################################################
Host *
AddressFamily inet
CheckHostIP yes
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
Compression no
ConnectionAttempts 1
ConnectTimeout 10
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
EscapeChar ~
ForwardAgent no
ForwardX11 no
ForwardX11Trusted no
HashKnownHosts yes
IdentityFile ~/.ssh/identity
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_dsa
IdentitiesOnly yes
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
PermitLocalCommand no
Port 22
Protocol 2
RekeyLimit 1G
ServerAliveInterval 15
ServerAliveCountMax 3
StrictHostKeyChecking ask
TCPKeepAlive no
Tunnel no
TunnelDevice any:any
VisualHostKey no
#######################################################
### Calomel.org CLIENT /etc/ssh/ssh_config
#######################################################
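To see whether the restricted cipher and MAC lists actually take effect, you can inspect what a connection negotiates. A sketch; "host" is a placeholder, and the -Q query option only exists in OpenSSH releases newer than the one this article targets:

```shell
# List the ciphers and MACs this ssh build supports at all
# (requires OpenSSH 6.3 or newer):
ssh -Q cipher
ssh -Q mac

# Watch the negotiation of a real connection; replace "host" with a server.
# The debug output names the cipher and MAC chosen for each direction.
ssh -v host true 2>&1 | grep 'server->client'
```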

Server side sshd config options (/etc/ssh/sshd_config)

These directives are for sshd. The config file should be owned by root and not writable by any other user. We want to restrict access with the following options to better protect the server.

#######################################################
### Calomel.org SERVER /etc/ssh/sshd_config
#######################################################
#
Port 22
Protocol 2
AddressFamily inet
#ListenAddress 127.0.0.1

#See the questions section for setting up the gatekeeper
#ForceCommand /tools/ssh_gatekeeper.sh

AllowUsers calomel@10.10.10.3 calomel@192.168.*
AllowGroups calomel

AllowTcpForwarding yes
#AuthorizedKeysFile .ssh/authorized_keys (needs to be commented out for OpenSSH 5.4)
Banner /etc/banner
ChallengeResponseAuthentication no
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
ClientAliveInterval 15
ClientAliveCountMax 3
Compression yes
GatewayPorts no
LogLevel VERBOSE
LoginGraceTime 50s
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
MaxAuthTries 6
MaxStartups 10
PasswordAuthentication yes
PermitEmptyPasswords no
#PermitOpen localhost:80
PermitRootLogin no
PermitUserEnvironment no
PidFile /var/run/sshd.pid
PrintLastLog yes
PrintMotd no
PubkeyAuthentication yes
StrictModes yes
Subsystem sftp /usr/libexec/sftp-server
SyslogFacility AUTH
TCPKeepAlive no
UseDNS no
UseLogin no
UsePrivilegeSeparation yes
X11DisplayOffset 10
X11Forwarding no
X11UseLocalhost yes

#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# ForceCommand cvs server
#
#######################################################
### Calomel.org SERVER /etc/ssh/sshd_config
#######################################################
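After editing sshd_config, check it before reloading; sshd refuses to start on a broken config, which can lock you out of a remote machine. The pid file path below matches the PidFile directive in the config above:

```shell
# Validate the configuration file syntax (prints nothing on success):
sshd -t

# Tell the running daemon to re-read sshd_config; existing sessions
# are not affected by a SIGHUP reload.
kill -HUP $(cat /var/run/sshd.pid)
```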

Courtesy : https://calomel.org/openssh.html

Configure Soekris as an OpenBSD wireless NAT router.

I use a Soekris device; I bought mine for € 70,- and it has a wireless network interface (wi0).
Besides that interface, this "machine" has two other ports: sis0 goes to the modem, and sis1 is not used, but any computer may be connected to it.

How difficult would it be to use this machine as a router using OpenBSD? Not difficult at all!

First install your Soekris with OpenBSD.

Now login and configure a few things.
# vi /etc/rc.conf.local
# Start NTP, it syncs time and requires very little maintenance.
ntpd_flags="-s"
# Start a DNS server.
named_flags=
# Clients should receive an IP-address. DHCP will only listen on sis1 and wi0, the network
# interfaces where computers will connect on. Don't start DHCP on your "modem-port".
dhcpd_flags="sis1 wi0"
# Enable Packet Filter.
pf=
# Here are the rules for PF.
pf_rules=/etc/pf.conf

Configure named, the DNS server.

# cat /var/named/etc/named.conf
// $OpenBSD: named-simple.conf,v 1.9 2008/08/29 11:47:49 jakob Exp $
//
// Example file for a simple named configuration, processing both
// recursive and authoritative queries using one cache.


// Update this list to include only the networks for which you want
// to execute recursive queries. The default setting allows all hosts
// on any IPv4 networks for which the system has an interface, and
// the IPv6 localhost address.
//
acl clients {
localnets;
::1;
};

options {
version ""; // remove this to allow version queries

listen-on { any; };
listen-on-v6 { any; };

empty-zones-enable yes;

allow-recursion { clients; };
};

logging {
category lame-servers { null; };
};

// Standard zones
//
zone "." {
type hint;
file "etc/root.hint";
};

zone "localhost" {
type master;
file "standard/localhost";
allow-transfer { localhost; };
};

zone "127.in-addr.arpa" {
type master;
file "standard/loopback";
allow-transfer { localhost; };
};

zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "standard/loopback6.arpa";
allow-transfer { localhost; };
};

zone "lan.meinit.nl" {
type master;
file "master/lan.meinit.nl";
};

zone "wifi.meinit.nl" {
type master;
file "master/wifi.meinit.nl";
};

zone "1.168.192.in-addr.arpa" {
type master;
file "master/1.168.192.in-addr.arpa";
};

zone "2.168.192.in-addr.arpa" {
type master;
file "master/2.168.192.in-addr.arpa";
};

Now add all zones.

# cat lan.meinit.nl
$ORIGIN lan.meinit.nl.
$TTL 6h

@ IN SOA lan.meinit.nl. root.meinit.nl. (
1 ; serial
1h ; refresh
30m ; retry
7d ; expiration
1h ) ; minimum

NS soekris.lan.meinit.nl.
soekris A 192.168.1.1
32 A 192.168.1.32
33 A 192.168.1.33
34 A 192.168.1.34
35 A 192.168.1.35
36 A 192.168.1.36
37 A 192.168.1.37
38 A 192.168.1.38
39 A 192.168.1.39
40 A 192.168.1.40
41 A 192.168.1.41
42 A 192.168.1.42
43 A 192.168.1.43
44 A 192.168.1.44
45 A 192.168.1.45
46 A 192.168.1.46
47 A 192.168.1.47
48 A 192.168.1.48
49 A 192.168.1.49
50 A 192.168.1.50
51 A 192.168.1.51
52 A 192.168.1.52
53 A 192.168.1.53
54 A 192.168.1.54
55 A 192.168.1.55
56 A 192.168.1.56
57 A 192.168.1.57
58 A 192.168.1.58
59 A 192.168.1.59
60 A 192.168.1.60
61 A 192.168.1.61
62 A 192.168.1.62
63 A 192.168.1.63
64 A 192.168.1.64
65 A 192.168.1.65
66 A 192.168.1.66
67 A 192.168.1.67
68 A 192.168.1.68
69 A 192.168.1.69
70 A 192.168.1.70
71 A 192.168.1.71
72 A 192.168.1.72
73 A 192.168.1.73
74 A 192.168.1.74
75 A 192.168.1.75
76 A 192.168.1.76
77 A 192.168.1.77
78 A 192.168.1.78
79 A 192.168.1.79
80 A 192.168.1.80
81 A 192.168.1.81
82 A 192.168.1.82
83 A 192.168.1.83
84 A 192.168.1.84
85 A 192.168.1.85
86 A 192.168.1.86
87 A 192.168.1.87
88 A 192.168.1.88
89 A 192.168.1.89
90 A 192.168.1.90
91 A 192.168.1.91
92 A 192.168.1.92
93 A 192.168.1.93
94 A 192.168.1.94
95 A 192.168.1.95
96 A 192.168.1.96
97 A 192.168.1.97
98 A 192.168.1.98
99 A 192.168.1.99
100 A 192.168.1.100
101 A 192.168.1.101
102 A 192.168.1.102
103 A 192.168.1.103
104 A 192.168.1.104
105 A 192.168.1.105
106 A 192.168.1.106
107 A 192.168.1.107
108 A 192.168.1.108
109 A 192.168.1.109
110 A 192.168.1.110
111 A 192.168.1.111
112 A 192.168.1.112
113 A 192.168.1.113
114 A 192.168.1.114
115 A 192.168.1.115
116 A 192.168.1.116
117 A 192.168.1.117
118 A 192.168.1.118
119 A 192.168.1.119
120 A 192.168.1.120
121 A 192.168.1.121
122 A 192.168.1.122
123 A 192.168.1.123
124 A 192.168.1.124
125 A 192.168.1.125
126 A 192.168.1.126
127 A 192.168.1.127

# cat wifi.meinit.nl
$ORIGIN wifi.meinit.nl.
$TTL 6h

@ IN SOA wifi.meinit.nl. root.meinit.nl. (
1 ; serial
1h ; refresh
30m ; retry
7d ; expiration
1h ) ; minimum

NS soekris.wifi.meinit.nl.
soekris A 192.168.2.1
32 A 192.168.2.32
33 A 192.168.2.33
34 A 192.168.2.34
35 A 192.168.2.35
36 A 192.168.2.36
37 A 192.168.2.37
38 A 192.168.2.38
39 A 192.168.2.39
40 A 192.168.2.40
41 A 192.168.2.41
42 A 192.168.2.42
43 A 192.168.2.43
44 A 192.168.2.44
45 A 192.168.2.45
46 A 192.168.2.46
47 A 192.168.2.47
48 A 192.168.2.48
49 A 192.168.2.49
50 A 192.168.2.50
51 A 192.168.2.51
52 A 192.168.2.52
53 A 192.168.2.53
54 A 192.168.2.54
55 A 192.168.2.55
56 A 192.168.2.56
57 A 192.168.2.57
58 A 192.168.2.58
59 A 192.168.2.59
60 A 192.168.2.60
61 A 192.168.2.61
62 A 192.168.2.62
63 A 192.168.2.63
64 A 192.168.2.64
65 A 192.168.2.65
66 A 192.168.2.66
67 A 192.168.2.67
68 A 192.168.2.68
69 A 192.168.2.69
70 A 192.168.2.70
71 A 192.168.2.71
72 A 192.168.2.72
73 A 192.168.2.73
74 A 192.168.2.74
75 A 192.168.2.75
76 A 192.168.2.76
77 A 192.168.2.77
78 A 192.168.2.78
79 A 192.168.2.79
80 A 192.168.2.80
81 A 192.168.2.81
82 A 192.168.2.82
83 A 192.168.2.83
84 A 192.168.2.84
85 A 192.168.2.85
86 A 192.168.2.86
87 A 192.168.2.87
88 A 192.168.2.88
89 A 192.168.2.89
90 A 192.168.2.90
91 A 192.168.2.91
92 A 192.168.2.92
93 A 192.168.2.93
94 A 192.168.2.94
95 A 192.168.2.95
96 A 192.168.2.96
97 A 192.168.2.97
98 A 192.168.2.98
99 A 192.168.2.99
100 A 192.168.2.100
101 A 192.168.2.101
102 A 192.168.2.102
103 A 192.168.2.103
104 A 192.168.2.104
105 A 192.168.2.105
106 A 192.168.2.106
107 A 192.168.2.107
108 A 192.168.2.108
109 A 192.168.2.109
110 A 192.168.2.110
111 A 192.168.2.111
112 A 192.168.2.112
113 A 192.168.2.113
114 A 192.168.2.114
115 A 192.168.2.115
116 A 192.168.2.116
117 A 192.168.2.117
118 A 192.168.2.118
119 A 192.168.2.119
120 A 192.168.2.120
121 A 192.168.2.121
122 A 192.168.2.122
123 A 192.168.2.123
124 A 192.168.2.124
125 A 192.168.2.125
126 A 192.168.2.126
127 A 192.168.2.127

# cat 1.168.192.in-addr.arpa
$ORIGIN 1.168.192.in-addr.arpa.
$TTL 6h

@ IN SOA lan.meinit.nl. root.meinit.nl. (
1 ; serial
1h ; refresh
30m ; retry
7d ; expiration
1h ) ; minimum

NS soekris.lan.meinit.nl.
1 PTR soekris.lan.meinit.nl.
$GENERATE 32-127 $ PTR $.lan.meinit.nl.

# cat 2.168.192.in-addr.arpa
$ORIGIN 2.168.192.in-addr.arpa.
$TTL 6h

@ IN SOA wifi.meinit.nl. root.meinit.nl. (
1 ; serial
1h ; refresh
30m ; retry
7d ; expiration
1h ) ; minimum

NS soekris.wifi.meinit.nl.
1 PTR soekris.wifi.meinit.nl.
$GENERATE 32-127 $ PTR $.wifi.meinit.nl.

And setup the DHCP server.

# cat /etc/dhcpd.conf
subnet 192.168.1.0 netmask 255.255.255.0 {
option domain-name "lan.meinit.nl";
option domain-name-servers 192.168.1.1;
option routers 192.168.1.1;
range 192.168.1.32 192.168.1.127;
}
subnet 192.168.2.0 netmask 255.255.255.0 {
option domain-name "wifi.meinit.nl";
option domain-name-servers 192.168.2.1;
option routers 192.168.2.1;
range 192.168.2.32 192.168.2.127;
}

Finally configure your PF in /etc/pf.conf:

# wan is the interface to which the modem is connected.
wan = sis0
# This is an extra interface, not in use right now, but you could connect a cable.
lan = sis1
# This is the (Prism 2) wireless network card. Clients will connect to this interface mostly.
wifi = wi0

scrub in all

nat on $wan from !($wan) to any -> ($wan)

Now reboot to activate all changes. (Of course, you could also start every daemon by hand...)
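One step the listing above leaves implicit: the NAT rule only rewrites addresses, and OpenBSD does not forward packets between interfaces by default, so IP forwarding must be switched on as well:

```shell
# Enable IP forwarding immediately...
sysctl net.inet.ip.forwarding=1
# ...and persist it across reboots.
echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf
```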

Courtesy : http://meinit.nl/configure-soekris-openbsd-wireless-nat-router

A shell script to measure network throughput on Linux machines

Here is a shell script to see how many (kilo-, mega-, giga-, tera-) bytes pass a network interface.

The output looks like this:

$ ./network-traffic.sh --help
Usage: ./network-traffic.sh [-i INTERFACE] [-s INTERVAL] [-c COUNT]

-i INTERFACE
    The interface to monitor, default is eth0.
-s INTERVAL
    The time to wait in seconds between measurements, default is 3 seconds.
-c COUNT
    The number of times to measure, default is 10 times.
$ ./network-traffic.sh        
Monitoring eth0 every 3 seconds. (RXbyte total = 706 Mb TXbytes total = 1 Gb)
RXbytes = 104 b TXbytes = 194 b
RXbytes = 80 b TXbytes = 188 b
RXbytes = 52 b TXbytes = 146 b
RXbytes = 689 b TXbytes = 8 Kb
RXbytes = 52 b TXbytes = 146 b
RXbytes = 52 b TXbytes = 146 b
RXbytes = 52 b TXbytes = 146 b
RXbytes = 52 b TXbytes = 146 b
RXbytes = 4 Kb TXbytes = 4 Kb
RXbytes = 716 b TXbytes = 5 Kb
Here is the script:

#!/bin/sh

usage(){
echo "Usage: $0 [-i INTERFACE] [-s INTERVAL] [-c COUNT]"
echo
echo "-i INTERFACE"
echo "    The interface to monitor, default is eth0."
echo "-s INTERVAL"
echo "    The time to wait in seconds between measurements, default is 3 seconds."
echo "-c COUNT"
echo "    The number of times to measure, default is 10 times."
exit 3
}

readargs(){
while [ "$#" -gt 0 ] ; do
  case "$1" in
   -i)
    if [ "$2" ] ; then
     interface="$2"
     shift ; shift
    else
     echo "Missing a value for $1."
     echo
     shift
     usage
    fi
   ;;
   -s)
    if [ "$2" ] ; then
     sleep="$2"
     shift ; shift
    else
     echo "Missing a value for $1."
     echo
     shift
     usage
    fi
   ;;
   -c)
    if [ "$2" ] ; then
     counter="$2"
     shift ; shift
    else
     echo "Missing a value for $1."
     echo
     shift
     usage
    fi
   ;;
   *)
    echo "Unknown option $1."
    echo
    shift
    usage
   ;;
  esac
done
}

checkargs(){
if [ ! "$interface" ] ; then
  interface="eth0"
fi
if [ ! "$sleep" ] ; then
  sleep="3"
fi
if [ ! "$counter" ] ; then
  counter="10"
fi
}

printrxbytes(){
/sbin/ifconfig "$interface" | grep "RX bytes" | cut -d: -f2 | awk '{ print $1 }'
}

printtxbytes(){
/sbin/ifconfig "$interface" | grep "TX bytes" | cut -d: -f3 | awk '{ print $1 }'
}

bytestohumanreadable(){
multiplier="0"
number="$1"
while [ "$number" -ge 1024 ] ; do
  multiplier=$(($multiplier+1))
  number=$(($number/1024))
done
case "$multiplier" in
  1)
   echo "$number Kb"
  ;;
  2)
   echo "$number Mb"
  ;;
  3)
   echo "$number Gb"
  ;;
  4)
   echo "$number Tb"
  ;;
  *)
   echo "$1 b"
  ;;
esac
}
 
printresults(){
while [ "$counter" -ge 0 ] ; do
  counter=$(($counter - 1))
  if [ "$rxbytes" ] ; then
   oldrxbytes="$rxbytes"
   oldtxbytes="$txbytes"
  fi
  rxbytes=$(printrxbytes)
  txbytes=$(printtxbytes)
  if [ "$oldrxbytes" -a "$rxbytes" -a "$oldtxbytes" -a "$txbytes" ] ; then
   echo "RXbytes = $(bytestohumanreadable $(($rxbytes - $oldrxbytes))) TXbytes = $(bytestohumanreadable $(($txbytes - $oldtxbytes)))"
  else
   echo "Monitoring $interface every $sleep seconds. (RXbyte total = $(bytestohumanreadable $rxbytes) TXbytes total = $(bytestohumanreadable $txbytes))"
  fi
  sleep "$sleep"
done
}

readargs "$@"
checkargs
printresults
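Note that the script parses ifconfig output, which is fragile: later net-tools releases changed the format and no longer print the "RX bytes:" label it greps for. On Linux the same counters can be read directly from /proc/net/dev; a sketch, assuming the standard /proc/net/dev column layout:

```shell
#!/bin/sh
# rxtx_bytes INTERFACE: print "<rx bytes> <tx bytes>" for an interface,
# read straight from /proc/net/dev instead of parsing ifconfig output.
rxtx_bytes() {
    awk -v ifc="$1" -F'[: ]+' '
        { sub(/^ +/, "") }            # strip leading indentation
        $1 == ifc { print $2, $10 }   # field 2 = RX bytes, field 10 = TX bytes
    ' /proc/net/dev
}

rxtx_bytes lo    # e.g. for the loopback interface
```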


Courtesy : http://meinit.nl/shell-script-measure-network-throughput-linux-machines

Firewall Failover with pfsync and CARP

On most networks, the firewall is a single point of failure. When the firewall goes down, inside users are unable to surf the web, the website goes dead to the outside world, and email grinds to a halt. Since version 3.5, OpenBSD has included a number of components which can be used to solve this problem, by placing two firewalls in parallel. All traffic passes through the primary firewall; when it fails the backup firewall assumes the identity of the primary firewall, and continues where it left off. Existing connections are preserved, and network traffic continues as if nothing had happened.


Not only does such a configuration increase the reliability of the network, it can also increase the security in some subtle ways. It is now trivial to do upgrades without impacting the network, by taking the firewalls offline one at a time. The result? Hopefully the firewalls will be upgraded more frequently and there will be less resistance to applying patches "because the network will go down". Furthermore, in many corporate environments there is strong pressure to keep the network up "no matter what". Frequently then a firewall failure means running unprotected rather than waiting until a new one can be brought up - obviously increasing firewall reliability reduces the risk of this happening.

The tools

The two main components provided by OpenBSD are CARP (the Common Address Redundancy Protocol), which allows a backup host to assume the identity of the primary, and pfsync, which ensures that firewall states are synchronised so that the backup can take over exactly where the master left off and no connections will be lost.

CARP

The Common Address Redundancy Protocol manages failover at the intersection of Layers 2 and 3 in the OSI Model (link layer and IP layer). Each CARP group has a virtual MAC (link layer) address, and one or more virtual host IP addresses (the common address). CARP hosts respond to ARP requests for the common address with the virtual MAC address, and the CARP advertisements themselves are sent out with this as the source address, which helps switches quickly determine which port the virtual MAC address is currently "at".
The master of the address sends out CARP advertisement messages via multicast using the CARP protocol (IP Protocol 112) on a regular basis, and the backup hosts listen for this advertisement. If the advertisements stop, the backup hosts will begin advertising. The advertisement frequency is configurable, and the host which advertises most frequently is the one most likely to become master in the event of a failure.
A reader who is familiar with VRRP will find this is somewhat familiar, however there are some significant differences:
  • The CARP protocol is address family independent. The OpenBSD implementation supports both IPv4 and IPv6, as a transport for the CARP packets as well as common addresses to be shared.
  • CARP has an "arpbalance" feature that allows multiple hosts to share a single IP address simultaneously; in this configuration, there is a virtual MAC address for each host, but only one IP address.
  • CARP uses a cryptographically strong SHA-1 HMAC to protect each advertisement.
Besides these technical differences, there is another significant difference (perhaps the most important one, in fact): CARP is not patent encumbered. See this page for details on the history of CARP and our reasons for avoiding a VRRP implementation.

pfsync

pfsync transfers state insertion, update, and deletion messages between firewalls. Each firewall sends these messages out via multicast on a specified interface, using the PFSYNC protocol (IP Protocol 240). It also listens on that interface for similar messages from other firewalls, and imports them into the local state table.
In order to ensure that pfsync meets the packet volume and latency requirements, the initial implementation has no built-in authentication. An attacker who has local (link layer) access to the subnet used for pfsync traffic can trivially add, change, or remove states from the firewalls. It's possible to run the pfsync protocol on one of the "real" networks, but because of the security risks, it is strongly recommended that a dedicated, trusted network be used for pfsync. This can be as simple as a crossover cable between interfaces on two firewalls.

A basic example

My firewall cluster at home consists of two Soekris Engineering 4501s. Each device has three interfaces - one interface connects to a hub on the external, Internet side, one interface connects to the internal network, and the third interface connects the two firewalls to each other via a crossover cable: a dedicated link for the pfsync protocol.


 Configuration details for this setup are available below.

Something bigger

This larger configuration is in place at a large educational institution, providing load balancing and redundancy for a cluster of web servers:



In this configuration, both outer firewalls are handling connections. They both answer to the same IP address, but each has a unique MAC address. Which MAC address to hand out is determined by the source address of the incoming ARP request.
Passing through the firewalls, traffic is redirected from the single, outer IP address to one of the hosts on the inside. The source-hash option is used which, by selecting a redirection target based on a hash of the source address, ensures that multiple connections from the same client will all be redirected to the same server.
On the second layer, the 4 application servers take part in 4 different carp groups, backing each other up. If one of the servers dies, as Server 4 has done in the above example, one of the other servers will take over that address, and serve requests for both its own and the one it is backing up.

Preemption

In many cases, the firewalls in the cluster are identical, and it doesn't matter which one is currently master. Allowing hosts to hold on to the virtual address indefinitely reduces the number of transitions, but if for some reason it is preferable that one firewall handle the traffic whenever possible, the intended master can be told to preempt the backup and take back the address.
When preemption is enabled, each CARP host will look at the advskew parameter in the advertisements it receives from the master, to try to determine whether it can advertise more frequently. If so, it will begin advertising, and the current master, seeing that there is another host with a lower advskew, will bow out.

The Failover Sequence

The diagram below illustrates a time-line of events in a typical failover, and illustrates what happens with preemption enabled. When the pfsync interface first comes up, pfsync broadcasts a request for a bulk update of the entire state table. After this, all updates to the state table are on a per-state, best effort basis. pfsync attempts to prevent carp from taking ownership of the common addresses until the bulk update has completed.

Scalability

There is essentially no limit to how many pfsync+carp hosts can participate in a cluster. Except for the bulk update at bootup, the traffic generated by the pfsync protocol scales linearly with the amount of regular traffic passing through the firewall cluster, and besides brief periods when a new master is being selected, only one host in a carp group is advertising at any given time.
In test environments, we have run up to 4 pfsync+carp hosts (all different architectures: i386, sparc, sparc64, and amd64!), randomly rebooting them. TCP sessions were not interrupted through over two days of such torture testing.

Sample configuration

The sample configuration given here is for the basic example above. Each box has three sis(4) interfaces; sis0 is the external interface, on the 10.0.0.0/24 subnet, sis1 is the internal interface, on the 192.168.0.0/24 subnet, and sis2 is the pfsync interface, using the 192.168.254.0/24 subnet. A crossover cable connects the two firewalls via their sis2 interfaces. On all three interfaces, firewall A uses the .254 address, while firewall B uses .253.
The interfaces are configured as follows:
/etc/hostname.sis0:
inet 10.0.0.254 255.255.255.0 NONE
/etc/hostname.sis1:
inet 192.168.0.254 255.255.255.0 NONE
/etc/hostname.sis2:
inet 192.168.254.254 255.255.255.0 NONE
/etc/hostname.carp0:
inet 10.0.0.1 255.255.255.0 10.0.0.255 vhid 1 pass foo
/etc/hostname.carp1:
inet 192.168.0.1 255.255.255.0 192.168.0.255 vhid 2 pass bar
/etc/hostname.pfsync0:
up syncif sis2
pf(4) must also be configured to allow pfsync and CARP traffic through. The following should be added to the top of /etc/pf.conf:
pass quick on { sis2 } proto pfsync keep state (no-sync)
pass on { sis0 sis1 } proto carp keep state
When writing the rest of the pf ruleset, it is important to keep in mind that from pf's perspective, all traffic comes from the physical interface, even if it is routed through the carp address. However, the address is of course associated with the carp interface. Therefore, in the interface context, such as "pass in on $extif ...", $extif would be the physical interface, but in the address context of "from $foo" or "to $foo", the carp interface should be used.

Preemption

If preemption is desired, the server intended to be the backup needs to have a higher advskew than the primary. In this case, the carp configuration for the backup firewall would be as follows:
/etc/hostname.carp0:
inet 10.0.0.1 255.255.255.0 10.0.0.255 vhid 1 advskew 100 pass foo
/etc/hostname.carp1:
inet 192.168.0.1 255.255.255.0 192.168.0.255 vhid 2 advskew 100 pass bar
And of course, the following must be added to /etc/sysctl.conf on both firewalls:
net.inet.carp.preempt=1

Other uses

Although this article focuses on applications using both these components, they can also be used independently - pfsync can be used on its own where dynamic routing is used to handle failover, and CARP can be used on a cluster of servers rather than routers, to provide redundancy for a specific application (somewhat like the example "Something bigger" above).

Related Links

Ryan McBride - mcbride@openbsd.org

Courtesy : http://www.countersiege.com/doc/pfsync-carp/

Friday, September 10, 2010

Dropbox : sync your files online and across your computers automatically



Dropbox Features

File Sync

Dropbox allows you to sync your files online and across your computers automatically. Free for Windows, Mac, Linux, and Mobile
  • 2GB of online storage for free, with up to 100GB available to paying customers.
  • Sync files of any size or type.
  • Sync Windows, Mac and Linux computers.
  • Automatically syncs when new files or changes are detected.
  • Work on files in your Dropbox even if you're offline. Your changes sync once your computer has an Internet connection again.
  • Dropbox transfers will correctly resume where they left off if the connection drops.
  • Efficient sync - only the pieces of a file that changed (not the whole file) are synced. This saves you time.
  • Doesn't hog your Internet connection. You can manually set bandwidth limits.

File Sharing

Sharing files is simple and can be done with only a few clicks.
  • Shared folders allow several people to collaborate on a set of files.
  • You can see other people's changes instantly.
  • A "Public" folder that lets you link directly to files in your Dropbox.
  • Control who is able to access shared folders (including ability to kick people out and remove the shared files from their computers).
  • Automatically create shareable online photo galleries from folders of photos in your Dropbox.

Online Backup

Dropbox backs up your files online without you having to think about it.
  • Automatic backup of your files.
  • Undelete files and folders.
  • Restore previous versions of your files.
  • 30 days of undo history, with unlimited undo available as a paid option.

Web Access

A copy of your files is stored on Dropbox's secure servers. This lets you access them from any computer or mobile device.
  • Manipulate files as you would on your desktop - add, edit, delete, rename etc.
  • Search your entire Dropbox for files.
  • A "Recent Events" feed that shows you a summary of activity in your Dropbox.
  • Create shared folders and invite people to them.
  • Recover previous versions of any file or undelete deleted files.
  • View photo galleries created automatically from photos in your Dropbox.

Security & Privacy

Dropbox takes the security and privacy of your files very seriously.
  • Shared folders are viewable only by people you invite.
  • All transmission of file data and metadata occurs over an encrypted channel (SSL).
  • All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password.
  • Dropbox website and client software have been hardened against attacks from hackers.
  • Dropbox employees are not able to view any user's files.
  • Online access to your files requires your username and password.
  • Public files are only viewable by people who have a link to the file(s). Public folders are not browsable or searchable.

Mobile Device Access

The free Dropbox application for iPhone, iPad, and Android lets you:
  • Access your Dropbox on the go.
  • View files from within the application.
  • Download files for offline viewing.
  • Take photos and videos and sync them to your Dropbox.
  • Share links to files in your Dropbox.
  • Export your files to other applications.
  • Sync downloaded files so they're up-to-date.
A mobile-optimized version of the website is also available for owners of Blackberry phones and other Internet-capable mobile devices.

Courtesy : http://www.dropbox.com/features

Ethernet Frame Calculations

Ethernet Frame Calculations

This page contains some example calculations for the operation of an Ethernet LAN.
Example 1: Calculate the maximum frame rate of a node on an Ethernet LAN.

The minimum frame payload is 46 Bytes (dictated by the slot time of the Ethernet LAN architecture). The maximum frame rate is achieved by a single transmitting node which does not therefore suffer any collisions. This implies a frame consisting of 72 Bytes (see table below) with a 9.6 µs inter-frame gap (corresponding to 12 Bytes at 10 Mbps). The total utilised period (measured in bits) corresponds to 84 Bytes.

Frame Part Minimum Size Frame
Inter Frame Gap (9.6µs) 12 Bytes
MAC Preamble (+ SFD) 8 Bytes
MAC Destination Address 6 Bytes
MAC Source Address 6 Bytes
MAC Type (or Length) 2 Bytes
Payload (Network PDU) 46 Bytes
Check Sequence (CRC) 4 Bytes
Total Frame Physical Size 84 Bytes


Calculation of number of bit periods occupied by smallest size of Ethernet frame

The maximum number of frames per second is:

Ethernet Data Rate (bits per second) / Total Frame Physical Size (bits)

= 10 000 000 / (84 x 8)

= 14 880 frames per second.

In practice, this exceeds the forwarding capacity of many routers and bridges (typically thousands of frames per second). This is however not usually a concern, since most Ethernet networks carry packets with a range of lengths and usually transmit a significant proportion of maximum sized frames (the maximum rate of transmission of maximum sized frames is only 812 frames per second). If the forwarding capacity is momentarily exceeded due to a large number of small frames, the bridge/router will simply discard the excess frames which it cannot process (this is allowed, since Ethernet provides only a best effort link layer service).
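The frame-rate arithmetic above can be sketched as a quick shell check (the 84-byte on-the-wire size and the 10 Mbps rate are the figures from the table and text above):

```shell
#!/bin/sh
# Example 1 check: maximum frame rate for minimum-size Ethernet frames.
# A minimum frame plus inter-frame gap occupies 84 bytes on the wire.
data_rate=10000000                      # 10 Mbps Ethernet, in bits per second
frame_bits=$(( 84 * 8 ))                # total utilised period per frame, in bits
echo "$(( data_rate / frame_bits )) frames per second"   # prints "14880 frames per second"
```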
Example 2: Calculate the maximum throughput of the link layer service provided by Ethernet

The maximum frame payload is 1500 Bytes; this will offer the highest throughput when the frames are transmitted by only one node on the network (i.e. there are no collisions). To calculate the throughput provided by the link layer, one must first calculate the maximum frame rate for this size of frame.

Frame Part Maximum Size Frame
Inter Frame Gap (9.6µs) 12 Bytes
MAC Preamble (+ SFD) 8 Bytes
MAC Destination Address 6 Bytes
MAC Source Address 6 Bytes
MAC Type (or Length) 2 Bytes
Payload (Network PDU) 1500 Bytes
Check Sequence (CRC) 4 Bytes
Total Frame Physical Size 1538 Bytes


Calculation of number of bit periods occupied by largest size of Ethernet frame

The largest frame consists of 1526 Bytes (see table above) with a 9.6 µs inter-frame gap (corresponding to 12 Bytes at 10 Mbps). The total utilised period (measured in bits) therefore corresponds to 1538 Bytes.

The maximum frame rate is:

Ethernet Data Rate (bits per second) / Total Frame Physical Size (bits)

= 10 000 000 / (1538 x 8)

= 812.74 frames per second

The link layer throughput (i.e. number of payload bits transferred per second) is:

Frame Rate x Size of Frame Payload (bits)

= 812.74 x (1500 x 8)

= 9 752 880 bps.

This represents a throughput efficiency of 97.5 %.
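The same arithmetic can be checked in a few lines of shell, using awk for the floating-point division (the 1538-byte frame size comes from the table above):

```shell
#!/bin/sh
# Example 2 check: frame rate and link-layer throughput for 1500-byte payloads.
awk 'BEGIN {
  rate  = 10000000            # Ethernet data rate, bits per second
  frame = 1538 * 8            # total frame physical size, bits
  fps   = rate / frame        # maximum frame rate (about 812.74 frames/s)
  tput  = fps * 1500 * 8      # payload bits transferred per second
  printf "%.2f frames/s, throughput %.1f%% of line rate\n", fps, 100 * tput / rate
}'
```

Note that the efficiency reduces to the simple ratio of payload to total frame size, 1500/1538 ≈ 97.5 %.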
Example 3: One node transmits 100 Byte frames at 10 frames per second, another transmits 1000 Byte frames at 2 frames per second, calculate the utilisation of the Ethernet LAN.

The utilisation is the percentage of time the physical link is transmitting data. This calculation will assume that the transmissions do not collide. (This may need to be reviewed if the utilisation is greater than about 10%.)

To calculate the utilisation, one must first determine the total physical size of each frame.

Frame Part Frame 1 Frame 2
Inter Frame Gap (9.6µs) 12 Bytes 12 Bytes
MAC Preamble (+ SFD) 8 Bytes 8 Bytes
MAC Destination Address 6 Bytes 6 Bytes
MAC Source Address 6 Bytes 6 Bytes
MAC Type (or Length) 2 Bytes 2 Bytes
Payload (Network PDU) 100 Bytes 1000 Bytes
Check Sequence (CRC) 4 Bytes 4 Bytes
Total Frame Physical Size 138 Bytes 1038 Bytes


Calculation of number of bit periods occupied by each Ethernet frame

The two frames occupy periods corresponding to 138 Bytes and 1038 Bytes respectively (including the 9.6 µs inter-frame gap). The total number of bits utilised in 1 second is therefore:

(Frame Size 1 x Frame Rate 1) + (Frame Size 2 x Frame Rate 2)

= (138 x 8 x 10) + (1038 x 8 x 2)

= 27648 bits

This represents a utilisation of :

(27648 / 10 000 000) x 100

= 0.28 %

(N.B. 0.28 % << 10%, therefore it was permissible to neglect the effect of collisions.)
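A short shell check of the utilisation figure (frame sizes and rates from the table and example statement above):

```shell
#!/bin/sh
# Example 3 check: utilised bits per second from the two traffic sources,
# and the resulting utilisation of a 10 Mbps Ethernet.
bits=$(( (138 * 8 * 10) + (1038 * 8 * 2) ))   # 100-byte frames at 10/s + 1000-byte frames at 2/s
echo "$bits bits per second"                   # prints "27648 bits per second"
awk "BEGIN { printf \"utilisation %.2f%%\n\", 100 * $bits / 10000000 }"
```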

Courtesy : http://www.erg.abdn.ac.uk/users/gorry/course/lan-pages/enet-calc.html

Web pages per second: A simple calculation

Takeaway: When planning a Web site, it is difficult to estimate how much traffic it will take, which makes it even harder to plan for equipment and software. Brien Posey offers this quick guide to calculating how many pages per second a site will serve.

Web traffic is completely unpredictable, which can make it difficult to estimate how much traffic your servers can handle. You may know that a particular Web site averages 500 hits per hour, but you can't really tell exactly how many hits a site will actually take on a given day. So when you're planning for a new Web site's capacity, it's important to reasonably estimate the amount of traffic that you expect to get and then build a server that can comfortably handle that amount of traffic and more.
Calculating Web traffic
Web traffic can be a very difficult statistic to pinpoint. To estimate the number of hits a site takes, you must know what you want to estimate (unique IP addresses, total hits minus Web bot traffic, stickiness, etc.) and whether you will do dynamic (counters or banners) or static (log file analysis) monitoring.

Allowing for growth
Why build a server that’s more powerful than what you really need? Well, for starters, building a more powerful server allows for growth. After all, the whole idea behind having a Web site is to attract visitors to it. Because you’ll be trying to attract more and more visitors, the amount of traffic flowing in and out of your server will probably increase over time.

Even if you aren’t aggressively promoting your Web site, you still need to build a server that exceeds the amount of traffic that you planned for, because although your site may average a certain number of hits per hour, not all hours will be the same. For example, if your Web site offers news-related content, your peak traffic times will probably be early in the morning, late afternoon, and lunchtime. Of course, this is assuming that your site is limited to local interest. If your recreational site services more than one or two time zones, then there’s really no telling when your peak periods will be. It's important for you to realize that your site will receive a lot of hits during some parts of the day, and during other parts of the day, the site will receive comparatively fewer hits. It's important to be sure your server can comfortably handle the busiest parts of the busiest days and still have resources to spare.

Bandwidth and page type
Another rule of capacity planning is that not all pages are created equally. The majority of Internet capacity planning is focused around bandwidth needs, which are based on how many pages can be transmitted per second. For example, a standard page of text with one or two simple graphics might occupy about 5 KB of space. However, you don’t want to base your pages-per-second calculations on 5 KB if you’ve got other pages that are full of graphics.

Bandwidth isn’t the only factor to consider. If all of your pages are composed entirely from HTML text with the occasional graphic, bandwidth will be your primary concern. However, many Web sites these days depend on Active Server Pages (ASPs). This means that when the server receives a user request, the server must be able to dynamically construct a Web page in memory and transmit that page to the appropriate person. This process consumes a considerable amount of memory and processing power. The process of dynamically constructing a Web page and transmitting it also consumes a little bit more bandwidth, because the user isn’t simply transmitting a request for a static page. Instead, the user is transmitting all of the information necessary for the server to build the dynamic page. (This is typically done through the URL.) This information isn’t usually any more than a couple of hundred bytes, but when you compare a couple hundred bytes with the 20 or 30 bytes that might be needed to access a static page, you can see the negative impact that the operation would have on bandwidth.

This is especially true when you consider how a large number of users would affect the process. For example, let's forget about outbound pages and every type of traffic except for the kind generated by an end user. Suppose that clicking on a static link sent 30 bytes of traffic to the server, while clicking on a dynamic link sent 200 bytes of traffic to the server. If 1,000 users performed the action at the same time, the action of clicking on the static link would generate just under 30 KB of traffic, while the same number of users connecting to a dynamic page would generate about 195 KB of traffic. While neither of these numbers seems significant, you must remember that this is only the traffic being sent to the server by the users clicking on a link.

I’m not saying not to use ASPs. ASPs are a great technology, and I encourage their use. I’m not even saying that ASPs are going to push your server to the breaking point. I'm merely pointing out that more traffic is generated by the users when you use ASPs, and you should at least consider it when calculating your bandwidth.

Estimating bandwidth capacity
If you’re planning on hosting a Web site, you probably already have an idea of what you want the site to consist of and what type of Internet connection you want to use. To help you determine your server’s bandwidth capacity, let’s use my Web site as an example. On my Web site, my largest page is my brother’s bio page, which includes a few fairly small graphics and two large JPEG photographs. Although the page is about 150 KB in size, this isn’t really excessive for a Web page. Just to make the math easier, let’s assume that the page was an even 200 KB. Overestimating the size of your biggest page gives you a smaller number of total hits per second than you’d actually be able to support. Underestimating capacity this way is probably a good thing, because exact numbers would only reflect how many pages per second your server could host under perfect conditions. Since conditions in the real world are seldom perfect, it makes sense to play with the numbers a bit.

With that said, let’s look at some numbers. It’s tempting to simply divide your bandwidth by the page size (i.e., 1.5 Mbps of bandwidth divided by a 5 KB page, roughly 40,000 bits, gives about 37 pages per second) for an answer, but there’s more to the process than that. First, you need to determine what traffic is required for a client to access the page. The client must first establish a TCP/IP session with your Web server. This process requires about 180 bytes of information to flow across your connection. The next step in the process is the GET request, which is the process of requesting a specific page from your Web site.

The amount of data required by this process varies depending on the length of the URL (ASP URLs being longer than static page URLs). Unfortunately, you can’t simply count the number of bytes in the URL to come up with a number, because there is some overhead involved in the process. Instead, let's assume that the process requires about 256 bytes, which is an average number for a typical static Web page. Now, you need to determine the size of the page you’re working with. The page size will vary, depending on the number and size of graphics that the page contains.

I said I was going to use 150 KB as the size of my page. Since all of my other numbers have been calculated in bytes, I’ll convert the 150 KB into bytes by multiplying the number by 1,024, which comes out to be 153,600 bytes. There’s some overhead involved in using the TCP/IP protocol. Remember that all data flowing to and from your Web server is encapsulated into TCP/IP packets. In addition to your data, each packet must contain header information that includes information such as the packet’s source, destination, and sequence number.

The actual amount of overhead generated by TCP/IP varies, depending on whether you’re using an encryption algorithm such as IPSec, or just standard TCP/IP. You can determine the amount of overhead required by TCP/IP by performing some calculations. Each TCP/IP packet uses a 32-byte header that tells TCP/IP how to route the packet. The actual size of the message within the packet varies and can be up to 65,535 bytes in size. However, most of the time, the total size of the packet never exceeds 576 bytes. Because it’s impossible for me to know the exact packet structure that you’re using, I’ll go with the 576-byte model.

If a packet is 576 bytes in size and 32 bytes of the packet is the header, that leaves 544 bytes for the actual data. If you’re downloading a 153,600-byte page, and each packet can contain 544 bytes of data, it will take roughly 283 packets to move the page to the user’s browser. With that said, let’s do the math (see Table A).
Table A
Byte usage                                 Byte count 
TCP/IP connection                         Approximately 180 bytes 
GET request                                 Approximately 256 bytes 
150-KB Web page                         153,600 bytes 
Protocol overhead (32 bytes * 283 packets)     9,056 bytes 
Total:                                         163,092 bytes or 159.3 KB 

Now that you have an idea of how much data must actually be moved to display a page, you need to divide your connection speed (in bits per second) by the number of bits per page. Remember that you made your calculations in bytes, so you must convert the number of bytes to bits by multiplying the result by eight. This gives you a total of 1,304,736 bits per page.
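Table A's arithmetic can be reproduced with a short shell sketch. It uses the article's simplified model of 576-byte packets with 32-byte headers; real TCP/IP header overhead varies:

```shell
#!/bin/sh
# Bytes needed to deliver one 150 KB page under the article's model.
page=$(( 150 * 1024 ))                            # 153600 bytes of page data
payload=$(( 576 - 32 ))                           # 544 data bytes per packet
packets=$(( (page + payload - 1) / payload ))     # round up: 283 packets
total=$(( 180 + 256 + page + packets * 32 ))      # TCP/IP setup + GET + page + headers
echo "$packets packets, $total bytes, $(( total * 8 )) bits per page"
```

Running it gives 283 packets, 163,092 bytes and 1,304,736 bits per page, matching the table.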

Table B displays the number of bits per second offered by various types of connections. I’ve gone on to list the total number of pages per second that the connection could support at the current page size. Keep in mind that I’m working with approximate numbers.

Table B
Connection  Bits per second (divided by)  Bits per page (equals)  Pages per second 
28.8 modem  28,800  1,304,736  0.02 
56 K modem  56,000  1,304,736  0.04 
T-1  1,544,000  1,304,736  1.18 
10 Mbps Ethernet  10,000,000  1,304,736  7.66 
100 Mbps Ethernet  100,000,000  1,304,736  76.6 

As you can see, a 150-KB page isn’t such a good idea if you’re expecting to get a lot of hits. But that’s why you do capacity planning, so you can figure these things out in advance.

Courtesy :

Bits per second to packets per second converter

Hi there! How is it going? Sometimes, when we talk about device performance, we talk in terms of packets per second (pps) and bits per second (bps). In the latter case, it's not quite correct to say "this device can do one hundred megabits per second", because router/switch/whatever performance depends greatly on packet size. If you want to state device performance in a more accurate and professional way, you would say "this device can do one hundred megabits per second at a 64-byte packet size".

Often vendors such as our favorite Cisco specify device performance in packets per second, so we don't need to bother mentioning packet size, because pps is more a characteristic of the device's computing power (processor, bus, ASICs): packets per second stays more or less the same across different packet sizes. But it is not very convenient to deal with pps in real life, because we have to know the "real" performance of a device in our network. So we have to do two things:

1) Determine the average packet size specific to our network. For example, the traffic profile might mix large FTP packets (1500 bytes) with many small VoIP packets (64 bytes), giving an average packet size of, say, 800 bytes.

2) Calculate, with a simple formula, how many megabits per second (Mbps) that corresponds to if our average packet size is 800 bytes and the device performance is, let's say, 100 kpps (one hundred thousand packets per second).
The second step is not a big deal for a real professional, but we live in the 21st century, don't we? Unfortunately, I didn't find any bps-to-pps converter/calculator anywhere online, so I decided to make one myself (though I'm not a programmer).
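The formula behind such a converter is simple; here is a minimal shell sketch (the function name is mine, and the 100 kpps / 800-byte figures are just the example values from the text):

```shell
#!/bin/sh
# Convert a packets-per-second rating to megabits per second for a
# given average packet size: Mbps = pps * bytes_per_packet * 8 / 10^6.
pps_to_mbps() {
  pps=$1; bytes=$2
  awk "BEGIN { printf \"%.1f\n\", $pps * $bytes * 8 / 1000000 }"
}
pps_to_mbps 100000 800     # 100 kpps at 800-byte packets -> prints "640.0"
```

So a 100 kpps device moving 800-byte packets corresponds to 640 Mbps of throughput.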

Although the converter is mathematically correct (I hope ;)), I'm not sure it's fair to use it as an "exact throughput" reference. As I said before, device performance depends greatly on packet size, and this dependency is not quite linear.
For example, as a fairly old Cisco document says, FWSM performance at a 64-byte packet size and 2.84 Mpps is about 1.3 Gbps. If we recalculated 2.84 Mpps at 1500 bytes per packet, we would get about 34 Gbps of throughput, which is not true: FWSM throughput is about 5.5 Gbps at a 1400-byte packet length. So, clearly, some additional inspection is performed on larger payloads.

The hand-made, not-state-of-the-art, quick-and-dirty bits per second to packets per second converter can be found on the CCIEvault Tools page. Feel free to suggest any improvements I could make to this tool.

P.S. There is one more thing I need to say. There are at least three well-known packet sizes: the smallest, 64 bytes (the toughest case for a device, usually quoted for router/switch performance); the largest, 1500 bytes (sometimes 1400 bytes), usually quoted for firewall/VPN performance; and the so-called "real" one, IMIX at 427 bytes, which represents an average packet size somewhere on the Internet (though I have seen values between 300 and 900 bytes).

Courtesy : http://ccievault.net/index.php/articles/37-cvnarticles/58-bps2pps

Link :
http://www.cisco.com/web/about/security/intelligence/network_performance_metrics.html
http://www.tomshardware.com/forum/19880-42-maximum-maximum-packets-megabit-ethernet
http://blog.famzah.net/2009/11/24/benchmark-the-packets-per-second-performance-of-a-network-device/
http://www.numion.com/calculators/Distance.html
http://www.tamos.net/~rhay/wp/overhead/overhead.htm
http://www.tamos.net/~rhay/wp/atm-ip/aal5-atm-ip.htm
http://www.compwisdom.com/topics/Bits-per-second

Cherokee web server

Cherokee is really, really fast. The speed at which any web server can serve requests for content is both directly tied to and limited by the I/O speed of the underlying hardware and operating system. In this regard, a web server's performance is measured by the latency incurred after an I/O request to the underlying system has completed.
The primary objective of the Cherokee project is to reduce to zero the latency incurred between the time a dynamic or static system I/O request has completed and the time the resulting content is served to the requesting client. Admittedly, a very tough goal to reach.

In fact, it might be impossible. But achieving that which was once seen as impossible is what drives innovation. And it's innovation that drives the ongoing development of the Cherokee project, bringing us closer to the impossible with each new release.

To show the project's progression towards the ultimate goal, whenever a benchmark is performed it will be published right here. Older releases had impressive but not very thorough benchmarks. Whenever possible, the conditions of the benchmark will be provided so that anyone can replicate the results. That is, after all, an essential basis of the scientific method.

The benchmark consisted of half a million requests for a 1.7 KiB static file, with 20 concurrent clients, over a 1 Gbit/s local network. The results (fastest to slowest) were:
Cherokee:

Server Software:        Cherokee/0.8.1
Server Hostname:        10.0.0.102
Server Port:            80

Document Path:          /index.html
Document Length:        1795 bytes

Concurrency Level:      20
Time taken for tests:   17.819725 seconds
Complete requests:      500000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    500000
Total transferred:      999007442 bytes
HTML transferred:       897506630 bytes
Requests per second:    28058.79 [#/sec] (mean)
Time per request:       0.713 [ms] (mean)
Time per request:       0.036 [ms] (mean, across all concurrent requests)
Transfer rate:          54747.93 [Kbytes/sec] received

Lighttpd:
Server Software:        lighttpd/1.4.19
Server Hostname:        10.0.0.102
Server Port:            80

Document Path:          /index.html
Document Length:        1795 bytes

Concurrency Level:      20
Time taken for tests:   21.248000 seconds
Complete requests:      500000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    470598
Total transferred:      991856958 bytes
HTML transferred:       897503590 bytes
Requests per second:    23531.63 [#/sec] (mean)
Time per request:       0.850 [ms] (mean)
Time per request:       0.042 [ms] (mean, across all concurrent requests)
Transfer rate:          45585.94 [Kbytes/sec] received

NginX:
Server Software:        nginx/0.5.33
Server Hostname:        10.0.0.102
Server Port:            80

Document Path:          /index.html
Document Length:        1795 bytes

Concurrency Level:      20
Time taken for tests:   23.741872 seconds
Complete requests:      500000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    500000
Total transferred:      1006000217 bytes
HTML transferred:       897500000 bytes
Requests per second:    21059.84 [#/sec] (mean)
Time per request:       0.950 [ms] (mean)
Time per request:       0.047 [ms] (mean, across all concurrent requests)
Transfer rate:          41379.30 [Kbytes/sec] received

Apache2.2:
Server Software:        Apache/2.2.8
Server Hostname:        10.0.0.102
Server Port:            80

Document Path:          /index.html
Document Length:        1795 bytes

Concurrency Level:      20
Time taken for tests:   35.438605 seconds
Complete requests:      500000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    495064
Total transferred:      1043777896 bytes
HTML transferred:       897500000 bytes
Requests per second:    14108.91 [#/sec] (mean)
Time per request:       1.418 [ms] (mean)
Time per request:       0.071 [ms] (mean, across all concurrent requests)
Transfer rate:          28762.81 [Kbytes/sec] received
For the record: I did my best configuring all the servers in the very same way. In all the cases I removed unnecessary rules that could have slowed down the server (checks for htpasswd files and so on). And all the binaries came from the Debian repository, except for Cherokee 0.8.1 that hasn't been packaged yet.
Anyway, this benchmark has been just a quick test. It certainly does not represent the results these servers would achieve handling real traffic. So, in the following days I will try to do a new, more accurate benchmark with static and dynamic content, compression, redirections, etc. I'm pretty sure the results will be even better.
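The derived figures in the ab output above all follow from the raw totals. A minimal sketch of the arithmetic, using the Cherokee run as input (the `ab_metrics` helper is mine, not part of ab):

```python
def ab_metrics(total_requests, total_time_s, concurrency, total_bytes):
    """Recompute ab's derived metrics from the raw totals it reports."""
    rps = total_requests / total_time_s                        # Requests per second (mean)
    tpr = concurrency * total_time_s * 1000 / total_requests   # Time per request [ms] (mean)
    tpr_all = total_time_s * 1000 / total_requests             # mean, across all concurrent requests
    rate_kbps = total_bytes / 1024 / total_time_s              # Transfer rate [Kbytes/sec]
    return rps, tpr, tpr_all, rate_kbps

# Cherokee run: 500,000 requests in 17.819725 s, 20 clients, 999,007,442 bytes
rps, tpr, tpr_all, rate = ab_metrics(500000, 17.819725, 20, 999007442)
print(round(rps, 2), round(tpr, 3), round(tpr_all, 3), round(rate, 2))
# ~28058.79 req/s, ~0.713 ms, ~0.036 ms, ~54747.9 KB/s -- matching the report
```

Reproducing the numbers this way is a quick sanity check that a published benchmark's totals and rates are internally consistent.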

Cherokee + Apache + Lighttpd Benchmark

This benchmark was performed by Brian Rosner with Cherokee 0.6.0 beta2.

Software

  • cherokee 0.6.0 beta2
  • apache 2.0.59
  • lighttpd 1.4.16

Hardware

  • 733 MHz PIII
  • 256 MB RAM
  • 80GB 7200RPM IDE HD
  • Debian GNU/Linux 4.0

Background

I performed a fresh installation of Debian on the server hardware. Right after you log in, you will need to set up sudo to perform root commands from your account:
su
apt-get install sudo
Then add yourself to the /etc/sudoers file by running visudo and adding yourself in the user section. I just copied the root entry, as this does not need to be a very secure server since it will not be running publicly. Now make sure you switch back to your own account and do:
sudo apt-get install gcc make automake autoconf libtool
mkdir src ; cd src
sudo mkdir /usr/local/cherokee
sudo mkdir /usr/local/lighttpd
The installed version of gcc was 4.1.2.

Cherokee Setup Details

The following is what I executed to build Cherokee:
wget http://www.cherokee-project.com/download/0.6/0.6.0/cherokee-0.6.0b863.tar.gz
tar zxvf cherokee-0.6.0b863.tar.gz
cd cherokee-0.6.0b863
./configure --prefix=/usr/local/cherokee/0.6.0b863
make
sudo make install
Here is the configuration for cherokee:
server!port = 80
server!timeout = 60
server!keepalive = 1
server!keepalive_max_requests = 500
server!pid_file = /var/run/cherokee.pid
server!server_tokens = full
server!encoder!gzip!allow = html,html,txt
server!panic_action = /usr/local/cherokee/0.6.0b863/bin/cherokee-panic
server!mime_files = /usr/local/cherokee/0.6.0b863/etc/cherokee/mime.types

vserver!default!document_root = /usr/local/cherokee/0.6.0b863/var/www
vserver!default!directory_index = index.html

vserver!default!directory!/!handler = common
vserver!default!directory!/!handler!iocache = 1
vserver!default!directory!/!priority = 1
To run the web server I used:
cd /usr/local/cherokee/0.6.0b863
sudo sbin/cherokee -C etc/cherokee/cherokee.conf

Apache Setup Details

The following is what I executed to build Apache:
wget http://apache.oregonstate.edu/httpd/httpd-2.0.59.tar.gz
tar zxvf httpd-2.0.59.tar.gz
cd httpd-2.0.59
./configure --prefix=/usr/local/apache/2.0.59
make
sudo make install
I used the supplied highperformance.conf configuration file. I started the server with:
cd /usr/local/apache/2.0.59
sudo bin/httpd -k start -f conf/highperformance.conf
The server ran using prefork.

Lighttpd Setup Details

The following is what I executed to build lighttpd:
wget http://www.lighttpd.net/download/lighttpd-1.4.16.tar.gz
tar zxvf lighttpd-1.4.16.tar.gz
cd lighttpd-1.4.16
./configure --prefix=/usr/local/lighttpd/1.4.16
make
sudo make install
The configuration I used looked like this:
server.modules = (
    "mod_access",
    "mod_accesslog"

)

server.document-root = "/var/www"

mimetype.assign = (
    ".html" => "text/html",
    ".txt" => "text/plain"

)
I started the server with:
cd /usr/local/lighttpd/1.4.16
sudo sbin/lighttpd -f sbin/lighttpd.conf

Benchmark

I will perform several different benchmarks on each web server. This should help gauge what kind of performance each server can deliver under different conditions. Each test will be run with keep-alive turned on and turned off.

small static file test

  • filesize: 99 bytes
  • command: ab -c 2 -t 2 -k http://localhost/index0.html

large static file test

  • filesize: 1.5MB
  • command: ab -c 2 -t 2 -k http://localhost/static.txt

Results

I have included Cherokee with iocaching both on and off. The out-of-the-box setting is that iocache is turned on.

small static file test w/ keepalive

  • cherokee 0.6.0b863 w/ iocache - 7816 reqs./sec.
  • cherokee 0.6.0b863 w/o iocache - 5761 reqs./sec.
  • lighttpd 1.4.16 - 4884 reqs./sec.
  • apache 2.0.59 - 2924 reqs./sec.

small static file test w/o keepalive

  • cherokee 0.6.0b863 w/ iocache - 2182 reqs./sec.
  • cherokee 0.6.0b863 w/o iocache - 1874 reqs./sec.
  • lighttpd 1.4.16 - 2255 reqs./sec.
  • apache 2.0.59 - 1250 reqs./sec.

large static file test w/ keepalive

  • cherokee 0.6.0b863 w/ iocache - 108 reqs./sec.
  • cherokee 0.6.0b863 w/o iocache - 107 reqs./sec.
  • lighttpd 1.4.16 - 106 reqs./sec.
  • apache 2.0.59 - 94 reqs./sec.

large static file test w/o keepalive

  • cherokee 0.6.0b863 w/ iocache - 88 reqs./sec.
  • cherokee 0.6.0b863 w/o iocache - 88 reqs./sec.
  • lighttpd 1.4.16 - 92 reqs./sec.
  • apache 2.0.59 - 118 reqs./sec.
Courtesy : http://www.cherokee-project.com/benchmarks.html