Re: Solaris 10 Containers / Zones Security Flaw
In-Reply-To: <424EC41F.2060901@xxxxxxx>
Agreed, Robert, there are many easy ways to limit this; my research was
more about whether Sun had implemented sanity limits on virtual memory
and CPU usage by default, which they hadn't. It's a sad state of
affairs, but most admins wouldn't use ulimit or set maxuprc to limit
this. As Jonathan Katz mentioned, it's a balance between usability and
security, but I would have thought there should be some sane limit on
virtual memory or similar for a zone upon initial creation.
If it hasn't been working for you, as in the output you pasted, check
the ulimit for that user: are there any space limitations? bash
complaining about not enough space to fork could be hitting a virtual
memory limit, which can be set per user with ulimit.
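For example (a minimal sketch; the value is only illustrative, and
option support varies between platforms and shells):

# show the limits in effect for the current shell
ulimit -a

# cap virtual memory for this shell and its children at 512 MB
# (bash takes the value in kilobytes)
ulimit -v 524288

Putting that in /etc/profile or the user's login scripts applies it at
each login.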
Thanks for your email and time; I hope I've written something of
interest to you.
Jonathan Katz -
Likewise, thanks for the reply, and by no means apologise for your
input. As mentioned in my initial post, I am aware that you CAN limit
resources, but I was unaware how to do this for zones specifically.
Your email has cleared this up!
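For the archive, a rough sketch of one way to cap a zone's CPU using
resource controls and the fair share scheduler (the zone name and the
share count are only placeholders; see zonecfg(1M) and dispadmin(1M)
for the details):

# make the fair share scheduler the default scheduling class
dispadmin -d FSS

# grant the zone a privileged allocation of CPU shares
zonecfg -z webzone
zonecfg:webzone> add rctl
zonecfg:webzone:rctl> set name=zone.cpu-shares
zonecfg:webzone:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:webzone:rctl> end
zonecfg:webzone> exit

Shares are relative rather than hard caps: under contention, a zone
with 10 shares competing against one with 20 gets roughly a third of
the CPU.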
>Message-ID: <424EC41F.2060901@xxxxxxx>
>Date: Sat, 02 Apr 2005 11:11:11 -0500
>From: Robert Escue <roescue@xxxxxxx>
>User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)
>X-Accept-Language: en-us, en
>MIME-Version: 1.0
>To: jim allan <intehnet@xxxxxxxxx>
>Cc: bugtraq@xxxxxxxxxxxxxxxxx
>Subject: Re: Solaris 10 Containers / Zones Security Flaw
>References: <20050401073804.28308.qmail@xxxxxxxxxxxxxxxxxxxxx>
>In-Reply-To: <20050401073804.28308.qmail@xxxxxxxxxxxxxxxxxxxxx>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>Content-Transfer-Encoding: 7bit
>
>jim allan wrote:
>
>>All,
>>
>>Thought I'd share something from a bit of home research. It's a bit
>>trivial, and the "hole" (so to speak) is easily patched up, but it
>>contradicts Sun's claims regarding Solaris 10 security.
>>
>>
>>Solaris 10 contains a feature called containers, or zones, which are
>>somewhat like a VMware session embedded inside the kernel. These
>>separate zones have their own IP address (a virtual interface off a
>>physical interface, e.g. bge0:1), their own /proc, /dev, /etc and
>>file system, effectively their own operating system, and are unable
>>to affect the master or other zones.
>>Sun suggests zones are good for running separate internet-facing
>>applications; for example, a Solaris 10 box runs a web server in one
>>zone and an internal DNS server in another. If the internet-facing
>>web server gets compromised and an attacker drops themselves to root
>>in that zone, then although they are on the box, they cannot get
>>outside that zone. Often they'll have to be wise to Solaris 10 to
>>even know they are in a zone and not on a box of its own.
>>They can compromise and wreak havoc in that zone without any other
>>zones, or the master zone from which all zones are controlled, being
>>affected. There is NO way to drop out of a slave zone into the master
>>zone (yet...) unless you logged into the master zone first. I hope
>>that makes sense; read Sun's web page if you want to know more:
>>http://www.sun.com/software/solaris/
>>
>>
>>Here's where it gets interesting. By default, there is no limit on
>>virtual memory or CPU time for each zone. By doing a standard bash
>>fork bomb, I was able to take down an entire Solaris 10 box from
>>within a non-master zone. All zones were locked up, including the
>>master zone.
>>
>>
>>It's nothing groundbreaking, but I just found it interesting/poor
>>that Sun didn't place, by default, CPU or memory limits on zones,
>>which are meant to be, essentially, masters of their own domain and
>>unable to affect other zones. One would have to go out of their way
>>to configure CPU limits.
>>
>>
>>See the bash fork bomb below.
>>
>>
>>#!/usr/local/bin/bash
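>># defines ":" as a function that pipes a call to itself into a second
>># backgrounded call; the trailing ":" invokes it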
>>:(){ :|:& };:
>>
>>
>>
>>
>>PS: if you wish to patch this, either set a ulimit on the amount of
>>virtual memory a user can have, or explore the setup of zones; I've
>>been told there is a way to configure a limit on CPU time, although I
>>haven't been able to find any relevant documentation after a brief
>>search.
>>I'm considering writing a patch using Solaris 10's DTrace D language
>>to catch a process that forks X times in Y time, given some miracle
>>that I have some free time once in a while :)
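
A rough, untested sketch of what such a D script might look like; it
only observes fork rates rather than stopping anything, and the 10
second interval is just a placeholder:

#!/usr/sbin/dtrace -s

/* fires once for every new process created (fork/vfork/fork1) */
proc:::create
{
        @forks[execname] = count();
}

/* every 10 seconds, print and reset the per-command fork counts */
tick-10s
{
        printa("%-20s forked %@d times in the last 10s\n", @forks);
        trunc(@forks);
}

Actually stopping an offender (e.g. with stop()) would need dtrace run
with destructive actions enabled (-w), so a real fix more likely
belongs in the resource controls discussed above.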
>>
>>I look forward to your replies.
>>
>>
>>jim allan
>>
>>intehnet at g mail dot com
>>
>>
>>
>>
>Jim,
>
>Did you install bash or use the one supplied with Solaris 10
>(/usr/bin/bash)? I cannot duplicate the results you got on my Ultra 2
>using your fork bomb in a bash shell as an unprivileged user; see
>below:
>
>This is the session I started after I ran the fork bomb for at least 15
>minutes:
>
>login as: luser
>Password:
>Last login: Sat Apr 2 10:26:15 2005 from 192.168.1.12
>Sun Microsystems Inc. SunOS 5.10 Generic January 2005
>-bash-3.00$ id
>uid=101(luser) gid=10(staff)
>-bash-3.00$
>
>This is the screen output of the session where I launched the fork bomb:
>
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: xmalloc: execute_cmd.c:267: cannot allocate 32 bytes (0 bytes allocated)
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: xmalloc: execute_cmd.c:267: cannot allocate 32 bytes (0 bytes allocated)
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>-bash: fork: Not enough space
>
>This is the output of prstat -Z showing the activity of the zone zonetest:
>
> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
> 10950 root 7040K 4520K cpu1 59 0 0:00:00 0.2% prstat/1
> 10939 root 10M 4864K sleep 59 0 0:00:00 0.0% sshd/1
> 1219 root 3696K 1752K sleep 59 0 0:00:00 0.0% nscd/25
> 10944 luser 5208K 2344K sleep 59 0 0:00:00 0.0% bash/1
> 5812 root 5976K 2496K sleep 59 0 0:00:00 0.0% sendmail/1
> 1188 root 3552K 648K sleep 59 0 0:00:00 0.0% sh/1
> 1138 daemon 6552K 1792K sleep 59 0 0:00:00 0.0% kcfd/3
> 1107 root 12M 328K sleep 59 0 0:00:04 0.0% svc.startd/13
> 1275 root 6016K 1568K sleep 59 0 0:00:00 0.0% syslogd/14
> 1214 root 4976K 8K sleep 59 0 0:00:00 0.0% cron/1
> 1223 root 2120K 824K sleep 59 0 0:00:00 0.0% ttymon/1
> 1181 root 6936K 264K sleep 59 0 0:00:01 0.0% inetd/4
> 1184 root 1256K 936K sleep 59 0 0:00:00 0.0% utmpd/1
> 1173 daemon 2936K 8K sleep 59 0 0:00:00 0.0% statd/1
> 1268 root 6176K 1112K sleep 59 0 0:00:00 0.0% sshd/1
>ZONEID NPROC SIZE RSS MEMORY TIME CPU ZONE
> 2 36 193M 36M 2.4% 0:00:25 0.2% zonetest
>
>Total: 36 processes, 106 lwps, load averages: 0.01, 1.02, 19.88
>
>And finally the output of prstat -a showing the activity of the whole
>system:
>
> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
> 10951 root 7040K 4520K cpu0 59 0 0:00:00 0.1% prstat/1
> 1219 root 3696K 1752K sleep 59 0 0:00:00 0.0% nscd/25
> 5812 root 5976K 2496K sleep 59 0 0:00:00 0.0% sendmail/1
> 1107 root 12M 328K sleep 59 0 0:00:04 0.0% svc.startd/13
> 1275 root 6016K 1568K sleep 59 0 0:00:00 0.0% syslogd/14
> 1214 root 4976K 8K sleep 59 0 0:00:00 0.0% cron/1
> 1223 root 2120K 824K sleep 59 0 0:00:00 0.0% ttymon/1
> 1181 root 6936K 264K sleep 59 0 0:00:01 0.0% inetd/4
> 1188 root 3552K 648K sleep 59 0 0:00:00 0.0% sh/1
> 1184 root 1256K 936K sleep 59 0 0:00:00 0.0% utmpd/1
> 1173 daemon 2936K 8K sleep 59 0 0:00:00 0.0% statd/1
> 1138 daemon 6552K 1792K sleep 59 0 0:00:00 0.0% kcfd/3
> 1268 root 6176K 1112K sleep 59 0 0:00:00 0.0% sshd/1
> 1222 root 1984K 752K sleep 59 0 0:00:00 0.0% sac/1
> 1109 root 9128K 264K sleep 59 0 0:00:20 0.0% svc.configd/12
> NPROC USERNAME SIZE RSS MEMORY TIME CPU
> 28 root 148M 29M 1.9% 0:00:25 0.1%
> 4 luser 31M 6096K 0.4% 0:00:00 0.0%
> 4 daemon 14M 1816K 0.1% 0:00:00 0.0%
>
>
>Total: 36 processes, 105 lwps, load averages: 0.01, 0.57, 16.39
>
>There are multiple ways of controlling resource use in Solaris 10, but
>if you want to limit total processes you could use this line in
>/etc/system:
>
>set maxuprc=(number of processes)
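
(For example, "set maxuprc=200" would cap each non-root user at 200
processes; the value here is only illustrative, and changes to
/etc/system take effect only after a reboot.)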
>
>For more information:
>
>http://docs.sun.com/app/docs/doc/806-7009/6jftnqsjd?a=view
>
>
>Robert Escue
>System Administrator
>