Thursday, December 18, 2008

Opera, SVGs and Java applets

Opera 9.63 was just released with some security fixes. I reported one of these issues, but neither I nor Tarquin (a super friendly and knowledgeable Opera security guy) could do anything significant with it, despite feeling uneasy about the feature.

The issue is this: when an SVG image is included via an <img> tag, it is standard practice to disable JavaScript execution in that context. However, I noted that you could run a Java applet (and presumably Flash) in this context via SVG tags such as <html:applet>.

A demo is here:

Every attack we came up with is catered for. We discussed some very in-depth attacks (which I don't want to go into just yet) but Opera has some nice tweaks such as respecting Content-Disposition: attachment for SVG images referred to via the <img> tag. The Opera guys even checked that the unusual context of executing due to an <img> tag gets the domain correct (that of the img resource, not the hosting page). By the time I inquired about this, they had already checked.

I continue to be impressed with Opera; the bug was fixed lightning fast even though no severe impact is known. And a few little Opera defensive measures turned out useful. This follows on from Opera being immune to my image theft via SVG attack.

Not many browsers support this advanced feature. Aside from Opera, both Safari and Chrome support this. But they do not render Java applets in SVGs in the <img> tag context.

If you can think of a scenario where these embedded applets could cause more trouble than I've realized, please leave a comment or mail me :)

Wednesday, December 17, 2008

Firefox cross-domain text theft....

... and a reappearance of the "302 redirect trick".

Here's the second bug from my PacSec presentation, and it's another Firefox one; kudos to the Firefox security team for their responsiveness. It's fixed in the recent releases, including 3.0.5.

It involves, yes, a cross-domain <script src="blah"> tag. These remain a horrible wart in web app security; you have to make sure that any authenticated resource on your domain either does not have any side effects when parsed / executed as JavaScript, or is CSRF protected.

This particular bug involves Firefox's window.onerror handler, which reports on JavaScript parse and execution errors. This handler has previously been used by Jeremiah Grossman to determine login status via script errors, see here! (While this hole can be closed, it's not clear my similar attack via CSS can be).

The new attack notes that certain JavaScript error messages leak real content from remote domains, for certain constructs of data. More in-depth technical detail is here:

One cute twist is that Firefox 3 already had this fixed (thanks to Filipe Almeida; see credit below), but the "302 redirect trick" bypassed that fix. This trick is becoming quite fruitful; see previous Firefox image theft bug.

Credit to Filipe Almeida for being awesome. He was playing with this stuff long before anyone else.

Monday, November 24, 2008

Cookie forcing

It's time to write some coherent details about "cookie forcing", the name I've given to a new way to attempt to break into secure https sessions. This is surfjacking taken to the max: attacks that an active MITM (man-in-the-middle) can attempt against an https application that follows best practices like marking its cookies Secure, avoiding XSS and XSRF, etc.

Cookie forcing relies on slightly broken browser behaviour. Namely, an http response can set, overwrite or delete cookies used by an https session. This is a minor violation of https session integrity that can have significant consequences. Unfortunately, the cookie model works this way by specification and the addition of the Secure flag did not clean this up.
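To make the integrity violation concrete, here's a toy model of the cookie jar's scheme-blindness. This is a sketch of the behaviour described above, not actual browser logic; the domain and cookie names are made up.

```python
# Toy model: cookies are keyed by (domain, name) only. The Secure flag gates
# *sending* over https, but nothing stops a plain-http response from
# overwriting the same (domain, name) slot.
jar = {}

def set_cookie(domain, name, value, secure=False):
    jar[(domain, name)] = {"value": value, "secure": secure}

def get_cookie(domain, name, via_https):
    entry = jar.get((domain, name))
    if entry is None:
        return None
    if entry["secure"] and not via_https:
        return None  # Secure cookies aren't sent over plain http
    return entry["value"]

# The https app sets a Secure session cookie...
set_cookie("bank.example", "SID", "legit", secure=True)
# ...then an active MITM answers some background request for
# http://bank.example/ with a Set-Cookie header, forcing its own value:
set_cookie("bank.example", "SID", "evil")
# The https session now reads back the attacker's value.
forced = get_cookie("bank.example", "SID", via_https=True)
```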

This means that every cookie value used by https applications could be malicious. This is somewhat counter-intuitive for developers of https-only apps, so it's understandable that vulnerabilities result from too much trust here. Looking at specific classes of vulnerability that can result from this, we have:
  • XSS
  • XSRF
  • Login XSRF
  • DoS
  • Logic abuse
Regarding XSS, any trust that the cookie value is properly escaped (before e.g. pasting it into the DOM or passing it to document.write) is a vulnerability. Properly escaping the cookie value on write and relying on https for integrity is, unfortunately, not sufficient.
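For illustration, here's a minimal sketch (in Python, standing in for whatever server-side language) of the safe pattern: treat the cookie bytes as hostile at the point of use, since an active MITM can replace the stored value wholesale.

```python
import html

def render_greeting(cookie_value):
    # Escape at the point of use; never assume the stored value is clean.
    return "<div>Hello, %s</div>" % html.escape(cookie_value)

# A value an attacker could force into the cookie:
forced_value = '<script>alert(document.cookie)</script>'
safe_html = render_greeting(forced_value)
```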

There's another increasingly common application construct which cookie forcing can abuse to cause XSS. An increasing number of frameworks are writing JSON into cookie values for fast deserialization of complex data types. Some of these frameworks read the values back in using the fast option of e.g. eval('var x = ' + getCookieValue('STATE')). I'm sure you can see the pitfall here.
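The post's example is JavaScript's eval(); the same pitfall sketched in Python, with a hypothetical forced cookie value, looks like this. A strict JSON parser accepts data and nothing else, which is the whole point.

```python
import json

# An attacker who can force the cookie controls these bytes entirely.
forced_cookie = "__import__('os').getcwd()"  # stands in for arbitrary code

# Unsafe "fast deserialization": the forced value executes as code.
result = eval(forced_cookie)

# Safe alternative: a strict JSON parser rejects anything but data.
try:
    json.loads(forced_cookie)
    parsed = True
except json.JSONDecodeError:
    parsed = False
```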

Regarding XSRF, there is a certain type of XSRF protection that can be subverted by cookie forcing. If the XSRF protection is a simple comparison between a URL parameter value and a cookie value, the active MITM attacker can now fake both of these. The mitigation is to ensure that the value is cryptographically attached to the current user's session via e.g. an HMAC.
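A sketch of that mitigation, using a hypothetical server-side key: the token is an HMAC over the session identifier, so a MITM who can force a cookie still cannot forge a matching pair without the key.

```python
import hashlib
import hmac

SERVER_KEY = b"server-side secret"  # hypothetical; never sent to the client

def xsrf_token(session_id):
    # Cryptographically bind the token to the current session.
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_xsrf(session_id, url_param):
    # Constant-time comparison against the expected token.
    return hmac.compare_digest(xsrf_token(session_id), url_param)
```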

Regarding login XSRF, I recommend the very recent paper by Adam Barth, Collin Jackson and John Mitchell which covers this topic nicely. These guys are great - they don't just moan about breakage in the browsers, they work to fix it up too. See the "Cookie-Integrity" header suggestion. In the absence of this header, one possible mitigation is to randomize the name of the session cookie. But that's going to a somewhat extreme measure to work around a browser deficiency. I'm not going to recommend every web app does something complex, when the fix should be driven by the browsers. Also note that logout XSRF will be near impossible to fix; an attacker can just spray cookies into a browser until it drops the session cookie.

Regarding DoS and logic abuse, there is the possibility to mess with any cookies an https app uses, to try and make it misbehave when it encounters unexpected values. The mitigation is to sign any sensitive cookies (tying them to the current user, preferably the current session).

Cookie forcing works very well in conjunction with my previous post regarding browser background http requests. In such an attack, a victim won't see anything untoward. The redirections all happen behind the scenes and do not change the URL status bar.

Remember that this attack is only relevant against https applications without any more obvious vulnerabilities. You need an active MITM capability (e.g. public wireless) to attempt it. Any applications without https support are already ruined against such a threat model.

Credit to Filipe Almeida for being about two years ahead of the rest of the web app security community, as usual. The XSRF issue was his originally, and a long time ago. More recently, there appears to have been an independent discovery by Collin Jackson and friends at Stanford.

Friday, November 21, 2008

Owning the paranoid: browser background traffic

When I talk to a lot of security researchers or paranoid types, it's very common to hear them describe how they very carefully access their bank account or personal GMail etc. Generally, the model used is to launch a separate browser instance, and navigate straight to an https bookmark. The session remains single-window, single-tab. It's a powerful model; the intent is to eliminate the chance of another (http) tab being a vector for owning the browser, or more likely abusing a cross-domain flaw in the browser or bank's web application.

Attacking this browsing model was one of the key demos in my PacSec presentation.

Whether you know it or not (and whether you like it or not), your browser is likely engaging in a flurry of behind-the-scenes plain-http requests. Some examples are:
  • Safebrowsing updates
  • OCSP or other certificate related requests
  • Updating RSS feeds
Before going on to what a MITM attacker can do here, it's well worth mentioning the mitigation that I found. It seems to work well on the browsers I tested. If you set your http proxy (and those for all protocols other than https, why not) to localhost:1, then this unwanted plain-http traffic will not go out on the network. The browsers seem to honour the proxy setting even for internally-initiated requests, which is nice. It's not entirely clear that blocking OCSP requests in this way is ideal, but it's better than the alternatives.
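The same black-hole trick can be sketched outside a browser. This is an illustration of the idea using Python's urllib, not browser code: route plain http via a proxy that can't exist, and the request dies locally instead of going out on the wire.

```python
import urllib.error
import urllib.request

# Point all plain-http traffic at a proxy address that is almost never
# listening (port 1 requires root to bind), mirroring the localhost:1 trick.
handler = urllib.request.ProxyHandler({"http": "http://127.0.0.1:1"})
opener = urllib.request.build_opener(handler)

def leaks_http(url):
    try:
        opener.open(url, timeout=2)
        return True
    except (urllib.error.URLError, OSError):
        return False  # the request never left the machine
```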

So what evil can the MITM attacker do with these plain http requests? The good news is that the requests that need to be signed are (Safebrowsing and OCSP). Interestingly, a failure talking OCSP during https initiation does not prevent the connection, but that's a separate discussion.

Specific useful attacks include:
  • Attacking the exposed HTTP protocol attack surface
  • Replying with a 302 redirect in order to exploit surfjacking
  • Replying with a 302 redirect followed by a Set-Cookie to exploit cookie forcing
As you can see, it's an interesting result that this paranoid browsing model does not protect you from surfjacking attacks, where you might have thought it would. Particularly so because a lot of financial web sites neglect to mark their cookies Secure.

Cookie forcing is a great advanced way for an MITM to break into https web apps that are not vulnerable to surfjacking (or XSS, XSRF, XSSI and the other usual suspects). I will detail this new attack class and its opportunities in a subsequent post. Also see Billy's nice write-up on mixed-content http script loading for another under-appreciated attack against https web apps.

Closing questions that could lead to future research include:
  • Do Firefox / Opera / other browsers have robust OCSP response parsers?
  • What can you do with evil / malformed XML responses to RSS updates?
  • What about replying to a background request with an unexpected MIME type - does that expand the attack surface?
  • What about other interesting or unexpected HTTP headers?

Tuesday, November 18, 2008

E4X and a Firefox XML injection bug

Up-front credit to my colleagues Filipe Almeida and Michal Zalewski who led the way in E4X security research.

If you haven't heard of E4X, or don't know why Firefox's E4X support should scare you, please consider reading this article.

I've just released details for a recently fixed Firefox XML injection bug. It's one of those bugs that is in search of a good exploitation opportunity. Currently, the known impact is negligible, but I'm throwing it out in case anyone has better ideas than I do. It feels like the interaction of this bug and E4X should be fruitful but perhaps not:

Monday, November 17, 2008

Firefox cross-domain image theft... and the "302 redirect trick"

Here's the first bug with full details from my PacSec presentation. It's fixed in the recent Firefox update. Firefox 3 was never vulnerable. In a nutshell, decent modern browsers permit you to read the pixels of an image by rendering it to a <canvas> and calling the JavaScript APIs getImageData or toDataURL. Therefore, cross-domain checks are required on the usage of these APIs. In Firefox, these checks were present but did not cater for the "302 redirect trick" properly.

So what is the "302 redirect trick"? It is where a malicious web page accesses a remote resource by referring to a local same-domain URL which hits the remote resource via an HTTP redirect. If the attacker is lucky, the browser is fooled into believing the remote resource is actually a local resource. And then theft is trivial.
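A toy model of the check the browser has to get right (my own sketch, not browser code; all hostnames are made up): the cross-domain decision must use the URL the bytes finally came from, not the URL the page asked for.

```python
# Hypothetical redirect table: the attacker's server 302s to the victim.
REDIRECTS = {"http://attacker.example/pic": "http://victim.example/secret.png"}

def resolve(url):
    # Follow any chain of 302s to the final resource URL.
    while url in REDIRECTS:
        url = REDIRECTS[url]
    return url

def host(url):
    return url.split("/")[2]

def may_read_pixels(page_url, img_url, check_final):
    target = resolve(img_url) if check_final else img_url
    return host(page_url) == host(target)

# Buggy check (pre-redirect URL): the attacker's page may read the pixels.
buggy = may_read_pixels("http://attacker.example/evil.html",
                        "http://attacker.example/pic", check_final=False)
# Correct check (post-redirect URL): the read is denied.
fixed = may_read_pixels("http://attacker.example/evil.html",
                        "http://attacker.example/pic", check_final=True)
```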

The "302 redirect trick" has appeared many times in the past. It will undoubtedly lead to more vulnerabilities (I have two pending in fact). My personal past favourite was from my Google colleague Martin Straka, who noticed that 302 redirect targets leaked into the DOM when loading stylesheets.

The "302 redirect trick" works particularly well in cross-domain areas where the cross-domain nature of requests used to be unimportant. In the case of images, it has always been accepted that existence (or not) and width / height leak cross-domain. However, getting the domain right became critical when the image data itself became accessible. Contrast with iframes, where getting the domain right has always been critical: browsers tend not to suffer vulnerabilities when loading iframes via a redirect.

Full details including demo attack code are in the advisory:

Sunday, November 16, 2008

PacSec presentation

My recent PacSec presentation (with Billy Rios), entitled "Cross-domain leakiness", is now online.
You can view it via this link.

There's a new way to attack SSL-enabled web apps in there ("Cookie Forcing"); a bunch of serious browser cross-domain thefts (many not yet disclosed); and attacks against the paranoid one window / one tab browsing model.

The slides by themselves are a little sketchy on detail. So over the next few days, time permitting, I'll write individual blog posts summarizing these areas. I will also blog details about the serious cross-domain thefts as and when the browser vendors fix them.

Monday, October 20, 2008

Some Python bugs

A little late on this report, but here are some Python runtime bugs I found back in May 2007:

Nothing too interesting. It continues to illustrate that modules backed by native code are a great way to break out of a VM. Also, image manipulation code remains a hot spot for integer overflows.

The pickle bug is worth talking about. It has been known for trusted applications to unpickle untrusted data. Of course, any such application invites arbitrary Python code execution unless the pickled buffer is very carefully sanitized; Python pickle buffers can carry Python executable payloads. Assuming an application avoids this more egregious security bug, this is a nasty subtlety. Along with the string concatenation bug, it's a way an attacker could directly attack a trusted application written in an allegedly memory-safe language.
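The "pickle buffers can carry executable payloads" point deserves a minimal illustration. This demonstrates the well-known design property of the format, not the memory-corruption bug above: a pickle can name any callable plus its arguments, and loads() will call it.

```python
import pickle

class CarriesCode:
    def __reduce__(self):
        # print() stands in for something far worse, e.g. os.system.
        return (print, ("arbitrary call made during unpickling",))

# The serialized bytes contain an instruction to call print(...).
payload = pickle.dumps(CarriesCode())

# Unpickling untrusted data executes the attacker's chosen call.
pickle.loads(payload)
```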

Whilst testing out the pickle bug, I was seeing a very interesting glibc interaction:

*** glibc detected *** ./python: munmap_chunk(): invalid pointer: 0x0819e9a8
Segmentation fault

Unfortunately, my laptop with the magic glibc / gcc versions to reproduce this died horribly and I can't even remember what it was running. Anyway, these messages suggest that the glibc memory error handler is trusting the heap rather than "getting the hell out" by using write(stderr, ...) and kill(getpid(), SIGABRT). This can sometimes turn an unexploitable condition into an exploitable one. If you're interested in looking into this, let me know and I can try and help with the test environment.

Saturday, August 30, 2008

Cross-domain leaks of site logins

Browsers suck. We're building our fortified web apps on foundations of sand. A little while back, I was talking with Jeremiah about an interesting attack he had to determine whether a user is logged into a given site or not. The attack relies on the target site hosting an image at a known URL for authenticated users only. It proceeds by abusing a generic browser cross-domain leak of whether an image exists or not -- via the onload vs. onerror JavaScript events. Browsers generally closed that leak for local filesystem URLs (thus preventing accurate profiling of a victim's machine) but neglected to close it generally.

My version of this "login determination" attack abuses another leaky area of browser cross-domain handling: CSS. The <link> tag permits us to load a CSS resource from an arbitrary domain. Two observations make this interesting: first, we can read any CSS property value if we know the style name plus the property name we are interested in; second, most websites serve different CSS depending on whether the user is logged in or not. In addition, remember that browsers will happily pluck inline style definitions out of HTML. Put these things together, and here's a FF3.0.1 snippet that will tell if you are logged into MySpace or not:

<link rel="stylesheet" href="...">
<script>
function func() {
  var ele = document.getElementById('blah');
  alert(window.getComputedStyle(ele, null).getPropertyValue('margin-bottom'));
}
</script>
<body onload="func()">
<div id="blah" class="show"></div>

If you are logged in, you'll see "3px" vs. "0px" otherwise.

You'll also appreciate from this that any CSS property value is stealable cross-domain, assuming the style names aren't randomized (which I've never seen). The natural follow-up question is: are sensitive values stored in CSS properties? Currently, generally not, although I have seen background-image URLs storing look & feel customization, which could assist in fingerprinting a user. In a couple of extreme cases, I've seen a background image set to a data: URI such as data:image/png;base64,blabla. Might be worth stealing.

Friday, August 29, 2008

Ode to the bug that almost was

This post is a tribute to the hundreds of bugs that never quite were serious, and the emotional roller coaster ride on which they take researchers.

Some brief background. The skill in finding serious bugs these days isn't in being a demon code auditor or a furious fuzzer; there are thousands of these. The skill lies instead in finding a piece of software, or a piece of functionality, that has the curious mix of being important yet not having seen much scrutiny.

Onto today's almost-bug: an interesting integer condition in the ZIP parser of Sun's JDK. The ZIP parser is a critical piece of code: not only is it used to parse JARs, but server-side Java applications will also often parse untrusted ZIPs. (Such direct server-side attacks, along the lines of my JDK ICC parser vulnerabilities last year, are nasty, and have recently started to become in vogue for Python, Ruby and Perl too). The affected API is the one backed by the native code below; best I know, the alternative API is not backed by the same native code, and is thereby unaffected.

The interesting code, in zip_util.c, is as follows:

/* Following are unsigned 32-bit */
jlong endpos, cenpos, cenlen;

/* Get position and length of central directory */
cenlen = ENDSIZ(endbuf);
if (cenlen > endpos)
    ZIP_FORMAT_ERROR("invalid END header (bad central directory size)");
cenpos = endpos - cenlen;

jlong is a signed 64-bit type. The ENDSIZ macro, because of the way it is formulated, returns a signed int. Therefore, the assignment to cenlen triggers sign extension. This means that cenlen can end up negative, rather than being treated as an unsigned 32-bit quantity as intended. The negative value will of course bypass the security check and lead to subsequent undesirable state. (Note that the best fix is not to strengthen the check, but to add a cast to unsigned int in the underlying macro, as it is used in multiple places).
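The sign-extension arithmetic can be modelled directly. A sketch in Python (the field value is an arbitrary example): a 32-bit value with the top bit set goes negative when assigned to the signed 64-bit jlong, and sails under the "cenlen > endpos" check.

```python
def as_jlong_from_signed_int(v32):
    # Mimic C: a signed 32-bit int assigned to a signed 64-bit jlong.
    v32 &= 0xFFFFFFFF
    return v32 - (1 << 32) if v32 & 0x80000000 else v32

endpos = 1000                                   # small, honest file offset
cenlen = as_jlong_from_signed_int(0xFFFFFF00)   # attacker-supplied ENDSIZ field

assert cenlen < 0                # sign-extended, not the intended unsigned value
assert not (cenlen > endpos)     # the sanity check is bypassed
cenpos = endpos - cenlen         # subtracting a negative: cenpos lands *past*
                                 # endpos, the state the check was meant to exclude
```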

So why does this appear to be just a bug and not a security vulnerability? Well, on systems without mmap(), a huge allocation will either cleanly fail, or a read() attempt past EOF will cleanly fail. On systems with mmap(), things are more interesting. A 32-bit build will attempt a 2GB mapping of a potentially much smaller file. This could lead to interesting SIGBUS conditions as a server DoS. By quite some luck, the Sun JVM process seems to spray mappings liberally through the address space, leaving no room for a contiguous 2GB mapping.

The same sign-extension bug exists in other parts of the ZIP handling, and leads to some interesting negative values getting to some interesting places. But lower-level sanity checks save the day in the cases that I could find.

A zip file capable of triggering the interesting log line "mmap failed for CEN and END part of zip file" is available at

Ah well, maybe next time. Come to think of it, my pipeline does include real JDK vulns. Watch this space.

Monday, August 25, 2008

A dangerous combination of browser features

As browsers gain more and more features, the possibility increases for interesting or dangerous interactions between these features. I was recently playing with a couple of new browser features -- <canvas> and SVGs -- and found a cross-domain leak in the development version of Webkit:

Fortunately, no production versions of the major browsers are affected - and forearmed with this information, they can keep it that way. The only production browser I found that supports all of the required pieces is Opera 9.52, and they deserve some serious credit for getting the security check correct.

Thursday, July 31, 2008

Buffer overflow in libxslt

libxslt is an interesting attack surface; there are various places in which it is used to process untrusted stylesheets. This includes some browsers, although namespace issues seem to prevent the affected code from being reached in a browser context.

Within libxslt itself, there are some built-in functions. These are usually a fruitful place to look for vulnerabilities, particularly those that take integers etc. In this instance, I found problems in a little-used cryptography-related extension function. An incoming string is over-trusted in that its length is not sanitized, leading to a heap overflow.

XSLT, surprisingly, is Turing-complete, even in its currently deployed incarnations (although you need to implement looping via recursion). There may be interesting DoS and further exploitation opportunities here.

Full technical details can be found here:

Tuesday, July 29, 2008

On FTP, SSL and broken interfaces

Oh what a fun day I just had piecing together a few SSL changes for vsftpd!

Let's start with a brief background on SSL. SSL provides not just secrecy but also integrity: an attacker cannot change your data stream in flight. This obviously includes changing data within the stream and, less obviously, truncating the stream. The interesting way to truncate the stream is to fake a TCP packet with the FIN flag set. Truncated data is still an integrity violation and could have interesting consequences depending on what is being transferred. Anyway, as of SSLv3 and newer, the protocol protects against this. So, we're all good, right?

Well, no, not really. Let's look at how SSL_read() indicates a problem. We all know how read() behaves, of course: it returns 0 on a healthy EOF. SSL_read() does the same on a healthy, cryptographically guaranteed EOF. But what does it do if the attacker forces the TCP stream to close with a faked FIN? It also returns 0. That's right: no indication of any problem at the API level. If you want to check what is going on, you need to either check the SSL error code, in case there is one, or call SSL_get_shutdown(). (This double option in itself leads to confusion and random variance in code, which isn't great, but that's another topic).

The OpenSSL API for SSL_read() is broken. You can phrase why in various ways: it violates the principle of least surprise. Or, perhaps best said, it provides an API that is easy to abuse. Good, secure APIs are hard to abuse. Contrast strcat() with string::operator+=() for the classic example.
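The API-design point can be shown with a toy model (this is a sketch in Python, not OpenSSL itself): a read call that returns 0 for both a clean close_notify and a forged TCP FIN is trivially misused, while an API that raises on truncation cannot be silently ignored.

```python
class Truncated(Exception):
    pass

def read_openssl_style(conn):
    # 0 at end of stream, whether or not the EOF was cryptographically clean.
    # The caller must remember to *also* check a separate shutdown flag.
    if conn["chunks"]:
        return conn["chunks"].pop(0)
    return 0

def read_strict(conn):
    # Truncation is impossible to overlook: it raises.
    if conn["chunks"]:
        return conn["chunks"].pop(0)
    if not conn["got_close_notify"]:
        raise Truncated("stream ended without close_notify")
    return 0

# An attacker fakes a FIN mid-transfer: no close_notify ever arrived.
forged = {"chunks": [], "got_close_notify": False}
```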

A quick survey of popular FTP servers reveals that they universally fail to check for this condition when accepting an SSL-secured upload. vsftpd v2.0.7 now optionally checks for this condition, but it is off by default! Why? The majority of FTP clients have an interoperability bug: they don't SSL_shutdown() their uploads. It's a direct knock-on from the broken SSL_read() API. If it returned an error on forced connection shutdown, FTP servers would never have tolerated buggy clients, and clients would have been forced to shut down their connections correctly.

Thanks to Tim Kosse of FileZilla fame for putting me on to this area by checking for secure connection shutdown in the latest version of FileZilla, exposing an interoperability bug in vsftpd's SSL downloads.

Monday, July 14, 2008

Lame OpenOffice PCX crash

Sorry for the lame vuln. It's something I was playing with over a year ago and I just happened to notice it got fixed. I forget what the original deal was. I'm only posting because this blog serves as an RSS feed for the main vuln list.

A more interesting OpenOffice observation is in the works.

Sunday, July 13, 2008

Fancy an exploitation challenge?

So you think you're 1337? Check out these just released details of a buffer overflow in bzip2:

It looks pretty harmless, and it probably is... but I'd love for it not to be... if you think you have what it takes.

Friday, July 11, 2008

iPhone Safari update fixes old libxslt version

This story is both interesting and boring at the same time.

Boring because I didn't find anything new -- I just noted the applicability of something old to Apple's Safari. I've made sure to credit the finder of the old bug that applies to Safari; unfortunately not everyone in the security industry credits the original finder of the bug when noting it applies to a new context.

The story is interesting because it illustrates the ongoing challenge of depending upon complex open source libraries. As these move forward, you need a good way of keeping on top of them. The public nature of their bug repositories is a challenge: frequently, some user will log a "crash" bug which in fact has serious security consequences. These consequences may not immediately be realized and called out in the bug report, change log or release announcement.

Wednesday, March 5, 2008

Sun JDK image parsing vulnerabilities

The technical details for this pair of vulnerabilities can be found here:

These vulnerabilities follow on from my original advisory in this area:

There are lots of interesting sub-stories here.

The first is that exploitation of the heap buffer overflows (in both the old and new advisories) relies on the fact that the JDK environment has a SEGV handler installed. These particular heap overflows will always try to perform massively long copies, therefore faulting partway through the copy. This would be a DoS only, were it not for the SEGV handler: as part of trying to dump out a good crash report, it can access trashed memory, turning the fault into an exploitable condition.

The second is that this is a very dangerous class of attack. Most previous JDK attacks apply to running untrusted applets. These bugs, however, also trigger in server-side environments where JPEG parsing is performed. Direct, data-driven compromise of servers is quite unfortunate, especially in a runtime environment where memory corruptions aren't supposed to be able to occur.

Wednesday, February 27, 2008

Buffer overflow in Ghostscript

Given the huge amount of attention given to xpdf (and derivatives), it is surprising that not as much attention has been given to Ghostscript. Most Linux desktops will render both PDF and PS files directly from the web.

The attack surface of Ghostscript is huge. Not only does it implement a Turing-complete language[*], but it has a rich set of runtime operators and APIs. Many of these operators and APIs stray into areas of functionality that are prone to integer overflows: decompressors, image parsers, graphics rendering, canvas handling, etc.

I've placed technical details of a buffer overflow at:

[*] Client-side execution of such languages has never gone particularly well from a security perspective. Think Java applets, or Javascript.

Wednesday, February 13, 2008

Your FTP / SSL solution is really secure, right?

Well no, not really. Almost all real-world usage of FTP over SSL has problems whereby the FTP data connection can be stolen (resulting in stolen downloads or forged uploads). The problem is mainly with FTP clients - if you require end users to generate their own SSL certs and manually enable sending them to the server, you've already lost on usability grounds.

Full technical details at

Saturday, February 2, 2008

Sun JDK6 XXE protection broken

Sun released JDK6u4 which fixes a possibly nasty issue where one of the XXE protection methods for the default XML parser was broken.

My advisory is at

Sun's advisory is at

Secunia picked it up at

Web services are obviously a key concern here. I haven't checked to see how the common web service frameworks do XXE protection. It's possible to ban DTDs outright, but I'd suspect it's more common to use the broken parser property
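The Java property name isn't given above; purely as an illustration of the same class of defence, here's how the analogous switch looks in Python's stdlib SAX parser (my example, not from the advisory): a parser feature that refuses to resolve external general entities, the classic XXE vector.

```python
import io
import xml.sax
from xml.sax.handler import feature_external_ges

class Text(xml.sax.ContentHandler):
    def __init__(self):
        self.chars = []
    def characters(self, data):
        self.chars.append(data)

# An XXE probe: the external entity would pull in a local file if resolved.
evil = b"""<?xml version="1.0"?>
<!DOCTYPE d [<!ENTITY xxe SYSTEM "file:///etc/hostname">]>
<d>&xxe;</d>"""

parser = xml.sax.make_parser()
handler = Text()
parser.setContentHandler(handler)
# The parser property doing the protecting: don't resolve external entities.
parser.setFeature(feature_external_ges, False)
parser.parse(io.BytesIO(evil))
leaked = "".join(handler.chars)
```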

I'd love feedback on specific affected technologies.