- Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL. The easy fix for a malicious developer is to simply use SSL for the parent site, but that could be problematic, as the CAs theoretically verify the sites requesting certificates.
- Third, you can use a JavaScript frame-busting snippet, which redirects the top-level window to your own page if it finds itself inside a frame:

if (top != self)
    top.location = self.location;
- Fourth, there is a new HTTP header that Microsoft introduced with IE 8 that tells the browser to simply stop processing the request if the requested page is hosted in a frame. Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on. The header is called X-Frame-Options, and it can have two values: “DENY”, which prevents the page from being framed at all, and “SAMEORIGIN”, which allows the page to be rendered only if the framing page comes from the same origin, e.g. the parent is somesite.com/page and the framed page is somesite.com/page.
There are a couple of ways to add this header to your page. First you can add it via ASP.NET:
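As a sketch (the page event you hook is up to you; Page_Load is just one natural place), appending the header from code-behind looks like this:

```
protected void Page_Load(object sender, EventArgs e)
{
    // Tell the browser to refuse to render this page inside a frame.
    Response.AddHeader("X-Frame-Options", "DENY");
}
```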
Or you could add it to all pages via IIS. To do this open the IIS Manager and select the site in question. Then select the Feature “HTTP Response Headers”:
Select Add… and then set the name to x-frame-options and the value to DENY:
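Either way, what you end up with is a custom response header in the site's configuration; you can also add the equivalent entry to web.config by hand (a sketch, assuming the IIS 7 configuration schema):

```
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="X-Frame-Options" value="DENY" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```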
By keeping these options in mind, you can do a lot to prevent exploits that use frames.
Only took a couple quick searches Googling with Bing, but in IIS 7, if you create a
request for a certificate, have it issued by a CA, and then complete the request, you
may find it blows up with this message box:
CertEnroll::CX509Enrollment::p_InstallResponse: ASN1 bad tag value met. 0x8009310b
All it means is that the CA that issued the certificate isn’t trusted on the server.
I came across this in a test environment I was building. I had a Domain with
CA Services, and a server that existed outside the domain. I used the domain
CA to create the certificate, but because the web server wasn’t part of the domain,
it didn’t trust the CA.
My fix was to add the CA’s certificate to the Trusted Root Certification Authorities store on the web server.
Earlier today, Cory Fowler suggested I write up a post discussing the differences
between the AntiXss library and the methods found in HttpUtility and how it helps
defend from cross site scripting (xss). As I was thinking about what to write,
it occurred to me that I really had no idea how it did what it did, and why it differed
from HttpUtility. <side-track>I’m kinda wondering how many other people
out there run into the same thing? We are told to use some technology because
it does xyz better than abc, but when it comes right down to it, we aren’t quite sure
of the internals. Just a thought for later I suppose. </side-track>
A Quick Refresher
To quickly summarize what xss is: if you have a textbox on your website that someone
can enter text into, and then on another page you display that same text, the user could
enter a script tag instead of plain text, and that script would execute for anyone
viewing the page. This usually results in redirecting to another website that shows
advertisements or tries to install malware.
The way to stop this is to not trust any input, and to encode any character that could
be part of a tag into its HTML-encoded entity.
HttpUtility does this though, right?
The HttpUtility class definitely does do this. However, it is relatively limited
in how it encodes possibly malicious text. It works from a black-list, encoding specific
characters like the angle brackets < and > to &lt; and &gt;. This can get tricky,
because you could theoretically bypass the encoding with a dangerous character that
isn’t on its list.
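To illustrate the black-list behavior (the brackets are on the list, so they get encoded; characters it doesn't know about pass through unchanged):

```
using System.Web;

// Only the characters HttpUtility knows about are touched.
string encoded = HttpUtility.HtmlEncode("<script>");
// encoded is "&lt;script&gt;" — the brackets are neutralized.
```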
The AntiXss library works in essentially the opposite manner. It has a white-list
of allowed characters, and encodes everything else. These are the usual a-z, 0-9,
etc. characters.
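A sketch of the white-list approach, assuming the 3.x version of the library where the entry point is the AntiXss class in the Microsoft.Security.Application namespace:

```
using Microsoft.Security.Application;

// Anything outside the white-list (letters, digits, and a small
// set of known-safe characters) gets encoded, not just < and >.
string encoded = AntiXss.HtmlEncode("<script>");
```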
I’m not really doing you, dear reader, any favors by reiterating what dozens of people
have said before me (and probably said it better), so here are a couple of links that
contain loads of information on actually using the AntiXss library and protecting
your website from cross site scripting: