sokar2k7

Some websites invulnerable to ARP MITM?


Hi,

I have been doing research/testing of ARP man-in-the-middle attacks using ettercap as well as Cain and Abel (C&A). I have observed that many "secure" websites such as Gmail, Yahoo Mail, MySpace, etc. have passwords that can be sniffed using either ettercap or C&A.

However, I have also noticed quite a few sites for which the attack fails. For example, it will not work against these three banking sites:

https://www.wellsfargo.com/

https://chaseonline.chase.com/online/home/sso_co_home.jsp

https://www.bankofamerica.com/index.jsp

I am using IE6, so when I navigate to these webpages, I get the invalid certificate warning as usual, but when I enter my username/password, nothing shows up in either ettercap or C&A.

How are these websites secured? I assume it has something to do with either the type of certificate used or with the SSL handshake, but I don't know much beyond that.

Thanks.

Some servers will let the client pick the SSL version, and your script kiddie tools take advantage of that by forcing the negotiation down to an older version of SSL, which is vulnerable to MITM.

The higher-security sites force a higher version of SSL and ignore the client's downgrade requests.
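For what it's worth, here is a minimal sketch of what "ignore the downgrade" looks like on a modern server, using Python's ssl module. The certificate/key file names are placeholders, and this is only an illustration of enforcing a protocol floor, not a claim about how those particular banks were configured:

```python
import socket
import ssl

# Minimal sketch: a server that sets a protocol floor and refuses clients
# that only offer something older. Modern Python no longer speaks
# SSLv2/SSLv3 at all, so the same idea is shown with TLS versions.
# "server.crt" / "server.key" are placeholder file names.

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # reject anything older
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with ctx.wrap_socket(sock, server_side=True) as ssock:
        conn, addr = ssock.accept()            # handshake fails for old clients
        print(addr, conn.version())            # e.g. 'TLSv1.3'
```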

This has nothing to do with ARP. It is independent of the vector that you used to set up your MITM connection.

I think it's pretty clear that you're not actually doing this for "research/testing" purposes, and I feel bad for responding to this thread seriously.


Thanks for the reply.

You're free to think whatever you want, but the reason I asked was to understand countermeasures to ARP MITM attacks. I actually oversaw the implementation of port security on our switches after I discovered this vulnerability on our network, though I also understand that some switches are older and implementing port security on them would be a pain.

Also, consider that when you publish a website (like Gmail), you have to assume people will view it from ARP-poisonable switched networks. Therefore, from a website implementation standpoint, you will want, at the very least, to understand how to make your site ARP MITM proof.
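As a side note on the countermeasure angle, where port security on older switches is a pain: a host-side sketch (Linux only, and the gateway IP below is an assumption) that simply watches the ARP cache and warns when the gateway's MAC address changes, which is the visible symptom of ARP poisoning. It complements, rather than replaces, switch-level protections:

```python
import time

# Watch /proc/net/arp (Linux) and warn if the default gateway's MAC changes.
# GATEWAY_IP is a placeholder; adjust it for your network.

GATEWAY_IP = "192.168.1.1"

def gateway_mac():
    with open("/proc/net/arp") as f:
        next(f)                          # skip the header line
        for line in f:
            fields = line.split()
            if fields[0] == GATEWAY_IP:
                return fields[3]         # "HW address" column
    return None

last = gateway_mac()
while True:
    current = gateway_mac()
    if last and current and current != last:
        print(f"WARNING: gateway MAC changed {last} -> {current} "
              f"(possible ARP spoofing)")
    last = current or last
    time.sleep(5)
```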


It's unreasonable to make a mass-use free site require SSLv3 when many users do not have SSLv3-compliant suites.

ARP has nothing to do with this discussion.


Uhh... ARP poisoning is the means of getting in between the client and the default gateway. It is at that point that you can inject faulty SSL certificates and sniff usernames and passwords.

BTW, I am using dummy usernames/passwords to test these websites. Whether the login is valid or not, the username/password should still be sniffed.

Anyway, I checked the Wells Fargo site's certificate and compared it to one from a website that is vulnerable to SSL injection: both are running SSLv3 and are signed by VeriSign, both use SHA1/RSA as the signature algorithm, and both use 1024-bit keys. As far as I can tell, the certificates are identical, which would mean your explanation that my "script kiddie" tools are forcing negotiation down to a lower version of SSL is incorrect.
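One way to make that comparison less error-prone than reading the IE certificate dialog is to probe the sites programmatically. A sketch using Python's ssl module (the hostnames are just examples) that records the negotiated protocol, the cipher, and the certificate's subject/issuer for each site:

```python
import socket
import ssl

# Connect to each host, record what was actually negotiated and what
# certificate was presented, then compare the results side by side.

def probe(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as ssock:
            cert = ssock.getpeercert()
            return {
                "protocol": ssock.version(),   # e.g. 'TLSv1.2'
                "cipher": ssock.cipher(),      # (name, protocol, secret bits)
                "subject": cert.get("subject"),
                "issuer": cert.get("issuer"),
                "notAfter": cert.get("notAfter"),
            }

for host in ("www.wellsfargo.com", "www.bankofamerica.com"):
    print(host, probe(host))
```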


Did you check the SSL certificates while ettercap was actively injecting?

What I'm saying is that it doesn't matter that you ARP spoofed. It's impossible to make an application "ARP MITM proof", because applications are not aware of the data link layer. This same scenario exists whether you ARP spoof, DNS spoof, are a legitimate proxy, or are in any other MITM scenario.
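A client-side sketch of that point, using Python's standard library (the hostname is just an example): a client that verifies the certificate chain and hostname refuses the handshake when a forged certificate shows up, no matter how the traffic was redirected, so credentials are never sent.

```python
import http.client
import ssl

# The default context performs CA-chain and hostname verification.
# Through an active certificate-injecting MITM, the handshake fails and
# the request never gets far enough to transmit a username or password.

ctx = ssl.create_default_context()

conn = http.client.HTTPSConnection("www.wellsfargo.com", 443,
                                   context=ctx, timeout=10)
try:
    conn.request("GET", "/")
    print("handshake OK, server responded:", conn.getresponse().status)
except ssl.SSLCertVerificationError as exc:
    # This is what the injected certificate produces when the client
    # actually verifies instead of clicking through a warning.
    print("handshake refused:", exc)
finally:
    conn.close()
```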


OK, when you phrase it that way, you are correct. No web app can tell it's being improperly redirected via ARP poisoning since, as you said, ARP is a layer 2 protocol. In this case I really don't care where packets to/from my app are coming from or going to. What I do care about is certificate injection, which is the means by which usernames/passwords are being sniffed.

So let me rephrase my question yet again: how are these websites preventing faulty SSL certificate injection, and is it feasible for a small to medium business to implement such measures?

I checked the certificate that was being injected and compared it to the real one, and I noticed the following differences (a sketch of how to pull the same fields programmatically follows the list):

The certification path is invalid.

The signature algorithm in the fake certificate is md5RSA; the real one uses SHA1RSA.

The fake certificate is missing the following fields:

Basic Constraints

Key Usage

CRL Distribution Points

Certificate Policies

Enhanced Key Usage

Authority Information Access

1.3.6.1.5.5.7.1.12
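For reference, the same comparison can be done in code rather than through the IE dialog. A sketch using the third-party cryptography package (the hostname is an example): it fetches whatever certificate the peer presents and prints the signature hash algorithm and the extensions that are actually there, i.e. the fields the fake certificate is missing.

```python
import ssl
from cryptography import x509

# Fetch the presented certificate as PEM and parse it. Pointing this at a
# connection that is being MITMed would show the injected certificate's
# (much shorter) extension list instead.

pem = ssl.get_server_certificate(("www.wellsfargo.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("issuer:   ", cert.issuer.rfc4514_string())
print("signature:", cert.signature_hash_algorithm.name)   # e.g. 'sha256'
for ext in cert.extensions:
    # ext.value is the parsed extension, e.g. BasicConstraints, KeyUsage
    print("extension:", ext.oid.dotted_string, ext.value.__class__.__name__)
```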


What do the discrepancies tell you?

Note: algorithms are negotiated BEFORE certificates are presented.
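To make that ordering concrete, a sketch using Python's ssl module with verification deliberately turned off, so the handshake completes even against a bogus certificate and the negotiated parameters are reported regardless (the hostname is just an example):

```python
import socket
import ssl

# The protocol version and cipher suite are fixed in the
# ClientHello/ServerHello exchange, before the Certificate message arrives.
# With verification disabled (demo only), the handshake finishes no matter
# whose certificate is presented, and the negotiated parameters are still
# there -- they never depended on the certificate being valid.

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE        # accept any certificate (demo only)

with socket.create_connection(("www.wellsfargo.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.wellsfargo.com") as ssock:
        print("negotiated:", ssock.version(), ssock.cipher())
```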


Well, these discrepancies explain why I am receiving a certificate warning message in IE.

The real and fake certificates for both sites appear to be identical, yet I can sniff the passwords for one of the sites and not the other.

Does the vulnerability have to do with the algorithm negotiation?


Some websites may need their form field names added to Cain & Abel manually: click Configure and then choose HTTP Fields.
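That hint is worth unpacking: these sniffers only report values whose form field names they recognise, so a login form with unusual names produces no output until those names are added. A toy illustration (the field names below are made up):

```python
from urllib.parse import parse_qs

# A sniffer that matches only the field names it has been configured with
# will miss credentials posted under unfamiliar names.

KNOWN_USER_FIELDS = {"username", "user", "login", "email"}
KNOWN_PASS_FIELDS = {"password", "pass", "passwd"}

def extract(post_body: str):
    fields = parse_qs(post_body)
    users = [v for k, v in fields.items() if k.lower() in KNOWN_USER_FIELDS]
    pwds = [v for k, v in fields.items() if k.lower() in KNOWN_PASS_FIELDS]
    return users, pwds

print(extract("username=alice&password=hunter2"))   # found
print(extract("usr_x=alice&pwd_x=hunter2"))          # ([], []) -- missed
```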


Just to recap, here is my working theory thus far:

Two websites exist whose SSL information is identical, yet one is vulnerable to faulty SSL certificate injection while the other is not, which would imply that the vulnerability is independent of SSL.

If it isn't SSL, the only other alternative (unless there is another web security mechanism I am unaware of) is an application-level scheme, such as hashing the password and using web programming to "hide" username/password combinations. I have actually seen such methods before: when I log into certain non-SSL websites and then view the packets that were sent/received, where the username/password should be I simply see what looks like script-generated garbage.

In our case, for example, if a website were to both hash and encode usernames and passwords, then ettercap, Cain & Abel, or any other MITM program would not know where the username/password hashes are, which would explain why they cannot display them.

Hashing and encoding are not the be-all and end-all of web security, because anyone who wrote, or was involved in coding, the website would know where to look to extract the usernames and passwords. Therefore, SSL is used to prevent such an occurrence.

The reason encoding/hashing is not widely implemented is presumably cost and performance considerations.
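For what a scheme like that could look like, here is a generic challenge-response sketch; the names and construction are illustrative only, not what any of these banks actually do. The server issues a one-time nonce and the client sends a keyed hash instead of the password, so a passive sniffer gets nothing reusable. Note that an active MITM that can rewrite the page or strip SSL still defeats it, which is why it is no substitute for SSL.

```python
import hashlib
import hmac
import os

# Generic nonce-based challenge-response: the password never goes on the
# wire, only a one-time HMAC over the server's nonce.

def server_issue_nonce() -> bytes:
    return os.urandom(16)

def client_response(password: str, nonce: bytes) -> str:
    key = hashlib.sha256(password.encode()).digest()        # password-derived key
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()  # value sent on the wire

def server_verify(stored_password: str, nonce: bytes, response: str) -> bool:
    expected = client_response(stored_password, nonce)
    return hmac.compare_digest(expected, response)

nonce = server_issue_nonce()
resp = client_response("hunter2", nonce)      # a sniffer sees only resp + nonce
print(server_verify("hunter2", nonce, resp))  # True
```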

Any thoughts? Comments? Suggestions? Are there any other web security mechanisms that could be at work here?

