wwwd40

Next Generation Firewall Limitations


.: Products Everywhere :.
If product marketing were to be fully believed, a particular brand of cleaning product could make hard work a thing of the past, fizzy drinks would be the key to happiness and buying the right car would give you superhuman powers. Who wants to hear that bog-standard dishwasher salt is the same as the one with fancy packaging, that cola will make you fat with rotten teeth, or that the type of car you drive is of no consequence to anyone? The reality is that any given company has a vested interest in presenting its product in the best possible light, even when the reality is somewhat different and the shortcomings are conveniently left in the shadows. No one has a duty to explain a product's shortcomings; you are left to feel these out for yourself, and that is not without consequences. Surmountable when we are talking about the latest innovation in toothpaste, but in the context of network security it is simply not good enough: a false sense of security is as bad as, if not worse than, no security at all.
 
.: The Context of the Firewall :.
There is no doubt that a firewall is an essential tool in enforcing network security policy. Next generation firewall products offer tangible improvements over traditional firewalls in so much as they are able to provide context for traffic, as opposed to allowing or denying traffic based purely on packet headers (OSI layers 2, 3 and 4). Essentially, a next gen firewall is a decision engine which will inspect traffic to a greater or lesser degree based on dynamic, external information sources (e.g. AD, DNS, DHCP), as opposed to a filtering device leveraging very static information as seen in traditional firewalls. Next generation seeks to qualify the legitimacy of a particular flow by, amongst other things, validating the application that is in use, and that sounds great, doesn't it? And there it is: a real benefit over and above the traditional approach. But to qualify the marketing blurb further we need to understand how this works under the hood. The devil is always in the detail!
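To make the "decision engine" idea concrete, here is a toy sketch (purely illustrative, not any vendor's implementation): identity and application context, as might be learned from external sources like AD or DHCP logs, drive the verdict instead of just ports and IPs. All the names and mappings below are hypothetical.

```python
# Hypothetical identity map, as might be learned from AD/DHCP logs.
USER_BY_IP = {"10.0.0.5": "alice", "10.0.0.9": "bob"}
GROUP_OF = {"alice": "engineering", "bob": "sales"}

# Policy keyed on (user group, identified application), not port numbers.
POLICY = {
    ("engineering", "ssh"): "allow",
    ("sales", "ssh"): "deny",
    ("sales", "web-browsing"): "allow",
}

def verdict(src_ip: str, app: str) -> str:
    """Return allow/deny using user identity plus identified application."""
    user = USER_BY_IP.get(src_ip)
    group = GROUP_OF.get(user)
    return POLICY.get((group, app), "deny")  # default-deny anything unknown

print(verdict("10.0.0.5", "ssh"))  # engineering may use ssh
print(verdict("10.0.0.9", "ssh"))  # sales may not
```

The key difference from a traditional filter is visible in the policy key: the same packet gets a different verdict depending on who sent it and what application the traffic is judged to be.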
 
Every vendor has their own technology that identifies traffic based on a signature list of protocols/applications, but for all the different marketing names they pretty much work the same way under the hood. To fully identify traffic, the firewall would have to hold packets indefinitely while it runs its checks, and that latency isn't sane. To keep performance acceptable, shortcuts are employed that allow the firewall to do its job without impacting the end user experience. Let's not forget that performance always trumps security. We also can't have a situation (or too many instances) where an application changes so much - for example, a new release of a messaging client or online game - that it is no longer recognised by its signature, else the product becomes too unreliable or demanding of maintenance to stomach, and features get switched off. So what shortcuts are used, and what are the implications for security?
 
1. SSL Blind Spot
Encrypted traffic is necessary to provide integrity of data in transit between endpoints. Nearly everything these days is a 'web application' and, if implemented properly, SSL can and should be leveraged to give a common underlying security to those programmes. The problem is that if the firewall can't inspect the traffic, it can't judge what application is in use and can't spot threats or data exfiltration. In other words, SSL can be used to hide nasties within, and the devices responsible for integrity shrug their shoulders and allow the traffic past. Instead, the firewall needs to sit as a virtuous man-in-the-middle of these streams, decrypting to plain text, parsing it, then re-encrypting it to the endpoint if all is well. This is processor intensive if performed on the firewall, especially where there is a large amount of traffic; it doesn't lend itself to single-pass processing very well, since SSL traffic will very likely need to be offloaded; and it is difficult to implement in BYOD environments, where trusted certificate authority information needs to be fettled.
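The blind spot is easy to demonstrate with a toy example (illustrative only, not a real IPS engine): a naive payload scanner can flag a known-bad string in plaintext, but the same bytes under encryption are indistinguishable from noise. The "encryption" here is a stand-in XOR keystream built from SHA-256 in counter mode, purely to keep the sketch stdlib-only; the signature string is hypothetical.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic pseudorandom keystream (stand-in for a real cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

SIGNATURE = b"cmd.exe /c"  # hypothetical IPS signature
payload = b"GET /x HTTP/1.1\r\n... cmd.exe /c whoami ..."

assert SIGNATURE in payload                       # visible in the clear
assert SIGNATURE not in encrypt(b"k3y", payload)  # opaque once encrypted
```

This is why the firewall has to terminate and re-originate the SSL session to inspect anything at all: scanning ciphertext directly tells it nothing.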
 
2. Return Traffic
Next generation firewalls work on the principle of flows: if returning traffic belongs to already inspected and validated outbound traffic then - in the main - it will not be inspected. The implication is that a party who controls the remote end of a permitted flow can place whatever they like in the return traffic with little chance of it being scrutinised.
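A minimal sketch of that flow-table shortcut (illustrative only; real implementations track far more state): once an outbound 5-tuple has been validated, packets matching the reversed tuple are fast-pathed without inspection.

```python
# Set of validated outbound flows, keyed by 5-tuple.
validated = set()

def outbound(src, sport, dst, dport, proto):
    """Inspect an outbound connection and record the flow if allowed."""
    validated.add((src, sport, dst, dport, proto))
    return "inspected+allowed"

def inbound(src, sport, dst, dport, proto):
    """Return traffic: swap endpoints and look for an existing flow."""
    if (dst, dport, src, sport, proto) in validated:
        return "allowed-without-inspection"  # the shortcut in question
    return "inspected"

outbound("10.0.0.5", 51000, "203.0.113.7", 443, "tcp")
print(inbound("203.0.113.7", 443, "10.0.0.5", 51000, "tcp"))
```

Anything arriving on that reversed tuple rides the fast path, which is exactly the behaviour an attacker at the far end can lean on.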
 
3. Default Behaviour for Unknown Traffic
Next generation firewalls need to see a certain amount of traffic before they can make a decision as to what an application is; to put it another way, all connections start from a position of there being insufficient data to determine the application. The amount of data required beyond the full connection handshake to make this determination varies from app to app - it could be two packets, or it could be ten or more. This leaves the potential for data leakage through the firewall so long as it is in small chunks, which could be leveraged to exfiltrate intellectual property or other sensitive information from the network.
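A toy model of that exfiltration window (the threshold and packet size below are hypothetical, chosen only to illustrate the idea): if the classifier needs some number of data packets before it can identify an application, a sender that closes each connection just under that threshold is never classified, and the secret leaks out one sub-threshold chunk at a time.

```python
THRESHOLD = 5       # hypothetical packets-needed-to-identify
PACKET_BYTES = 100  # hypothetical payload bytes per packet

def exfiltrate(secret: bytes):
    """Yield per-connection chunks that stay under the app-id threshold."""
    per_conn = (THRESHOLD - 1) * PACKET_BYTES
    for i in range(0, len(secret), per_conn):
        yield secret[i:i + per_conn]  # each chunk = one short-lived flow

secret = b"A" * 2000
chunks = list(exfiltrate(secret))
assert all(len(c) <= (THRESHOLD - 1) * PACKET_BYTES for c in chunks)
assert b"".join(chunks) == secret  # the receiver reassembles everything
```

Each short-lived flow dies before the engine has enough data to classify it, so no individual connection ever looks like anything in particular.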
 
4. Identifying Applications
Next generation firewalls rely on a library of application definitions which detail characteristics and classification: for example, the standard TCP/UDP ports an application uses, what applications it is dependent on, and so on. These definitions use match conditions and rely on a small, limited set of attributes to make a positive match. Thus we have a situation whereby application signatures use only basic information to categorise an application. For example, a signature definition for facebook might just specify http as the method and facebook.com (or .co.uk or ...) as the host string. If those conditions are met then the firewall categorises the traffic as the facebook application, even if it is destined for an IP address that is not facebook! So not only can data be exfiltrated in small chunks, it can be moved in large chunks that the application-id engine classifies as legitimate traffic.
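A toy signature matcher makes the weakness obvious (illustrative only; real signature formats are richer, but the failure mode is the one described above): if the match conditions are just "HTTP plus a Host header containing facebook.com", then any destination IP is classified as facebook once the sender forges that header.

```python
def classify(proto: str, host_header: str, dst_ip: str) -> str:
    """Naive app-id: match on protocol + host string only."""
    # dst_ip is deliberately ignored -- mirroring the limitation described.
    if proto == "http" and "facebook.com" in host_header:
        return "facebook"
    return "unknown"

# Genuine traffic and forged traffic get the same verdict:
print(classify("http", "www.facebook.com", "157.240.1.35"))  # facebook
print(classify("http", "www.facebook.com", "203.0.113.66"))  # facebook (!)
```

The second call is traffic to an arbitrary server that merely claims to be facebook, yet it inherits whatever generous policy the real application enjoys.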
 
These behaviours are fundamental to next generation firewalls, so are not bugs waiting to be fixed. The truth of the matter is that the firewall is a very large part of the answer, but not the entire answer, and as such only forms part of the overall security posture. Event monitoring and correlation through analytics are critical to network security, as is agility and responsiveness of your Security Incident Response.
 
 
--------
 
References:
Custom Application Signatures in PAN: https://live.paloaltonetworks.com/t5/Tech-Notes/Custom-Application-Signa...
PacketKnockOut - Exploration of data exfiltration by port numbers: https://github.com/JousterL/PacketKnockOut
FireAway - Next Generation Firewall Bypass Tool: https://github.com/tcstool/fireaway
"Network Application Firewalls Exploits and Defense" Brad Woodberg, Defcon 19
"Bypassing Next-Gen Firewall Rules" Dave Lassalle, Nolasec 9/27/2012
"Sinking the Next Generation Firewall" Russell Butturini, Derbycon 2016


In my opinion, firewalling an internal network should be a layered approach for best security. Network design should also be thought out with complete security in mind.

 

Just getting a state-of-the-art whiz-bang 7-layer filtering firewall isn't enough: if it ever has a complete hardware failure, the entire network will be vulnerable until the RMA process is completed. If a security hole is found in its filter mechanisms, the firewall is useless until patched by the vendor.

 

Border Firewall filtering protocols and stateful sessions (ingress and egress) ||-> DMZ ||-> host based firewall |-> web servers, VPN, other services that need exposure with IPS/IDS (like Snort) |->  Firewalls filtering protocols and hosts ||-> Layer 2/3 Switches ||-> host based firewalls with Administrator controlled rules and active IDS/IPS.

 

Honestly, I think layering might work great and save money in both equipment and training/hiring.  

Edited by tekio

Layering is for sure the way to go. It also lessens the load on your border firewall. Host-based firewalls make host-based blacklisting far simpler - you don't have to try to dynamically control firewall rules on another system.


Also, it allows admins to set rules in a Windows domain. I worked for a company where executives were allowed to stream iTunes and anything else they wanted, but not so for customer service people. Simple: apply firewall rules to Windows security groups in GP. :-)


Layering is essential and the consistency of policy management is key to it working in operation. That said, app-id (or the same by another name) can still be fooled even in those scenarios - if the decision engine can be tricked then the game is up.

 

The border is a changed notion. It should be about zero trust and borders everywhere: define what is sensitive and be paranoid about all traffic. It's a big task to parse all traffic flows and syslog, learn user and entity behaviour, and flag anything suspicious or out of the norm. It's even more difficult to reliably automate any response to that, and systems that say they do so are presumably liable to the same over-hyped marketing. On that note, has anyone played with Exabeam or another UEBA analytics system? Trying to convince my bosses to let me :)

 

Oh, and GP: I especially like GlobalProtect as a mechanism for encryption of traffic in transit (a lot less clumsy than Odyssey used to be!)

4 hours ago, wwwd40 said:

Oh, and GP: I especially like GlobalProtect as a mechanism for encryption of traffic in transit (a lot less clumsy than Odyssey used to be!)

 

GlobalProtect can die in a fire. A client required that we use it for remote access, and we had nothing but problems. They used RSA tokens with it, and we pretty much lost sync on at least one token during the course of a week.

 

We use OpenVPN for everything we control at work. We've integrated it into a few clients' applications so they can manage certificate generation, revocation, and emailing of "click to install" config packages.

