wwwd40

Members
  • Content count

    56
  • Joined

  • Last visited

Community Reputation

0 Neutral

About wwwd40

  • Rank
    DDP Fan club member

Profile Information

  • Gender
    Male
  1. Layering is essential, and consistency of policy management is key to it working in operation. That said, app-id (or the same by another name) can still be fooled even in those scenarios - if the decision engine can be tricked, then the game is up. The border is a changed notion: it should be about zero trust and borders everywhere - define what is sensitive and be paranoid about all traffic. It's a big task to parse all traffic flows and syslog, learn user and entity behaviour, and flag anything suspicious or out of the norm. It's even more difficult to reliably automate any response to that, and systems that say they do so are presumably liable to the same over-hyped marketing. On that note, has anyone played with Exabeam or another UEBA analytics system? Trying to convince my bosses to let me. Oh, and GP - I especially like GlobalProtect as a mechanism for encryption of traffic in transit (a lot less clumsy than Odyssey used to be!)
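For anyone wondering what the "flag anything out of the norm" part looks like at its very simplest, here's a toy sketch (mine, nothing to do with Exabeam's actual engine - the users, counts and threshold are all made up) of baselining per-user event counts and flagging deviations by z-score:

```python
from statistics import mean, stdev

def flag_outliers(baseline, today, threshold=3.0):
    """Flag users whose event count today deviates from their own
    historical baseline by more than `threshold` standard deviations."""
    flagged = []
    for user, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history: skip rather than divide by zero
        z = (today.get(user, 0) - mu) / sigma
        if abs(z) > threshold:
            flagged.append((user, round(z, 1)))
    return flagged

# Fabricated per-user daily login counts over recent history
baseline = {
    "alice": [4, 5, 6, 5, 4, 5, 6, 4, 5, 5],
    "bob":   [2, 3, 2, 3, 2, 2, 3, 2, 3, 2],
}
today = {"alice": 5, "bob": 40}  # bob suddenly logs in 40 times
print(flag_outliers(today=today, baseline=baseline))
```

Real UEBA products obviously model far more than counts (peer groups, session sequencing, asset context), but the hard part the post alludes to - deciding what to do automatically once something is flagged - starts exactly here.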
  2. .: Products Everywhere :.

If product marketing were to be fully believed, a particular brand of cleaning product could make hard work a thing of the past, fizzy drinks would be the key to happiness, and buying the right car would give you superhuman powers. Who wants to hear that bog-standard dishwasher salt is the same as the one with the fancy packaging, that cola will make you fat with rotten teeth, or that the type of car you drive is of no consequence to anyone? The reality is that any given company has a vested interest in presenting its product in the best possible way, even if the reality is somewhat different and the shortcomings are conveniently left in the shadows. No one has a duty to explain the shortcomings of a product; you are left to feel these out for yourself, and that is not without consequences: surmountable when we are talking about the latest innovation in toothpaste, but in the context of network security it is simply not good enough. A false sense of security is as bad, if not worse, than no security at all.

.: The Context of the Firewall :.

There is no doubt that a firewall is an essential tool in enforcing network security policy. Next generation firewall products offer tangible improvements over traditional firewalls in so much as they are able to provide context for traffic, as opposed to allowing or denying traffic based purely on packet headers (OSI layers 2, 3 and 4). Essentially, a next gen firewall is a decision engine which will inspect traffic to a greater or lesser degree based on dynamic, external information sources (e.g. AD, DNS, DHCP), as opposed to a filtering device leveraging very static information as seen in traditional firewalls. Next generation seeks to qualify the legitimacy of a particular flow by, amongst other things, validating the application that is in use - and that sounds great, doesn't it? And there it is: a real benefit over and above the traditional approach.
But to qualify the marketing blurb further, we need to understand how this works under the hood. The devil is always in the detail! Every vendor has their own technology that identifies traffic based on a signature list of protocols/applications, but for all the different marketing names they pretty much work the same way under the hood. To fully identify traffic, the firewall would have to hold a packet indefinitely while it runs its checks, but that latency isn't sane. To keep performance acceptable, some tricks or shortcuts are employed to allow the firewall to do its job without impacting the end user experience. Let's not forget that performance always trumps security. We also can't have a situation (or too many instances...) where an application changes so much - for example, a new release of a messaging client or online game - that it is no longer seen as defined by the signature, else the product becomes too unreliable or demanding of maintenance to stomach and features get switched off. So what shortcuts are used, and what are the implications for security?

1. SSL Blind Spot

Encrypted traffic is necessary to provide integrity of data in transit between endpoints. Nearly everything these days is a 'web application' and, if implemented properly, SSL can and should be leveraged to give a common underlying security to those programmes. The problem is that if the firewall can't inspect the traffic, it can't judge what application is in use and can't spot threats or data exfiltration. In other words, SSL can be used to hide nasties within, and the devices responsible for integrity shrug their shoulders and allow the traffic past. Instead, the firewall needs to sit as a virtuous man-in-the-middle of these streams, decrypting to plain text, parsing it, then re-encrypting it to the endpoint if all is well.
This can be processor intensive if performed on the firewall, especially where there is a large amount of traffic; it doesn't lend itself to single-pass very well, since SSL traffic will very likely need to be offloaded; and it is difficult to implement in BYOD environments, where trusted certificate authority information needs to be fettled.

2. Return Traffic

Next generation firewalls work on the principle of flows: if returning traffic belongs to already inspected and validated outbound traffic then - in the main - it will not be inspected.

3. Default Behaviour for Unknown Traffic

Next generation firewalls need to see a certain amount of traffic before they can make a decision as to what an application is - or, to put it another way, all connections start from a position of there being insufficient data to determine what the application is. The amount of data required beyond the full connection handshake to make this determination varies from app to app - it could be two packets, or it could be ten or more. This leaves potential for data leakage through the firewall so long as it is in small chunks, which could be leveraged to exfiltrate intellectual property or other sensitive information from the network.

4. Identifying Applications

Next generation firewalls rely on a library of application definitions which detail characteristics and classification: for example, the standard TCP/UDP ports an application uses, what applications it depends on, etc. These definitions use match conditions and rely on a small, limited set of attributes to make a positive match. Thus we have a situation whereby application signatures use only basic information to categorise an application. For example, a signature definition for facebook might just specify http as the method and facebook.com (or .co.uk, or...) as the host string. If those conditions are met, then the firewall categorises it as facebook application traffic - even if it is destined for an IP address that is not facebook!
So we have a situation where not only can data be exfiltrated in small chunks, but it can also be moved in large chunks defined by the application-id engine as legitimate traffic. These behaviours are fundamental to next generation firewalls, so they are not bugs waiting to be fixed. The truth of the matter is that the firewall is a very large part of the answer, but not the entire answer, and as such only forms part of the overall security posture. Event monitoring and correlation through analytics are critical to network security, as are the agility and responsiveness of your Security Incident Response.

--------

References:

Custom Application Signatures in PAN: https://live.paloaltonetworks.com/t5/Tech-Notes/Custom-Application-Signa...

PacketKnockOut - Exploration of data exfiltration by port numbers: https://github.com/JousterL/PacketKnockOut

FireAway - Next Generation Firewall Bypass Tool: https://github.com/tcstool/fireaway

"Network Application Firewalls Exploits and Defense" - Brad Woodberg, Defcon 19

"Bypassing Next-Gen Firewall Rules" - Dave Lassalle, Nolasec, 9/27/2012

"Sinking the Next Generation Firewall" - Russell Butturini, Derbycon 2016
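To make point 4 concrete, here's a toy sketch (my own simplification - no vendor's actual engine looks like this, and the signature attributes are invented) of a signature match that keys purely on protocol and Host header. Note that the destination IP never enters the decision:

```python
# Toy app-id engine: signatures match on a handful of attributes only.
# Illustrative - real vendor signatures are richer, but the principle holds.
SIGNATURES = {
    "facebook": {"protocol": "http", "host_suffix": ("facebook.com", "facebook.co.uk")},
    "dns":      {"protocol": "udp", "port": 53},
}

def classify(flow):
    """Return the first app whose (limited) match conditions the flow meets."""
    for app, sig in SIGNATURES.items():
        if sig.get("protocol") != flow.get("protocol"):
            continue
        if "host_suffix" in sig and not flow.get("host", "").endswith(sig["host_suffix"]):
            continue
        if "port" in sig and flow.get("port") != sig["port"]:
            continue
        return app
    return "unknown"  # insufficient data -> default category

# A flow to an arbitrary IP, but carrying a forged Host header:
flow = {"protocol": "http", "host": "www.facebook.com", "dst_ip": "203.0.113.99"}
print(classify(flow))  # "facebook", despite the decidedly non-facebook IP
```

Nothing in `classify` ever checks `dst_ip` - which is exactly the gap tools like FireAway lean on.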
  3. The main switchgear is probably somewhere near the generators in the basement levels and will be buggered. The building is owned by Sabey now, who (quote) "Sabey has more than 20 years of experience in the data center business and is perhaps the largest provider of hydro-powered facilities in the United States." The hydro thing is pretty ironic ;-) ref: https://www.datacenterknowledge.com/archives/2011/06/07/sabey-acquires-huge-verizon-building-in-nyc/ Anyone know whether their planned power system upgrades had already happened? If not, it might be time to embark on that.
  4. A typical NIC will drop ghost frames - they are normally an indication of electrical interference (wiring problems). Usually the NIC will deny all knowledge of ghosts, and since the software is relying on what the chipset is telling it, they won't be reported in software either (interface stats, test tool interface etc). It's the same as jam events on a collision domain. As for generating your own - I could be wrong, but it might be done with modified drivers, though how you'd see them, given they are dropped by the rx'ing NIC chipset, I'm not sure. I don't think C raw buffers allow you to specify the preamble or SFD, as the NIC handles that for you (it's a 'flavour of ethernet' NIC after all, so it complies with the standards). Everything I've seen on programming raw frames starts at MACDST, but if you find a way then post your findings. Cheers, wd
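On the "frames start at MACDST" point: with a Linux AF_PACKET raw socket, what you hand the kernel is exactly that - destination MAC onwards - and the NIC prepends the preamble/SFD (and normally appends the FCS) itself. A quick sketch of building such a frame as bytes (the MACs and EtherType here are made up for illustration; actually sending it needs root and a real interface):

```python
import struct

def build_frame(dst_mac, src_mac, ethertype, payload):
    """Build an Ethernet II frame as the bytes you'd pass to an AF_PACKET
    socket: it starts at the destination MAC - no preamble, no SFD, no FCS,
    because the NIC hardware generates those for you."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    frame = header + payload
    # Pad to the 60-byte minimum (64 on the wire once the NIC adds the FCS)
    if len(frame) < 60:
        frame += b"\x00" * (60 - len(frame))
    return frame

frame = build_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast
    src_mac=bytes.fromhex("020000000001"),   # locally administered, made up
    ethertype=0x88B5,                        # IEEE local experimental EtherType
    payload=b"hello",
)
# To transmit (Linux, root): s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#                            s.bind(("eth0", 0)); s.send(frame)
print(len(frame))
```

Which backs up the original point: there's simply nowhere in this API to put a mangled preamble, so ghost-frame generation would have to happen below the driver/chipset boundary.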
  5. I've recently been looking into intrusion deception systems, specifically the Mykonos Juniper solution (see for an overview). Essentially it is a proxy that sits in front of your webserver and injects/strips code served by the webserver to place 'tar traps' that entice an attacker during the early phases of an attack. It attempts to profile the attacker on a per-machine basis according to the severity of their activities. It attempts to track them by placing various "persistent tokens" (cookies, browser-specific storage, multimedia framework storage (Flash, Silverlight), clientside javascript storage, clever use of etag values): so independent of, and more intelligent than, simple IP tracking. The injected code points are numerous and configurable, making it very difficult to tell whether the object you are playing with is a true resource of the website or a tar trap until you've already "tripped a wire", at which point the system may be remediating you: slowing your connection, presenting a captcha if it thinks you are a bot, blocking your connection entirely, serving up broken pages, forcing log out etc. NB this doesn't actually spot attacks, it just spots the potential for attacks by looking for reconnaissance activity. It's not a web application firewall or IPS/IDS. This approach goes a long way towards visibility of activities that are normally very difficult to spot, address or report on. It also is not very intensive to set up and configure, and doesn't require an ever-updating list of signatures (let's be honest, signature systems are often a step or 2 behind). From what I can tell, an attacker that: uses a different VM for each recon activity or session, or goes straight for blind attacks, or is very efficient at cleaning their caches, or uses a browser that stores absolutely nothing (or an application that isn't a browser), may be able to thwart parts of the system's tracking.

Additionally, the system is not completely mature in terms of its clustering ability/data correlation, and I can see companies being very jumpy about anything that is going to sit in line between their SLB and webfarm, so it needs to be 100% proven. That said, people already do this with web application firewalls - I can see Mykonos-like functionality being incorporated into these appliances very soon. Does anyone have any experience with this or similar systems? Does anyone have any of this software that can be tested? Cheers, /wd EDIT - Some interesting info: Open source persistent cookies: http://samy.pl/evercookie/ Mykonos blog about evercookie: http://blog.mykonossoftware.com/?p=142
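The injection side of a tar trap is conceptually simple. Here's a toy sketch (entirely mine, nothing to do with Mykonos internals - the trap path is fabricated) of a proxy rewriting responses to plant a resource no legitimate browser would ever request, so any hit on it flags recon:

```python
TRAP_PATH = "/backup/admin-login.php"  # fabricated bait; exists nowhere on the real site

def inject_trap(html):
    """Plant an invisible link to a non-existent resource. Humans and normal
    browsers never follow it, but a crawler/scanner enumerating links will."""
    trap = f'<a href="{TRAP_PATH}" style="display:none">admin</a>'
    return html.replace("</body>", trap + "</body>", 1)

def is_recon(request_path, tripped_clients, client_id):
    """Any request for the trap permanently marks that client as doing recon."""
    if request_path == TRAP_PATH:
        tripped_clients.add(client_id)
    return client_id in tripped_clients

page = "<html><body><h1>Shop</h1></body></html>"
print(inject_trap(page))
```

The hard part the real product solves is the `client_id`: as the post says, simple IP tracking is weak, hence the evercookie-style persistent tokens.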
  6. Don't worry... It was bumped by a spammer, to whom I replied with an appropriately terse response, only for the mods to remove the spam post, making it look like I bumped it back from the grave. Honest guv, it wasn't me. Kudos to the new mods for dealing with abuse so quickly.
  7. Delete me please
  8. Hi, What is the make and model of your hard disk and what is the make and model of the machine it originally came from? There are some defaults that might work for you, and these can be found with a little bit of searching the web. Cheers Wd
  9. well do u know how to make a USB keylogger? Make a USB keylogger? http://www.instructables.com/id/How-to-build-your-own-USB-Keylogger/ Or do you mean how to install a key logger application silently via a USB stick? http://wiki.hak5.org/index.php?title=USB_Switchblade I suppose it is anyone's guess.
  10. Found this in my bookmarks, thought it might help & amuse similarly small minded people such as myself. http://routergod.com/ Some good basic information can be found presented in a comical way, e.g. http://www.routergod.com/paulhogan/index.html
  11. Hrmph. DDoS isn't "hacking" and it's lame. Botnets can be interesting, but not for what you want to do (malicious activity). You'd be better off spending your time on better ventures. IE - "real hacking". There's a ton of ways you can get involved which don't involve destruction and disturbance of services. Hardware hacking, system & network security, etc. I dunno, I never really saw any use for white-hat other than stress testing software. Are there any other practical uses? Where would I get started with this? Nothing was mentioned about colours of hats and, in any case, all the same knowledge applies, just with a different application and moral compass. I think Beave's point was that generally there is not much to be learned by building a Zeus et al botnet and blasting crap at 'targets'. You seem to be outcome focussed ("I want to get involved in DDoS") as opposed to focussing on the journey of learning the mechanics behind a botnet and how you would go about coding your own, or re-engineering the leaked Zeus code for example. Why is it that you want to get involved in DDoS activities?
  12. It's fair to point out that this is the type of question that won't receive many welcome responses around here, as it is far too open ended and doesn't show a whole lot of understanding or research on your part. There is no process or flow chart that says 'do x then y and a bit of z' and a hacked website will drop out of the end. The most important thing for you is to know your target (google for 'hacking reconnaissance phase' or similar). By gathering information about the website application and, more generally, the server and infrastructure that it runs over, as well as the people who use or maintain the site, you can research those technologies and people and plan the best way of achieving your goal. For example, the site in question may well utilise SQL queries from the web front end to a backend database. It could be possible to manipulate the way the web interface interacts with the database to reveal superuser or administrator account details, and it will require nothing more than a web browser. Or, for example, the webserver may have known vulnerabilities which are exploitable - maybe a buffer overflow is present that allows for injection of shell code to return an admin shell. However, these are just two examples of possibilities - it would be just as valid to install a keylogger on the website administrator's PC and steal his admin credentials that way. It all hinges on your research and what is 'the low hanging fruit'. Note that there are automated tools that can scan for poor input validation of web forms for SQL vulnerabilities, and the same for known defects in web server applications. Lastly, if it appears that you don't want to put in any effort and learn things for yourself and you just want to achieve an outcome, then prepare for disappointment.

Additionally, if you just go around downloading random tools and pointing them at websites then you should expect to pick up a virus or two along the way and, depending on how effective the tool, the nature of the site, and how well you covered your tracks, you may well have a knock on the door from the Sweeney.
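To show the SQL example without pointing anything at a real site, here's a self-contained toy (sqlite3 in-memory, fabricated table and credentials) demonstrating why a string-built query falls to the classic `' OR '1'='1` input while a parameterised query does not:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")  # fabricated creds

def login_vulnerable(name, password):
    # DON'T do this: user input is concatenated straight into the SQL string,
    # so the password "' OR '1'='1" rewrites the WHERE clause itself.
    q = f"SELECT * FROM users WHERE name='{name}' AND password='{password}'"
    return db.execute(q).fetchone() is not None

def login_safe(name, password):
    # Parameterised query: input is bound as data and can never become SQL.
    q = "SELECT * FROM users WHERE name=? AND password=?"
    return db.execute(q, (name, password)).fetchone() is not None

evil = "' OR '1'='1"
print(login_vulnerable("admin", evil))  # True - authentication bypassed
print(login_safe("admin", evil))        # False - injection has no effect
```

The vulnerable version executes `... password='' OR '1'='1'`, and the always-true OR clause matches every row - no tools required beyond a browser, exactly as described above.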
  13. If you are using ESXi or some other VM setup, you can connect using vSphere or by connecting to the host OS in another way. For Sun servers, you may want to look at the LOM port, which is effectively a serial connection that allows you to bring the box back up when it's dropped run levels for some reason. As for a BSOD on a Windows box... jeez! Isn't the only thing that will sort that a power cycle? If so, why not install network-connected PDUs (APC and others do this, and you could probably build your own without too much trouble) and kill the power then bring it back up? It's not going to be a panacea for all faults, but it would work in the BSOD scenario you mentioned.
  14. Before I saw the windows requirement, I automatically thought of ZFS. A quick google revealed http://code.google.com/p/zfs-win/ - it looks immature but might be something to look into further. cheers, /wd40
  15. The bit at the bottom looks like detail of muxed optical channels split according to their lambdas (wavelengths). The "blobs" hanging off the H octagon look like peering or transit points with the respective networks' AS numbers, e.g. AS2976 is Sprint, AS1584 is the DoD Network Information Center (http://bgp.he.net for lookups).