NimitySSJ last won the day on March 31 2012

NimitySSJ had the most liked content!

Community Reputation

8 Neutral

About NimitySSJ

  • Rank
    SUP3R 31337 P1MP
  1. Don't forget the easiest way to target people: financially. Wikileaks leveraged anonymity and INFOSEC techniques to protect their work. Hackers, governments, media, businesses, and more wanted them gone. Surviving that took enormous resources, and fortunately they had steady funding. Then they were about to target a major American bank the way they did Julius Baer. Bank of America's market value dropped by several billion dollars in a day after that announcement, since it was believed to be the target. The core banks, arguably the elites of the elites, then showed their power: every reliable funding mechanism to Wikileaks was cut. Visa, Mastercard, Paypal, and so on. Wikileaks burned through money until it finally collapsed; the internal breakup didn't help. The U.S. government can find ways to imprison you for not complying with its wishes. The FBI can seize your machines before charging you. The IRS can freeze your assets before charging you. The U.S.P.S. can monitor or seize your mail (e.g., checks). And apparently the banks can cut off your funding as well. That doesn't even factor in CIA N.C.S. efforts like torture flights. Anyone creating a high assurance product that the NSA or FBI couldn't circumvent under any conditions could experience all of this. The Tor project has been lucky so far in that they've rarely been the factor stopping the FBI or NSA from hitting their targets; otherwise, they'd be next after Wikileaks. Note: I'd love to talk to Tor's lawyers to see how they avoid what companies like Lavabit and Google can't. Maybe the strategy could be copied.
  2. @SirAnonymous and all, re privacy: I think Bruce Schneier wrote the definitive essay on this a while back. It might help you. @mSparks: Reports on those slides are misleading. RedPhone is strong enough to be immune to passive network surveillance. If you're not important, they don't see your communications. However, they have 0-days on Android. So they hack Android and bypass RedPhone's crypto. That's why I advocate a holistic approach. The NSA and even sophisticated blackhats are targeting every level. Security is only as strong as the weakest link, so each level must be protected, and most current solutions don't do that. I used to say "no FOSS has used high assurance methods." These days I make an exception for one: Tinfoil Chat. Markus Ottela was one of the few to pay attention to the lessons others and I gave on high assurance (esp. on Schneier's blog). His solution combined several strong techniques, from data diodes to my physical separation approach, into a novel design that might be immune to remote attacks in a rigorous implementation. At my request, he also added a cascading cipher variant for practicality. The sooner people start applying proven methods, like he did, the sooner we'll have secure solutions to our problems. Still waiting on the market and FOSS to get some sense. At least academia is building useful solutions: the Cambridge CHERI processor, hardware CFI, CodeSEAL, and so on.
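The cascading idea is just layered encryption: each layer uses an independent key, so an attacker has to break every layer to recover the plaintext. Here's a toy sketch of the principle only (this is NOT TFC's actual construction; the SHAKE-256 keystream is a stand-in, not a vetted cipher):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in keystream from SHAKE-256; illustrates layering only,
    # NOT a vetted stream cipher.
    return hashlib.shake_256(key + nonce).digest(length)

def xor_layer(data: bytes, key: bytes, nonce: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def cascade(data: bytes, keys: list, nonce: bytes) -> bytes:
    # Apply every layer; XOR layers are involutive, so the same
    # function both encrypts and decrypts.
    for key in keys:
        data = xor_layer(data, key, nonce)
    return data

keys = [b"independent-key-layer-one", b"independent-key-layer-two"]
nonce = b"unique-per-message"
ct = cascade(b"attack at dawn", keys, nonce)
pt = cascade(ct, keys, nonce)  # decrypt: same layers, same keys
```

The point of the cascade is in the last line: knowing only one of the keys, an attacker who strips one layer still faces ciphertext under the other.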
  3. Got an email on this thread, so I got to look at my old statements, look at the results of the Snowden leaks, and see how accurate or inaccurate they turned out to be. My old claims proved true over time. The leaked NSA TAO catalog showed they attack BIOS, peripheral firmware, emanations, crypto, the base station layer, device drivers, privileged software, and regular apps. They also subvert the hell out of things at the supplier level with bribes, coercion, and infiltration. My requirement for high assurance (EAL6-7) at every layer, component, and interaction, with independent evaluation by mutually suspicious parties and secure distribution, is indeed necessary to stop (or slow) a TLA. Funny thing: most projects reacting to the NSA leaks are aiming for *much* less security or trusting complex lower layers that are already hacked. Such is a fool's game of building foundations on quicksand.

Re Cryptophone: I talked to Frank Rieger, Cryptophone co-founder, on Schneier's blog years ago. I told him his phone was trivially hackable by an NSA-type opponent despite his OS hardening, because of the lower layers and insecure architecture. I said he also couldn't prove to his users that he didn't backdoor it, which has huge precedents (e.g., Crypto AG, Skype). As such, I told him he'd need a ground-up redesign to secure the architecture, with independent evaluation that publishes the hash of the image, including software/firmware updates. Over time, they've added Android to the TCB and a mere "baseband firewall" for IO issues. (Rolls eyes...)

My solution, which addressed the various layers, had enough hardware to fill a briefcase lol. At least the headset and keypad were light! A small, integrated version would require a custom ASIC with each component likely done fresh. ASICs with proven components often cost $15-30 million to develop (the mask alone is $2 million). Such is the cost and complexity of a nearly NSA-proof cell phone. Virtually every "secure" cellphone on the market shortcuts this by using COTS, so my heuristic says they're *definitely* insecure. A secured cellphone would be bulky, more expensive than the Cryptophone, subject to patent lawsuits, and probably still vulnerable to EMSEC attacks. See General Dynamics' Sectera Edge for what it would look like as far as size and trusted path interface go. The situation looks dire for mobile COMSEC.

My current scheme is to build a portable machine that other things, e.g. a laptop or cell phone, can interface to. The common hardware uses best-of-breed security engineering to implement a MILS security model. The interface layer has much functionality, esp. crypto, sealed in the hardware. So the person pulls out their tiny smartphone, but it's really another device doing the work. The phone just has to securely connect to it, relay IO, and display something on a screen. The security requirements for that device should be small enough to allow high robustness. The *other* device will be bulkier and more complex, but it affords the extra chips I need to use proven methods,* maybe even in a reusable way. The main board can be in a briefcase, backpack, conference table, desk, car, etc. The whole thing would be expensive with the assurance activities and hardware; hopefully it can come in under $10k. (The original briefcase model with medium assurance components and a high assurance security kernel cost a few grand in hardware alone.)

* Proven methods for nearly impenetrable systems include tagged memory, capabilities, smart IO processors with an IOMMU, interrupt-less designs, non-writable firmware, a small TCB, and default-deny control flow with an access table for permissible function calls. Each of these already exists in a real product or design, past or present. The good news is much of it existed in old systems, so it's consistent with my recent strategy of building a modern system out of ultra-old shit to prevent patent-related takedowns. Look up the Burroughs B5000 libraries/HLLcode, the IBM System/38 whole architecture, the Hydra/CAP capability machines, GECOS's firmware approach, Intel's i432/i960MX designs, and so on. These each have some brilliant design decisions that make modern architecture look like shit from a security standpoint. However, the memory confidentiality and integrity schemes that prevent common physical attacks seem to be a modern invention almost certainly covered by patents. My first system will therefore assume physical security and a trusted admin while aiming to defeat all remote, software, or interface-level attacks. The next system will do other stuff.

My post on Schneier listing much of the best current security tech, in case anyone wants to work on *real* security like the people in the paper (and myself) are doing: I hope the NSA leaks inspire more work on *real* full-system security instead of all the mental masturbation of finding 0-days, writing tons of unsafe code (e.g., C/C++), and putting more band-aids on architectures not designed for security (e.g., the UNIX model). But, hey, there's still billions to be made pushing or compromising bullshit security, so why not.
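That last item, default-deny control flow against an access table, can be mimicked (weakly) in software too. A hypothetical Python sketch, with made-up function names, that denies any caller-to-callee edge not explicitly listed; the real thing belongs in hardware or the kernel, since in-process checks like this can be bypassed:

```python
import inspect
from functools import wraps

# Hypothetical access table: caller name -> callees it may invoke.
# Any edge not listed is denied by default.
CALL_TABLE = {
    "<module>": {"handle_request"},
    "handle_request": {"read_record"},
}

def guarded(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        caller = inspect.stack()[1].function
        if fn.__name__ not in CALL_TABLE.get(caller, set()):
            raise PermissionError(f"call {caller} -> {fn.__name__} denied")
        return fn(*args, **kwargs)
    return wrapper

@guarded
def read_record():
    return "record"

@guarded
def handle_request():
    return read_record()  # listed edge: allowed

def rogue():
    return read_record()  # unlisted edge: denied

result = handle_request()
try:
    rogue()
    denied = False
except PermissionError:
    denied = True
```

Shellcode that hijacks control flow into `read_record` from anywhere but `handle_request` hits the default-deny rule, which is exactly what the hardware versions enforce for machine-level calls.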
  4. I'd say this scheme tops the rest of our desktops. Talk about an "upgrade." I do like how he accomplished all of his goals and has the same computing environment across radically different machines.
  5. I didn't know this was happening. Thanks for bringing it to my attention. That said, it's far from insane: it's a side effect of capitalism, and it makes things even more profitable. It's usually software companies doing lock-in via weird formats, protocols, etc. Since software is easier to bypass, it troubles me to see laptop hardware manufacturers following suit.
  6. That was funny.
  7. I agree. Most developers install or update their product but don't check past that. I think a MITM attack is more common. It's well known, though, that subversion is the sophisticated attacker's tool of choice. I'm included in that. I also have a love for BIOS/firmware rootkits and covert channels. I like the latter b/c they're hard to notice, few "IT security pros" even know what they are, and you can get a lot of data out before anyone knows it's happening. Not that I'm stealing data from anyone. Nick P
  8. Generally not, as the major OSS compilers get a lot of attention. However, verified compilation is a big thing for me in my research into high assurance and trustworthy systems. Anyone worried about their compiler should use CompCert. It's another excellent product of Xavier Leroy's team at INRIA. They used the Coq (lol i know) proof assistant to formally specify and verify the phases of compilation. Only the initial phase, parsing to the concrete syntax tree I believe, isn't formally verified. (Good luck doing that anyway.) The compiler is automatically extracted from the Coq code as ML or OCaml code. (OCaml is another great product of INRIA.) The OCaml compiler was used almost as-is during a DO-178B project, so it's super high quality. The study below, which tested many different compilers, found tons of bugs in all of them, although very few in CompCert. It also had NO middle-end bugs like those present in the others. Goes to show their formal verification process works. They're currently making a MiniML compiler for the type of ML Coq generates. That would make the chain complete from specification to assembler, if we verify the autogen.
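For context, that study worked by randomized differential testing: generate random programs, feed the same program to several compilers, and any disagreement in output means at least one of them is buggy. The idea, abstracted to two pure-Python "implementations" of the same spec (the functions and inputs here are made up for illustration):

```python
import random

def reference_sum(xs):
    # Straightforward "spec" implementation.
    total = 0
    for x in xs:
        total += x
    return total

def optimized_sum(xs):
    # The "optimized" implementation under test.
    return sum(xs)

random.seed(1234)  # reproducible random inputs
mismatches = []
for _ in range(1000):
    xs = [random.randint(-2**31, 2**31 - 1)
          for _ in range(random.randint(0, 20))]
    if reference_sum(xs) != optimized_sum(xs):
        mismatches.append(xs)  # any disagreement exposes a bug
```

The real study did this at scale with randomly generated C programs; you don't need a formal spec of the language, only two implementations that are supposed to agree.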
  9. Bingo. This guy's got it. "looks like ive been told. I do use truecrypt. I dont trust it 100% tho" I don't blame you. A suspicious attitude toward security products is a good thing. The people who invented the first "high assurance" OS's were the ones who said absolute security on a general-purpose PC is impossible. If a high assurance design comes with a bit of skepticism, shouldn't we be even more critical of a typical program interfacing with an EAL4-certified OS? Note: EAL4 means secure against "casual or inadvertent attempts to breach security." High robustness (EAL6-7) means secure against well-funded, sophisticated attackers with time on their hands. Medium assurance (EAL5) is a bit vague. Windows and Linux are EAL4+. Mac was EAL3 (lol). Most security software is EAL2-4. Only one OS (XTS-400) is currently even medium assurance. High assurance is mostly dead, so trust nothing unless it proves itself over time. Truecrypt has, so I trust it a bit, and way more than its proprietary, barely tested competition.
  10. Anyone looking into Java should also consider Scala and NetBeans. Scala is a superior language that's Java compatible. NetBeans is a very popular development tool and Eclipse's primary competitor. I know many Java developers who prefer NetBeans to Eclipse. NetBeans was also faster last time I tested it. (Note: this may have changed.) The one good thing about Eclipse is that many different languages and tools build their IDE on top of it. So, if you know it for one, it's easy to learn the next.
  11. "thats all fine and dandy, but truecrypt has many versions and the development is being funded by an unknown source. While it may be open source, the amount of new versions that come out are being deployed so rapidly that it's hard for the open source community to audit all the code." You're talking like it's an anonymous, black-box Fedora. TrueCrypt 1 was released in February 2004. TrueCrypt 7.1, the current version, was released in February 2012. That's 7 major versions over an 8-year period. Many had little .1 or .2 versions that did bug fixes or added a few features. Hardly a tough release schedule to keep up with, eh? Additionally, many security researchers audit the software and report bugs. Many promote it. Schneier's team did a security assessment of the "deniable" partitions, and many issues they raised were fixed in the next version before the paper was finished, which they noted in the paper. "also, who's funding the developers for truecrypt? who are the developers?" Who are the developers for IronKey? Who funds the company? Did they backdoor it like AT&T, Vodafone, and Clipper? Idk. That's private, like the design and implementation. TrueCrypt is funded primarily by donations and built by volunteers, as far as we know. The developers have kept the code open and consistently refused commercial activity. Their licensing issues are probably deliberate, to keep them in control of the code and brand, most likely for quality. They have excellent documentation telling you how to do things right, what can cause problems, and the limitations of their software. (People backdooring things usually don't go that far.) Of course, you can always tell if it's a government scheme when they easily break the crypto and get their man: "In July 2008, several TrueCrypt-secured hard drives were seized from a Brazilian banker Daniel Dantas, who was suspected of financial crimes. The Brazilian National Institute of Criminology (INC) tried unsuccessfully for five months to obtain access to TrueCrypt-protected disks owned by the banker, after which they enlisted the help of the FBI. The FBI used dictionary attacks against Dantas' disks for over 12 months, but were still unable to decrypt them." Or not lol Nick P
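The reason a dictionary attack can grind for a year and still fail: every guess costs a full key derivation, and a passphrase outside the wordlist is never even tried. A minimal sketch of the attacker's loop using PBKDF2 (the salt, iteration count, and wordlist here are illustrative, not TrueCrypt's actual parameters):

```python
import hashlib

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Each guess pays for 200,000 HMAC-SHA256 iterations,
    # which is the point of a slow KDF.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

salt = b"\x16" * 16  # illustrative fixed salt
target = derive_key("correct horse battery staple", salt)

# The attacker's dictionary never contains the real passphrase,
# so no amount of hardware recovers the key this way.
wordlist = ["password", "123456", "letmein", "qwerty", "dragon"]
hits = [w for w in wordlist if derive_key(w, salt) == target]
```

Scale the wordlist to millions of entries and the per-guess KDF cost dominates; a long random passphrase simply isn't in any dictionary, which is presumably what saved Dantas' disks.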
  12. I might be out of date on this, but the free-software-killer issue only affects Win8 on ARM. The ARM hardware manufacturers will be required to force loading of only signed software, such as Win8, on the device. The x86 systems will still let users disable secure boot. I still think this is utter DRM bullshit b/c the third option is much better: let the user do secure boot and add public keys of their choosing (a la PGP). All of my designs with trusted boot feature this.
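That third option boils down to a default-deny boot check against owner-enrolled trust anchors. A minimal sketch using image hashes as the anchors (a real implementation would verify public-key signatures instead; all the names here are hypothetical):

```python
import hashlib

# Owner-enrolled trust store: a stand-in for user-added public keys.
trusted_hashes = set()

def enroll(image: bytes) -> None:
    # The owner explicitly blesses this image.
    trusted_hashes.add(hashlib.sha256(image).hexdigest())

def verify_boot(image: bytes) -> bool:
    # Default deny: only owner-enrolled images may boot.
    return hashlib.sha256(image).hexdigest() in trusted_hashes

enroll(b"my-signed-linux-kernel-v3.2")
ok = verify_boot(b"my-signed-linux-kernel-v3.2")
tampered = verify_boot(b"my-signed-linux-kernel-v3.2-EVIL")
```

Same anti-rootkit benefit Microsoft claims for secure boot, but the owner, not the vendor, decides what's in the trust store.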
  13. I agree. If you want something easy to learn that lets you quickly build cool stuff, then Python is the way to go. It has less boilerplate than the heavyweight languages. This helps beginners focus on turning their ideas into software. However, I second another poster's claim that the first language should depend on what you're trying to get into. C is definitely better if you're aiming for embedded, Visual Basic if you're doing basic .NET apps (esp. GUI), Perl for regex processing, Python/Java/Scala for cross-platform server-side app coding, SQL for databases, and HTML/JavaScript/AJAX for client-side. COBOL is also an option, being the backbone of transaction processing. (Note: just friggin kidding on COBOL.)
  14. I wouldn't trust any of those. I warned clients and friends against them in the past. I told them: any time a cheap hardware maker says their product is secure, there will be easy attacks against it. Time proved this true. To see what a secure device is like, look up the NSA's Inline Media Encryption device (link below). It's for hard drives. It incorporates EMSEC protections, a certified crypto implementation, a token the user possesses, a trusted path so the user's PIN isn't captured by malware, a max PIN-entry limit, and a zeroize function. Subtract the token and EMSEC, and you have the minimum design requirements for a secure secondary storage system. So, let's look at these products. Most of them will have FIPS certification, meaning the algorithms are right. They should have a decent random number generator, although many systems fail here. How is the secret entered? Did you say through a possibly backdoored PC? Holy misguided efforts, Batman! The better solution is to use TrueCrypt. Its effectiveness has been proven over time, and its code and mechanisms are open to inspect. Further, it's harder to crack a TrueCrypt volume just b/c it doesn't say which algorithm was used, meaning several must be tried. It will also be vulnerable to key sniffing, but there's actually potential for improvement there due to source availability and control of the OS. The closed, limited, and driver-restricted nature of these USB products makes improving their weaknesses harder. Hence, use a cheap USB stick and portable TrueCrypt. Keep the computers you use it on as malware-free as possible. Make backups. Less convenient than the "secure" (lol) USB sticks, but you can have more confidence in the results. NSA IME
  15. I'd cheat. There are advanced technologies developed for Linux (heck, even Windows) that can greatly reduce exploitation possibilities. I'm big on risk mitigation b/c I spend most of my time researching "high assurance/robustness" systems: systems that can resist prolonged attack by sophisticated, well-funded entities (the NSA definition). Turns out there's plenty you can learn studying these systems that can be applied to COTS. Examples include a minimized TCB, control of dataflow, carefully mapping requirements to design to implementation, minimizing implementation complexity, pen testing, etc. So, what comes to mind when we discuss this example? Quick 5-min brainstorming follows. For Linux, you could try to get a copy of SecVisor. It's mathematically verified to prevent kernel-level injections. Then, add to the kernel: executing only signed code, comparing existing processes against a whitelist, detecting file modifications, and MAC (could use SMACK or Tomoyo to avoid SELinux-style complexities). Do the regular checklist steps for the system and the specific server. Hackers will have a hard time exploiting this box. For Windows, you might use application-level virtualization so you can do the above with a pre-created (or quickly created) Linux VM, depending on the rules. Technology like TILT can run with an application to detect illegal dataflows with minimal overhead. You might also use virtualization to run the Windows box in a deprivileged way next to a privileged box (Linux or BSD) that monitors it. Quite a few options. They aren't simple. But protecting inherently insecure legacy software from a whole arsenal of attacks, including zero-days, never is easy or simple. It's why I hate monolithic OS's and promote better designs like INTEGRITY, QNX, or MINIX 3 (work in progress). Google LOCK, GEMSOS, and XTS-400 for very secure designs from the old days. (I think Schell has a "lessons learned" GEMSOS paper on the net, too.) Bernstein's Qmail Lessons Learned paper is also instructive.
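The file-modification-detection step from that brainstorm is simple to prototype: take a hash baseline of the files you care about, then re-hash and compare. A sketch of the idea (a real deployment would keep the baseline off-box so an intruder can't rewrite it along with the files):

```python
import hashlib
import os
import tempfile

def hash_file(path: str) -> str:
    # Stream the file so large binaries don't blow up memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    # Record a known-good digest for each monitored file.
    return {p: hash_file(p) for p in paths}

def detect_changes(base):
    # Report files that vanished or whose contents changed.
    return [p for p, digest in base.items()
            if not os.path.exists(p) or hash_file(p) != digest]

# Demo on a throwaway config file.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("PermitRootLogin no\n")
    cfg = f.name
base = baseline([cfg])
with open(cfg, "a") as f:
    f.write("PermitRootLogin yes\n")  # simulated tampering
changed = detect_changes(base)
os.unlink(cfg)
```

Tripwire and AIDE are the production versions of this loop; the kernel-level signed-code and whitelist checks mentioned above apply the same compare-against-known-good idea to running code instead of files on disk.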