NimitySSJ

Members · Content count: 284 · Days Won: 5

Everything posted by NimitySSJ

  1. Don't forget the easiest method to target people: financially. Wikileaks leveraged anonymity and INFOSEC techniques to protect their work. Hackers, governments, media types, businesses, and more wanted them gone. Fighting that took enormous resources, and fortunately they had steady funding. Then they announced they were about to target a major bank in America like they did Julius Baer. Bank of America's market value dropped by several billion dollars in a day after that announcement, since it was believed to be the target. The core banks, arguably the elites of the elites, showed their power: all reliable funding mechanisms to Wikileaks were cut. Visa, Mastercard, Paypal, and so on. Wikileaks then burned through money until it finally collapsed. The internal breakup didn't help. The U.S. government can find ways to imprison you for not complying with their wishes. The FBI can seize your machines before charging you. The IRS can freeze your assets before charging you. The U.S.P.S. can monitor or seize your mail (e.g. checks). Apparently, banks can cut off your funding as well. This doesn't even factor in CIA N.C.S. efforts like torture flights. Anyone creating a high assurance product that the NSA or FBI couldn't circumvent under any conditions could experience all of this. The Tor project has been lucky so far in that they've rarely been a factor in stopping the FBI or NSA from hitting their targets. Otherwise, they'd be next after Wikileaks. Note: I'd love to talk to Tor's lawyers to see how they avoid what companies like Lavabit and Google can't. Maybe the strategy could be copied.
  2. @ SirAnonymous and all re privacy I think Bruce Schneier wrote the definitive essay on this a while back. It might help you. https://www.schneier.com/essays/archives/2006/05/the_eternal_value_of.html @ mSparks Reports on those slides are misleading. RedPhone is strong enough to be immune to passive network surveillance. If you're not important, they don't see your communications. However, they have 0-days on Android. So, they hack Android and bypass RedPhone's crypto. That's why I advocate a holistic approach. NSA and even sophisticated blackhats are targeting every level. Security is only as strong as the weakest link. So, each level must be protected, and most current solutions don't do that. I used to say "no FOSS has used high assurance methods." I make an exception these days for one: Tinfoil Chat. Markus Ottela was one of the few to pay attention to the lessons others and I gave on high assurance (esp on Schneier's blog). His solution combined several strong techniques, from data diodes to my physical separation approach, into a novel solution that might be immune to remote attacks in a rigorous implementation. At my request, he also added a cascading cipher variant for practicality. The sooner people start applying proven methods, like he did, the sooner we'll have secure solutions to our problems. Still waiting on the market and FOSS to get some sense. At least academia is building useful solutions: the crash-safe.org processor, Cambridge's CHERI processor, hardware CFI, CodeSEAL, and so on.
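To make the cascading cipher idea concrete, here's a rough Python sketch of my own (not TFC's actual code, and it omits authentication and key management, which a real design needs): the message goes through two independent ciphers with independent keys, so breaking either cipher alone gets the attacker nothing.

```python
# Toy cipher cascade: AES-256-CTR inside, ChaCha20 outside, with
# independent keys/nonces for each layer. Illustration only; no MAC.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cascade_encrypt(plaintext: bytes):
    k1, n1 = os.urandom(32), os.urandom(16)  # inner AES-256-CTR key/nonce
    k2, n2 = os.urandom(32), os.urandom(16)  # outer ChaCha20 key/nonce
    inner = Cipher(algorithms.AES(k1), modes.CTR(n1)).encryptor()
    outer = Cipher(algorithms.ChaCha20(k2, n2), mode=None).encryptor()
    ciphertext = outer.update(inner.update(plaintext) + inner.finalize())
    ciphertext += outer.finalize()
    return ciphertext, (k1, n1, k2, n2)
```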
  3. Got an email on this thread. I got to look at my old statements, look at the results of the Snowden leaks, and see how accurate/inaccurate they turned out. My old claims proved true over time. The leaked NSA TAO catalog showed they attack BIOS, peripheral firmware, emanations, crypto, the base station layer, device drivers, privileged software, and regular apps. They also subvert the hell out of things at the supplier level with bribes, coercion, and infiltration. My requirement for high assurance (EAL6-7) at every layer, component, and interaction, with independent evaluation by mutually suspicious parties and secure distribution, is indeed necessary to stop (or slow) a TLA. Funny thing is, most projects reacting to the NSA leaks are aiming for *much* less security or trusting complex lower layers already hacked. Such is a fool's game of building foundations on quicksand. re Cryptophone I talked to Frank Rieger, Cryptophone co-founder, on Schneier's blog years ago. I told him his phone was trivially hackable by an NSA-type opponent despite his OS hardening because of lower layers and an insecure architecture. I said he also couldn't prove to his users he didn't backdoor it, which had huge precedents (e.g. Crypto AG, Skype). As such, I told him he'd need a ground-up redesign to secure the architecture, with independent evaluation that publishes the hash of the image, including software/firmware updates. Over time, they've added Android to the TCB and a mere "baseband firewall" for IO issues. (Rolls eyes...) My solution, which addressed various layers, had enough hardware to fill a briefcase lol. At least the headset and keypad were light! A small, integrated version would require a custom ASIC with each component likely done fresh. ASICs built from proven components often cost $15-30 million to develop (the mask set alone is ~$2 million). Such is the cost and complexity of a nearly NSA-proof cell phone. Virtually every "secure" cellphone on the market shortcuts by using COTS, so my heuristic says they're *definitely* insecure. A secured cellphone would be bulky, more expensive than Cryptophone, subject to patent lawsuits, and probably still vulnerable to EMSEC attacks. See General Dynamics' Sectera Edge for what it would look like as far as size and trusted-path interface go. The situation looks dire for mobile COMSEC. My current scheme is to build a portable machine that other things, e.g. a laptop or cell phone, can interface to. The common hardware uses best-of-breed security engineering to implement a MILS security model. The interface layer has much functionality, esp crypto, sealed in the hardware. So, the person pulls out their tiny smartphone, but it's really another device doing the work. The phone just has to securely connect to it, relay IO, and display something on a screen. Security requirements for that device should be small enough to allow high robustness. The *other* device will be bulkier and more complex, but it affords the extra chips I need to use proven methods*, maybe even in a reusable way. The main board can be in a briefcase, backpack, conference table, desk, car, etc. The whole thing would be expensive with the assurance activities and hardware. Hopefully, it can be under $10k. (The original briefcase model with medium assurance components & a high assurance security kernel cost a few grand in hardware alone.) 
  * Proven methods for nearly impenetrable systems include tagged memory, capabilities, smart IO processors with an IOMMU, interrupt-less designs, non-writable firmware, a small TCB, and default-deny control flow with an access table for permissible function calls. Each of these already exists in a real product or design, past or present. The good news is much of it existed in old systems, so it's consistent with my recent strategy of building modern systems out of ultra-old shit to prevent patent-related takedown. Look up the Burroughs B5000 libraries/HLLcode, the IBM System/38 whole architecture, the Hydra/CAP capability machines, GECOS's firmware approach, Intel's i432/i960MX designs, and so on. These each have some brilliant design decisions that make modern architecture look like shit from a security standpoint. However, the memory confidentiality and integrity schemes that prevent common physical attacks seem to be a modern invention almost certainly covered by patents. My first system will therefore assume physical security & a trusted admin while aiming to defeat all remote, software, or interface-level attacks. The next system will do other stuff. My post on Schneier listing much of the best current security tech in case anyone wants to work on *real* security like the people in the paper (and myself) are doing: https://www.schneier.com/blog/archives/2013/12/friday_squid_bl_404.html#c2902272 I hope the NSA leaks inspire more work on *real* full-system security instead of all the mental masturbation of finding 0-days, writing tons of unsafe code (e.g. C/C++) and putting more band-aids on architectures not designed for security (e.g. the UNIX model). But, hey, there's still billions to be made pushing or compromising bullshit security so why not.
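As a toy illustration of the default-deny control flow idea (my own sketch, in Python for readability; the real systems above enforce this in hardware or the kernel, not with a decorator): every call must match an edge in an access table of permissible caller -> callee pairs, or it's rejected.

```python
# Default-deny control flow in miniature: calls not listed in CALL_TABLE
# raise an error, so unexpected call paths fail closed.
import inspect

CALL_TABLE = {("main", "read_config"), ("read_config", "parse_line")}

def guarded(func):
    def wrapper(*args, **kwargs):
        caller = inspect.stack()[1].function
        if (caller, func.__name__) not in CALL_TABLE:
            raise PermissionError(f"call {caller} -> {func.__name__} denied")
        return func(*args, **kwargs)
    return wrapper

@guarded
def parse_line(line):
    return line.strip()

@guarded
def read_config():
    lines = []
    for raw in ["a\n", "b\n"]:
        lines.append(parse_line(raw))  # permitted edge
    return lines

def main():
    return read_config()               # permitted edge

print(main())        # works
# parse_line("x\n")  # called from module level: denied by default
```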
  4. I'd say this scheme tops the rest of our desktops. Talk about an "upgrade." https://www.networkworld.com/community/blog/why-linux-user-now-using-windows-31 I do like how he accomplished all of his goals and has the same computing environment across radically different machines.
  5. I didn't know this was happening. Thanks for bringing it to my attention. That said, it's far from insane: it's a side effect of capitalism and makes things even more profitable. It's usually software companies doing lock-in via weird formats, protocols, etc. Software being easier to bypass, it troubles me to see laptop hardware manufacturers following suit.
  6. http://en.wikipedia.org/wiki/Cold_boot_attack That was funny.
  7. I agree. Most developers install or update their product, but don't check past that. I think a MITM attack is more common. It's well known, though, that subversion is the sophisticated attacker's tool of choice. I'm in that camp myself. I also have a love for BIOS/firmware rootkits & covert channels. I like the latter b/c they're hard to notice, few "IT security pros" even know what they are, and you can get a lot of data out b/f anyone knows it's happening. Not that I'm stealing data from anyone. Nick P schneier.com
  8. Generally not, as the major OSS compilers get a lot of attention. However, verified compilation is a big thing for me in my research into high assurance and trustworthy systems. Anyone worried about their compiler should use CompCert. It's another excellent product of Xavier Leroy's team at INRIA. They used the Coq (lol, I know) proof assistant to formally specify and verify the phases of compilation. Only the initial phase, the concrete syntax tree I believe, isn't formally verified. (Good luck doing that anyway.) The compiler is automatically extracted from Coq code as ML or OCaml code. (OCaml is another great product of INRIA.) The OCaml compiler was used almost as-is during a DO-178B project, so it's super high quality. The study below that tested many different compilers found tons of bugs in all of them, although very few in CompCert. CompCert also had NO middle-end bugs, which were present in the others. Goes to show their formal verification process works. They're currently making a MiniML compiler for the type of ML Coq generates. That would make the chain complete from specification to assembler, if we verify the autogen. http://lambda-the-ul...e.org/node/4241
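The methodology of that study, by the way, is simple enough to sketch (a hedged example of the general idea, not their actual harness): compile the same deterministic program with two compilers and flag any disagreement in output.

```python
# Differential testing in miniature: if two compilers disagree on a
# deterministic program's output, at least one of them has a bug.
# Assumes gcc and CompCert's driver (ccomp) are on PATH.
import subprocess

def compile_and_run(compiler: str, source: str) -> bytes:
    binary = f"./out_{compiler}"
    subprocess.run([compiler, source, "-o", binary], check=True)
    return subprocess.run([binary], capture_output=True).stdout

def compilers_disagree(source: str = "testcase.c") -> bool:
    return compile_and_run("gcc", source) != compile_and_run("ccomp", source)
```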
  9. Bingo. This guy's got it. "looks like ive been told. I do use truecrypt. I dont trust it 100% tho" I don't blame you. A suspicious attitude toward security products is a good thing. The people who invented the first "high assurance" OS's were the ones that said absolute security on a general purpose PC is impossible. If a high assurance design comes with a bit of skepticism, shouldn't we be even more critical of a typical program interfacing with an EAL4 certified OS? Note: EAL4 means secure against "casual or inadvertent attempts to breach security." High robustness (EAL6-7) means secure against well-funded, sophisticated attackers with time on their hands. Medium assurance (EAL5) is a bit vague. Windows and Linux are EAL4+. Mac was EAL3 (lol). Most security software is EAL2-4. Only one OS (XTS-400) is currently even medium assurance. High assurance is mostly dead, so trust nothing unless it proves itself over time. TrueCrypt has, so I trust it a bit, and way more than its proprietary, barely tested competition.
  10. Anyone looking into Java should also consider Scala and NetBeans. Scala is a superior language that's Java compatible. NetBeans is a very popular development tool and Eclipse's primary competitor. I know many Java developers that prefer NetBeans to Eclipse. NetBeans was also faster last time I tested it. (Note: This may have changed.) The one good thing about Eclipse is that many different languages and tools build their IDE on top of it. So, if you know it for one, it's easy to learn the next.
  11. "thats all fine and dandy, but truecrypt has many versions and the development is being funded by an unknown source. While it may be open source, the amount of new versions that come out are being deployed so rapidly that it's hard for the open source community to audit all the code." You're talking like it's an anonymous, black-box Fedora. TrueCrypt 1 was released in February 2004. Truecrypt 7.1, current version, was released in February 2012. That's 7 major versions over an 8 year period. Many had little .1 or .2 versions that did bug fixes or added a few features. Hardly a tough release schedule to keep up with, eh? Additionally, many security researchers audit the software and report bugs. Many promote it. Schneier's team did a security assessment of the "deniable" partitions and many issues they raised were fixed in the next version before the paper was finished, which they noted in the paper. "also, who's funding the developers for truecrypt? who are the developers? " Who are the developers for IronKey? Who funds the company? Did they backdoor it like AT&T, Vodaphone and Clipper? Idk. That's private, like the design & implementation. TrueCrypt is funded primarily by donations & built by volunteers far as we know. The developers have kept the code open & consistently refused commercial activity. Their licensing issues are probably deliberate to keep them in control of the code and brand, most likely for quality. They have excellent documentation telling you how to do things right, what can cause problems and the limitations of their software. (People backdooring things don't go that far usually.) Of course, you can always tell if it's a government scheme when they easily break the crypto and get their man: "In July 2008, several TrueCrypt-secured hard drives were seized from a Brazilian banker Daniel Dantas, who was suspected of financial crimes. The Brazilian National Institute of Criminology (INC) tried unsuccessfully for five months to obtain access to TrueCrypt-protected disks owned by the banker, after which they enlisted the help of the FBI. The FBI used dictionary attacks against Dantas' disks for over 12 months, but were still unable to decrypt them." Or not lol Nick P schneier.com
  12. I might be out of date on this, but the issue that kills free software only affects Win8 on ARM. The ARM hardware manufacturers will be required to force devices to load only signed software, such as Win8. The x86 systems will still let users disable the secure boot. I still think this is utter DRM bullshit b/c the third option is much better: let the user do secure boot & add public keys of their choosing (a la PGP). All of my designs with trusted boot feature this.
  13. I agree. If you want something easy to learn that lets you quickly build cool stuff, then Python is the way to go. It has less boilerplate than the heavyweight languages. This helps beginners focus on turning their ideas into software. However, I second another poster's claim that the first language should depend on what you're trying to get into. C is definitely better if you're aiming for embedded, Visual Basic if you're doing basic .NET apps (esp GUI), Perl for regex processing, Python/Java/Scala for cross-platform server-side app coding, SQL for databases, & HTML/JavaScript/AJAX for client-side. COBOL is also an option, being the backbone of transaction processing. (Note: see Just Friggin Kidding on COBOL).
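To show what I mean about boilerplate (my own throwaway example): a complete word-frequency counter in Python is a few lines, with no class-plus-main ceremony.

```python
# Print the five most common words in a file; compare with the
# equivalent Java class, main method, and stream plumbing.
from collections import Counter

with open("notes.txt") as f:
    print(Counter(f.read().split()).most_common(5))
```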
  14. I wouldn't trust any of those. I warned clients and friends against them in the past. I told them: any time a cheap hardware maker says their product is secure, there will be easy attacks against it. Time proved this true. To see what a secure device is like, look up NSA's Inline Media Encryption device (link below). It's for hard drives. It incorporates EMSEC protections, a certified crypto implementation, a token the user possesses, a trusted path so the user's PIN isn't captured by malware, a limit on PIN attempts, and a zeroize function. Subtract the token and EMSEC, and you have the minimum design requirements for a secure secondary storage system. So, let's look at these products. Most of them will have FIPS certification, meaning the algorithms are right. They should have a decent random number generator, although many systems fail here. How is the secret entered? Did you say through a possibly backdoored PC? Holy misguided efforts, Batman! The better solution is to use TrueCrypt. Its effectiveness has been proven over time & its code/mechanisms are open to inspection. Further, it is harder to crack a TrueCrypt volume just b/c it doesn't say which algorithm was used, meaning several must be tried. It will also be vulnerable to key sniffing, but there's actually potential for improvement there due to source availability & control of the OS. The closed, limited, and driver-restricted nature of these USB products makes improving their weaknesses harder. Hence, use a cheap USB stick & portable TrueCrypt. Keep the computers you use it on as malware-free as possible. Make backups. Less convenient than the "secure" (lol) USB sticks, but you can have more confidence in the results. NSA IME http://www.nsa.gov/i...tor/index.shtml
  15. I'd cheat. There are advanced technologies developed for Linux (heck, even Windows) that can greatly reduce exploitation possibilities. I'm big on risk mitigation b/c I spend most of my time researching "high assurance/robustness" systems: systems that can resist prolonged attack by sophisticated, well-funded entities (NSA definition). Turns out, there's plenty you can learn studying these systems that can be applied in COTS. Examples include a minimized TCB, control of dataflow, carefully mapping requirements to design to implementation, minimizing implementation complexity, pen testing, etc. So, what comes to mind when we discuss this example? Quick 5 min brainstorming follows. For Linux, you could try to get a copy of SecVisor. It's mathematically verified to prevent kernel-level injections. Then, add to the kernel: executing only signed code, comparing existing processes against a whitelist, detecting file modifications, and MAC (could use SMACK or Tomoyo to avoid SELinux-style complexities). Do the regular checklist steps for the system and specific server. Hackers will have a hard time exploiting this box. For Windows, you might use application-level virtualization so you can do the above with a pre-created (or quickly created) Linux VM, depending on the rules. Technology like TILT can run with an application to detect illegal dataflows with minimal overhead. You might also use virtualization to run the Windows box in a deprivileged way next to a privileged box (Linux or BSD) that monitors it. Quite a few options. They aren't simple. But protecting inherently insecure legacy software from a whole arsenal of attacks, including zero days, never is easy or simple. It's why I hate monolithic OS's and promote better designs like INTEGRITY, QNX, or MINIX 3 (work in progress). Google LOCK, GEMSOS and XTS-400 for very secure designs from the old days. (I think Schell has a "lessons learned" GEMSOS paper on the net, too.) Bernstein's Qmail Lessons Learned paper is also instructive.
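For one of those layers, here's a hedged sketch of file modification detection (my own example; SecVisor, TILT, etc. are the heavyweight versions): hash every file against a known-good baseline and report changes.

```python
# Build a SHA-256 baseline of a directory tree, then report any file
# whose hash changed or that was added since the snapshot was taken.
import hashlib
import os

def snapshot(root: str) -> dict:
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def modified(root: str, baseline: dict) -> list:
    return [p for p, h in snapshot(root).items() if baseline.get(p) != h]
```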
  16. Your best option is to try to combine hardware and software. First, you get a nice stealthy software keylogger (or eavesdropper in general). Then, you get it onto the machine via a hardware attack to bypass defences & maybe obscure it further. The USB HID attack is nice for shells. My favorite of all time, due to Apple popularity, is the Firewire attack. If they don't have an IOMMU, Firewire bypasses the OS protections altogether to give you full read-write access to RAM (see DMA). You can actually do a lot more than keylog with that kind of intrusion. Lastly, you can always pull a Blue Pill-style attack or other subversion where the OS is running on top of or alongside a highly privileged trojan that's intercepting data & invisible to the Windows system. Nick P schneier.com
  17. Perhaps to get a bar code & product name, then make a legitimate-looking (something) and then use it for (illegitimate) something to get (cash/rep/revenge). Just my guess. Nick P schneier.com
  18. One company claimed to have a breach of Google's sandbox. They sell to government and military types. They had no intention of telling Google what the problem was. Assuming it was legit, I wonder if it still works. Google might have found it and corrected it. Either way, they probably made money off it with professional exploit kit + 0day combinations selling for up to a million a year over here. (source: an article on 0-day & infowar companies in the US)
  19. First, compromise the machine physically with a boot attack (e.g. evil maid), Firewire, or DMA over a custom PCI/ExpressCard device. Then, you have some options:
    1. Modify the installed OS to load Windows, your rootkit, and a bogus AV app that has a visible icon & basic dialog. Keylogger included.
    2. Virtualize Windows with a subverted hypervisor/VMM, with your rootkit controlling the "real" keyboard.
    3. Disable the AV entirely (hoping they won't notice) and put in a run-of-the-mill keylogger.
    4. Modify the keyboard driver to send any key pressed to your software, which just looks like it's moving memory. Another process sends that to you over a safe channel.
  The most undetectable attack is the one almost all computers are vulnerable to and nobody looks for. Well, there's three here: infections that survive reformat, covert channels, and emanation security. Such esoteric attacks are my specialty. They are insidious. Infections that survive reformat are classic. The best way to do it is BIOS infection. Get their BIOS, modify it to contain your rootkit as well, and it loads until they can overwrite their BIOS. (A good design should prevent that, but it requires much sophistication.) Many academics are [finally] looking into using trusted hardware connected to main chips to look for malware & stuff. Well, guess what? You can use the same approach to constantly inject malware or leak the keyboard's internal communications. You should use a tiny SOC & connect it in a professional-looking way. Best to do it to an identical piece of hardware, put in their hard disk, break it in a believably innocent way, and let them do a "clean" install. Their system will remain dirty until they buy new hardware. (oh the lulz... 8) A covert channel can form any time two subjects (e.g. users, processes) share an object (e.g. CPU time, a storage area). A covert storage channel uses a shared storage resource to move information. A covert timing channel happens when two processes can see how long it takes to use a given resource. The sender alternates between tying it up and making it quick, representing 1's and 0's. There are ways to mostly eliminate these, but modern OS's ignore them altogether. (Yes, I am grinning most evilly.) The keyboard driver or API could be modified to subtly leak key information over a covert channel. Covert channel bandwidth ranges from 1bps to several MBps. The cache issue for AES & RSA was a covert timing channel caused by cache interactions. I and many others identified tons of covert storage channels in TCP/UDP/IP stacks (think I posted an analysis here). There's still around 64+ bits of extra space per packet that many people won't pay attention to. One HTTP session sends TONS of packets. Insert something into the TCP/IP stack to utilize that if it sees a certain unique identifier in the system call data. So many possibilities. A typical sysadmin would never notice & wouldn't be able to comprehend what was happening. (I haven't even begun on processor or firmware errata... mainly because the other vulnerabilities are always there lol 8) EMSEC is emanation security. These are the electromagnetic emanations that a computer emits during its operations, which may contain patterns that can be used to reconstruct what it did. The emanations may be passively sent into the surrounding area, actively polled by the attacker, or transmitted over power lines. There are other side channels as well. (In around 2000-2001, I designed a concept of using the sound of keys being pressed with a laser bounced off a keyboard or laptop. 
I think that was independently done a year or two ago.) The EMSEC issues require power filters, shielded cabling, and properly shielded equipment. This is under the umbrella of TEMPEST. TEMPEST Level 1 products are expensive, bulky, and hard to get a hold of. The government also classifies what we need to know to protect ourselves. (And, no, a TEMPEST expert told me a Faraday cage isn't the end-all solution it sounds like.) I don't have the links, but you should Google "keyboard eavesdropping" with words like power outlet and antenna. A group of researchers did it in the past few years on video and put it online. So, stealth keylogging is easy because *real* security is hard & the market provides no incentives to build it. High assurance (A1/EAL7/NSA-Type1) design techniques can prevent most or all of these problems. However, high assurance systems are a rarity & modern OS's and hardware leak like a sieve. Hence, if you're using the latter and your enemy is [smart/determined/sophisticated], you're screwed. Q.E.D. Nick P schneier.com
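Since covert timing channels sound exotic, here's a toy demonstration of the mechanism (my illustration only; real channels need synchronization and error correction, and this one assumes a lightly loaded machine): the only thing sender and receiver share is a lock, and how long the sender holds it encodes the bits.

```python
# Covert timing channel in miniature: holding a shared lock for a long
# time signals a 1, a short time signals a 0. The receiver recovers the
# bits purely by measuring how long lock acquisition blocks.
import threading
import time

lock = threading.Lock()
SECRET = [1, 0, 1, 1, 0]
PERIOD = 0.3
epoch = time.time() + 0.1          # common schedule for both threads

def sender():
    for i, bit in enumerate(SECRET):
        time.sleep(max(0, epoch + i * PERIOD - time.time()))
        with lock:
            time.sleep(0.2 if bit else 0.02)  # long hold = 1, short = 0

def receiver(out):
    for i in range(len(SECRET)):
        time.sleep(max(0, epoch + i * PERIOD + 0.01 - time.time()))
        t0 = time.time()
        with lock:                 # blocks while the sender holds the lock
            pass
        out.append(1 if time.time() - t0 > 0.1 else 0)

received = []
t = threading.Thread(target=sender)
t.start()
receiver(received)
t.join()
print("sent", SECRET, "received", received)
```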
  20. See my reply again. The absence of known flaws isn't my main argument for their superiority. It's their superior design and Tor's higher risk, often-broken design. I've been thinking about building a high assurance implementation of Tor, Freenet or I2P. That they run on "certified insecure" (EAL4) systems is disturbing. Even a minimized OpenBSD appliance with careful configuration would be better than the existing approach. Most of my recent designs have been targeted for Green Hills' INTEGRITY & INTEGRITY-178B platforms. They seem to be the best in security, performance and hardware availability. I've also done some designs utilizing the remaining certified platforms from the old days: Aesec's GEMSOS, BAE Systems' XTS-400/STOP, and Boeing's SNS Server. INTEGRITY Product Line http://www.ghs.com/p.../integrity.html INTEGRITY-178B & Middleware (passed toughest recent NSA evaluation) http://www.ghs.com/p...ty-do-178b.html Aesec's GEMSOS (got NSA's highest security rating ever, I think) http://www.aesec.com/ XTS-400/STOP: last descendant of the secure Unix OSes http://www.baesystem...sit_xts400.html LOCK (implemented Type Enforcement & ran UNIX apps) http://www.cryptosmi...om/archives/179 Boeing SNS Server (in evaluation under Common Criteria to EAL7, highest rating) http://www.boeing.co...070621b_nr.html I'm still evaluating design approaches. The problem is that high assurance projects require lots of specialized skill and money. I haven't decided the most cost-effective approach. Might combine several as building blocks & just glue them together in a robust way.
  21. That's nice for encrypted conversations, but the OP wanted anonymity-preserving communications. The rarity of Pidgin-OTR use alone makes their users stand out amongst other Internet traffic.
  22. "Start by Google-ing which paper?" This was the BitTorrent attack: http://arstechnica.c...tor-network.ars The attack exploited a compromise in the operation of the protocol over Tor. The protocol was impossibly slow if all of it was forced over Tor, so users just routed the identifying portions over Tor. Malicious exit nodes were used to catch the identifying pieces along with the rest. The DHT mode was susceptible because it uses UDP, not TCP, and Tor doesn't support UDP. Are you starting to see the complexity involved in knowing whether a given protocol and Tor configuration will preserve anonymity? Compare the Tor "solution" to a dedicated proxy embedded PC connected to a far-away WiFi hotspot with a long-range cantenna, a LiveUSB RAM-based distro, a MAC changer, and optionally Tor as an extra layer. Best to view Tor as just one component in an anonymity scheme. The physical device or IP connecting to it shouldn't be yours, just to be safe. As for other attacks, most are DoS attacks. Here's one non-DoS attack and a link to research groups. Attack on a particular routing strategy w/ lab test (2007) http://citeseerx.ist...p=rep1&type=pdf Page with links to many anonymity R&D groups https://www.torproje...esearch.html.en
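To see why tunneling alone isn't enough, consider how an announce message can carry the client's real address inside the payload. This is a hedged sketch of the general problem, not the exact leak from the paper: BitTorrent's tracker protocol really does have optional ip/port fields, though the published attack was more involved.

```python
# Even if this URL is fetched through Tor, the query string itself hands
# the tracker a non-Tor address: the tunnel can't anonymize the payload.
import socket
from urllib.parse import urlencode

def announce_url(tracker: str, info_hash: bytes, peer_id: bytes) -> str:
    real_ip = socket.gethostbyname(socket.gethostname())  # local, non-Tor IP
    params = {
        "info_hash": info_hash,
        "peer_id": peer_id,
        "port": 6881,    # peer's directly reachable listening port
        "ip": real_ip,   # identity leak, regardless of the transport
    }
    return tracker + "?" + urlencode(params)

print(announce_url("http://tracker.example/announce",
                   b"\x00" * 20, b"-PY0001-123456789012"))
```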
  23. It depends on the protocol. The purpose of IP is to ensure delivery of packets, but it can also be used at application layer for other purposes. If the application layer's unencrypted data contains this information, then it leaks identifying information. BitTorrent is an example of a protocol that wasn't really designed for anonymity. Many people started using BT clients over Tor, thinking Tor would anonymize the data. The way that the protocol leaks identifying info led to the source IP identification of at least ten thousand users, maybe more. The application and protocol mustn't leak identifying information or they can become the weakest link in the strongest anonymity scheme. If you're wanting to understand these things, start by Googling that paper. You might also want to look into academic papers on attacks on anonymity schemes.
  24. The term FUD is usually reserved for claims that have no basis in fact and are purely fearmongering. My warnings are based on protocol analysis and attacks on Tor and similar networks that have been steadily published by security researchers for years, including the recent grab of over ten thousand IP addresses of Tor users. How is that FUD, again? "people finding holes is good, it means the system is being made more secure" Yeah, it's good if the system isn't in use by people who depend on the anonymity. We're not talking about software for keeping viruses from corrupting your system, where you can just restore from backup if it fails. We're talking about a scheme designed for many high-stakes situations where well-funded, sophisticated attackers might trace the person trying to stay hidden. The results can be costly or fatal. Systems/protocols like this must be good enough from the get-go, without serious flaws. Such systems are called "high assurance" systems. There are higher assurance anonymity schemes, and they are preferable to solely depending on Tor. "This doesn't mean Tor is better, just that it's not any less trustable than anything else because security flaws were found [in] it." That's false in this case. People often use Tor to hide their identity for a reason, and one leak is all it takes to make them regret it. Tor's security issues provided a steady stream of opportunities for this to happen. Tor was (and is) flawed by DESIGN, while schemes like Freenet have a superior design. A good security scheme is architected with good design, implementation, and usage patterns. In Tor, we have a flawed design, a run-of-the-mill implementation, and it's hard to use apps in a secure manner with no leaks. "A few [security issues] remain, which Tor reminds you of when you download it and requires very sophisticated adversaries to successfully pull off." Why use a method with known, "remaining" security issues when alternatives without any known security issues exist? And with that said, I think I'm more than justified in warning people not to use Tor if they *really* need the anonymity. I'm just surprised you're happily using a protocol that you know is flawed instead of a scheme w/out any known flaws. One sensible reason is that you have little of importance to hide and you're willing to trade a certain amount of security for convenience. Many individuals needing a Tor-like solution can't make that tradeoff. I wrote my original post with them in mind, as I don't know what Swerve intends to hide.
  25. :laugh: