ChZ

Members
  • Content count: 150
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About ChZ
  • Rank: SUP3R 31337
  • Birthday: 04/23/1986
  1. There are probably a lot of programs out there that rely on there being a user named 'root' (I know Ubuntu has some scripts that make that assumption, since I just tried it on a VM). If you don't care about breaking things for what you perceive as increased security, just modify your /etc/passwd, /etc/passwd-, and /etc/group files so that 'root' is renamed to whatever you like.

     I'll note here that the security gains, weighed against whatever system stability losses you'll incur, aren't that great. Consider what happens when you keep the 'root' user and someone tries to brute force the password. Suppose you use a password that isn't dictionary-based (passwd even yells at you if you use a weak password these days) and is sufficiently long, say 8 characters, chosen from the space of upper/lower case letters, numbers, and symbols. You wind up with around (26*2+20)^8 potential passwords (this is conservative; there are slightly more typable characters). Now, say your system can handle a login attempt every microsecond (again conservative; I'm unaware of any system this fast). To exhaust the entire space you'd need ((26*2+20)^8/(10^6))/60/60/24/365 = ~23 years. With a uniform distribution of passwords, an attacker has a 50% chance of getting into the server after 11.5 years (assuming this person can make A MILLION LOGIN REQUESTS A SECOND). If we're talking remote intrusion from the internet, you're looking at three orders of magnitude more time (i.e. 1 request every millisecond => 23,000 years).

     So it's up to you: experience new and exciting failures because you decided to violate some convention, or accept that you have a system so insecure it can be brute forced remotely in just a few thousand years. (Alternatively, just disable root logins in ssh.)
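The arithmetic in the post above is easy to verify. A quick sketch; the charset size, attempt rate, and password length are all the post's own (deliberately conservative) assumptions:

```python
# Brute-force estimate: 8-char password over ~72 typable characters,
# attacker trying one password per microsecond (deliberately generous).
charset_size = 26 * 2 + 20              # upper + lower + digits/symbols
keyspace = charset_size ** 8            # ~7.2e14 candidate passwords

attempts_per_second = 10 ** 6           # one attempt per microsecond
seconds_per_year = 60 * 60 * 24 * 365

years_full_search = keyspace / attempts_per_second / seconds_per_year
print(f"exhaustive search:  {years_full_search:.1f} years")    # ~22.9
print(f"expected (50%) hit: {years_full_search / 2:.1f} years")

# A remote attack at one attempt per millisecond is 1000x slower:
print(f"remote attack:      {years_full_search * 1000:,.0f} years")
```

With a uniform password distribution the expected break-in time is half the exhaustive-search time, which is where the ~11.5-year figure comes from.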
  2. Are you saying that ssh isn't secure because it doesn't rate limit failed logins? It's not clear what you're saying here. It's definitely possible to have a failed-authentication delay in ssh. Anyway, if your password is relatively strong, network latency alone should make any remote brute force attack infeasible even without failure delays.

     Huh? If the filesystem isn't cleanly unmounted, you just have to run fsck on it and everything is good to go. This ensures that your filesystem is in a consistent state before you start writing to it. This is a Good Thing. If your boot filesystem isn't cleanly unmounted, the kernel will boot with the filesystem in read-only mode, and then the init scripts should fsck it and remount.

     What do you consider to be "unix"?
  3. Yes, besides the socket, cache size, clock, power consumption, speedstep, sse3, NX pages, hyperthreading, entire microarchitecture, and so on, it is basically the same thing as a PIII.
  4. lols i have a lot of "adultery" such as 3.5g if solaris only be like to boot really because i cant cheat you wanted to run on... just like $8 just order a good i can see wiki. stating that into something you wanted to board of like 1kb files and yeah id tend to display images and yeah it is breaking federal law and they were to not like micheal jacksoon doesnt do anything deeper than a pdf at 20mb say "i want to find operating system. and use the factory pre installed all of the same speed? people can tell its only like 5ft piece of the windows vista. then it will probably just have the annoying click it with the company was some missing files that thinks he's a opteron is running electrolsys in a p3 mobile but it was that probably fix the phone because it couldnt find the one like a locomotive one so does so he "came" cool im asking but seen it only 12volts.. you may work. :edit some 2 spark plugs, magnet and apparently was 1999 yet not sure about a few 4kv at that linux complain to pull out how do native apps when religion actually see if you could throw a post
  5. Looks like it does an exec("/bin/sh")
  6. What do you mean by padding the original input? From an information standpoint it's exactly the same.

     I was just saying that you may want a hash function that reveals information about the input. Perfect hashes fall in this category too.

     That's not a hash function, because the size of your domain is the same as your range (I'm assuming F:R->R here). Collision resistance only makes sense when you have collisions.

     I'm fairly certain that a hash function that is collision resistant is also strong against a hash inversion attack like you describe (i.e. you are given H(x) and want to find some x' (not necessarily distinct from x) satisfying H(x) = H(x')). The proof would involve assuming you had an algorithm for finding collisions that did better than what you could achieve via the birthday paradox, showing there is a bias, then basing your inversion around that bias. I said something similar to this at the end of my last post. On the other hand, resistance against hash inversion definitely does not imply collision resistance.

     The type of function where you can invert the input that has the properties you're interested in would likely be a pseudo-random permutation (PRP). In fact, there's a common construction for cryptographic hashes that uses PRPs.
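The PRP-based hash construction alluded to above is, most famously, Davies-Meyer: H_i = E(m_i, H_{i-1}) XOR H_{i-1}, where E is a block cipher (a PRP) keyed by the message block. A toy Python sketch; the "cipher" here is a stand-in permutation invented purely for illustration (not a real primitive), and the padding is simplified (a real Merkle-Damgard scheme also appends the message length):

```python
MASK = (1 << 64) - 1

def toy_prp(key: int, block: int) -> int:
    """Stand-in keyed permutation on 64-bit values. NOT a real cipher:
    every step (keyed add, rotate, odd multiply) is invertible mod 2^64,
    so for each key this is a genuine permutation."""
    x = block
    for r in range(4):
        x = (x + key + r) & MASK                    # add round key
        x = ((x << 13) | (x >> 51)) & MASK          # rotate left 13
        x = (x * 0x9E3779B97F4A7C15 + 1) & MASK     # odd multiplier
    return x

def davies_meyer_hash(message: bytes) -> int:
    """Davies-Meyer: H_i = E(m_i, H_{i-1}) XOR H_{i-1}, over 8-byte blocks."""
    padded = message + b"\x80"                      # toy padding
    padded += b"\x00" * (-len(padded) % 8)
    h = 0x0123456789ABCDEF                          # arbitrary IV
    for i in range(0, len(padded), 8):
        m_i = int.from_bytes(padded[i:i + 8], "big")
        h = toy_prp(m_i, h) ^ h                     # feed-forward XOR
    return h
```

The feed-forward XOR is what makes the compression step hard to invert even though the underlying PRP is, by definition, invertible when you know the key.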
  7. The conditions "accepts input" and "deterministic" are true of any function in the mathematical sense (some people choose to abuse the definition of a function so that it can have nondeterminism, but let's ignore that...). Of course I was talking about what makes a desirable hash function; I said nothing about a definition. Saying hash functions are meant to make data unrecoverable is like saying a dozen tequila shots is meant to make you puke all over the mayor. Sure, that's something it can do, but it's only a useful side-effect of the initial design (i.e. to get you blackout drunk).

     In a strict sense you can define a hash function as any complete mapping H:X->Y where |X| > |Y|. I assumed you were as well, but it appears you think a hash function is equivalent to a noninvertible mapping, possibly with additional constraints on the set of preimages. While |X| > |Y| is a sufficient condition for a function f:X->Y to be noninvertible (by pigeonhole), it's certainly not necessary.

     You may want a hash function that leaks information about the input. Suppose I wanted a hashtable with two buckets that holds elements keyed by some domain {0, 1, ..., n} (which I would map to the range {0,1}, i.e. the bucket indices). Additionally, suppose I was storing some input stream s(i) (i being the ith key in the stream) where Pr[s(i) = 0] = sum(Pr[s(i) = k], k, 1, n). If I want an even distribution of elements across the buckets, I'd hash H(x) = 0 if x = 0, and H(x) = 1 otherwise, which results in both buckets containing approximately the same number of elements. Note, however, that if the domain is small you can deterministically recover the input with non-negligible probability. Oops! That was a bit convoluted, but hopefully you can see the utility.

     From a cryptographic point of view (managing to vaguely stay on topic), unrecoverability is a consequence of collision resistance.
     While you can work this out for storing passwords on a server, the threat model tends to get overly complicated; instead, consider a simplified message authentication scenario. Say we had some file with important instructions and its associated hash. The file is distributed through some tamperable channel while the hash is distributed through a public but untamperable channel. To ensure the file is what we expected, the hash of the file is computed and compared against the given hash. If there's a mismatch, it's not the actual file. If the hash is not collision resistant, an attacker could construct a fake file with new important instructions that has the same hash and pass it off as the real thing.

     How does this relate back to unrecoverability? Well, suppose a message 'm' can be recovered with non-negligible probability from its hash H(m). This means H(m) reveals some information about 'm' (e.g. maybe if the LSB of m is 0, then the LSB of H(m) is 0). We can use this to our advantage by constructing a message with similar attributes. The lower the entropy, the faster a forged message with a hash collision can be constructed.
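The two-bucket scheme described above is easy to simulate. A minimal sketch, assuming a skewed source that puts half its probability mass on key 0 (the domain size and sample count are arbitrary choices for the demo):

```python
import random

n = 9  # keys are drawn from {0, 1, ..., n}

def skewed_key() -> int:
    """Half the probability mass on 0, the rest spread over 1..n."""
    if random.random() < 0.5:
        return 0
    return random.randint(1, n)

def h(x: int) -> int:
    """The post's hash: bucket 0 iff the key is 0, bucket 1 otherwise."""
    return 0 if x == 0 else 1

random.seed(0)
buckets = [0, 0]
for _ in range(100_000):
    buckets[h(skewed_key())] += 1

print(buckets)  # both counts land near 50,000
# The leak: any element in bucket 0 MUST have key 0, so for a small
# domain this "hash" lets you recover the input outright.
```

So the buckets balance nicely, at the cost of H(x) = 0 revealing the input exactly.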
  8. That's not a fair description of hashing. I could define the worthless hash function H : {0,1}^* -> {0,1}^n such that H(x) = 0 for all x in {0,1}^*, which would fit your criteria (it's impossible to get the input data, since everything just maps to 0!). The idea behind hashing is that you take some value 'x' and try to give it a unique value 'y' that takes up less space. By a simple counting argument this is impossible, but the goal is to do as well as we can. To wit, a good hash function H:{0,1}^* -> {0,1}^n would have a uniform distribution of mappings from the domain to the range; from a practical standpoint, you'd want |Pr[H(x) = a] - Pr[H(x) = b]| to be negligible for all a, b in {0,1}^n. Additionally, a good cryptographic hash (i.e. a collision resistant hash function) ensures that finding a collision H(x) = H(y), where x != y, is computationally infeasible. There are a couple of "types" of collisions based on how much information you are given, but the most typical is just finding any two distinct x and y that satisfy H(x) = H(y). If you were to implement a hash function for a hash table, you would want it to uniformly distribute its output based on the distribution of your input. While a cryptographic hash would typically satisfy this constraint, it would be overkill.

     As for the cipher: first, assume that the plaintext is just ASCII-encoded English. Next, note any patterns between the ciphertexts. In this case, there's an obvious header "[r10cs]" and the pattern [-0][0..9][somebyte] consistently repeats (I'll refer to [-0][0..9] as the number field and [somebyte] as the cipher byte field). From the cipher byte field there's some substitution going on, since there are a few nonalphanumeric/punctuation characters. Assuming the number field isn't just there for padding, it's likely being used to compute the substitution character. Only a few cipher bytes fall outside what we'd expect for English, and the numbers in the number field are small, so the substitution is probably an addition or a subtraction of the number field with a plaintext byte. Try what's easiest first (e.g. numberfield + cipherbyte and numberfield - cipherbyte) and you'll find you get the plaintext from the latter.
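The original ciphertext isn't quoted in the thread, so the token layout below (a two-character number field followed by one cipher byte, with arithmetic mod 256) is a hypothetical reconstruction, made only to show the "plaintext = numberfield - cipherbyte" recipe in action:

```python
import random

HEADER = "[r10cs]"  # header observed in the post

def encode(plaintext: str, numbers: list[int]) -> str:
    """Hypothetical encoder consistent with the recipe above:
    cipherbyte = (numberfield - plaintextbyte) mod 256."""
    out = [HEADER]
    for ch, num in zip(plaintext, numbers):
        # number field matching [-0][0..9]: "-5" or "07"
        out.append(f"-{abs(num)}" if num < 0 else f"0{num}")
        out.append(chr((num - ord(ch)) % 256))
    return "".join(out)

def decode(ciphertext: str) -> str:
    """Recover each plaintext byte as numberfield - cipherbyte (mod 256)."""
    assert ciphertext.startswith(HEADER)
    body = ciphertext[len(HEADER):]
    plain = []
    for i in range(0, len(body), 3):          # 3-char tokens
        sign, digit, cipherbyte = body[i], body[i + 1], body[i + 2]
        num = int(digit) * (-1 if sign == "-" else 1)
        plain.append(chr((num - ord(cipherbyte)) % 256))
    return "".join(plain)

random.seed(1)
msg = "attack at dawn"
nums = [random.randint(-9, 9) for _ in msg]   # the small "number field"s
assert decode(encode(msg, nums)) == msg       # round-trips
```

Since the two operations are inverses mod 256, decoding with subtraction exactly undoes the hypothesized encoding; against the real ciphertext you would simply try both candidate formulas as the post suggests.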
  9. What do you even mean by that?
  10. This is infeasible for a variety of reasons. Decompilation is pretty worthless for what you want to do; a lot of information is lost in the compilation process. While a decompiler may be able to generate valid C given a binary, it won't be pretty: don't expect meaningful variable names, function names, or aggregate data types. Even if you manage to make sense of whatever is in the ROM (which will be a decompiled mess, I presume), the game is going to be pretty tightly tied to the architecture. If you don't want to emulate the chips that handle graphics, input, sound, etc., you're going to have to rewrite all of that from scratch. Oh, and the code might be written to take advantage of the fixed clock speed of the processor. Good luck getting the timing right. How slow are N64 emulators on modern hardware anyway? What are the bottlenecks? It would make more sense to improve an emulator than to try to port even a single game. You might be interested in texture packs, though.
  11. What a charming tale of foxes, gloves, and ruin.
  12. "Low level"? Are you suggesting Erlang and SML are more suitable for systems programming than C? I'm skeptical, to say the least.
  13. I suggested a freshman book because such books typically lack rigor and focus on computation, which makes the subject *easier* than it should be. Since when were integrals not a part of calculus?
  14. Integration is not as simple as differentiation, chiefly due to the lack of a chain-rule analogue. Even worse, many functions can't be integrated in terms of elementary functions at all. I'd suggest some freshman calculus book if you're just starting out, but it really depends how in-depth you want to get.
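A concrete instance of the "no elementary antiderivative" point: exp(-x^2) cannot be integrated in elementary functions; its definite integral is only expressible through the special function erf. A quick numeric check using composite Simpson's rule (standard library only):

```python
import math

def simpson(f, a: float, b: float, n: int = 100) -> float:
    """Composite Simpson's rule over n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# integral of exp(-x^2) from 0 to 1, computed numerically
numeric = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)

# The closed form needs the non-elementary erf:
# integral_0^1 exp(-x^2) dx = (sqrt(pi)/2) * erf(1)
closed_form = math.sqrt(math.pi) / 2 * math.erf(1.0)

print(numeric, closed_form)  # both ~0.7468
```

Differentiating exp(-x^2) is a one-line chain-rule application; going the other way forces you out of the elementary functions entirely, which is exactly the asymmetry the post describes.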
  15. I agree the BBC article is fluff, but jumping to the conclusion "oh, this is bullshit!" is absurd. There were hardly any details. Just because it has no apparent use does not mean it's invalid mathematics (I guess that is what you mean by a flaw in his reasoning?). His main argument is that it's useful for computer arithmetic, since you won't encounter any NaNs. So there's your "use". Unfortunately it's rife with edge cases, so don't expect to see it adopted. Not to mention that the word "nullity" is already used for the dimension of a null space, so the name is a bit confusing. He didn't just give 0/0 a name; he provided a superset of axioms for the reals. Whether these axioms are useful in a pure mathematical sense is a value judgement. You keep using the word "real". Could you please provide a rigorous mathematical definition? I'd like to hear it!