Everything posted by ChZ

  1. There are probably a lot of programs out there that rely on there being a user named 'root' (I know Ubuntu has some scripts that make that assumption, since I just tried it on a VM). If you don't care about breaking things for what you perceive as increased security, just modify your /etc/passwd, /etc/passwd-, and /etc/group files so that 'root' is renamed to whatever you like.
I'll note here that the security gains aren't worth whatever system stability losses you'll incur. Consider what happens when you keep the 'root' user and someone tries to brute force the password. Suppose you use a password that isn't dictionary-based (passwd even yells at you if you use a weak password these days) and that's sufficiently long, say 8 characters, chosen from the space of upper/lower case letters, numbers, and symbols. You wind up with around (26*2+20)^8 potential passwords (this is conservative; there are slightly more typable characters). Now, say your system can handle a login attempt every microsecond (again conservative; I'm unaware of any system this fast). To exhaust the entire space you'd need ((26*2+20)^8/(10^6))/60/60/24/365 = ~23 years. With a uniform distribution of passwords, an attacker has a 50% chance of getting into the server after 11.5 years (and that's assuming this person can make A MILLION LOGIN REQUESTS A SECOND). If we're talking remote intrusion from the internet, you're looking at three orders of magnitude more time (i.e. 1 request every millisecond => 23,000 years).
So it's up to you: experience new and exciting failures because you decided to violate some convention, or accept that you have a system so insecure it can be brute forced remotely in just a few thousand years. (Alternatively, just disable root logins from ssh.)
  2. Are you saying that ssh isn't secure because it doesn't rate limit failed logins? It's not clear what you're claiming. It's definitely possible to configure a failed-authentication delay in ssh. Anyway, if your password is relatively strong, network latency alone should make any remote brute force attack infeasible even without failure delays.
Huh? If the filesystem isn't cleanly unmounted, you just have to run fsck on it and everything is good to go. This ensures your filesystem is in a consistent state before you start writing to it. That is a Good Thing. If your boot filesystem isn't cleanly unmounted, the kernel will mount it read-only and the init scripts should then fsck it and remount it read-write.
What do you consider to be "unix"?
  3. Yes, besides the socket, cache size, clock, power consumption, speedstep, sse3, NX pages, hyperthreading, entire microarchitecture, and so on, it is basically the same thing as a PIII.
  4. lols i have a lot of "adultery" such as 3.5g if solaris only be like to boot really because i cant cheat you wanted to run on... just like $8 just order a good i can see wiki. stating that into something you wanted to board of like 1kb files and yeah id tend to display images and yeah it is breaking federal law and they were to not like micheal jacksoon doesnt do anything deeper than a pdf at 20mb say "i want to find operating system. and use the factory pre installed all of the same speed? people can tell its only like 5ft piece of the windows vista. then it will probably just have the annoying click it with the company was some missing files that thinks he's a opteron is running electrolsys in a p3 mobile but it was that probably fix the phone because it couldnt find the one like a locomotive one so does so he "came" cool im asking but seen it only 12volts.. you may work. :edit some 2 spark plugs, magnet and apparently was 1999 yet not sure about a few 4kv at that linux complain to pull out how do native apps when religion actually see if you could throw a post
  5. Looks like it does an exec("/bin/sh")
  6. What do you mean by padding the original input? From an information standpoint it's exactly the same. I was just saying that you may want a hash function that reveals information about the input. Perfect hashes fall in this category too. That's not a hash function because the size of your domain is the same as your range (I'm assuming F:R->R here). Collision resistance only makes sense when you have collisions. I'm fairly certain that a hash function that is collision resistant is also strong against a hash inversion attack like you describe (i.e. you are given H(x) and want to find some x' (which is not necessarily distinct from x) that satisfies H(x) = H(x')). The proof would involve assuming you had an algorithm for finding collisions that did better than what you could achieve via the birthday paradox, show there is a bias, then base your inversion around that bias. I said something similar to this at the end of my last post. On the other hand, resistance against hash inversion definitely does not imply collision resistance. The type of function where you can invert the input that has the properties you're interested in would likely be a pseudo-random permutation. In fact, there's a common construction for cryptographic hashes that uses PRPs.
  7. The conditions "accepts input" and "deterministic" are true of any function in the mathematical sense (some people choose to abuse the definition of a function so that it can have nondeterminism, but let's ignore that...). Of course I was talking about what makes a desirable hash function; I said nothing about a definition. Saying hash functions are meant to make data unrecoverable is like saying a dozen tequila shots is meant to make you puke all over the mayor. Sure, that's something it can do, but it's only a useful side-effect of the initial design (i.e. to get you blackout drunk).
In a strict sense you can define a hash function as any complete mapping H:X->Y where |X| > |Y|. I assumed you were using that definition as well, but it appears you think a hash function is equivalent to a noninvertible mapping, possibly with additional constraints on the set of preimages. While |X| > |Y| is a sufficient condition for a function f:X->Y to be noninvertible, it's certainly not necessary.
You may want a hash function that leaks information about the input. Suppose I wanted a hashtable with two buckets holding elements keyed by some domain {0, 1, ..., n}, which I would map to the range {0,1}, i.e. the bucket indices. Additionally, suppose I was storing some input stream s(i) (i being the ith key in the stream) where Pr[s(i) = 0] = sum(Pr[s(i) = k], k, 1, n), i.e. the key 0 is as likely as all the other keys combined. If I want an even distribution of elements in each bucket, I'd hash H(x) = 0 if x = 0, and H(x) = 1 otherwise, which results in both buckets containing approximately the same number of elements. Note, however, that if the domain is small you can deterministically recover the input with non-negligible probability. Oops! That was a bit convoluted, but hopefully you can see the utility.
From a cryptographic point of view (managing to vaguely stay on topic), unrecoverability is a consequence of collision resistance.
While you can work this out for storing passwords on a server, the threat model tends to get overly complicated; instead, consider a simplified message authentication scenario. Say we had some file with important instructions and its associated hash. The file is distributed through some tamperable channel while the hash is distributed through some public but untamperable channel. To ensure the file is what we expected, the hash of the file is computed and compared against the given hash. If there's a mismatch, it's not the actual file. If the hash is not collision resistant, an attacker could construct a fake file with new important instructions that has the same hash and pass it off as the real thing. How does this relate back to unrecoverability? Well, suppose a message 'm' can be recovered with non-negligible probability from its hash H(m). This means H(m) reveals some information about 'm' (e.g. maybe if the LSB of m is 0, then the LSB of H(m) is 0). We can use this to our advantage by constructing a message with similar attributes. The lower the entropy, the faster a forged message with a hash collision can be constructed.
  8. That's not a fair description of hashing. I could define the worthless hash function H : {0,1}^* -> {0,1}^n such that H(x) = 0 for all x in {0,1}^*, which would fit your criteria (it's impossible to get the input data back, since everything just maps to 0!). The idea behind hashing is that you take some value 'x' and try to give it a unique value 'y' that takes up less space. By a simple counting argument this is impossible, but the goal is to do as well as we can. To wit, a good hash function H:{0,1}^*->{0,1}^n has a uniform distribution of mappings from the domain to the range. From a practical standpoint, you'd want that for all a, b in {0,1}^n, |Pr[H(x) = a] - Pr[H(x) = b]| is negligible. Additionally, a good cryptographic hash (i.e. a collision resistant hash function) ensures that finding a collision H(x) = H(y), where x != y, is computationally infeasible. There are a couple of "types" of collisions based on how much information you are given, but the most typical is just finding any two distinct x and y that satisfy H(x) = H(y). If you were to implement a hash function for a hash table, you would want it to uniformly distribute its output based on the distribution of your input. A cryptographic hash would typically satisfy this constraint, but it would be overkill.
As for the ciphertext: first, assume the plaintext is just ASCII-encoded English. Next, note any patterns between the ciphertexts. In this case, there's an obvious header "[r10cs]" and the pattern [-0][0..9][somebyte] consistently repeats (I'll refer to [-0][0..9] as the number field and [somebyte] as the cipher byte field). The cipher byte field shows some substitution going on, since there are a few nonalphanumeric/punctuation characters. Assuming the number field isn't just padding, it's likely being used to compute the substitution character.
Only a few cipher bytes fall outside what we'd expect for English, and the numbers in the number field are small, so the substitution is probably an addition or a subtraction of a plaintext byte with the number field. Try what's easiest first (e.g. numberfield + cipherbyte and numberfield - cipherbyte) and you'll find the latter gives you the plaintext.
  9. What do you even mean by that?
  10. This is infeasible for a variety of reasons. Decompilation is pretty worthless for what you want to do; a lot of information is lost in the compilation process. While a decompiler may be able to generate valid C given a binary, it won't be pretty. Don't expect meaningful variable names, function names, or aggregate data types. Even if you manage to make sense out of whatever is in the ROM (which will be a decompiled mess, I presume), the game is going to be pretty tightly tied to the architecture. If you don't want to emulate the chips that handle graphics, input, sound, etc., you're going to have to rewrite all of that from scratch. Oh, and the code might be written to take advantage of the fixed clock speed of the processor. Good luck getting the timing right. How slow are N64 emulators on modern hardware anyway? What are the bottlenecks? It would make more sense to improve an emulator than to try to port even a single game. You might be interested in texture packs, though.
  11. What a charming tale of foxes, gloves, and ruin.
  12. "Low level"? Are you suggesting Erlang and SML are more suitable for systems programming than C? I'm skeptical, to say the least.
  13. I suggested a freshman book because they typically lack rigor and focus on computations, which makes it *easier* than it should be. Since when were integrals not a part of calculus?
  14. Integration is not as simple as differentiation, largely due to the lack of a chain-rule analogue. Even worse, many functions don't have antiderivatives expressible in terms of elementary functions at all. I'd suggest some freshman calculus book if you're just starting out, but it really depends how in-depth you want to get.
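Concretely: the chain rule gives every composition a derivative, but there is no inverse rule for integrals, and some innocuous-looking integrands have no elementary antiderivative at all:

```latex
\frac{d}{dx}\,f(g(x)) = f'(g(x))\,g'(x)
\qquad \text{yet} \qquad
\int e^{-x^2}\,dx \ \text{has no elementary closed form.}
```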
  15. I agree the BBC article is fluff, but jumping to the conclusion "oh this is bullshit!" is absurd. There were hardly any details. Just because it has no apparent use does not mean it's invalid mathematics (I guess that's what you mean by a flaw in his reasoning?). His main argument is that it's useful for computer arithmetic, since you won't encounter any NaNs. So there's your "use". Unfortunately it's rife with edge cases, so don't expect to see it adopted. Not to mention the word "nullity" is already used to indicate the dimension of a null space, so the name is a bit confusing. He didn't just give 0/0 a name; he has provided a superset of axioms for the reals. Whether these axioms are useful in a pure mathematical sense is a value judgement. You keep using the word "real". Could you please provide a rigorous mathematical definition? I'd like to hear it!
  16. And if n is 0? To hell with imaginary numbers! Have you even bothered to read his paper? Do you have any training in abstract mathematics? The guy's making division a closed operation over the reals (well, over the 'transreals' as he calls them), it's nothing really spectacular. All he's doing is defining division by reciprocals instead of multiplicative inverses and comes out with a consistent system given some axioms. Is it useful? For most things, not really. Don't worry, 0/0 still is an indeterminate form.
  17. Ok, so you have very little experience and want to write an operating system? Sigh. Prepare to fail harder than you have ever failed before. But you're smart, yes? It's a lot more in-depth than application programming. You'll need to know the following:
- modern operating system design
- data structures
- C inside and out
- assembly
- system programming
- driver programming
- application binary interfaces
- architectures
- managing large code bases
I'm still probably missing something important or possibly overemphasizing. Design is very, very important. You've thought about design more than "I want it to run on X architecture," right? Don't listen to kern_alert's advice. Picking one architecture and sticking to it will ensure that your code will be gloriously unportable. You'll need to abstract out system initialization, memory management, and other system details if you want to avoid an utter mess. You won't need as much assembly as everyone lets on (I have less than 800 lines for my amd64 port, and it can still be cut back), but you will still need a firm understanding of it. I'd be glad to offer help if you're actually serious. Shoot me an IM.
  18. Wasn't there some jpeg buffer overflow vulnerability a while back in IE that allowed for arbitrary code execution?
  19. xscreensaver
  20. OK, discussion about the viability of actually performing a DoS aside, here's the script (pass the target URL as the first argument; wget needs one):

#!/bin/bash
while true; do
    wget -O /dev/null "$1" 2>/dev/null
done

It redirects all downloaded data to /dev/null, so you shouldn't have much (if any) disk activity associated with the process.
  21. Yeah, it's not as if there's some definition of INT_MAX in the headers provided by the compiler. By that line of reasoning I could argue that if a pointer is set to NULL and something dereferences it and the MMU causes the program to segfault, it's the MMU's fault and we have to adjust to that, not the fact that people shouldn't have been touching NULL in the first place. Then use a variable of fixed size; there's a reason we have int8_t, int16_t, int32_t, etc. Plus, if you concern yourself only with memory size, you're going to end up taking performance hits from alignment issues (although I'm not sure how much of an issue they are today).
Let's take x86-64, because that's the 64-bit arch I know best (even though it's just a kludge built on another series of kludges and is probably a very poor example). It's not as bloated as you may think. Only data pointers are fully 64-bit; ints are only 32 bits. Addresses within the program are 31 bits (using a medium memory model; gcc hasn't implemented the large model). Plus, the CISC instruction set keeps the size of the generated code to a manageable amount. *AND* there are more registers available, which means there can be less stack activity, and therefore faster code. Plus there's another layer of translation added to the kernel if there's 32-bit support. I'm not saying a 32-bit userland is inherently bad on a 64-bit platform, just that it's not necessarily better than a 64-bit userland. Off the top of my head, I can name a few applications that can benefit from a 64-bit userland: video editing, CAD, and databases.
With that said, if your code can't easily be made 64-bit clean, your code is braindead. I just got done fixing a large project of mine (about 30k lines) so that it would be 64-bit clean. The only thing I had to fix was code that cast pointers to 32-bit integers; those became intptr_t-sized integers. It was a trivial fix. Really. I'm using a 64-bit platform right now with a 64-bit userspace. It works fine for me.
  22. Care to explain that? I have no idea how you're differentiating 'support' from 'run'. If it supports it then it can run it.
  23. The e_machine field in the ELF header will say i386 regardless of the target processor (as long as it's x86). There's no machine type defined for each different x86 processor. Even though the machine type says 80386, the code should be targeted correctly regardless. I wouldn't worry about it.
  24. It looks like you were overwriting an area of nonvolatile memory, which would wipe out the BIOS settings. I'm not sure where the battery-backed memory is mapped, but the 0-0x20000 range seems like it would contain it somewhere. It would probably do the same on modern PCs.
  25. I'd like to note that a couple of things the article bashes ext2 for aren't correct. ext2 supports large files by using some fields that would otherwise go unused (the ACL fields, I believe). The article implies that the inode size is fixed, whereas it actually isn't. And although directories in ext2 are stored in a linear format, there's support for a btree-type listing with hashing, which is an order of magnitude faster than the old format. A lot of those things were true before the dynamic revision came out, but not anymore. Personally, I use jfs since reiserfs decided it would be cool to trash all of my data.