Everything posted by lattera

  1. Due to some unfortunate incidents with the administrators of CryptIRC, BinRev has moved its official IRC server. You will need to re-register your handles and channels. Work is still being done on the IRC server, including stats and kwotes. Please be patient as we work to resolve any bugs and add necessary features. The new server is located at irc.binrev.net, and you can join the same channel as always (#binrev). Please start making the transition off CryptIRC and onto irc.binrev.net. Thanks for your patience and all you do to further the community.
  2. Distributing live malware could be viewed as unethical and illegal, and it is not permitted on BinRev.
  3. I'll be getting in on the 3rd and leaving on the 8th. I'm speaking on the 5th at 3pm about runtime process infection; the talk is called Runtime Process Insemination. On Twitter, I'm @lattera. My Google+ profile can be found at http://0xfeedface.org/+
  4. IRC's back up.

  5. IRC is currently down. We're looking into it.

    1. StankDawg

      IRC is back up @ irc.binrev.net! Just a little hardware problem, nothing major.

  6. What is it that you're trying to find? I'm willing to bet no one is going to give up the password for their MSDN account.

       "I need a Admin MSDN account. I will buy if need be. Thanks."

     Thread closed and account banned due to illegal activity and requests thereof.
  7. We don't do that here.
  8. would like to congrat the Stankiest Dawg ever.

  9. Virtualization is great for developers. It allows us to test different scenarios, keep organized, and maintain a safe and sane environment. I only develop inside VMs. I hate cluttering my main OS install with non-production-ready code, especially if I'm dealing with touchy things like the kernel. Virtualization in the enterprise allows for server consolidation, cloud hosting, failsafes, etc. I use virtualization heavily at work: multiple computers, each running multiple VMs, form my vuln-dev lab. If virtualization weren't an option, my employer would have to provide me with over ten servers. However, virtualization isn't the end-all-be-all solution. Sometimes you need to test your project on real hardware or in real-life situations. As with all decisions, evaluate your needs and see whether virtualization is a good option.
  10. I tend to use the OS that fits the job best. On my laptop, I run OS X. On my workstation at work, I use Solaris 11 Express. In my vuln-dev lab, I use a mixture of Linux, Windows, and Solaris. I'm biased towards Solaris because of ZFS, DTrace, Xen, and Crossbow.
  11. Is FAT32 your only option? Seems like NTFS is the way to go these days. My biggest gripe about FAT32 is that you can't store files bigger than 4GB.
  12. Hacking is very much alive. Take a look at full-disclosure. Take a look at the industry. I would be considered a whitehat hacker--I get paid to hack (legally, of course). I think you just need to know the scene. The scene is much broader these days, encompassing everyone from script kiddies who somehow get their hands on 0days to very talented individuals. You'll find varying degrees of expertise and maturity in all hacking communities. It's definitely hard to pinpoint a definition of hacking. Is it merely finding vulnerabilities and writing exploits? Is it using developed exploits against others for profit or fame? Is it limited to the digital world? I'll leave the definition up to you; but suffice it to say that whatever hacking is, it isn't dead.
  13. it was. I just brought it back up. Sorry for the extended downtime... I was on my honeymoon.

  14. just bought a T-Mobile G2.

  15. We don't permit illegal activities in this community.
  16. You've mistaken us for the wrong type of people. We don't do that here.
  17. is wondering when the TSA became a terrorist-fighting organization.

  18. I'm not in the phreaking scene at all, but congrats! Good job. It's nice to have a BinRev peep in such a great hacking/phreaking eZine.
  19. Great job on the tutorial. I'd love to see the same thing, but with pfSense.
  20. My employer's flagship product generates thousands of PDFs. We have three different copies of our product: a development copy for Quality Assurance testing of recently-written code, a staging copy for testing pushes of new versions of our product, and a production copy that our users utilize. Each copy requires 42GB worth of PDFs, and new PDFs are generated every day. To maintain a sane development environment, we pull fresh copies of the PDFs every month from production: first from production to staging, then from production to development. The PDFs are stored on two separate servers that both use NTFS. We use Microsoft SyncToy to sync the PDFs across environments. The process can take several hours for each environment, and the network load is high because the PDFs are stored on multiple servers. I recently had an idea: what if we store the PDFs on our ZFS NAS? We could use ZFS snapshotting and rsync to refresh the environments, and we could do that on a regular basis via a cron job (see the sketch after this post). ZFS snapshots take a few seconds, and rsync is a really efficient tool. No network traffic would be involved, since the synchronization takes place entirely on the same server. Here are the commands we would run:

      DATE=`date '+%F_%T'`
      zfs snapshot tank/site_data/prod/PDFs@$DATE
      rsync -a /tank/site_data/prod/PDFs/.zfs/snapshot/$DATE/ /tank/site_data/dev/PDFs/

      I really like this solution. Right now, we have to jump through a lot of hoops to sync up these PDFs. This will save us time, space, and internal bandwidth. This article was originally posted on my tech blog.
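     A minimal sketch of the cron half of that idea, assuming the two commands above are wrapped in a small script. The script path (/root/sync_pdfs.sh), the 2:00 AM schedule, and the log file are all hypothetical, not details from the original post:

      #!/usr/bin/bash
      # Hypothetical wrapper script (/root/sync_pdfs.sh): the same two
      # commands as in the post above, skipping the rsync if the snapshot fails.
      DATE=`date '+%F_%T'`
      zfs snapshot tank/site_data/prod/PDFs@$DATE || exit 1
      rsync -a /tank/site_data/prod/PDFs/.zfs/snapshot/$DATE/ /tank/site_data/dev/PDFs/

      # Hypothetical root crontab entry: run the refresh at 2:00 AM daily.
      0 2 * * * /root/sync_pdfs.sh >> /var/log/sync_pdfs.log 2>&1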
  21. Here is the final shell script (usage examples follow below):

      #!/usr/bin/bash

      # Take a snapshot of environment $1, named $2.
      function snapshot {
          echo "[*] Snapshotting $1 @ $2"
          zfs snapshot tank/shares/cifs/CompanyData/$1@$2
          echo " [+] Snapshot of $1 @ $2 done"
      }

      # Sync environment $1 from snapshot $3 of environment $2.
      function sync {
          echo "[*] Syncing $1 with snapshot..."
          rsync -aA /tank/shares/cifs/CompanyData/$2/.zfs/snapshot/$3/ /tank/shares/cifs/CompanyData/$1/
          echo " [+] Syncing $1 done!"
      }

      if [ $# -eq 0 ]; then
          # No arguments: snapshot Prod and refresh both Dev and Alpha.
          DATE=`date '+%F_%T'`
          snapshot Prod $DATE
          sync Dev Prod $DATE
          sync Alpha Prod $DATE
      elif [ $# -eq 1 ]; then
          # One argument: refresh the named environment from Prod.
          DATE=`date '+%F_%T'`
          snapshot Prod $DATE
          sync $1 Prod $DATE
      elif [ $# -eq 2 ]; then
          # Two arguments: refresh $1 from a fresh snapshot of $2.
          DATE=`date '+%F_%T'`
          snapshot $2 $DATE
          sync $1 $2 $DATE
      fi
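     For reference, invoking the script would look something like this; the sync_pdfs.sh filename is an assumption, while the environment names come from the script itself:

      ./sync_pdfs.sh              # snapshot Prod, refresh Dev and Alpha
      ./sync_pdfs.sh Dev          # refresh only Dev from a fresh Prod snapshot
      ./sync_pdfs.sh Dev Alpha    # refresh Dev from a fresh snapshot of Alpha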
  22. I just looked at it with dtrace. I'll take a look at it with ltrace on a Linux system sometime this week (a rough sketch of that follows below); I need to spin up a Linux VM first.

      shawn@shawn-work:~$ uname -a
      SunOS shawn-work 5.11 oi_147 i86pc i386 i86pc Solaris
      shawn@shawn-work:~$ cat test.d
      pid$target:libc:srand:entry
      {
          trace(arg0);
      }
      shawn@shawn-work:~$ pfexec dtrace -s test.d -c ./test
      dtrace: script 'test.d' matched 1 probe
      16 60 58 83 23 49 19 96 72 70 49 92 5 47 60
      dtrace: pid 21456 has exited
      CPU     ID                    FUNCTION:NAME
        0  71978                      srand:entry       1289239423

      edit[0]: I know that showing dtrace doesn't help much in your case, but I thought I'd take the time to market it anyway.
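     As a minimal sketch of the promised ltrace pass on Linux: the ./test binary name mirrors the session above, and the output line in the comment is approximate, since ltrace's formatting varies by version:

      # Hypothetical Linux equivalent of the dtrace session above;
      # -e srand restricts tracing to srand() calls made by ./test.
      ltrace -e srand ./test
      # Expected output, roughly: srand(1289239423) = <void>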
  23. It really depends. For me, if the project is just some hobbyist tool meant to solve a little problem, I likely won't write documentation; I'll let the code document itself. However, if the project is meant to be more serious, I'd document it both inside the source through comments and through separate API documentation. If the code is obscure but meant to be reused within a few years, I'll likely just comment the code.
  24. What OS/distro and compiler are you using?