
Why should I use VM?


2 replies to this topic

#1 bardolph

    DDP Fan club member

  • Members
  • 50 posts
  • Gender:Male

Posted 01 January 2011 - 01:35 AM

I don't get it. What's the use of it? Why not just install an OS directly?
I tried it a couple of times, and it seemed like such a waste of resources.

#2 lattera

    Underground Shizzleness

  • Members
  • 511 posts
  • Gender:Male

Posted 01 January 2011 - 03:16 AM

Virtualization is great for developers. It allows us to test different scenarios, keep organized, and maintain a safe and sane environment. I only develop inside VMs. I hate cluttering my main OS install with non-production-ready code, especially if I'm dealing with touchy things like the kernel. Virtualization in the enterprise allows for server consolidation, cloud hosting, failsafes, etc. I use virtualization heavily at work: multiple computers, with multiple VMs on each, for a vuln-dev lab. Without virtualization, my employer would have to provide me with over ten servers.
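As a concrete sketch of that throwaway-testing workflow, QEMU can boot a guest in snapshot mode so nothing you break persists (assumes QEMU is installed; `disk.img` is a placeholder for your own guest image):

```shell
# -snapshot sends all guest disk writes to a temporary overlay that is
# discarded on exit, so the base image stays pristine between test runs.
# -m sets guest RAM in MB, -smp the number of virtual CPUs.
qemu-system-x86_64 -m 2048 -smp 2 -snapshot -hda disk.img
```

Crash the kernel inside the guest, power it off, boot again: the image is exactly as it was.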

However, virtualization isn't the end-all-be-all solution. Sometimes you need to test your project on real hardware or in real-life situations. As with all decisions, evaluate your needs and see if virtualization is a good option.

#3 army_of_one

    SUP3R 31337 P1MP

  • Members
  • 282 posts

Posted 30 January 2011 - 11:04 PM

Quoting bardolph:

    I don't get it. What's the use of it? Why not just install an OS directly?
    I tried it a couple of times, and it seemed like such a waste of resources.


lattera's post was spot on. Also, if you use a safety-critical microkernel and paravirtualization, virtualization can greatly improve the security of a system. For the best COTS example, look up Green Hills' INTEGRITY RTOS and Padded Cell virtualization. The Nizza Security Architecture and the Perseus Security Architecture (with the Turaya Security Kernel) are OSS examples. QubesOS tries to do the same thing, but its Trusted Computing Base (TCB) is way too big IMHO.

The idea is that you combine a secure microkernel, some small trusted services running on it, maybe the board support package for isolated drivers, and then run most of the apps in a deprivileged virtual machine. This is usually Linux paravirtualized in ring 1 or 3, or Windows using Intel VT. The non-critical stuff runs in the VM. Security-critical applications, like a firewall or password manager, run outside the untrusted VM, directly on the microkernel. The microkernel implements a policy whereby everything is isolated by default and the various processes can only communicate in very specific ways.

So a compromised Linux/Windows VM can't access your private key: it can only ask you to sign something. Most of these systems come with a trusted path, like the Nitpicker GUI, which has sole access to the graphics card and ensures that one partition can't spoof another. Only the app with focus gets the keyboard, so keylogging is largely defeated.

So if you were digitally signing an order, the untrusted VM couldn't intercept the password that decrypts your private key, and a secure viewer could show you exactly what you were signing, to be sure no MITM attack was happening. These systems are really awesome, and they're only made possible by virtualization and robust OSes like INTEGRITY, OKL4, Turaya, and LynxSecure. I attached an example.
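The "isolated by default, explicit channels only" policy described above can be sketched in a few lines. This is a toy model, not any real microkernel's API: `ToyKernel`, `Signer`, and the channel names are all made up for illustration. The point is that the VM partition holds a capability to *invoke* the signing service, but no channel exists through which it could read the key itself.

```python
class PolicyError(Exception):
    """Raised when a partition attempts a communication that was never granted."""
    pass

class ToyKernel:
    """Toy model of isolation-by-default IPC: every (sender, receiver, op)
    channel must be explicitly granted before it can be used."""
    def __init__(self):
        self.allowed = set()   # explicitly granted channels
        self.services = {}     # trusted services running outside the VM

    def register(self, name, service):
        self.services[name] = service

    def grant(self, sender, receiver, op):
        self.allowed.add((sender, receiver, op))

    def call(self, sender, receiver, op, payload):
        # Deny-by-default: anything not explicitly granted is rejected.
        if (sender, receiver, op) not in self.allowed:
            raise PolicyError(f"{sender} -> {receiver}.{op} denied")
        return getattr(self.services[receiver], op)(payload)

class Signer:
    """Trusted service: holds the key, exposes only a sign() operation."""
    def __init__(self, key):
        self._key = key        # never leaves this partition

    def sign(self, message):
        # Stand-in for a real signature scheme.
        return hash((self._key, message))

kernel = ToyKernel()
kernel.register("signer", Signer(key=0xDEADBEEF))
kernel.grant("linux_vm", "signer", "sign")   # the one permitted channel

sig = kernel.call("linux_vm", "signer", "sign", "purchase order #42")
# Any other request from the VM, e.g. trying to fetch the key, raises
# PolicyError before it ever reaches the service.
```

Even if the Linux VM is fully compromised, the worst it can do through the kernel is request signatures over attacker-chosen data; the key material itself stays unreachable.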

Note: These security assurances do NOT apply to Xen, VMware, Virtualbox, etc. Their TCB is very large and complex. Unlike the aforementioned systems, the developers don't have their primary focus on security. It's more about resource consolidation, testing, running legacy apps, etc. But virtualization with aerospace-grade microkernels is a good holdover until we get a truly secure platform, like the upcoming SecureCore from Princeton.

Attached Files