We are in the middle of a project to virtualize our primary file server. As you can imagine, this is a large project for us. We had a lot to consider and plan out before moving forward. All in all, it has been an eye-opening endeavor.
One of the biggest challenges was deciding how the new implementation would look. We knew we wanted to use a SAN for storage rather than the DAS we have now. After considering multiple solutions, we went with a Dell MD3000i. This iSCSI SAN is a pretty solid platform, though not perfect (what technology is?). Along with the MD3000i, we also got an MD1000 expansion enclosure. We are using a combination of SAS and SATA disks, depending on the intended use of the storage.
Our next decision was regarding the server itself. Did we want to use a physical box, or a virtual machine? We decided on a VM. I liked the idea of easy portability and hardware independence. We currently use Microsoft Virtual Server and will be moving to Hyper-V later this year. Our VMs will move to the new platform with no problems. And, because we are storing all of the VM files on a SAN, moving VMs is as easy as re-mapping LUNs to new physical boxes.
So, we had our decisions on storage and server. As I got into using the MD3000i, I found that things weren't going to be quite as smooth as I had hoped (ignorance really is bliss!). For one thing, LUNs larger than 2TB are not possible, which means VHDs larger than 2TB aren't really possible either (without some hacking). Also, with our platform there isn't a way for a VM to use disk storage directly; we have to use VHDs. But I am getting ahead of myself...
We spent considerable time working out how best to provide multiple terabytes of SAN storage to our VM file server. The items below detail where we started and where we ended up:
One physical server with 1TB of DAS. Most users have a personal folder mapped to a drive letter and a shared folder mapped to a drive letter. The security in the shared folder is not exactly coherent or structured. It is pretty much a free-for-all. Users can create folders anywhere, and most users can access more information than they need. ACLs are also a bit unwieldy, using (mostly) user accounts rather than groups. Finally, we were out of space; our 1TB was full.
One virtual server with a 2TB disk for shared storage. I created a 2TB LUN on the SAN and mapped it to the VM host server. I then created a 2TB VHD file on this disk and attached that VHD to the file server VM. So, the file server VM has two disks: a boot disk (on a separate LUN) and a data disk. When we need to add storage to the server, it is as simple as creating another LUN and another VHD. Users (now all users) still get a personal storage drive and a shared storage drive.
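Once we're on Hyper-V, the VHD half of that provisioning step could be scripted with the Hyper-V PowerShell module. This is just a sketch; the paths, drive letter, and VM name are placeholders, and the size reflects the VHD format's ceiling of just under 2TB:

```powershell
# Assumes the Hyper-V PowerShell module is available.
# E: is the SAN LUN mapped to the VM host; 'FS01' is a placeholder VM name.
# The VHD format tops out at 2040GB, just shy of a full 2TB.
New-VHD -Path 'E:\FileServer\Data1.vhd' -SizeBytes 2040GB -Dynamic

# Attach the new data disk to the file server VM.
Add-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI -Path 'E:\FileServer\Data1.vhd'
```

Adding storage later is the same two cmdlets against a new LUN and a new VHD path.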
The biggest difference (from the users' perspective) is the shared storage. My related entries here and here detail what we are doing. I have moved a portion of our shared data to the new server, and users have really liked the new solution. They most appreciate how 'clean' the shared drive is now (thank you, Access-Based Enumeration!). Also, departments such as HR and Accounting were a bit startled by the fact that access to their files wasn't quite as limited as they thought. They like the fact that now people can't even see, let alone access, their files. So, this new implementation has been a big win as far as users are concerned.
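For anyone curious how Access-Based Enumeration (ABE) gets turned on: on current Windows Server builds it is a per-share setting in the SmbShare module (older builds used the separate ABE add-on instead). The share name here is a placeholder:

```powershell
# Assumes Windows Server 2012 or later with the SmbShare module.
# 'Shared' is a placeholder share name.
Set-SmbShare -Name 'Shared' -FolderEnumerationMode AccessBased -Force

# Verify the setting took effect.
Get-SmbShare -Name 'Shared' | Select-Object Name, FolderEnumerationMode
```

With that set, users only see the folders their ACLs actually grant them, which is what keeps the shared drive looking 'clean'.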
On the management side of things, life will be much simpler as well. Each top-level folder will only have four ACL entries. We will be able to know which folders a user has access to simply by looking at group membership. New folder creation is a snap with the PowerShell script I wrote. Things are organized, clean, structured, and (most importantly) known. The system is now largely self-documenting.
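My actual folder-creation script isn't reproduced here, but the core of such a script might look like the sketch below. The root path, domain, and group naming convention are assumptions for illustration, as is the particular mix of four ACEs:

```powershell
# A minimal sketch of a top-level-folder script.
# 'D:\Shared' and the 'DOMAIN\<name>-RW / -RO' group names are assumptions.
param(
    [Parameter(Mandatory)] [string]$FolderName
)

$folder = New-Item -Path "D:\Shared\$FolderName" -ItemType Directory

$acl = Get-Acl $folder.FullName
# Stop inheriting from the share root so the folder carries exactly four ACEs.
$acl.SetAccessRuleProtection($true, $false)

$inherit = [System.Security.AccessControl.InheritanceFlags]'ContainerInherit, ObjectInherit'
$rules = @(
    @('BUILTIN\Administrators', 'FullControl'),
    @('NT AUTHORITY\SYSTEM',    'FullControl'),
    @("DOMAIN\$FolderName-RW",  'Modify'),
    @("DOMAIN\$FolderName-RO",  'ReadAndExecute')
)
foreach ($r in $rules) {
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule(
        $r[0], $r[1], $inherit, 'None', 'Allow')))
}
Set-Acl -Path $folder.FullName -AclObject $acl
```

Because access flows entirely through the two groups, answering "who can see this folder?" is just a group-membership lookup in Active Directory.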
We currently use Backup Exec for backups and have installed an agent on the VM, so we back things up just as you would with a physical box. But we are going to implement System Center Data Protection Manager later this year. This should give us a more robust disk-to-disk-to-tape (D2D2T) backup solution for our VM environment.