Your basic ITPro blog... What's going on at work, what I'm interested in.

Thursday, December 17, 2009

Odd TCP/IP Behavior in Hyper-V Virtual Machines

I have two Hyper-V hosts running a total of around 20 VMs. I recently came across some odd behavior that ended up in a call to Microsoft Support, as I couldn’t figure it out on my own and we didn’t want to spend any more time on it ourselves. Basically, I was seeing the following:

[Screenshot: ping output showing wildly inconsistent, even negative, ping times]

As you can see, ping times were all over the place. We found a solution in a combination of KB articles and blog posts.

 

RESOURCES:

KB938448

KB895980

http://fawzi.wordpress.com/2009/10/28/hyper-v-domain-controller-negative-ping-results/

http://joystickjunkie.blogspot.com/2009/04/erratic-or-negative-ping-times-on-hyper.html

http://blogs.msdn.com/tvoellm/archive/2009/02/18/why-does-my-avg-disk-write-sec-counter-keep-climbing.aspx

 

I have three VMs that are multi-proc, and all three of them were doing this. All three are running a flavor of Windows Server 2003. I don’t know if this happens with other OSs on multi-proc VMs… I am guessing not. With the /usepmtimer switch added to the boot.ini file, all three are now working as expected. I hope the Hyper-V team is working on a solution to this so that boot.ini manipulation is not required in the future.
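For reference, the fix is just appending the switch to the OS line in boot.ini. The ARC path below is an example from a typical single-disk install; /usepmtimer is the only addition:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer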

Thursday, December 10, 2009

List of Accounts in Local Administrators Group

Not all of this code is original. Thank you to the many, many people in the PowerShell community who freely share their code, expertise, and talent with the rest of us. In that spirit, here’s my script for reporting accounts in the local Administrators group on domain workstations. I hope it helps others.

NOTE: This script requires the Quest AD Cmdlets
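If the Quest snap-in is not already loaded in your profile, you will need this line first (snap-in name per Quest’s documentation):

Add-PSSnapin Quest.ActiveRoles.ADManagement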

------------------------------------------------------------------------------

$ErrorActionPreference = "SilentlyContinue"

# Create a visible Excel instance to build the report in
$a = New-Object -comobject Excel.Application
$a.visible = $True

$b = $a.Workbooks.Add()

# Worksheet 3: machines that did not respond to ping
$c = $b.Worksheets.Item(3)
$c.Name = "Un-Pingable Machines"
$c.Cells.Item(1,1) = "Machine Name"
$c.Cells.Item(1,2) = "Logon Account"
$c.Cells.Item(1,3) = "Report Time Stamp"
$d = $c.UsedRange
$d.Interior.ColorIndex = 19
$d.Font.ColorIndex = 11
$d.Font.Bold = $True

# Worksheet 2: machines with only approved admin accounts
$c = $b.Worksheets.Item(2)
$c.Name = "Good Machines"
$c.Cells.Item(1,1) = "Machine Name"
$c.Cells.Item(1,2) = "Logon Account"
$c.Cells.Item(1,3) = "Report Time Stamp"
$d = $c.UsedRange
$d.Interior.ColorIndex = 19
$d.Font.ColorIndex = 11
$d.Font.Bold = $True

# Worksheet 1: machines with accounts not on the approved list
$c = $b.Worksheets.Item(1)
$c.Name = "Violators"
$c.Cells.Item(1,1) = "Machine Name"
$c.Cells.Item(1,2) = "Logon Account"
$c.Cells.Item(1,3) = "Report Time Stamp"
$d = $c.UsedRange
$d.Interior.ColorIndex = 19
$d.Font.ColorIndex = 11
$d.Font.Bold = $True

$worksheetOneRow = 1
$worksheetTwoRow = 1
$worksheetThreeRow = 1

$filter = "Administrator",
    "Domain Admins",
    "Enterprise Admins",
    "crmadmin",
    "EXService",
    "RTCDomainServerAdmins",
    "SymBEServices",
    "Backup",
    "BackupExec"

# All non-server computer accounts in the domain (Quest AD cmdlet)
$computers = Get-QADComputer | Where-Object {$_.OSName -notmatch "server"} | %{$_.Name}

$group = "Administrators"

foreach ($computer in $computers)
{
    # Ping first so unreachable machines land on the 'Un-Pingable' sheet
    $ping = new-object System.Net.NetworkInformation.Ping
    
    $Reply = $ping.send($computer)
    
    if($Reply.status -eq "success")
    {
        $users = $false
        $needHeader = $true
        
        # Bind to the machine's local Administrators group via the WinNT ADSI provider
        $g = [ADSI]("WinNT://$computer/$group,group")
        $userList = $g.psbase.invoke("Members")
        foreach ($user in $userList)
        {
            $entry = $user.GetType().InvokeMember("AdsPath","GetProperty",$null,$user,$null)
            $match = $false
            foreach ($i in $filter)
            {
                if ($entry -match $i)
                {
                    $match = $true
                }
            }
            if ($match -eq $false)
            {
                if ($needHeader)
                {
                    $worksheetOneRow = $worksheetOneRow + 1
                    $c = $b.Worksheets.Item(1)
                    $c.Cells.Item($worksheetOneRow,1) = $computer.ToUpper()
                    $c.Cells.Item($worksheetOneRow,3) = Get-Date
                    $needHeader = $false
                }
                $c.Cells.Item($worksheetOneRow,2) = $entry
                $worksheetOneRow = $worksheetOneRow + 1
                $users = $true
            }
        }
        
        if (-not $users)
        {
            $worksheetTwoRow = $worksheetTwoRow + 1
            $c = $b.Worksheets.Item(2)
            $c.Cells.Item($worksheetTwoRow,1) = $computer.ToUpper()
            $c.Cells.Item($worksheetTwoRow,3) = Get-Date
            $c.Cells.Item($worksheetTwoRow,2).Interior.ColorIndex = 4
            $c.Cells.Item($worksheetTwoRow,2) = "No Invalid Users"
        }
        
        $users = $false
        $g = ""
        $userList = ""
        $Reply = ""
    }
    else
    {
        $worksheetThreeRow = $worksheetThreeRow + 1
        $c = $b.Worksheets.Item(3)
        $c.Cells.Item($worksheetThreeRow,1) = $computer.ToUpper()
        $c.Cells.Item($worksheetThreeRow,3) = Get-Date        
        $c.Cells.Item($worksheetThreeRow,2).Interior.ColorIndex = 3
        $c.Cells.Item($worksheetThreeRow,2) = "Not Pingable"
    }
}

$c = $b.Worksheets.Item(1)
$d = $c.UsedRange
$d.EntireColumn.AutoFit()
$c = $b.Worksheets.Item(2)
$d = $c.UsedRange
$d.EntireColumn.AutoFit()
$c = $b.Worksheets.Item(3)
$d = $c.UsedRange
$d.EntireColumn.AutoFit()
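One thing the script doesn’t do is save the workbook. If you want that, something like this at the end should do it (the path is hypothetical; Excel will use its default format):

# Save the finished report
$b.SaveAs("c:\scripts\LocalAdminReport.xls")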

Wednesday, December 2, 2009

Oddity with Hyper-V and Virtual Machine Manager (VMM)

Every once in a while, one of my Hyper-V hosts will show up in VMM as needing attention. Specifically, the status will show “Needs Attention”, rather than OK. Attempting to refresh the host gives me an “Error (2912)” and/or an “Error (2927)”. In the past, I would attempt to fix this by restarting the WS-Management (WinRM) service. This would almost always result in the service hanging, stuck on ‘Stopping’. From there, my only solution has been a host reboot. Not exactly what I would like. Well, today, I found a solution that did not involve me shutting down ten VMs and rebooting my Hyper-V host box.

I got to the same point as in the past. But, while researching for a better solution, I ran across this blog post about killing a service hung on ‘stopping’.

After reading through it, I found the PID of my service and ran the ‘taskkill /PID xxxx /F’ command, using the PID of my WinRM service. (UPDATE: To get the PID, run ‘sc queryex WinRM’) It looked like it worked, because my RDP connection to the server instantly went dead. But, in a few seconds I got my RDP session back (not sure what happened there…)

I was then able to start WinRM and refresh my host in VMM.
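For reference, the full sequence looks something like this (the PID shown is just an example — use whatever ‘sc queryex’ reports):

sc queryex WinRM
taskkill /PID 2716 /F
net start WinRM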

Not exactly elegant, but I didn’t have to reboot my VM host… and that’s something!

Sunday, November 22, 2009

Sprint and Palm – Don’t Shoot Yourself in the Foot!

I want to start by saying that I love my Palm Pre! And, I really like Sprint as a carrier. I always have great coverage everywhere I travel. The Pre is a really nice phone. Palm’s WebOS is great and will only get better (it is only at version 1.3.1). Give it a few major versions to really grow.

That being said, Sprint and Palm are facing a major uphill battle. First, Sprint is a little dog in a big fight, when compared to Verizon, AT&T, etc. Whenever I see smartphone coverage, Sprint is never referred to in terms of market leadership. Likewise, Palm has been slipping in prominence for years and is, when you look at the numbers, just a bit-player in all of this. Android 2.0 and the Droid phone have just made a crowded field even more crowded.

So, how are Sprint and Palm supposed to compete against Verizon, AT&T, Android, iPhone, WinMo, etc. etc. etc.?!

Fortunately, I think there is an answer, and it is staring them right in the face… if only they will be brave enough to see it and act (I know they see it, but will they act?!).

Openness… the answer is openness!

Sprint and Palm need to embrace the rich ‘hacker’ community that is developing around WebOS. There are some awesome things happening that are really extending the functionality of this platform. Homebrew apps outnumber ‘official’ App Catalog apps. There are over 100 patches for WebOS available that do all sorts of cool things! People are building themes for WebOS that allow users to personalize their phones in great ways. In short, the users themselves are passionate about this platform and are doing some amazing work at growing and extending it.

In short, people are passionate about, and hungry for, this platform! I am not the only one who loves my Palm Pre!

I believe that Palm’s WebOS (and Sprint) will be a success to the degree that Palm and Sprint embrace this environment. They need to create an environment that promotes and encourages this sort of development. Remove restrictions, publish resources for developers, and DON’T HINDER APPS THAT GENERATE THE MOST EXCITEMENT!

I am specifically talking about tethering!!

It seems like ‘tethering’ is a bad word in the smartphone business. Everyone wants to tell you how great their phone is, how many apps are available, how great the network coverage is…. they tout their ‘unlimited data plans’… Then tell you about all the limitations! Unlimited data should mean that I should have no limitations on my data use! That’s a service I would pay for.

Tethering is that ‘killer app’ that Sprint and Palm need. It appeals to casual users, geeks, and working professionals. They should put no limits on tethering and should be promoting that they have truly unlimited data, including tethering. Work with your app developers. Incorporate their best ideas into the OS. Give them the freedom to develop apps that your users want (and that other platforms don’t have). If you want to grow, DO SOMETHING DIFFERENT THAN THE OTHER GUYS!

I am very happy being a Sprint/Pre owner. I only hope that, when my contract comes up, I will still be. But, that is not in my hands… that is up to Sprint and Palm.

Thursday, November 12, 2009

OpenDNS – A Great Tool at a Great Price

So, this week I have sold out to OpenDNS! I have known about them for some time, but had never really dug into their services. But, with the recent release of their premium services (they still have a free version, which I HIGHLY recommend!), their buzz has gone way up.

This week, I decided to create an account and put OpenDNS on my network at home. It took all of five minutes and the system works great. Account creation took just a minute and configuring my home router (an old Linksys) took just another minute. With that done, and filtering set up and stats turned on, I was ready to go! Category-based site filtering began working immediately. But, that is not all this service offers. I am still learning about the other features.
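If you want to try it, the router side is just a DNS change: point your router’s DNS servers at the OpenDNS resolvers, which (as of this writing) are:

208.67.222.222
208.67.220.220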

What prompted this, you may ask? Several factors converged on me this week to urge action.

First, I read a few items, like this one and this one. It just became time to really consider adding some protection to my network for my family. Second, my son, despite my protestations, continues to grow up! As is more normal than it probably should be, kids are using computers more and more, and at an earlier age. The last thing I need is for my 6-year-old to stumble upon material he doesn’t need to be seeing.

This topic of discussion continued at work. We had been looking at Google Web Security services to add to our Postini e-mail management services already in place. Other services in this arena include tools like Websense. However, the more I read and researched, the more I kept gravitating to OpenDNS. First of all, the entry price-point can’t be beat (FREE!!). And, as you move into their newly-offered premium services, they are still extremely price-competitive.

I highly recommend that you look into OpenDNS and their offerings. Check them out for both your home and enterprise! I think you will be glad you did!

Monday, August 31, 2009

Backup-To-Disk Problems with BackupExec 12.5 to a Virtual Disk on a MD3000i

…A long title for a weeks-long problem!

Well, it has been a long time since I have updated this blog. But, that doesn’t mean nothing has been going on! :-)

For the past couple of weeks, I have been troubleshooting a problem with my backups. I use BackupExec, so that shouldn’t really surprise anyone! But, in this case, the problem (as best as I can tell) turned out to lie elsewhere.

Here’s the skinny…

I do all of my backups to tape, except for my Exchange backups. They go to disk. I have a 3TB disk on my MD3000i that I use for this. That way, I can make the most efficient use of BackupExec’s GRT technology. It was working fine for a while until (as is often the case in Windows environments) it just stopped working.

My backup-to-disk jobs started failing with the error code: E00084AF

Symantec’s KB had a number of articles that spoke to the issue, but nothing seemed to work. I spent about a week on my own trying to solve the problem, running updates, tweaking the registry, deleting/recreating jobs… Nothing worked, so it was then time to call Symantec Tech Support.

Now, like most people, I DO NOT like calling tech support, especially for large companies. This has nothing to do with my ego and everything to do with the fact that, in most cases, the first-level support is likely a guy just like me… someone who kinda-knows the product, is sitting in front of a computer either reading from a ‘tech-support script’ or just searching their own KB as you describe your problem to them. I know they are trying to be helpful, but you end up spending most of your time re-hashing everything you have already tried! </rant>

I will say this, however… the Symantec guys were willing to ‘spend the time’ with me on this. I never felt rushed by them or brushed aside. I appreciated that.

Anyway, none of this troubleshooting helped and we all went into the weekend scratching our heads, wondering what we were going to try/look at next. Then, over the weekend, I had an idea…

As a Windows guy I have learned that, sometimes, you just need to start over. For example, if a distribution list in Outlook isn't working right, you may have to just delete it and re-create it (or add then remove someone). I have come across similar situations many, many times... Situations where 'touching' an object somehow resets things and gets it working again. Sometimes it's just a matter of changing a setting, saving, and then changing the setting back.

This is essentially what I did with my virtual disk on my MD3000i. I went into the management console of my MD3000i and changed the 'ownership/preferred path' of the virtual disk from one controller module to the other. Then, after a server reboot I ran a test job and it worked. The backups have been running fine ever since.

I have no idea what initiated this issue, or where it originated. That is the most frustrating part. I am just glad that things are working again!

Monday, July 13, 2009

Powershell and E-mail

There are times when I need to notify a group of people of a change made on our network file system. Perhaps the contents of a folder have changed and I need to let everyone who has access to that folder know. Perhaps permissions on a folder have changed (someone has been added or removed) and I want to notify everyone with rights to the folder.

This is normally an annoyingly manual process. Cull names from the security tab and generate a list of people, then paste them into a mail message, etc… you get the idea.

So, I decided I would see if I could write a Powershell script to do the heavy lifting for me. Specifically, I want my script to:

  • Gather the e-mail addresses of everyone with access to a shared folder
  • Create an e-mail message and address to these people
  • Save this message in my Drafts folder for further processing

Really a simple task but, once automated, it will save me tons of time.

The script is not yet written, but I have the basics down. Of course, it is ridiculously simple with PowerShell (and the Quest AD Cmdlets)

Here is the basic framework I have thus far…

# Get Email address of group members
$addrs = Get-QADGroup <GroupName> | Get-QADGroupMember | select email

$ol = New-Object -comObject Outlook.Application

$mail = $ol.CreateItem(0)

#Address mail
foreach ($addr in $addrs)
{
    $mail.Recipients.Add($addr.email)
}

$mail.Subject = "Some Subject"
$mail.Body = "Some Body"

#Save to drafts
$mail.Save()

As you can tell, there is a lot of work yet to do. Input, validation, etc., etc.  But, in just a few lines of code, this script is already performing tasks that would take me minutes to do. I love how easy it is to access AD objects and COM objects and pass data back and forth.
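For the ‘everyone with access to a shared folder’ piece, my current thinking is to pull the names straight off the folder’s ACL, something like this (a rough, untested sketch — the path is hypothetical, and local/BUILTIN entries would still need filtering out):

# Pull the account/group names from the folder's ACL
$acl = Get-Acl "\\server\share\SomeFolder"
$names = $acl.Access |
    ForEach-Object { ($_.IdentityReference.ToString() -split '\\')[-1] } |
    Select-Object -Unique

# Resolve any that are domain groups to member e-mail addresses
$addrs = $names |
    ForEach-Object { Get-QADGroup $_ -ErrorAction SilentlyContinue } |
    Get-QADGroupMember |
    Select-Object Email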

Sunday, June 21, 2009

Perspective

I invest a lot of time (measured in actual minutes and hours) on computers. My job is in IT, managing dozens of Windows servers, dozens of Dell and Cisco switches/routers/WAPs/etc., over a hundred Cisco IP phones and their users, multiple software packages, and all the other trimmings that come with a typical SMB systems installation. I spend many more hours reading and learning about technology, trying to keep up on trends, learn about what’s on the horizon, and develop my skills on the solutions we have in place. Much of my free time is spent on the computer, playing games, watching TED Talks, Stumbling, etc. All this to say, I’m no different than most of you, I am guessing…

I spend a lot of time on computers.

But, today is Father’s Day. For me, this is a day of perspective. Because, when I look into the eyes of my two sons, when my 5-year-old runs up to me and gives me the longest hug I’ve had in a long time and tells me, “I’m so glad you are my father,” well, I am reminded of what is really important.

I just want to say to all you fathers out there, Happy Father’s Day. I hope and pray that this is a day of joy and happiness for you.

Thursday, June 18, 2009

BAD_ADDRESS = bad!

I was working to deploy some new IP phones on our Gilbert campus, and kept getting DHCP address assignment errors. The phones would sit there ‘configuring IP’… Just sitting there. Meanwhile, my DHCP scope was filling up with leases to “BAD_ADDRESS”. Do a web search for “DHCP BAD_ADDRESS” and you will get a good idea of the problem.

While some reported this problem being associated with Mac clients or other IPv6 clients on the network, that was not my problem at all. My problem was simply duplicate IP addresses on the network. The tough part was that there were no DNS entries for the offending IP addresses and no valid DHCP leases for them. Yet, I was able to ping the addresses, so something out there was using them.

I tried using ping/arp to find the devices on the network, but did not have any success until a network engineer I was talking to suggested that I go to my core router/switch and do my ARP lookups on that device. I had been doing them from my workstation and a couple of edge switches. This was the key; I had struck gold. My core switch (managing all of my VLANs) had all of these IP/MAC entries in its ARP table.
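For what it’s worth, here is the workstation-side version of that hunt (the IP is an example). The command that finally paid off on the core Cisco switch was the equivalent ‘show ip arp | include <address>’:

# Ping the suspect address to prime the local ARP cache, then look up its MAC
ping -n 1 192.168.10.57 | Out-Null
arp -a | Select-String "192.168.10.57"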

From there, I was able to find the actual devices that had these BAD_ADDRESSes. This exposed the root problem, which turned out to be an interesting residual from a previous issue I had worked on. It turns out that there were a number of phones on my network still configured to use the now-defunct DHCP server address from our old multi-homed configuration. So, essentially, their DHCP server no longer existed. Thus, they had little choice but to hold on to their assigned IP addresses for dear life, hoping and praying that, someday, their long-lost DHCP server would return. Little did they know that the server was sitting right next to them, just with a new IP address. I quickly generated a list of these devices and rebooted them. They immediately found the DHCP server and got an IP address.

But, back to the BAD_ADDRESS issue… My DHCP scope had no record (no active leases) for these residual IP addresses being held by these orphaned devices. So, when I plugged a new phone in, my DHCP server was more than happy to attempt to hand those IP addresses out. From what I have gathered, the basic steps in DHCP go something like this (super-simplified and possibly not even right):

  • Client makes request
  • Server pulls an unused address from the appropriate scope
  • Server responds to client with this IP address and associated network configuration
  • Client verifies that IP address is actually available (not currently on the network)
    • SUCCESS! Client keeps the network configuration and is happily on the network
    • FAILURE! Client reports back to DHCP server that IP is already in use
      • DHCP adds entry in its DHCP lease DB for this IP address, assigning it to ‘BAD_ADDRESS’
      • Start process over with next available IP address

Once all devices were talking to the correct DHCP server, this problem simply went away. My new phones were immediately configured and working.

Wednesday, June 17, 2009

File Store saga

So, we had an issue with a Dynamic disk in a VM. The disk would come up inactive after every VM restart, and I had to manually reactivate it. Each time, my shares and ABE (Access-Based Enumeration) settings were lost and had to be reset.

After some research, I found that there were two options to fix this that did not include backup-rebuild-restore. These options are:

  • Attach disk to IDE rather than SCSI.
  • In the Registry, change the 'Start' value in “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\storvsc” from 3 to 0 (a one-liner for this follows the list).
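In PowerShell, that registry change is a one-liner (run it inside the VM from an elevated prompt; exporting the key first is cheap insurance):

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\storvsc" -Name Start -Value 0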

The first option seemed the ‘better’ choice, as it is just using the tools/software, rather than reg-hacking. So, that is the option I went with first. But, our disk size ended up causing problems with this. We experienced some file corruption (thank God for backups) and ended up having to move the disk back to SCSI. So, it was off to Plan B…

… which worked perfectly. After changing the registry value and doing a couple of test reboots, everything looked good and stable. Then, we just had to ‘clean up’ our corrupted files. Users are still trickling in with files that can’t be opened. But, a quick restore from our pre-problem backups is fixing things in most cases.

WHAT I LEARNED:

  1. Verify and Clarify! Do your research and develop a plan. Then, verify that plan, not just the steps/technologies/ideas, but the actual plan! Run through it one more time. Get one more pair of eyes on it. Verify that the actual steps you are planning on taking are solid. My discussions on this topic led me to believe that moving from SCSI to IDE was the best approach, but I didn’t run my actual plan by other engineers. I am confident my flaw would have been caught had I done so.
  2. Take precautions! I could have/should have taken extra precautions before executing my plan. I had recent backups, but not up-to-the-minute backups. Should have done that. Is it too much to ‘expect’ failure and prepare accordingly? Maybe not…
  3. Be thankful for the Grace of God found in His people! My co-workers were/are awesome! I am humbled and grateful for their understanding and grace during this ordeal.
  4. Don’t rush. I was anxious to get this fixed. And, because of that, I rushed things. Oh, I didn’t feel like I was rushing things at the time. But looking back (isn’t hindsight great?!) I see now that I should have taken more time to contemplate this issue. Overconfidence? Perhaps…

Also, as a result of this, we made some changes to our DR plans… Specifically, we increased the frequency of our file store backups… from once a day to every six hours.

Wednesday, June 3, 2009

Dynamic Disks and Hyper-V VMs… Not So Much!

As per here, disks in Hyper-V VMs should be Basic (not Dynamic), or they will start up as inactive when you boot the VM. So, you have to reactivate the disk every time you reboot the VM. This also causes any shares on the volume(s) to disappear.

Don’t ask me how I know this…

Also don’t ask me if I am going to enjoy performing the ‘fix’ on a disk with over 2TB of data on it…

<weep>

BackupExec… Oh How I Hate Thee!

I just have to say it out loud. This has got to be the worst software ever conceived of by man. Why is it, when I create and then start a job, it just sits there for (seemingly) EVER?! I created a restore job and, after determining that I wanted to change the job, I canceled it, made changes, and started it again… NOPE! Instead, it just sits there… going on an hour now!! No alerts waiting for a response… just sitting there.

I really hate this software!

I could rant about so many things about BackupExec (now trying version 12.5) that I hate… D2D performance being very poor, jobs constantly getting stuck, etc. etc.

The question is, where can I go?

Saturday, May 9, 2009

Carbonite = WIN!

So, our home computer crashed and we had to get a new one. We had been talking about this for a while anyway, but circumstances caused us to get one sooner than we wanted.

We got a new Lenovo (and new 20” LCD monitor) from Fry’s Electronics. It is a nice setup.

Last year, we got backup service from Carbonite and used it to back up all of our data files (office docs, pictures, e-mail, videos, etc). This backup came in handy over the weekend! File restores from Carbonite worked like a charm, though it isn’t the fastest service on the planet. It certainly earned its wage this weekend!

So, we are pretty much back up and running. As part of our ‘upgrade’, we went from an XP system to a Vista system. New UI, but we are getting used to it.

Of course, there are already some upgrades we want to do… new webcam, a second hard drive, new scanner (no Vista drivers for our current one).

The fun never ends…

P.S. If you are interested in Carbonite, let me know and I will send you an invite… Customers get credit for referrals. Thx!

Tuesday, May 5, 2009

Upgrading our Virtualization Infrastructure, and Other Updates…

We are going to be adding a second Hyper-V virtualization host soon. We currently have four VM hosts: three running Virtual Server and one running Hyper-V. Our goal, as you can probably imagine, is to get off of Virtual Server completely. This second Hyper-V host will just about get us there. We will be able to put our most critical VMs on Hyper-V and keep our ‘lesser’ VMs on Virtual Server. This is critical for us, as the performance and stability gains are impressive. As one example, our main file server (a VM) was, until recently, hosted on Virtual Server. It would literally take 1-3 hours to reboot on that VM hosting platform. After moving this VM to Hyper-V, it now takes 1-3 minutes to reboot. Also, backups have increased in speed by almost 50%, greatly reducing our backup window and expected recovery window. So, our data is getting safer faster and can be retrieved faster. All very good things!

I have also been spending some time looking in more depth at Server 2008. We aren’t ready to upgrade our Domain yet, but the File Services role in Windows Server 2008 has some impressive new features. And, since we are nearing file server capacity and need to expand, this might be a good time to upgrade. But, just doing the research as of now.

In either case, the march towards Windows Server 2008 is under way. We’ve got quite a way to go, but we are moving in that direction.

Finally, we are expanding our server room on our Mesa campus. This will help us consolidate our server resources to a single location, as well as give us some breathing room in our racks. As part of the expansion, we have added a few more power circuits, which were sorely needed. Overall, this will be a big win for us as we look to grow/update our systems.

Wednesday, April 22, 2009

When was an Outlook Calendar item created?

I had never been asked this question before, nor had I ever needed this information myself before.

Looking at the properties of an Outlook Calendar item, you can see the ‘Modified’ date/time. But, this isn’t necessarily the CREATION date/time. I couldn’t find a way to see this data from within the Outlook UI. So, I turned to PowerShell. I figured the value is there, I just needed to get to it.

** If you know of an easier way, please let me know.

This excellent blog post gave me most of what I needed.

From that, I created the following snippet that does the trick:

# Connect to Outlook Calendar
$outlook = New-Object -ComObject Outlook.Application
$session = $outlook.Session
$session.Logon()
$calendarItems = $outlook.Session.GetDefaultFolder(9).Items   # 9 = olFolderCalendar
$calendarItems.Sort("[Start]")
$calendarItems.IncludeRecurrences = $true

$filter = "[End] >= '4/19/2009' AND [Start] <= '4/22/2009'"

foreach ($appt in $calendarItems.Restrict($filter))
{
    $appt | Select CreationTime, LastModificationTime, Subject
}

At some point, I would like to make this into a function… accepting input for date ranges, etc.

Also, I want to figure out how to look at other people’s calendars. I am thinking I need to pass parameters to Logon(), but did not get my initial attempts to work.
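Here is a rough sketch of what that function might look like (untested, minimal validation, and it still only reads my own default calendar):

function Get-ApptCreationTime
{
    param
    (
        [Parameter(Position=0, Mandatory=$true)] [datetime]$Start,
        [Parameter(Position=1, Mandatory=$true)] [datetime]$End
    )

    $outlook = New-Object -ComObject Outlook.Application
    $items = $outlook.Session.GetDefaultFolder(9).Items   # 9 = olFolderCalendar
    $items.Sort("[Start]")
    $items.IncludeRecurrences = $true

    # Build the Restrict filter from the supplied dates
    $filter = "[End] >= '{0}' AND [Start] <= '{1}'" -f $Start.ToShortDateString(), $End.ToShortDateString()

    foreach ($appt in $items.Restrict($filter))
    {
        $appt | Select-Object CreationTime, LastModificationTime, Subject
    }
}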

Saturday, March 28, 2009

Multi-Homed Domain Controller = FAIL!

Apparently, this is not a good idea…

I am just finishing my walk down a long and winding path. It all started when we began having problems authenticating our wireless clients against our IAS server. We have a DC running IAS, and this DC also runs an app for our VoIP phones. As such, it has two NICs, one on our DATA VLAN and one on our VOICE VLAN.

The IAS authentication problem would show up sporadically. Using WireShark, we would see authentication requests coming from the WLC to our IAS box, but no responses going back out. Things would just ‘black hole’ at the IAS box. I ended up opening tickets with both Cisco and Microsoft on this problem. Until we found a solution, our only sure-fire way to fix things (for a time) was to reboot the IAS/DC server.

It didn’t take long to notice that the WLC was working as expected. So, we focused on the Microsoft side of the equation. To their credit, Microsoft stuck with us as we worked through this. We had this ticket open for a few weeks and ran through various levels of support and various engineers. It wasn’t until we got to “level 3” support at Microsoft that we found the problem. This engineer suspected something that no one (me included) thought to even check… Could requests be coming in on one NIC and going out the other? As they say… EUREKA!

Of course, the first thing we had to do was wait… because, you know, we couldn’t exactly trigger this problem, or time it, or predict it. It would just happen all of a sudden. But, the next time we saw the problem, I ran WireShark on both interfaces. Sure enough, requests were coming in on one NIC and going out the other. The WLC didn’t like that, not one bit.

So, we had found our problem. Unfortunately, fixing it isn’t as easy as disabling one of the NICs. That works in the short term, but it is not a solution. The phone paging system uses the voice VLAN NIC, as do our phones. We had a couple of phones give fits trying to register with the CallMan last week while I had the voice VLAN NIC disabled. Re-enabling it brought my phones back up.

This particular issue was easily resolved by putting an IP Helper address on the voice VLAN on the router. Phones now get their DHCP responses from the data VLAN.
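The router side of that fix is tiny. On a Cisco IOS box it looks something like this (the interface and server address here are hypothetical):

interface Vlan200
 description Voice VLAN
 ip helper-address 10.10.1.5    ! DHCP server on the data VLAN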

But, we still have to fix the paging app. It has to have a NIC on the voice VLAN, so it looks like we will be migrating this app to its own box… Probably a better solution anyway.

Moral of the story: multi-homed DCs can cause problems… Also, don’t try to do too much on your DCs (or any box, for that matter).

Tuesday, March 10, 2009

GMail’s Archive feature in Outlook… PART 2

I am following up on my original post, available here.

These are the actual steps (as best as I remember them) that I followed to create this setup.

  1. Open Outlook
  2. Click Tools|Macro|Macros
  3. Enter ‘MoveTo_Archive’ (without quotes) as the Macro Name
  4. Click ‘Create’
  5. Paste code into Module1
  6. NOTE: Change the code in the MoveTo_Archive Sub, replacing <MyMailbox> with your actual mailbox name
  7. ALSO: Make sure the ‘_Archive’ folder exists in your mailbox.

That should do it. Hope this helps.


Wednesday, March 4, 2009

Powershell – Preserving history for future sessions

Typing ‘exit’ in a Powershell console, as you would imagine, exits the console. I use PowerShellPlus almost exclusively (and love it!). Typing ‘exit’ while in PowerShellPlus (known affectionately as simply “teh+”) gives you the option of closing the app or starting a new, clean console session. I often use this as a quick way to clean out my console environment.

I have also written a small function, named ‘exitt’, that exits AFTER securing the console. I sign all of my scripts. But sometimes, while working on something, I will set my Execution Policy to ‘remotesigned’ for a while. But, I pretty much always set my Execution Policy to ‘allsigned’ before exiting. That way, nothing unsigned by myself will accidentally (or maliciously… am I paranoid?) run the next time I start Powershell.

Anyway, back to my point…

When I type ‘exit’ or run my ‘exitt’ function and stay in teh+, my history is wiped out. This usually isn’t a problem. But, there are times when I want a clean, fresh console AND my history. To that end, I tweaked my ‘exitt’ function and my profile a bit.

(NOTE: I did this before really researching things on the Internet. There are probably better ways of handling this, but this works for me).

The meat of my ‘exitt’ function looks like this:

param
(
    [Parameter(Position=0, Mandatory=$false, ValueFromPipeLine=$false)]
        [switch]$history = $false
)

# Call Secure-Console function to set ExecutionPolicy to AllSigned
Secure-Console

# If switched, export history for future use. Otherwise, blow out history
if ($history)
{
    Get-History | Export-Clixml "c:\scripts\hist.xml" -Force
}
else
{
    Remove-Item "c:\scripts\hist.xml" -Force -ea SilentlyContinue
}

# Close Program
exit

So, if I run the function with the –history switch, it writes the current history out to an XML file.

Then, my profile has this bit:

if (Test-Path "c:\scripts\hist.xml")
{
    Add-History (Import-Clixml "c:\scripts\hist.xml")
    Remove-Item "c:\scripts\hist.xml" -Force -ea SilentlyContinue
}

Pretty simple, and it works for me.

After writing this, I did a quick search in the ‘tubes and came across JSnover’s solution to this. Maybe I will do that first next time. 

:-)

Monday, March 2, 2009

GMail’s Archive feature in Outlook…

I love this feature. This, plus GMail’s great search, keeps my Inbox clean while making all of my past mail easily accessible. Of course, all you GMailers out there already know this.

Now, when it comes to Outlook… well… not so much.

I used to just delete stuff and use the ‘Deleted Items’ folder as my archive folder. But, that is not really an ideal solution. So, I thought that I would create an Archive folder and then move messages to that instead. After some searching, I found and modified a macro. This code is not original to me. Unfortunately, I didn’t document where I got it, so I can’t give proper credit. I even searched some this morning, looking for the original again, with no luck. But, whoever you are, thank you!

Tie this to a button and a key combo, and you have a nice archive folder. Works great for me… My Inbox is clean and I know where to look for past emails.

 

Sub MoveMessages(strFolder As String)
    Dim olkItem As Object, _
        olkFolder As Outlook.MAPIFolder
    Set olkFolder = OpenMAPIFolder(strFolder)
    If TypeName(olkFolder) = "MAPIFolder" Then
        For Each olkItem In Application.ActiveExplorer.Selection
            olkItem.UnRead = False
            olkItem.Save
            olkItem.Move olkFolder
        Next
    End If
    Set olkFolder = Nothing
    Set olkItem = Nothing
End Sub

Sub MoveTo_Archive()
    MoveMessages "\<MyMailbox>\_Archive"
End Sub

Function OpenMAPIFolder(szPath)
    Dim app, ns, flr, szDir, i
    Set flr = Nothing
    Set app = CreateObject("Outlook.Application")
    If Left(szPath, Len("\")) = "\" Then
        szPath = Mid(szPath, Len("\") + 1)
    Else
        Set flr = app.ActiveExplorer.CurrentFolder
    End If
    While szPath <> ""
        i = InStr(szPath, "\")
        If i Then
            szDir = Left(szPath, i - 1)
            szPath = Mid(szPath, i + Len("\"))
        Else
            szDir = szPath
            szPath = ""
        End If
        If IsNothing(flr) Then
            Set ns = app.GetNamespace("MAPI")
            Set flr = ns.Folders(szDir)
        Else
            Set flr = flr.Folders(szDir)
        End If
    Wend
    Set OpenMAPIFolder = flr
End Function

Function IsNothing(obj)
  If TypeName(obj) = "Nothing" Then
    IsNothing = True
  Else
    IsNothing = False
  End If
End Function

Thursday, February 19, 2009

Score 1 for Hyper-V!

In my last post, I lamented that it was possible that Hyper-V was failing where Virtual Server was not. Happily, it turns out I was completely wrong.

The culprit turned out to be Windows Server 2003 itself. Specifically, Windows Server 2003 Standard SP2. This platform has a problem working well with our application. Our Windows Server 2003 R2 Enterprise SP2 box worked fine hosted on both Virtual Server and (more importantly) Hyper-V.

So, we will be migrating our production application to the new OS, hosting our VM on our Hyper-V box. Which is what we wanted all along.

I have to say that our successful outcome on this project was due to diligent testing. We kept testing different configurations until we had multiple test results, each differing by only one variable. We were then able to clearly define the problem piece of the puzzle.

Thanks to our team for helping with this!

Wednesday, February 11, 2009

Hyper-V Fails where Virtual Server 2005 R2 Succeeds

Hyper-V FAIL! Well, it appears we have to take a step back to move forward. This is a VERY disappointing situation and I am hoping that someone at Microsoft (Hyper-V team) will stumble across this and take interest.

First, go read this blog post. Don’t worry, it will only take a minute… I will wait for you.

.

.

.

Are you back? Good.

We ran further tests today. Specifically, I created a VM on one of our Virtual Server 2005 R2 hosts and we ran the printing tests. We had no problems at all. So, our current reality is:

  • Physical server: no problems
  • Virtual Server 2005 R2 VM: no problems
  • Hyper-V VM: CRASH!

At this point, our options are pretty clear…

  1. Migrate our VM from Hyper-V to Virtual Server 2005 R2, or
  2. Install our workload onto a physical server

As you can guess, neither of these options is ideal. Option 1 is taking a step backward to a technology that is being left behind. I don’t want a workload on a platform that can’t be moved to the current offering. Option 2 defeats the whole purpose of virtualization and all of the perceived benefits it offers!

So, here we sit, wondering what to do and where to turn for solutions. We have done some research and wonder if items like this are helpful:

The print process crashes under heavy stress on a computer that is running Windows Server 2003 or Windows XP Professional x64 Edition if the computer uses hyper-threading technology.

I went down a rabbit-hole for a while on hyper-threading and virtual platforms… Not sure if I am heading in the right direction.

I would really appreciate any thoughts or suggestions you might have.

I will post more as it becomes available.

Monday, February 9, 2009

Starting VMs that try to share DVD drive

This was an odd one…

I created a new VM on my Hyper-V host, and it would not start. It turns out that I had two VMs on this host, both trying to use the host’s DVD drive as their own. This doesn’t work.

Found a description and the solution here.

Moral of the story: Make sure an ‘off’ VM is not trying to use the DVD drive if it is already assigned to an ‘on’ VM.

Wednesday, January 28, 2009

Installing SQL Server 2008 Express Edition Management Studio

I have a new VM that I am configuring as a development platform. I am installing some of the Visual Studio Express Edition tools, specifically Visual Web Developer 2008 EE and Visual C# 2008 EE. I’m not a developer, but I enjoy dabbling. This will give me the tools to do that.

I first installed Visual Web Developer (did this a week ago). As part of this install, it installed SQL Server 2008 Express Edition. While this installed the database engine, it did not install SQL Server Management Studio (SSMS). SSMS for version 2005 was a separate download and install, but that version is not able to manage a 2008 install. So, I went looking for the 2008 version of SSMS. It turns out that there are multiple versions of the SQL Server 2008 EE product…

It looks like the WebDev install included only the Runtime version of SQL Server EE. So, if I wanted to get the Management Studio, I had to use one of the other two distros of this product. Here’s where the snag came that I wanted to share.

The install of the SQL Server 2008 Express with Tools allows you to install a new instance or add to an existing instance. My first thought was to add to an existing instance (as I already had an installation of this product). However, when I went through this step, it did not allow me to add the Management Studio. I think it saw that the currently-installed instance was the ‘Runtime Only’ version, so it did not offer me the Management Studio.

After wrestling with this for a few minutes, I decided I would just install a new instance, hoping to get access to the Management Studio that way. This worked even better than I wanted. I was able to select ONLY the Management Studio on the new instance install page. This page listed ‘Instance resources’ and ‘Shared resources’. Management Studio was under ‘Shared resources’. I selected it and left everything else unchecked.

Worked like a charm.

This wasn’t completely intuitive (you can’t add to an existing install if the existing install is the Runtime Only version). So, you have to use the ‘install a new instance’ option.

What’s Up with What’s Up Gold?!

I want to start by saying that I really like this product.  We are currently running What’s Up Gold (WUG) v12.3.1 to monitor over 100 devices on our network.  We are monitoring a combination of servers, switches, routers, websites, and more. Our implementation is not complete, but we are constantly adding to it; adding monitors, notification, etc. We are especially focusing on notification now. Currently, I have my WUG dashboard open on my second monitor throughout the day. So, I can see real-time performance of our systems. The BIG ‘killer-app’ feature for me is the history that WUG keeps for the metrics it monitors.

This has been especially useful when evaluating storage usage, bandwidth utilization, and CPU/memory usage on some of our high-load systems. We have caught things that would have become problems BEFORE they became problems. I feel like this tool has paid for itself in these scenarios! Also, we have recently been looking at the Windows NT Service monitor feature. This is cool. You can monitor services on machines and, if WUG sees that a service is stopped, can re-start it automatically. Very nice! This, combined with alerting, gives us good active monitoring and remediation capability. It’s always best to know about, and fix, a problem before your users have to report it to you. WUG makes this available.

With all that being said, I have to rail on something that I really DON’T like about What’s Up Gold. Like I said above, we are running version 12.3.1. You would think that a product as mature as this (in its 12th version) would be a bit more ‘enterprise’ friendly. An experience over the past few weeks has led me to a contrary opinion.

When installing WUG, you have the option of using a local ‘Express Edition’ SQL database or a full-blown SQL Server 2005 installation. We originally installed WUG on a machine and used a local DB. As you probably know, SQL Server Express Edition has a 4GB limit on its database size. Well, over time, our WUG database grew to 4GB and then promptly went ‘kaput’! We had a choice: delete history or migrate the database. We have a SQL Server 2005 installation with capacity to spare, so our decision was to migrate the database.

Following the ‘Migration Guide’ from Ipswitch, I was able to move the database from the local SQL engine to our SQL Server. Everything seemed to be fine… until I tried to add another device to WUG. When I tried this, I got…

[Screenshot: SQL Server error message referencing the ‘sp_addlinkedserver’ command]

And, this is where things became very difficult. First, I have to state that I am not a DBA. But, I can find my way around the SQL Server Management tools, run queries, etc. Anyway, with this error in hand, I went where I always go when I have a question… Google. I also went to WUG’s website to search their KB. It was about this time that my frustration with WUG, specifically their help and technical support, really began to blossom. Their KB articles are very poorly written; much too vague and general. Any discussion threads I found related to this issue were just populated by frustrated users, with not much in the way of constructive input from WUG technicians.

I found a few vague references to ‘sp_dropserver’ and ‘sp_addserver’, but no real explanation of why or how I would use these to fix this particular problem. Nor was I able to find an actual explanation of the problem itself.
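For anyone hitting the same wall: the shape of the commands the guide references is roughly this. A sketch only — the server names are hypothetical, this alone may not be the whole fix, and SQL typically needs a service restart afterward:

# Re-register the SQL server name after a migration (hypothetical names)
sqlcmd -S NEWSQL -E -Q "EXEC sp_dropserver 'OLDSQL'; EXEC sp_addserver 'NEWSQL', 'local';"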

After a while of frustrating searching, I contacted WUG technical support and opened a ticket with them. I sent them a detailed description of the problem I was having and how it came about. I sent them the picture shown here so they would see exactly what I saw, hoping to get some… you know… support. Instead, I got the following e-mail in response:

-----------------------

Hello Derek,

   You will find the steps to resolve the issue in the migration guide:

http://www.whatsupgold.com/wugdbmg

  As with all SQL operations this should be performed by a SQL DBA.

----------------------

Helpful, huh? I can really see what they are paying these tech support guys for! Such great analysis! Such support! I had already used the Migration Guide to help with the original migration. Further, the error message references the ‘sp_addlinkedserver’ command, but the Migration Guide talks about the ‘sp_dropserver’ and ‘sp_addserver’ commands. I responded to tech support, indicating my concerns and issues I had with running these commands on my SQL Server. I let them know that I executed the commands and got an error, which I sent along. Their ‘oh-so-helpful’ response was:

-----------------------

Hello Derek,

   Please have your DBA execute the commands.

-----------------------

Yes, that was the full extent of their response. Never mind that I had said that I DID RUN THE COMMANDS! Never mind that I had reported that the commands generated an error. So, I again asked if they could please help explain the error and help me determine a solution to this problem. I reiterated that I ran the commands and got an error and that the problem was still not resolved.

Their response:

-----------------------

Hello Derek,

     Operation of WhatsUp Gold with a full SQL database should be done with the assistance of a qualified DBA. Ipswitch does not provide these services. Our partners can provide services in such areas.

http://www.whatsupgold.com/partners/index.aspx

-----------------------

Nice, huh?! Was I asking them to provide ‘qualified DBA’ services?! No, I was not! I was asking them to tell me why THEIR PRODUCT wasn’t working. The fact that THEIR PRODUCT uses a database server should not preclude them from having to provide support if the problem involves the database! This was ridiculous. This clown wasn’t even TRYING to help. And, he never did, I am sad to say.

So, it took more research, trial-and-error, and a bit of luck to find a solution to this problem. The various posts, KB articles (from Ipswitch and Microsoft), and other resources all hinted at parts of the problem. But Ipswitch should have definitive support for this problem. The fact that they don’t, and are not helpful, is deplorable.

So, we will continue to use What’s Up Gold… But, I can’t imagine a scenario where I would actually try to use their tech support services again… They are completely worthless! I have been in this industry for 20 years or so and they ‘provided’, without exception, the worst service I have ever experienced.

Thursday, January 15, 2009

Setting up SNMP on Windows Server 2008

Had an odd little ditty today…

I needed to set up SNMP on a Windows Server 2008 box so that I could add it to our What’s Up Gold monitoring system. As you probably know, Server 2008 relies heavily on the concepts of Roles and Features. Well, SNMP is a Feature. So, in Server Manager I added the SNMP Feature.
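If you prefer the command line, the same install should be possible with servermanagercmd (the feature name here is from memory — ‘servermanagercmd -query’ will confirm it):

servermanagercmd -install SNMP-Services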

Then, like in previous versions of Windows, I went to the Services UI, scrolled down to ‘SNMP Service’, right-clicked, and selected Properties.

I expected to see a nice series of tabs that would allow me to configure the SNMP Service with things like ‘community names’ and ‘trap destinations’. Instead, I only got the standard tabs you see on most Services. The service was installed and running, but there was no way to configure it.

Turns out, the ‘fix’ for this is simple… Log out and log back in. After logging back in, all the SNMP configuration tabs were there.

You shouldn’t have to do that, but it is only a small inconvenience.

Friday, January 9, 2009

First Powershell Wrapper Functions for SourceGear Vault Client

I have written three functions so far. They cover the main tasks I perform when working with my Vault for my Powershell scripts.

They are:

  • In-VaultFile
  • Out-VaultFile
  • Get-VaultCheckoutList

A couple of items to note:

  • Getting the command-line to work was a bit tricky
  • The ‘vault.exe’ app outputs XML, which made generating feedback really nice. Of course, I didn’t know about [xml]$var at first. But, once I read about that, things moved along nicely.
  • I don’t like that I have to store my username and password in cleartext in my script file. Not sure what options I have on that one. I certainly don’t want to have to type it out every time! (One idea is sketched just after this list.)
  • The Vault Command Line Client has a lot of other functions available. I will probably be adding wrappers for some others in the near future… Like creating new folders, viewing history, etc.
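On the cleartext password problem, one idea is DPAPI via SecureString. A sketch (the encrypted file can only be decrypted by the same user on the same machine — vault.exe still needs the plain string at run time, but at least nothing sits on disk in the clear):

# One-time setup: store the password encrypted with DPAPI
Read-Host "Vault password" -AsSecureString |
    ConvertFrom-SecureString |
    Set-Content "c:\scripts\vaultpass.txt"

# Inside userInfo: read it back and recover the plain text for vault.exe
$secure = Get-Content "c:\scripts\vaultpass.txt" | ConvertTo-SecureString
$bstr = [Runtime.InteropServices.Marshal]::SecureStringToBSTR($secure)
$user.Pass = [Runtime.InteropServices.Marshal]::PtrToStringBSTR($bstr)
[Runtime.InteropServices.Marshal]::ZeroFreeBSTR($bstr)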

I love that we can expose only certain functions from within a module. That is cool!

Lastly, I just want to say THANK YOU to Mr. Snover and his team! I haven’t had this much fun working on computers in quite a while! Sys Admin is FUN AGAIN!

Anyway, here’s the code…

#################################################################
# FUNCTION: In-VaultFile
#
# WRITTEN BY: Derek Mangrum
#
# 2009-01-08 : Initial Version
#################################################################
function In-VaultFile
{
<#
.SYNOPSIS 
    Checks a file in to SourceGear Vault
.DESCRIPTION 
    Checks a file in to SourceGear Vault
.NOTES 
    Author     : Derek Mangrum  derek.mangrum@gmail.com 
    Requires   : PowerShell V2 CTP3 or later
.LINK 
    http://grinding-it-out.blogspot.com/
.EXAMPLE 
    In-VaultFile c:\scripts\file01.ps1 "This is my comment"
.EXAMPLE
    dir c:\scripts\Modules -Recurse -Filter *.psm1 | In-VaultFile -Comment "My comments"
.PARAMETER File
    The file that you want to check in.
.PARAMETER Comment
    Comments for the check in operation. Required.
#>
    
    param 
    ( 
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeLine=$true)]
            [string]$File,
        [Parameter(Position=1, Mandatory=$true, ValueFromPipeLine=$false)]
            [string]$Comment
    )

    BEGIN
    {
    } #END BEGIN
    
    PROCESS
    {
        checkIn-File $File $Comment    
    } #END PROCESS
    
    END 
    {    
    } #END END
}

function checkIn-File
{
    param
    (
        [string]$File,
        [string]$Comment
    )
    
    if (Test-Path $File)
    {
        $item = (Resolve-Path $File) -replace 'C:', '$'
        $item = $item -replace '\\', '/'
        $user = userInfo
        $Comment = $Comment -replace " ", "_" 
        $command = "cmd.exe /C `"C:\Program Files\SourceGear\Vault Client\vault.exe`" CHECKIN -host $($user.Host) -user $($user.name) -password $($user.pass) -ssl -repository $($user.Repository) -comment $Comment $item"
        
        [xml]$result = Invoke-Expression $command
        
        if ($result.vault.result.success -eq "yes")
        {
            Write-Host "SUCCESS: " -ForegroundColor Green
            $result.vault.'#comment'
        }
        else
        {
            Write-Host "FAIL: " -ForegroundColor Red 
            $result.vault.'#comment'
        }
    }
    else
    {
        Write-Host "No such file: " -ForegroundColor Red -NoNewline
        $File
    }
}

#################################################################
# FUNCTION: Out-VaultFile
#
# WRITTEN BY: Derek Mangrum
#
# 2009-01-08 : Initial Version
#################################################################
function Out-VaultFile
{
<#
.SYNOPSIS 
    Checks a file out from SourceGear Vault
.DESCRIPTION 
    Checks a file out from SourceGear Vault
.NOTES 
    Author     : Derek Mangrum  derek.mangrum@gmail.com 
    Requires   : PowerShell V2 CTP3 or later
.LINK 
    http://grinding-it-out.blogspot.com/
.EXAMPLE 
    Out-VaultFile c:\scripts\file01.ps1
.EXAMPLE
    dir c:\scripts\Modules -Recurse -Filter *.psm1 | Out-VaultFile
.PARAMETER File
    The file that you want to check in.
#>
    
    param 
    ( 
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeLine=$true)]
            [string]$File
    )

    BEGIN
    {
    } #END BEGIN
    
    PROCESS
    {
        checkOut-File $File
    } #END PROCESS
    
    END 
    {    
    } #END END
}

function checkOut-File
{
    param
    (
        [string]$File
    )
    
    if (Test-Path $File)
    {
        $item = (Resolve-Path $File) -replace 'C:', '$'
        $item = $item -replace '\\', '/'
        $user = userInfo
        $command = "cmd.exe /C `"C:\Program Files\SourceGear\Vault Client\vault.exe`" CHECKOUT -host $($user.Host) -user $($user.name) -password $($user.pass) -ssl -repository $($user.Repository) $item"

        [xml]$result = Invoke-Expression $command
        
        if ($result.vault.result.success -eq "yes")
        {
            Write-Host "SUCCESS: " -ForegroundColor Green
            $result.vault.'#comment'
        }
        else
        {
            Write-Host "FAIL: " -ForegroundColor Red
            $result.vault.'#comment'
        }
    }
    else
    {
        Write-Host "No such file: " -ForegroundColor Red -NoNewline
        $File
    }
}

#################################################################
# FUNCTION: Get-VaultCheckoutList
#
# WRITTEN BY: Derek Mangrum
#
# 2009-01-08 : Initial Version
#################################################################
function Get-VaultCheckoutList
{
<#
.SYNOPSIS 
    Lists all items currently checked out.
.DESCRIPTION 
    Lists all items currently checked out.
.NOTES 
    Author     : Derek Mangrum  derek.mangrum@gmail.com 
    Requires   : PowerShell V2 CTP3 or later
.LINK 
    http://grinding-it-out.blogspot.com/
.EXAMPLE 
    Get-VaultCheckoutList
#>
    
    BEGIN
    {
    } #END BEGIN
    
    PROCESS
    {
        getList
    } #END PROCESS
    
    END 
    {    
    } #END END
}

function getList
{
    
    $user = userInfo
    $command = "cmd.exe /C `"C:\Program Files\SourceGear\Vault Client\vault.exe`" LISTCHECKOUTS -host $($user.Host) -user $($user.Name) -password $($user.pass) -ssl -repository $($user.Repository)"

    [xml]$result = Invoke-Expression $command
    
    Write-Host "The following items are currently checked out"
    Write-Host "---------------------------------------------"
    
    if ($result.vault.result.success -eq 'yes')
    {
        foreach ($item in $result.vault.checkoutlist.checkoutitem) 
        {
            $item.checkoutuser.localpath
        }
    }
    else
    {
        Write-Host "ERROR" -ForegroundColor Red
    }
}

function userInfo
{
    $user = @{}
    $user.Name = 'MyName'
    $user.Pass = 'MyPassword'
    $user.Host = 'MyHost'
    $user.Repository = 'MyRepository'
    
    return $user
}

Export-ModuleMember In-VaultFile
Export-ModuleMember Out-VaultFile
Export-ModuleMember Get-VaultCheckoutList
