First, let it be perfectly clear that I know very little about configuring network gear. My CLI experience has been pretty much limited to creating a VLAN. That said, I had to take on the task of deploying a Cisco Nexus 2148t in order to get some new ESXi hosts out on the appropriate network. Fortunately, Scott Lowe wrote a blog post titled Connecting a Nexus 2148 to a Nexus 5010 in which he deemed the process “incredibly simple and not really worthy of a blog post.” The post was succinct and accurate, exactly what I needed, so I hereby deem it priceless.
Being a complete newbie to networking, but also valuing the product of my labors, I decided I needed to port-channel two 10 Gb links to each Nexus 2148t for link redundancy. This decision was supported by a blog post titled Why EtherChannels should be used for FEX interfaces. The essence of the procedure is pretty simple:
- Create the port-channel
- Set switchport mode of the port-channel to ‘fex-fabric’
- Select switchports, set mode ‘fex-fabric’, add to the port-channel
Mine looks a little like this:
```
nexus01(config)# int port-channel21
nexus01(config-if)# switchport mode fex-fabric
                                    ^
% Incomplete command at '^' marker.
nexus01(config-if)#
```
Of course, that's not really what I was hoping would happen. A little google'ing, some head scratching and another blog post titled Some NX-OS features can’t be manually enabled gave me a bit of a clue. I tried:
```
nexus01(config-if)# show run | i feature
feature telnet
feature udld
feature interface-vlan
feature lacp
feature vpc
rule 5 permit show feature environment
rule 4 permit show feature hardware
rule 3 permit show feature module
rule 2 permit show feature snmp
rule 1 permit show feature system
```
Seeing nothing about fex there, someone enlightened me to the following command.
```
nexus01(config-if)# show feature
Feature Name          Instance  State
--------------------  --------  --------
tacacs                1         disabled
lacp                  1         enabled
interface-vlan        1         enabled
private-vlan          1         disabled
udld                  1         enabled
vpc                   1         enabled
fcoe                  1         disabled
fex                   1         disabled
```
More help from the person at CCIEZone.com.
```
nexus01(config-if)# feature fex
nexus01(config)# show run | i feature
feature telnet
feature udld
feature interface-vlan
feature lacp
feature vpc
feature fex
rule 5 permit show feature environment
rule 4 permit show feature hardware
rule 3 permit show feature module
rule 2 permit show feature snmp
rule 1 permit show feature system
nexus01(config)# show feature
Feature Name          Instance  State
--------------------  --------  --------
tacacs                1         disabled
lacp                  1         enabled
interface-vlan        1         enabled
private-vlan          1         disabled
udld                  1         enabled
vpc                   1         enabled
fcoe                  1         disabled
fex                   1         enabled
```
That looks better. Now, Scott's steps get me the rest of the way through.
```
nexus01(config)# interface port-channel 21
nexus01(config-if)# switchport mode fex-fabric
nexus01(config-if)# interface eth1/3
nexus01(config-if)# switchport mode fex-fabric
nexus01(config-if)# channel-group 21 mode on
nexus01(config-if)# interface eth1/4
nexus01(config-if)# switchport mode fex-fabric
nexus01(config-if)# channel-group 21 mode on
nexus01(config-if)# interface port-channel 21
nexus01(config-if)# fex associate 100
```
```
nexus01(config-if)# show int port-channel21
port-channel21 is up
  Hardware: Port-Channel, address: 0005.9b24.994a (bia 0005.9b24.994a)
  MTU 1500 bytes, BW 20000000 Kbit, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is fex-fabric
  full-duplex, 10 Gb/s
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Switchport monitor is off
  Members in this channel: Eth1/3, Eth1/4
  Last clearing of "show interface" counters never
  1 minute input rate 5680 bits/sec, 0 packets/sec
  1 minute output rate 4629528 bits/sec, 951 packets/sec
  Rx
    221027 input packets 35441 unicast packets
    111604 multicast packets 73982 broadcast packets
    35 jumbo packets 0 storm suppression packets
    120356896 bytes
  Tx
    55103126 output packets 1854476 multicast packets
    1245154 broadcast packets 20480285 jumbo packets
    41342764374 bytes
  0 input error 0 short frame 0 watchdog 0 no buffer 0 runt 0 CRC 0 ecc
  0 overrun 0 underrun 0 ignored 0 bad etype drop
  0 bad proto drop 0 if down drop 0 input with dribble 0 input discard
  0 output error 0 collision 0 deferred 0 late collision 0 lost carrier
  0 no carrier 0 babble
  0 Rx pause 0 Tx pause
  2 interface resets
nexus01(config-if)# show fex detail
FEX: 100 Description: FEX0100   state: Online
  FEX version: 4.1(3)N2(1a) [Switch version: 4.1(3)N2(1a)]
  FEX Interim version: 4.1(3)N2(1a)
  Switch Interim version: 4.1(3)N2(1a)
  Extender Model: N2K-C2148T-1GE,  Extender Serial: JAF1418BKEE
  Part No: 73-12009-06
  Card Id: 70, Mac Addr: 54:75:d0:3a:d7:02, Num Macs: 64
  Module Sw Gen: 12594  [Switch Sw Gen: 21]
  pinning-mode: static    Max-links: 1
  Fabric port for control traffic: Eth1/3
  Fabric interface state:
    Po21 - Interface Up. State: Active
    Eth1/3 - Interface Up. State: Active
    Eth1/4 - Interface Up. State: Active
  Fex Port      State    Fabric Port  Primary Fabric
    Eth100/1/1  Down     Po21         Po21
    Eth100/1/2  Down     Po21         Po21
    Eth100/1/3  Down     Po21         Po21
...
```
Done, save and go home.
Here is some Cisco documentation covering all of this. http://www.cisco.com/en/US/docs/switches/datacenter/nexus2000/sw/configuration/guide/rel_4_0_1a/FEX-Config.html
Simon Seagrave and Simon Long launched a new website today. The website is called vBeers and was created to give virtualization enthusiasts and professionals an opportunity to meet and enjoy discussing all things virtualization, and anything else in the world of tech.
The Twin Cities contingent of virtualization professionals who appreciate a pint with peers is represented. Go check it out!
Please don’t ever tell VMware I wrote this…
I’ve been preparing a Proof of Concept for using a published browser as a method of mitigating the risk of allowing employee internet access. I don’t make the rules, I just provide solutions to the senior management. I won’t argue the requirement here, it’s just a requirement. For this, I’m publishing a browser from a DMZ-like network segment with full internet access via XenApp 6 and allowing internal user access via NetScaler. Firefox was chosen as the browser to publish before I understood that Internet Explorer default preferences would be a lot easier to control.
This is only for Firefox 3; Firefox 4 appears to have changed the file layout and I don’t have full instructions assembled yet. Specifically, this was created and tested on Firefox 3.6.13.
Following are modifications I’ve collected for eliminating some of the obnoxious behavior within Firefox, hopefully making it a bit more user friendly. The following involves editing configuration files, so a text editor that properly maintains and displays line breaks is preferred. Try Notepad++.
- Disable default browser check
- Disable automatic updates and reminders
- Configure proxy settings
- Disable the “Welcome to Firefox” tab on first load
- Disable “Save Tabs” reminder on closing
- Disable the “Know Your Rights” button on first load
- Disable “Import Settings” wizard on first load
- Set default homepage
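Most of the items above can be locked down centrally in a `mozilla.cfg` file in the Firefox install directory (referenced from `defaults\pref\local-settings.js`). The sketch below is a starting point, not the exact files I deployed: the pref names are from the Firefox 3.6 era and should be verified against your build, and the proxy host and homepage URLs are hypothetical placeholders.

```js
// mozilla.cfg — the first line of this file is always skipped, so keep a comment here.
lockPref("browser.shell.checkDefaultBrowser", false);    // disable default browser check
lockPref("app.update.enabled", false);                   // disable automatic updates
lockPref("app.update.auto", false);                      // ...and update reminders
lockPref("network.proxy.type", 1);                       // manual proxy configuration
lockPref("network.proxy.http", "proxy.example.com");     // hypothetical proxy host
lockPref("network.proxy.http_port", 8080);               // hypothetical proxy port
lockPref("browser.startup.homepage_override.mstone", "ignore"); // no "Welcome to Firefox" tab
lockPref("browser.tabs.warnOnClose", false);             // no save-tabs reminder on close
lockPref("browser.rights.3.shown", true);                // no "Know Your Rights" button
lockPref("browser.startup.homepage", "http://intranet.example.com"); // default homepage
// The "Import Settings" wizard is suppressed via override.ini in the install
// directory ([XRE] EnableProfileMigrator=false), not via a pref.
```

To make Firefox read the file, `defaults\pref\local-settings.js` needs `pref("general.config.filename", "mozilla.cfg");` and, if the file is plain text, `pref("general.config.obscure_value", 0);`.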
When installing the Unisphere host agent on a server with multiple IP addresses, the address used for registration with the array is chosen arbitrarily. This can cause issues with communications with the array and seems to be significantly worse when using FCoE. In a recent experience I found that my server, running SUSE Linux, would register as an initiator and not a host.
There doesn’t actually appear to be any specific documentation on this for Unisphere, though there are lingering Navisphere documents out there. What I eventually found that confirmed what I needed was this document on Deploying Oracle Database 11g on EMC Unified Storage.
The solution: create a file in / named agentID.txt with exactly two lines of text, the first being the FQDN of the host and the second being the IP address. Remember, Linux is case sensitive, so the file really must be named “agentID.txt”. Once the file is in place, restart the host agent. A restart immediately resolved the issue and my host registered successfully.
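A minimal sketch of creating the file. The FQDN and IP below are placeholders for your server’s values, and the sketch writes to the current directory so it can be run without root; the agent itself expects the file at /agentID.txt.

```shell
# Generate the agent ID file: line 1 = FQDN, line 2 = IP address.
# Placeholder values — substitute your host's FQDN and registration IP.
AGENTID="./agentID.txt"   # the agent expects /agentID.txt
printf '%s\n%s\n' "server01.example.com" "10.1.2.3" > "$AGENTID"
# Then restart the host agent so it re-registers (init script name may vary):
# /etc/init.d/hostagent restart
```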
Configuration of the host agent on Windows is similar though the path for the agent ID file will obviously be different.
Trying to start something new here by coordinating semi-regular meetups for like-minded folks in the virtualization industry for lengthy discussion of the art of smoke and mirrors. Keep an eye out for the Twitter hashtag #TCvBeers; hopefully this will continue at locations in and around the Twin Cities.
Inaugural TC vBeers
- Date: 22 January, 2010
- Time: 12:30 PM
- Location: The Bulldog, St. Paul
Please feel free to get venues in the queue for vBeers somewhat more convenient for those outside of the cities!
Awesome find here, and even more awesome of a job by the VMware Site Recovery Manager development teams. Here’s the explanation, screen shots likely wouldn’t help a whole lot and I’d need to obfuscate quite a bit since I’m dealing in a corporate environment and not a lab.
Background: I’m migrating to a new SAN using SVMotion as the solution. Replication occurs using RecoverPoint and CLARiiON splitters. New LUNs were created, assigned to a Consistency Group, and a full replication sync completed. The new storage was zoned and masked at both the protected and recovery sites. New datastores were created at the Protected site. In SRM, I performed a rescan of storage; the new replicated volumes appeared with errors since no virtual machines had been placed on them yet. I started an SVMotion of all virtual machines from a datastore that comprised one existing and fully protected Protection Group. As virtual machines relocated to the new storage, “invalid” virtual machine errors were displayed in SRM. Once all virtual machines from the old Protection Group had completed SVMotion, I went back to SRM and found that the original Protection Group still existed, contained all of its original virtual machines, and displayed no errors.
I expected that, upon completion of the SVMotion, the original Protection Group would show in an error state and contain no virtual machines. I also suspected that I’d be able to create a new Protection Group from the newly deployed storage.
Upon investigation, I found that what had really occurred is that SRM modified the existing Protection Group: it removed the configuration for the old storage, added configuration for the new storage, and re-applied protection to all virtual machines. Keep in mind that these datastores were in separate Consistency Groups, and therefore not candidates for addition to a single Protection Group.
This is pure awesomeness for us userland types; all I need to do now is complete my storage migration and verify SRM is taking care of itself. No mass destruction and recreation of Protection Groups and Recovery Plans necessary!
Nothing new here, short and sweet. We all know about Microsoft’s (previously Sysinternals) BGInfo and its usefulness in a server environment. Arne Fokkema blogged about adding the VMware Tools version to BGInfo a ridiculously long time ago, though I just found it today. Thanks to Duncan Epping for an English translation of Arne’s post.
What I realized, in the process of deploying this on some servers, is that I had no idea where the all-users Startup folder was in Windows Server 2008. With some poking I found its location: “C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup”. Cool, but could Microsoft have made it any deeper in the directory structure?
Ah, but wait, I said short and sweet but this isn’t all of the information. Prior to vSphere 4.1, VMware Tools is located as per the referenced posts. The vSphere 4.1 Tools installation had to screw with us a little by renaming the binary. The binary to use when configuring the VMware Tools version display is now: “C:\Program Files\VMware\VMware Tools\vmtoolsd.exe”.
Screen shot of end product (Obfuscated and not fully implemented, you’ll get the point though):
Short and sweet: check out the Coffee with Thomas podcast with my tweeps from the vCommunity Trust Inc. I didn’t make this one but the conversation was still great!
“This episodes very special guests are the Board members of vCommunity Trust Incorporated. The board consists of Paul Valentina (twitter: @sysxperts) whom is the author of sysxperts.wordpress.com Chris Cicotte (Twitter: @Chris_Cicotte) author of Randomelectrons.com Caroline Orloff (Twitter: @corloff) and Tim Oudin (Twitter: @toudin) whom is the author of timoudin.com. Please tune into this special podcast and listen as the vCommunitytrust Board members gives great insight and perspective into: *Creation of vCommunity Trust *vCommunity leveraging social media influence *How to become a Board member *vCommunity Education Trust Fund *How the vCommunity is helping todays youth become IT specialist *Why Paul is a Friend to angry SysAdmins *Caroline’s new blog name (exclusive) *How Chris helped Ford Motor Company *and much much more Disclaimer: The opinions of the guests and host of this podcast are their personal opinions and not those of their employer.”
I was assigned a project to implement SRM in a recently acquired NetApp environment. In the process of doing so I needed to create a few NFS exports and was encountering failures when attempting to mount the storage.
vSphere vCenter 4.1
vSphere ESX 4.0 U1
NetApp FAS3140, ONTAP 7.3.1 P2
The volume was created using the Add Volume wizard with mostly default values. After running the Modify Exports wizard and adding the ESX servers as root hosts, I was still unable to mount the volume.
Events entry for the ESX host read:
Restored connection to server <nfs server> mount
point /vol/vsphere_drvol mounted as 2b60d786-dbe81c1d-0000-000000000000 (drvol_fl03).
Tasks log read:
Create NAS datastore <esx host> An error occurred during host configuration.
Error during the configuration of the host: Cannot open volume: /vmfs/volumes/280e61a8-9xxxxxx
Looking around (and it took several hours of frustration), I finally found that the default security style on the qtree was NTFS. Changing this to Unix resolved all issues.
- Volumes -> Qtrees -> Manage
- Select Volume
- Change drop down from NTFS to Unix
- Apply and get on with life
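The FilerView steps above should have a 7-mode CLI equivalent along these lines. This is a sketch, not the commands I ran; the volume name is taken from the mount error above, so adjust it for your environment.

```shell
# On the filer console (ONTAP 7-mode):
qtree status vsphere_drvol              # show the current security style
qtree security /vol/vsphere_drvol unix  # change it from ntfs to unix
```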
Bulletins from HP have been released that clearly state that data loss can occur without the NMI sourcing drivers installed on ESX hosts. Jason Boche has blogged the highlights of this recently. Below are instructions for installing the drivers via HTTP. I used an Apache server with the NMI bundle in the web root and performed the installation from a vMA appliance using `vihostupdate`.
Apache host: ‘santa’
ESX host: esx205
Current NMI bundle: hp-nmi-bundle-1.0.02
I have used short hostnames (inconsistently) instead of FQDNs, and have also taken what should be a one-line command and escaped carriage returns with “\” to run it on multiple lines, both for the sake of better screen shots.
0) Put the server in maintenance mode
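The installation itself, from the vMA appliance, looks roughly like this. This is a sketch using the hostnames from this post; the exact bundle filename served by Apache is an assumption, so match it to the file you actually placed in the web root.

```shell
# From the vMA appliance: install the HP NMI bundle over HTTP.
vihostupdate --server esx205 \
  --install \
  --bundle http://santa/hp-nmi-bundle-1.0.02.zip
```

Afterwards, `vihostupdate --server esx205 --query` should list the installed bundle, and the host can be taken out of maintenance mode.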