
Unable to See and Add New ESXi Hosts in Nexus 1000v


After upgrading VMware from 5.x to 6.x and the Nexus 1000v from 4.2 to 5.2, you are unable to add new hosts to the Nexus 1000v distributed switch, although the older hosts that were already added to the N1Kv distributed switch keep running without any issues.

This happens because the vCenter Server Postgres database (VCDB) does not list the new ESXi versions as compatible with the Nexus 1000v distributed switch.

You have to manually add the new version to the vCenter database to support it.

First you need to log in to the VCDB from the command line, and for that you need to find the user ID and password.

To get the userID and password, open C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\vcdb.properties

The vcdb.properties file contents should look like this:

driver = org.postgresql.Driver
dbtype = PostgreSQL
url = jdbc:postgresql://localhost:5432/VCDB
username = vc
password = {FNr2Aad>ws8Xo<Q
password.encrypted = false
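If you prefer not to eyeball the file, the credentials can be pulled out programmatically. Here is a minimal Python sketch; the `parse_properties` helper is hypothetical (not part of vCenter), and the sample text is the example file contents shown above:

```python
def parse_properties(text):
    """Parse simple 'key = value' lines (Java .properties style) into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "=" in line:
            # Split on the FIRST '=' only, so values containing '=' survive.
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Sample contents of vcdb.properties as shown in the article; on a real
# system you would read C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\vcdb.properties
sample = """\
driver = org.postgresql.Driver
dbtype = PostgreSQL
url = jdbc:postgresql://localhost:5432/VCDB
username = vc
password = {FNr2Aad>ws8Xo<Q
password.encrypted = false
"""

props = parse_properties(sample)
print(props["username"])  # the account you will log in to psql with
```

Note that `partition` splits on the first `=` only, which matters because the password value itself may contain special characters.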

Grab the username and password from this file (the default username happens to be "vc").

To add the new version

Go to this path in a command prompt:
C:\Program Files\VMware\vCenter Server\vPostgres\bin\

Run the command:
C:\Program Files\VMware\vCenter Server\vPostgres\bin>psql -U vc VCDB
Enter the password found above in the file C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\vcdb.properties
Password for user vc:

Show the current contents of the compatibility table:
SELECT * FROM VPX_DVS_COMPATIBLE;

Insert the new version into the database with the following commands:
INSERT INTO VPX_DVS_COMPATIBLE VALUES (42,'esx','6.0+');
INSERT INTO VPX_DVS_COMPATIBLE VALUES (42,'embeddedEsx','6.0+');

Here 42 is the ID of the distributed switch, which can be seen in the first column of the output of the command
SELECT * FROM VPX_DVS_COMPATIBLE;
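The sequence above (query the table, insert the two rows, query again) can be simulated against an in-memory SQLite database. This is only a sketch: the real VPX_DVS_COMPATIBLE table lives in vCenter's Postgres VCDB, and the column names used here are assumptions inferred from the three-column INSERT statements, not the actual vCenter schema:

```python
import sqlite3

# In-memory stand-in for VCDB; column names are assumed, not the real schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE VPX_DVS_COMPATIBLE (DVS_ID INTEGER, PRODUCT_LINE TEXT, VERSION TEXT)"
)

# Rows for the older hosts that already work (illustrative values).
conn.executemany(
    "INSERT INTO VPX_DVS_COMPATIBLE VALUES (?, ?, ?)",
    [(42, "esx", "5.0+"), (42, "embeddedEsx", "5.0+")],
)

# The fix from the article: add compatibility rows for 6.0+ hosts.
conn.executemany(
    "INSERT INTO VPX_DVS_COMPATIBLE VALUES (?, ?, ?)",
    [(42, "esx", "6.0+"), (42, "embeddedEsx", "6.0+")],
)

# Verify, as the article does with SELECT * FROM VPX_DVS_COMPATIBLE;
rows = conn.execute(
    "SELECT * FROM VPX_DVS_COMPATIBLE WHERE DVS_ID = 42"
).fetchall()
for row in rows:
    print(row)
```

The point of the second SELECT in the procedure is exactly this verification step: confirming that both the 'esx' and 'embeddedEsx' rows for 6.0+ now exist before restarting vCenter.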

Query the table again; it should now list support for the 6.0+ versions.

Show the compatibility table:
SELECT * FROM VPX_DVS_COMPATIBLE;

You should see that the new version has been added to the database.

Exit the Database command prompt by typing \q

Restart the vCenter Server and add the hosts to Nexus 1000v normally.
