Continuing from our previous post, we will now proceed with our four-step process of deploying Tanzu Community Edition (TCE) for the NSX Application Platform (NAPP). To recap, the four steps we are working from are:
- Install Tanzu Community Edition
- Prepare to deploy clusters
- Deploy a management cluster
- Deploy a workload cluster
Just like last time, you can click each step above to review the official documentation. As we covered Steps 1 and 2 in our prior post, we're now ready to move on to Step 3: deploying our TCE Management Cluster.
3. Deploy a management cluster
Deploying the management cluster is a fairly straightforward affair. As noted in the documentation, by issuing the 'tanzu management-cluster create --ui' command, you'll be presented with a guided wizard to assist you in deploying a TCE Management cluster.
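If your bootstrap machine is headless, or you'd like to reach the installer UI from a different workstation, the command accepts a couple of additional flags. A minimal sketch (the bind address below is just an example for our lab network):

```bash
# Launch the guided installer UI in a local browser (the default behavior)
tanzu management-cluster create --ui

# Example: bind the UI to a specific address/port and skip launching a
# local browser so it can be reached remotely (the address is illustrative)
tanzu management-cluster create --ui --bind 172.16.90.10:8080 --browser none
```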
Select ‘Deploy‘ under ‘VMware vSphere‘ on the screen below:

From this point, you'll be taken through a series of 9 steps that must be completed to get your TCE Management cluster deployed. However, for the purposes of deploying NAPP, we can skip several of these steps entirely. In the list below, we've marked the steps you may skip with 'skip'; to skip a step, just click the 'Next' button on that screen.
A note on skipping steps: While we've elected to skip the steps that are not mandatory for deploying NAPP, you might prefer to configure them in your environment. For example, while configuring Identity Management is outside the scope of this series, you may wish to investigate its configuration, depending on your needs.
The full list of steps to deploy your TCE Management cluster is below:
- IaaS Provider
- Management Cluster Settings
- VMware NSX Advanced Load Balancer – skip
- Metadata – skip
- Resources
- Kubernetes Network
- Identity Management – skip
- OS Image
- Register TMC – skip
For each step above, you may click the associated link to reach the official documentation for that step. As with other portions of our series, rather than walking you through step by step, we will instead identify or clarify information as needed.
1. IaaS Provider

This step is quite clear: you use it to connect to the vCenter that will host the TCE Management cluster. This is also where you will populate the SSH public key you created as detailed in the 'Procedure' section of the 'Prepare to Deploy Clusters' page. You can open the public key in any text editor and copy/paste its contents.
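If you still need to generate that key pair, here's a quick sketch of the commands from the bootstrap machine (the key type and comment string are just examples; follow the official 'Prepare to Deploy Clusters' procedure for your platform):

```bash
# Generate an SSH key pair if you don't already have one
# (the comment string is illustrative)
ssh-keygen -t rsa -b 4096 -C "tce-admin@lab.local"

# Print the public key so you can copy/paste it into the wizard
cat ~/.ssh/id_rsa.pub
```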
2. Management Cluster Settings

Here you will choose between a 'Development' model and a 'Production' model. The key difference is that 'Development' deploys a single node as the K8s control plane for the TCE Management Cluster, while 'Production' deploys three nodes. Unless you're deploying NAPP in a lab only, we recommend the 'Production' model for control plane availability.
Next, use the 'Instance Type' drop-down to pick the size of the VMs that will be deployed. We recommend at least 'medium' (2 vCPU, 8 GB RAM, 40 GB disk), as we observed a noticeable performance difference during NAPP deployment when using a 'small' Instance Type.
For our NAPP deployment, you need to name the management cluster (we went with 'k8s-mgmt') and enter the first of the two cluster IP addresses you previously reserved into the 'Control Plane Endpoint' field. As you can see above, we entered '172.16.90.150'.
Lastly, since we are presuming that you do not have access to the NSX Advanced Load Balancer for this series, leave the 'Control Plane Endpoint Provider' set to 'Kube-vip'. All other settings have been left at their defaults. With that, you may click 'Next' at the bottom.
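For reference, the choices on this screen map to variables in the cluster configuration file the wizard generates at the end of the process. A sketch of how our selections would be recorded (variable names come from the Tanzu cluster configuration reference; the values are from our lab):

```yaml
# Management Cluster Settings, as recorded in the generated config file
CLUSTER_NAME: k8s-mgmt
CLUSTER_PLAN: prod                            # 'dev' = 1 control plane node; 'prod' = 3
VSPHERE_CONTROL_PLANE_ENDPOINT: 172.16.90.150 # first of our two reserved cluster IPs
CONTROLPLANE_SIZE: medium                     # 2 vCPU, 8 GB RAM, 40 GB disk
WORKER_SIZE: medium
```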
3. VMware NSX Advanced Load Balancer Settings – skip
As we are not using the NSX Advanced Load Balancer, just scroll down and click 'Next' at the bottom.
4. Metadata – skip
This is where you can enter optional metadata about the TCE Management cluster, including K8s labels. As this isn’t needed for our NAPP deployment, you may just click ‘Next‘ at the bottom.
5. Resources

Here you select the vSphere VM folder in which the TCE Management cluster nodes will reside, the datastore to use for storage, and the vSphere cluster to which the nodes will be deployed. In our lab we have only a single cluster, which is why you see just 'Cluster-1' listed. Once this data is all populated, click 'Next' at the bottom.
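As with the previous screen, these selections end up as variables in the generated configuration file; a sketch with illustrative values (the folder, datastore, and resource pool paths below are placeholders for whatever you select in your own inventory):

```yaml
# Resources selections (placeholder values; use your own inventory paths)
VSPHERE_FOLDER: /Datacenter/vm/tce
VSPHERE_DATASTORE: /Datacenter/datastore/vsanDatastore
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster-1/Resources
```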
6. Kubernetes Network

For this step, select the Distributed Port Group to which the TCE Management nodes will be attached. Here we've selected a distributed port group named 'vlan90 - Tanzu TCE'. The 'Cluster Service CIDR' and 'Cluster Pod CIDR' entries are prepopulated with Kubernetes defaults. If these addresses conflict with other addressing in your network, you may change them.
If you require the use of a proxy server for external access, you may enable and configure it here. This will be applied to all of the TCE Management nodes as they are deployed. As we don’t require this in our environment, it’s been left disabled.
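Again for reference, a sketch of how this screen lands in the generated configuration file (the port group name is from our lab; the two CIDRs shown are the Kubernetes defaults the wizard prepopulates):

```yaml
# Kubernetes Network selections from our lab
VSPHERE_NETWORK: "vlan90 - Tanzu TCE"
SERVICE_CIDR: 100.64.0.0/13   # 'Cluster Service CIDR' default
CLUSTER_CIDR: 100.96.0.0/11   # 'Cluster Pod CIDR' default
TKG_HTTP_PROXY_ENABLED: false # we don't require a proxy in our environment
```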
7. Identity Management – skip
For the purposes of our NAPP deployment, we've opted not to enable Identity Management for the TCE Management cluster. The result is that only the 'admin' account for the cluster will be able to interact with it via the Tanzu CLI.
If you desire, you may enable this feature and then select between OIDC and LDAPS configurations for authentication. As we are not leveraging this feature, click the button beside 'Enable Identity Management Settings' to disable Identity Management, then click 'Next' at the bottom to move forward.
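With the feature disabled, the generated configuration file simply records that no identity provider is in use:

```yaml
# Identity Management disabled for our deployment
IDENTITY_MANAGEMENT_TYPE: none
```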
8. OS Image

Here you choose the template you previously created (as mentioned in our previous post; you may review the official procedure here) to act as the template for all of the TCE Management cluster nodes that are about to be provisioned. Once you've done that, click 'Next' to move forward.
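For reference, the template selection is captured in the generated configuration file via the OS variables; a sketch with example values (yours depend on which OVA you imported as your template):

```yaml
# OS Image selection (example values; match the OVA template you deployed)
OS_NAME: ubuntu
OS_VERSION: "20.04"
OS_ARCH: amd64
```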
9. Register TMC – skip
As we are not using Tanzu Mission Control in this series, you may click ‘Next‘ at the bottom.
Once you’ve moved through all steps, you will be prompted to click the ‘Review Configuration‘ button:

Clicking the 'Review Configuration' button allows you to review the selections you made in each section. At the bottom of this page (shown below), you can elect to 'Edit Configuration' if you need to make changes, or select 'Deploy Management Cluster' to start the process of creating your TCE Management cluster.

At the bottom of the page above, you can see what the Tanzu CLI command would be if you executed this via the CLI rather than the UI. Of particular interest is the YAML file that has been created to instantiate your TCE Management cluster. If you open this file (in the above, it's a file called 'tai0sb5pcr.yaml'), you can see the results of everything that was selected or populated in the previous steps.
As we will leverage this file in the deployment of our future TCE Workload Cluster, note its name and location. You can even use the 'Copy CLI Command' button and paste the result somewhere in your notes.
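If you ever need to re-run this deployment, or simply prefer the CLI to the UI, the generated file can be passed straight to the Tanzu CLI. TCE writes these files to '~/.config/tanzu/tkg/clusterconfigs/' by default; the file name below is the auto-generated one from our run, so yours will differ:

```bash
# Deploy the management cluster from the generated configuration file
tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/tai0sb5pcr.yaml
```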
Once you click the 'Deploy Management Cluster' button, your screen will change to show the progress of the TCE Management cluster deployment. An example of this process as it is running is below.

The deployment process can take some time, so try to be patient as the TCE Management cluster is created. Once all of the bubbles on the left have a green check mark, the process is complete. If you look at the log output at the bottom, the last three lines show the following:
ℹ [0214 22:35:58.50085]: init.go:88] Management cluster created!
ℹ [0214 22:35:58.50089]: init.go:89] You can now create your first workload cluster by running the following:
ℹ [0214 22:35:58.50099]: init.go:90] tanzu cluster create [name] -f [file]
You can see this in the below image:

At this point, you can close the installer UI, as you've successfully installed a Tanzu Management Cluster! You can now interact with this cluster using the Tanzu CLI or kubectl from your bootstrap machine (as it has the Tanzu CLI, kubectl, and a populated .kube/config file in your home directory). For instance, to get a rundown of the Management Cluster, you can issue the 'tanzu management-cluster get' command. This yields the below output:

From here, you can see that we have three control plane nodes up and operational, and one worker node. If you wish, you can take a look via vCenter and see the four node VMs that comprise your TCE Management Cluster. You may also gather data using kubectl to interrogate the Management cluster with commands such as the following (a quick sketch on pointing kubectl at the new cluster follows the list):
- kubectl get nodes
- kubectl get ns
- kubectl get pods -A (gets all pods in all namespaces)
- kubectl get deployments -A (gets all deployments in all namespaces)
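Before running these, make sure your kubectl context points at the new cluster. The installer merges an admin context into your '~/.kube/config' following the '<cluster>-admin@<cluster>' naming convention, so for our 'k8s-mgmt' cluster a quick sketch looks like:

```bash
# List available contexts; the installer adds one for the management cluster
kubectl config get-contexts

# Switch to the management cluster's admin context (name follows the
# <cluster>-admin@<cluster> convention for our 'k8s-mgmt' cluster)
kubectl config use-context k8s-mgmt-admin@k8s-mgmt

# Now kubectl commands run against the management cluster
kubectl get nodes
```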
That's obviously a tiny subset of the commands you can issue against your new TCE Management Cluster, but it highlights that while we are using Tanzu to get us up and going, you're still ultimately using native Kubernetes; as such, any commands you read about for pulling data via kubectl will likely be applicable here.
Wrap-Up
In today's post we successfully deployed a Tanzu Management cluster in our environment. In our next post, we will use our newly instantiated TCE Management Cluster to deploy a TCE Workload Cluster.
We look forward to seeing you in our next post!