Last Updated On : 7-Apr-2026
When determining the compute capacity for a VMware Cloud Foundation VI Workload Domain, which three elements should be considered when calculating usable resources? (Choose three.)
A. vSAN space efficiency feature enablement
B. VM swap file
C. Disk capacity per VM
D. Number of 10GbE NICs per VM
E. CPU/Cores per VM
F. Number of VMs
Explanation: When determining the compute capacity for a VMware Cloud Foundation
(VCF) VI Workload Domain, the goal is to calculate the usable resources available to
support virtual machines (VMs) and their workloads. This involves evaluating the physical
compute resources (CPU, memory, storage) and accounting for overheads, efficiency
features, and configurations that impact resource availability. Below, each option is
analyzed in the context of VCF 5.2, with a focus on official documentation and architectural
considerations:
A. vSAN space efficiency feature enablement
This is a critical element to consider.
VMware Cloud Foundation often uses vSAN as the primary storage for VI Workload
Domains. vSAN offers space efficiency features such as deduplication, compression, and
erasure coding (RAID-5/6). When enabled, these features reduce the physical storage
capacity required for VM data, directly impacting the usable storage resources available for
compute workloads. For example, deduplication and compression can significantly
increase usable capacity by eliminating redundant data, while erasure coding trades off
some capacity for fault tolerance. The VMware Cloud Foundation 5.2 Planning and
Preparation documentation emphasizes the need to account for vSAN policies and
efficiency features when sizing storage, as they influence the effective capacity available
for VMs. Thus, this is a key factor in compute capacity planning.
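The effect of these efficiency features on usable capacity can be sketched numerically. The sketch below is a minimal illustration, not VMware's official sizing formula; the protection factor, dedup ratio, and slack-space fraction are assumptions a designer would substitute with real values.

```python
# Hedged sketch: estimating effective vSAN capacity for a VI Workload Domain.
# The ratios below (RAID-5 overhead, dedup/compression savings, slack space)
# are illustrative assumptions, not values from VMware documentation.

def effective_vsan_capacity_tib(raw_tib, protection_factor, dedup_ratio, slack=0.30):
    """Usable capacity after protection overhead, efficiency gains, and slack space.

    raw_tib           -- raw capacity of the disk groups
    protection_factor -- usable fraction after FTT overhead, e.g. 0.5 for
                         RAID-1 FTT=1 mirroring, 0.75 for RAID-5 erasure coding
    dedup_ratio       -- assumed dedup/compression ratio (1.0 = disabled)
    slack             -- fraction reserved for rebuilds and operations (assumption)
    """
    return raw_tib * protection_factor * dedup_ratio * (1 - slack)

# RAID-5 erasure coding with an assumed 1.5x dedup/compression ratio on 100 TiB raw:
cap = effective_vsan_capacity_tib(100, protection_factor=0.75, dedup_ratio=1.5)
print(round(cap, 2))  # 78.75
```

Note how RAID-1 mirroring (factor 0.5) with no space efficiency would leave only 35 TiB under the same assumptions, which is why the enabled feature set materially changes the sizing outcome.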
B. VM swap file
The VM swap file is an essential consideration for compute capacity, particularly for memory resources. In VMware vSphere (a core component of VCF), each powered-on VM requires a swap file equal to the size of its configured memory minus any
memory reservation. This swap file is stored on the datastore (often vSAN in VCF) and
consumes storage capacity. When calculating usable resources, you must account for this
overhead, as it reduces the available storage for other VM data (e.g., virtual disks).
Additionally, if memory overcommitment is used, the swap file size can significantly impact
capacity planning. The VMware Cloud Foundation Design Guide and vSphere
documentation highlight the importance of factoring in VM swap file overhead when
determining resource availability, making this a valid element to consider.
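The swap-file arithmetic described above is simple enough to sketch. This is an illustrative calculation only; the fleet sizes and memory figures are made-up inputs.

```python
# Hedged sketch: per-VM swap file overhead on the datastore. In vSphere, the
# swap file for a powered-on VM is (configured memory - memory reservation),
# so a full memory reservation eliminates the swap file entirely.

def swap_file_gib(configured_mem_gib, reservation_gib=0):
    """Swap file size for one powered-on VM."""
    return max(configured_mem_gib - reservation_gib, 0)

def total_swap_overhead_gib(vms):
    """Total datastore overhead; vms is a list of (configured, reservation) tuples."""
    return sum(swap_file_gib(m, r) for m, r in vms)

# Illustrative fleet: 10 VMs with 32 GiB each, half with a full reservation:
fleet = [(32, 32)] * 5 + [(32, 0)] * 5
print(total_swap_overhead_gib(fleet))  # 160
```

With no reservations at all, the same fleet would consume 320 GiB of datastore capacity in swap files alone, which is why the overhead must be subtracted when calculating usable storage.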
C. Disk capacity per VM
While disk capacity per VM is important for storage sizing, it is not directly a primary factor in calculating usable compute resources for a VI Workload Domain
in the context of this question. Disk capacity per VM is a workload-specific requirement that
contributes to overall storage demand, but it does not inherently determine the usable CPU
or memory resources of the domain. In VCF, storage capacity is typically managed by
vSAN or other supported storage solutions, and while it must be sufficient to accommodate
all VMs, it is a secondary consideration compared to CPU, memory, and efficiency features
when focusing on compute capacity. Official documentation, such as the VCF 5.2
Administration Guide, separates storage sizing from compute resource planning, so this is
not one of the top three elements here.
D. Number of 10GbE NICs per VM
The number of 10GbE NICs per VM relates to
networking configuration rather than compute capacity (CPU and memory resources).
While networking is crucial for VM performance and connectivity in a VI Workload Domain,
it does not directly influence the calculation of usable compute resources like CPU cores or
memory. In VCF 5.2, networking design (e.g., NSX or vSphere networking) ensures
sufficient bandwidth and NICs at the host level, but per-VM NIC counts are a design detail
rather than a capacity determinant. The VMware Cloud Foundation Design Guide focuses
NIC considerations on host-level design, not VM-level compute capacity, so this is not a
relevant element here.
E. CPU/Cores per VM
This is a fundamental element in compute capacity planning. The
number of CPU cores assigned to each VM directly affects how many VMs can be
supported by the physical CPU resources in the VI Workload Domain. In VCF, compute
capacity is based on the total number of physical CPU cores across all ESXi hosts, with a
minimum of 16 cores per CPU required for licensing (as per the VCF 5.2 Release Notes
and licensing documentation). When calculating usable resources, you must consider how
many cores are allocated per VM, factoring in overcommitment ratios and workload
demands. The VCF Planning and Preparation Workbook explicitly includes CPU/core
allocation as a key input for sizing compute resources, making this a critical factor.
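The core-allocation math can be sketched as follows. The 4:1 overcommit ratio is a common design assumption used here for illustration, not a VMware-mandated value.

```python
# Hedged sketch: estimating how many VMs the domain's physical cores can
# support. The vCPU:pCPU overcommit ratio is a design assumption chosen per
# workload profile, not a fixed VMware value.

def max_vms_by_cpu(hosts, cores_per_host, vcpus_per_vm, overcommit=4.0):
    """Whole VMs supportable: vCPU capacity = physical cores x overcommit."""
    total_vcpus = hosts * cores_per_host * overcommit
    return int(total_vcpus // vcpus_per_vm)

# Illustrative domain: 4 hosts x 32 cores, 8 vCPUs per VM, 4:1 overcommit:
print(max_vms_by_cpu(4, 32, 8))  # 64
```

Dropping the overcommit ratio to 1:1 for latency-sensitive workloads cuts the same domain to 16 VMs, which shows why per-VM core allocation and the overcommit assumption dominate the compute-capacity calculation.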
F. Number of VMs
While the total number of VMs is a key input for overall capacity planning, it is not a direct element in calculating usable compute resources. Instead, it is a
derived outcome based on the available CPU, memory, and storage resources after
accounting for overheads and per-VM allocations. The VMware Cloud Foundation 5.2
documentation (e.g., Capacity Planning for Management and Workload Domains) uses the
number of VMs as a planning target, not a determinant of usable capacity. Thus, it is not
one of the top three elements for this specific calculation.
Conclusion: The three elements that should be considered when calculating usable compute resources are vSAN space efficiency feature enablement (A), VM swap file (B), and CPU/Cores per VM (E). These directly impact the effective CPU, memory, and
storage resources available for VMs in a VI Workload Domain.
When sizing a VMware Cloud Foundation VI Workload Domain, which three factors should be considered when calculating usable compute capacity? (Choose three.)
A. NSX
B. vSphere HA
C. vSAN
D. NIOC
E. Storage DRS
Explanation: When sizing a VMware Cloud Foundation (VCF) VI Workload Domain, calculating usable compute capacity involves determining the resources available for workloads after accounting for overheads and system-level requirements. In VCF 5.2, a VI Workload Domain integrates vSphere, vSAN, and NSX, and certain factors directly impact the compute capacity available to virtual machines. Based on the official VMware Cloud Foundation 5.2 documentation, the three key factors to consider are vSphere HA, vSAN, and NIOC.
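The reservations these factors impose can be sketched with simple arithmetic. The figures below are illustrative assumptions; a real design would use the actual host specifications and admission-control policy.

```python
# Hedged sketch: usable compute capacity after system-level reservations.
# vSphere HA admission control reserves failover capacity (here, an N+1
# policy reserving one host's worth of resources); the vSAN host-overhead
# figure is an illustrative assumption, not a documented constant.

def usable_cpu_ghz(hosts, ghz_per_host, tolerated_failures=1, vsan_overhead_ghz=4):
    """CPU left for workloads after HA failover reserve and per-host vSAN overhead."""
    surviving_hosts = hosts - tolerated_failures
    return surviving_hosts * (ghz_per_host - vsan_overhead_ghz)

# Illustrative cluster: 6 hosts of 64 GHz each, N+1 HA, 4 GHz/host for vSAN:
print(usable_cpu_ghz(6, 64))  # 300
```

Ignoring these reservations would overstate the cluster at 384 GHz, so HA and vSAN overheads belong in the usable-capacity calculation from the start. (NIOC similarly reserves network bandwidth shares rather than CPU, so it constrains throughput available to workload traffic.)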
An architect is designing a new VMware Cloud Foundation (VCF)-based Private Cloud
solution. During the requirements gathering workshop, a stakeholder from the network
team stated that:
A. Availability
B. Performance
C. Recoverability
D. Manageability
Explanation: In VMware Cloud Foundation (VCF) 5.2, design qualities (non-functional requirements) categorize how the system operates. The network team’s requirements focus on redundancy and routing diversity, which the architect must classify. Let’s evaluate:
Option A: Availability
This is correct. Availability ensures the solution remains operational and accessible. “N+N
redundancy” (e.g., dual active components where N failures are tolerated by N spares) for
physical networking components eliminates single points of failure, ensuring continuous
network uptime. “Diversely routed inter-datacenter links” prevents outages from a single
path failure, enhancing availability across sites. In VCF, these align with high-availability
network design (e.g., NSX Edge uplink redundancy), making availability the proper
classification.
Option B: Performance
Performance addresses speed, throughput, or latency (e.g., “10 Gbps links”). Redundancy
and diverse routing might indirectly support performance by avoiding bottlenecks, but the
primary intent is uptime, not speed. This doesn’t fit the stated requirements’ focus.
Option C: Recoverability
Recoverability focuses on restoring service after a failure (e.g., backups, failover time).
N+N redundancy and diverse routing prevent downtime rather than recover from it. While
related, the requirements emphasize proactive uptime (availability) over post-failure
recovery, making this incorrect.
Option D: Manageability
Manageability concerns ease of administration (e.g., monitoring, configuration).
Redundancy and routing diversity are infrastructure design choices, not management
processes. This quality doesn’t apply.
Conclusion: The architect should classify the requirement as Availability (A). It ensures
the VCF solution’s network remains operational, aligning with VCF 5.2’s focus on resilient
design.
As part of the requirement gathering phase, an architect identified the following
requirement for the newly deployed SDDC environment:
Reduce the network latency between two application virtual machines.
To meet the application owner's goal, which design decision should be included in the
design?
A. Configure a Storage DRS rule to keep the application virtual machines on the same datastore.
B. Configure a DRS rule to keep the application virtual machines on the same ESXi host.
C. Configure a DRS rule to separate the application virtual machines to different ESXi hosts.
D. Configure a Storage DRS rule to keep the application virtual machines on different datastores.
Explanation: The requirement is to reduce network latency between two application virtual
machines (VMs) in a VMware Cloud Foundation (VCF) 5.2 SDDC environment. Network
latency is influenced by the physical distance and network hops between VMs. In a
vSphere environment (core to VCF), VMs on the same ESXi host communicate via the
host’s virtual switch (vSwitch or vDS), avoiding physical network traversal, which minimizes
latency. Let’s evaluate each option:
Option A: Configure a Storage DRS rule to keep the application virtual machines on the same datastore
Storage DRS manages datastore usage and VM placement based on storage I/O and capacity, not network latency. The vSphere Resource Management Guide notes that Storage DRS rules (e.g., VM affinity) affect storage location, not host placement.
Two VMs on the same datastore could still reside on different hosts, requiring network
communication over physical links (e.g., 10GbE), which doesn’t inherently reduce latency.
Option B: Configure a DRS rule to keep the application virtual machines on the same ESXi host
DRS (Distributed Resource Scheduler) controls VM placement across hosts for
load balancing and can enforce affinity rules. A “keep together” affinity rule ensures the two
VMs run on the same ESXi host, where communication occurs via the host’s internal
vSwitch, bypassing physical network latency (typically <1μs vs. milliseconds over a LAN).
The VCF 5.2 Architectural Guide and vSphere Resource Management Guide recommend this
for latency-sensitive applications, directly meeting the requirement.
Option C: Configure a DRS rule to separate the application virtual machines to different ESXi hosts
A DRS anti-affinity rule forces VMs onto different hosts, increasing
network latency as traffic must traverse the physical network (e.g., switches, routers). This
contradicts the goal of reducing latency, making it unsuitable.
Option D: Configure a Storage DRS rule to keep the application virtual machines on different datastores
A Storage DRS anti-affinity rule separates VMs across datastores, but
this affects storage placement, not host location. VMs on different datastores could still be
on different hosts, increasing network latency over physical links. This doesn’t address the
requirement, per the vSphere Resource Management Guide.
Conclusion: Option B is the correct design decision. A DRS affinity rule ensures the VMs
share the same host, minimizing network latency by leveraging intra-host communication,
aligning with VCF 5.2 best practices for latency-sensitive workloads.
An architect is documenting the design for a new VMware Cloud Foundation-based
solution. Following the requirements gathering workshops held with customer stakeholders,
the architect has made the following assumptions:
The customer will provide sufficient licensing for the scale of the new solution.
The existing storage array that is to be used for the user workloads has sufficient capacity
to meet the demands of the new solution.
The data center offers sufficient power, cooling, and rack space for the physical hosts
required by the new solution.
The physical network infrastructure within the data center will not exceed the maximum
latency requirements of the new solution.
Which two risks must the architect include as a part of the design document because of
these assumptions? (Choose two.)
A. The physical network infrastructure may not provide sufficient bandwidth to support the user workloads.
B. The customer may not have sufficient data center power, cooling, and physical rack space available.
C. The customer may not have licensing that covers all of the physical cores the design requires.
D. The assumptions may not be approved by a majority of the customer stakeholders before the solution is deployed.
Explanation: In VMware Cloud Foundation (VCF) 5.2, assumptions are statements taken
as true for design purposes, but they introduce risks if unverified. The architect must
identify risks—potential issues that could impact the solution’s success—stemming from
these assumptions and include them in the design document. Let’s evaluate each option
against the assumptions:
Option A: The physical network infrastructure may not provide sufficient bandwidth to support the user workloads
This is correct. The assumption states that the physical
network infrastructure “will not exceed the maximum latency requirements,” but it doesn’t
address bandwidth. In VCF, user workloads (e.g., in VI Workload Domains) rely on network
bandwidth for performance (e.g., vSAN traffic, VM communication). Insufficient bandwidth
could degrade workload performance or scalability, despite meeting latency requirements.
This is a direct risk tied to an unaddressed aspect of the network assumption, making it a
necessary inclusion.
Option B: The customer may not have sufficient data center power, cooling, and physical rack space available
This is incorrect as a mandatory risk in this context. The
assumption explicitly states that “the data center offers sufficient power, cooling, and rack
space” for the required hosts. While it’s possible this could be untrue, the risk is already
implicitly covered by questioning the assumption’s validity. Including this risk would be
redundant unless specific evidence (e.g., unverified data center specs) suggests doubt,
which isn’t provided. Other risks (A, C) are more immediate and distinct.
Option C: The customer may not have licensing that covers all of the physical cores the design requires
This is correct. The assumption states that “the customer will provide
sufficient licensing for the scale of the new solution.” In VCF 5.2, licensing (e.g., vSphere,
vSAN, NSX) is core-based, and misjudging the number of physical cores (e.g., due to host
specs or scale) could lead to insufficient licenses. This risk directly challenges the
assumption’s accuracy—if the customer’s licensing doesn’t match the design’s core count,
deployment could stall or incur unplanned costs. It’s a critical risk to document.
Option D: The assumptions may not be approved by a majority of the customer stakeholders before the solution is deployed
This is incorrect. While stakeholder
approval is important, this is a process-related risk, not a technical or operational risk tied
to the assumptions’ content. The VMware design methodology focuses risks on solution
impact (e.g., performance, capacity), not procedural uncertainties like consensus. This risk
is too vague and outside the scope of the assumptions’ direct implications.
Conclusion: The two risks the architect must include are:
A: Insufficient network bandwidth (not covered by the latency assumption).
C: Inadequate licensing for physical cores (directly tied to the licensing assumption). These
align with VCF 5.2 design principles, ensuring potential gaps in network performance and
licensing are flagged for validation or mitigation.
An architect is designing a VMware Cloud Foundation (VCF)-based private cloud solution
for a customer. During the requirements gathering workshop, the customer provided the
following requirement:
All SSL certificates should be provided by the company’s certificate authority.
When creating the design, how should the architect classify this stated requirement?
A. Recoverability
B. Security
C. Availability
D. Manageability
Explanation: In VMware Cloud Foundation (VCF) 5.2, requirements are classified using
design qualities as defined in VMware’s architectural methodology: Availability,
Manageability, Performance, Recoverability, and Security. These qualities help architects
align customer needs with technical solutions. The requirement specifies that “all SSL certificates should be provided by the company’s certificate authority,” which involves
encryption, identity verification, and trust management. Let’s classify it:
Option A: Recoverability
Recoverability focuses on restoring services after failures, such
as disaster recovery (DR) or failover (e.g., RTO, RPO). SSL certificates relate to securing
communication, not recovery processes. The VMware Cloud Foundation 5.2 Architectural Guide defines Recoverability as pertaining to system restoration, not certificate management, making this incorrect.
Option B: Security
Security encompasses protecting the system from threats, ensuring
data confidentiality, integrity, and authenticity. Requiring SSL certificates from the
company’s certificate authority (CA) directly relates to securing VCF components (e.g.,
vCenter, NSX, SDDC Manager) by enforcing trusted, organization-specific encryption and
authentication. The VMware Cloud Foundation 5.2 Design Guide classifies certificate usage
under Security, as it mitigates risks like man-in-the-middle attacks and aligns with
compliance standards (e.g., PCI-DSS, if applicable). This is the correct classification.
Option C: Availability
Availability ensures system uptime and fault tolerance (e.g., HA,
redundancy). While SSL certificates enable secure access, they don’t directly influence
uptime or failover. The VCF 5.2 Architectural Guide ties Availability to resilience
mechanisms (e.g., clustered deployments), not security controls like certificates.
Option D: Manageability
Manageability focuses on operational ease (e.g., monitoring,
automation). Using a company CA involves certificate deployment and renewal, which
could relate to management processes. However, the primary intent is securing
communication, not simplifying administration. VMware documentation distinguishes
certificate-related requirements as Security, not Manageability, unless explicitly about
operational workflows.
Conclusion: The requirement is best classified as Security (B), as it addresses the secure
configuration of SSL certificates, a core security concern in VCF 5.2.
A customer is designing a new VMware Cloud Foundation stretched cluster using L2 non-uniform connectivity. In a past incident, an attacker was able to inject false routes into the customer’s dynamic global routing table. What design decision can be taken to prevent this when configuring the Tier-0 gateway?
A. OSPF MD5 authentication
B. Gateway Firewall with ECMP
C. Implicit deny for any traffic
D. BGP peer password
Explanation: The scenario involves designing a VMware Cloud Foundation (VCF)
stretched cluster with L2 non-uniform connectivity, leveraging NSX (a core component of
VCF) for networking. The customer’s past incident, where an attacker injected false routes
into their dynamic global routing table, indicates a security vulnerability in the routing
protocol. The Tier-0 gateway in NSX handles external connectivity and routing, typically
using dynamic routing protocols like BGP (Border Gateway Protocol) or OSPF (Open
Shortest Path First) to exchange routes with external routers. The design decision must
prevent unauthorized route injection, ensuring the integrity of the routing table.
Context Analysis:
Stretched Cluster with L2 Non-Uniform Connectivity: In VCF 5.2, a stretched cluster
spans multiple availability zones (AZs) with L2 connectivity for workload VMs, but the Tier-0
gateway uplinks may use L3 routing to external networks. “Non-uniform” suggests varying
latency or bandwidth between sites, but this does not directly impact the routing security
concern.
False route injection: This implies the attacker exploited a lack of authentication or
filtering in the routing protocol, allowing unauthorized route advertisements to be accepted
into the Tier-0 gateway’s routing table.
Tier-0 gateway: In NSX, the Tier-0 gateway is the edge component that peers with
external routers (e.g., top-of-rack switches or upstream routers) and supports dynamic
routing protocols like BGP and OSPF.
Routing Security in NSX:
NSX Tier-0 gateways commonly use BGP for external connectivity due to its scalability and
flexibility in multi-site deployments like stretched clusters. OSPF is also supported but is
less common for external peering in VCF designs.
Route injection attacks occur when an unauthorized device advertises routes without
validation, often due to missing authentication mechanisms.
Option Analysis:
A. OSPF MD5 authentication: OSPF supports MD5 authentication to secure routing
updates between neighbors. Each OSPF message is hashed with a shared secret key,
ensuring only trusted peers can exchange routes. This would prevent false route injection if
OSPF were the protocol in use. However, in VCF stretched cluster designs, BGP is the
default and recommended protocol for Tier-0 gateway uplinks to external networks, as per
the VMware Cloud Foundation Design Guide. OSPF is typically used for internal NSX
routing (e.g., between Tier-0 and Tier-1 gateways) rather than external peering. Without
evidence that OSPF is used here, and given BGP’s prevalence in such scenarios, this
option is less applicable.
B. Gateway Firewall with ECMP: The Gateway Firewall on the Tier-0 gateway filters
traffic, not routes. Equal-Cost Multi-Path (ECMP) enhances bandwidth by load-balancing
across multiple uplinks but does not inherently secure the routing table. While a firewall
could block traffic from malicious sources, it cannot prevent the Tier-0 gateway from
accepting false route advertisements in the control plane (routing protocol). Route injection
occurs at the routing protocol level, not the data plane, so this option does not address
the root issue. The NSX Administration Guide confirms that firewall rules apply to packet
forwarding, not route validation, making this incorrect.
C. Implicit deny for any traffic: An implicit deny rule in the Gateway Firewall blocks all
traffic not explicitly allowed, enhancing security for data plane traffic. However, this does
not protect the control plane—specifically, the dynamic routing protocol—from accepting
false routes. Route injection happens before traffic filtering, as the routing table determines
where packets are sent. The VMware Cloud Foundation 5.2 documentation emphasizes
that routing security requires protocol-specific measures, not just firewall rules. This option
fails to prevent the described attack and is incorrect.
D. BGP peer password: BGP supports authentication via a peer password (MD5-based in
NSX), where each BGP session between the Tier-0 gateway and its external peers (e.g.,
physical routers) uses a shared secret. This ensures that only authenticated peers can
advertise routes, preventing unauthorized devices from injecting false routes into the
dynamic routing table. In VCF 5.2 stretched cluster deployments, BGP is the standard
protocol for Tier-0 uplinks, as it supports multi-site connectivity and ECMP for redundancy.
The NSX-T Data Center Design Guide and VCF documentation recommend BGP
authentication to secure routing in such environments, directly addressing the customer’s
past incident. This is the most relevant and effective design decision.
Conclusion: The architect should choose BGP peer password (D) as the design decision
for the Tier-0 gateway. This secures the BGP routing protocol—widely used in VCF
stretched clusters—against false route injection by requiring authentication, aligning with
the scenario’s security requirements and NSX best practices.
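BGP session authentication uses a shared secret on both ends of the peering: on the NSX side it is set in the Tier-0 gateway’s BGP neighbor configuration, and the physical peer must carry the matching password. As a hedged illustration of the peer side, an FRR-style router configuration might look like the following (AS numbers, addresses, and the password are illustrative assumptions):

```
router bgp 65001
 neighbor 192.0.2.10 remote-as 65002
 ! Shared secret must match the password set on the NSX Tier-0 BGP neighbor
 neighbor 192.0.2.10 password S3cr3tKey
```

With the password in place, BGP sessions from peers that do not present the matching secret never establish, so their route advertisements are never accepted into the routing table.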
An architect is designing a VMware Cloud Foundation (VCF)-based Private Cloud solution.
During the requirements gathering workshop with the customer stakeholders, the following
information was noted:
In the event of a site-level disaster, the solution must enable all production workloads to be
restarted in the secondary site.
In the event of a host failure, workloads must be restarted in priority order.
When creating the design documentation, which design quality should be used to classify
the stated requirements?
A. Availability
B. Manageability
C. Performance
D. Recoverability
Explanation: VMware’s design methodology (per VCF 5.2) uses design qualities to
categorize requirements based on their focus. The qualities include Availability,
Manageability, Performance, Recoverability, and Security. Let’s classify the two
requirements:
Requirement 1: In the event of a site-level disaster, the solution must enable all production workloads to be restarted in the secondary site
This describes the ability to recover workloads after a site failure, focusing on restoring operations in a secondary location. The VCF 5.2 Architectural Guide aligns this with Recoverability, which covers disaster recovery (DR) and the restoration of services post-failure.
Requirement 2: In the event of a host failure, workloads must be restarted in priority order
This involves restarting workloads after a host failure (e.g., via vSphere HA) with
prioritization, emphasizing recovery processes. While HA is often linked to Availability, the
focus here on “restarting in priority order” shifts it to Recoverability, as it addresses how the
system recovers from a failure, per VMware’s design quality definitions.
Option A: Availability
Availability ensures system uptime and fault tolerance (e.g., HA
preventing downtime). While host failure recovery involves HA, the emphasis on
“restarting” and site-level DR points more to Recoverability than ongoing availability.
Option B: Manageability
Manageability focuses on ease of administration (e.g.,
monitoring, automation). Neither requirement relates to operational management but rather
to failure recovery processes.
Option C: Performance
Performance addresses speed and efficiency (e.g., latency,
throughput). These requirements don’t specify performance metrics, focusing instead on
recovery capabilities.
Option D: Recoverability
Recoverability ensures the system can restore services after
failures, encompassing both site-level DR (secondary site restart) and host-level recovery
(prioritized restarts). The VCF 5.2 Design Guide classifies DR and failover recovery under
Recoverability, making it the best fit.
Conclusion: Both requirements align with Recoverability, as they focus on restoring
workloads after failures (site-level and host-level), per VMware’s design quality framework.