Priya Jaiswal has built a reputation for reading inflection points before they hit the tape. With a background spanning market analysis, portfolio construction, and cross-border capital flows, she views AI infrastructure not just as a product cycle, but as a balance-sheet and supply-chain event that reshapes multiples. In this conversation with Sarah Vainstein, she unpacks why stronger AI server demand is re-rating expectations, how backlog shapes allocations, where pricing power is real versus rhetorical, and what margin defense looks like when component costs spike. Themes include demand visibility through deal funnels, allocation trade-offs across customer tiers, cost-mitigation plays amid rising DRAM/NAND, Nvidia-led qualification at scale, services-driven P&L leverage, and how governance—now under a permanent CFO—tightens decision rights for big capacity bets.
You raised the fiscal 2026 AI server revenue target to $25B from $20B—what changed, and what concrete demand signals or deal funnels support that jump? Walk me through the weekly metrics you track, and share one customer story that confirmed the upside.
The jump to $25B reflects proof, not hope: an $18.4B AI server backlog by quarter-end, fueled by $12.3B in new orders and $5.6B shipped, shows orders maturing into revenue at scale. Weekly, I look at funnel velocity (RFPs converted to binding POs, GPU allocation confirmations, and rack-ready slots booked against site power windows). I also track cancellation rates and configuration churn; when those stay low while multi-quarter delivery windows fill, our confidence rises. A telling moment came when a hyperscale-adjacent customer (think xAI/CoreWeave profile) advanced their deployment schedule after a successful pilot; the order moved from staged to front-loaded, and the team secured incremental capacity with Nvidia accelerators to lock timing. That pull-forward, echoed by public-sector wins like DOE and international demand like G42, justified the higher bar.
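The backlog roll-forward behind those figures is simple arithmetic; a minimal sketch follows, using the dollar amounts quoted above, with the starting backlog implied by the identity rather than reported, and the weekly funnel counts as hypothetical placeholders.

```python
# Backlog roll-forward: ending = starting + new orders - shipments.
# The $18.4B, $12.3B, and $5.6B figures are quoted in the answer above;
# the starting backlog is implied by the identity, not disclosed.
new_orders = 12.3       # $B booked in the quarter
shipped = 5.6           # $B converted to shipments
ending_backlog = 18.4   # $B at quarter-end

implied_starting_backlog = ending_backlog - new_orders + shipped
print(f"Implied starting backlog: ${implied_starting_backlog:.1f}B")  # $11.7B

# Funnel velocity: share of open RFPs converting to binding POs per week.
# These counts are hypothetical placeholders for illustration.
rfps_open, pos_signed = 40, 9
print(f"Weekly RFP-to-PO conversion: {pos_signed / rfps_open:.0%}")
```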
Backlog reached $18.4B with $12.3B in new orders and $5.6B shipped last quarter. How are you prioritizing allocations across hyperscalers, enterprises, and the public sector? Describe the scoring model, key trade-offs, and one case where you reshuffled production to hit a deadline.
Allocations are scored on three axes: strategic durability (multi-year refresh cadence), revenue quality (price integrity and services attach), and supply certainty (site readiness and power). Hyperscalers often score high on scale, but public-sector projects bring reputational and reference value; enterprises add margin mix and stickier services. A real trade-off happened when a public-sector site needed go-live aligned with a fiscal-year close; we re-slotted enterprise shipments by two weeks, redirected qualified racks, and used a weekend install window to hit the deadline without breaking price. The backlog gives cover to reshuffle, but the scoring makes sure we don’t chase short-term volume at the expense of margin or execution risk.
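A sketch of how a three-axis scoring model like this might be wired up; the axis names come from the answer, but the weights, scales, and customer profiles are illustrative assumptions, not a disclosed rubric.

```python
# Illustrative allocation scoring on the three axes named above.
# Weights and 0-10 scores are assumptions for illustration only.
WEIGHTS = {
    "strategic_durability": 0.40,  # multi-year refresh cadence
    "revenue_quality": 0.35,       # price integrity, services attach
    "supply_certainty": 0.25,      # site readiness and power
}

def allocation_score(scores):
    """Weighted sum across the three axes."""
    return sum(WEIGHTS[axis] * scores[axis] for axis in WEIGHTS)

# Hypothetical customer profiles, not real account data.
candidates = {
    "hyperscaler":   {"strategic_durability": 9, "revenue_quality": 6, "supply_certainty": 8},
    "public_sector": {"strategic_durability": 7, "revenue_quality": 7, "supply_certainty": 6},
    "enterprise":    {"strategic_durability": 6, "revenue_quality": 9, "supply_certainty": 7},
}
for name, s in sorted(candidates.items(), key=lambda kv: -allocation_score(kv[1])):
    print(f"{name}: {allocation_score(s):.2f}")
```

Note that under these illustrative weights the hyperscaler still ranks first on scale, but the enterprise profile closes most of the gap on revenue quality, which is exactly the trade-off described above.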
Your Q4 outlook is $31–$32B revenue and $3.50 EPS, well above consensus. What specific drivers bridge from Q3’s $27.01B and $2.59? Break down volume, price, and mix, and walk step by step from pipeline to shipments to revenue recognition.
The bridge leans on volume scaling and mix uplift from AI-optimized servers with Nvidia accelerators, supported by backlog conversion. Step one is pipeline hygiene: confirm GPU allocations, memory availability, and site readiness. Step two is systems integration and rack-level burn-in to smooth revenue recognition; acceptance criteria are locked upfront to prevent deferrals. Step three is price and mix: price contributes where supply is tight, mix improves as higher-spec nodes land, and services and support packages enhance the margin profile. From $27.01B and $2.59, the incremental top line comes from backlog drawdown and higher attach, while opex discipline and richer mix lift EPS toward $3.50.
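To make the bridge concrete, here is a hypothetical decomposition from Q3's $27.01B to the midpoint of the Q4 range; only the endpoints come from the outlook, and the split across volume, price, and mix is an assumed illustration.

```python
# Revenue bridge from Q3 actual to the Q4 guidance midpoint.
# Endpoints are from the interview; the volume/price/mix split
# below is a hypothetical decomposition for illustration only.
q3_revenue = 27.01                # $B, reported
q4_midpoint = (31.0 + 32.0) / 2   # $B, midpoint of the guided range

bridge = {
    "volume (backlog drawdown)": 3.19,   # assumed
    "mix (higher-spec AI nodes)": 0.90,  # assumed
    "price (constrained SKUs)":  0.40,   # assumed
}
assert abs(q3_revenue + sum(bridge.values()) - q4_midpoint) < 1e-6

running = q3_revenue
print(f"Q3 base: ${running:.2f}B")
for driver, delta in bridge.items():
    running += delta
    print(f"+ {driver}: +${delta:.2f}B -> ${running:.2f}B")
```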
With margin pressure from Super Micro and higher build costs, how are you protecting gross margin on AI servers? Detail bill-of-materials levers, pricing tactics, and services attach rates, and give one case where you improved per-unit economics quarter over quarter.
On the BOM, we qualify multiple memory and storage suppliers, standardize chassis where possible, and optimize power delivery to reduce over-spec components. Pricing is value-based: we defend list on constrained SKUs and package discounts only when tied to multi-quarter commitments. Services—deployment, managed support, and financing—buffer margin and smooth revenue. A recent example: by reusing a validated baseboard and updating firmware instead of swapping vendors midstream, we cut requalification time and negotiated modest price relief from an incumbent supplier, improving per-unit economics within the quarter.
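A back-of-envelope sketch of the per-unit improvement described (a reused baseboard cuts requalification cost, supplier relief trims the BOM); every dollar figure is a hypothetical placeholder, since unit costs are not disclosed.

```python
# Per-unit gross margin before and after the moves described above.
# All dollar figures are hypothetical placeholders.
def unit_margin(asp, bom, integration, requal):
    """Gross margin percentage for one rack-level unit."""
    return (asp - bom - integration - requal) / asp

before = unit_margin(asp=400_000, bom=330_000, integration=22_000, requal=8_000)
# After: reused validated baseboard trims requal; supplier relief trims BOM.
after = unit_margin(asp=400_000, bom=324_000, integration=22_000, requal=3_000)
print(f"Margin before: {before:.1%}, after: {after:.1%}")  # 10.0% -> 12.8%
```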
DRAM and NAND prices are rising. What cost curves are you modeling for 2025, and how do those flow into list prices and contracts? Walk through a recent supplier negotiation, the hedges you used, and the thresholds that trigger customer repricing.
We model a higher baseline for DRAM and NAND into 2025, consistent with tight AI demand and broader data center builds. That flows into tiered list pricing: near-term constrained SKUs see earlier adjustments, while long-term contracts rely on index-linked clauses. In a recent negotiation, we secured allocation commitments in exchange for staggered price escalators and volume flexibility; we hedged with multi-sourcing and safety stock for critical DIMMs. Repricing triggers when component indices breach preset bands or when lead times stretch beyond agreed SLAs, at which point we present a menu of revised configurations to hold TCO.
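The trigger logic described reduces to two checks; a sketch follows, with the band width, index values, and SLA days as illustrative assumptions.

```python
# Repricing trigger: fire when a component index breaches its preset
# band, or when quoted lead times stretch past the agreed SLA.
# Band width, index values, and SLA days are illustrative assumptions.
def repricing_triggered(index_now, index_base, band_pct, lead_time_days, sla_days):
    index_move = (index_now - index_base) / index_base
    return abs(index_move) > band_pct or lead_time_days > sla_days

# Hypothetical DRAM scenario: index up 18% against a 12% band.
if repricing_triggered(index_now=118.0, index_base=100.0,
                       band_pct=0.12, lead_time_days=70, sla_days=84):
    print("Present revised configuration menu to hold TCO")
```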
Jeff Clarke called the cost moves unprecedented and said you will mitigate. What are the top three mitigation plays in order of impact? Describe the timeline, the operational owners, and a success metric you review in the weekly S&OP.
First, configuration standardization and reuse—owned by engineering—shrinks validation cycles; success is measured by time-to-qualify reduction. Second, strategic sourcing—procurement leads—locks allocations and moderates pricing; we track secured weeks of coverage on DRAM/NAND. Third, services attach—sales and delivery—raises gross margin per rack; we review attach rate and deferred revenue growth. These run in parallel over the quarter, with S&OP focus on backlog conversion velocity and on-time-in-full.
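The "secured weeks of coverage" metric tracked for DRAM/NAND is simple division; the committed quantities and weekly burn rates below are hypothetical.

```python
# Secured weeks of coverage = committed supply / weekly consumption.
# The metric is the one named above; quantities are hypothetical.
def weeks_of_coverage(committed_units, weekly_burn):
    return committed_units / weekly_burn

dram = weeks_of_coverage(committed_units=540_000, weekly_burn=45_000)
nand = weeks_of_coverage(committed_units=300_000, weekly_burn=30_000)
print(f"DRAM: {dram:.0f} weeks, NAND: {nand:.0f} weeks")  # 12 and 10
```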
Your AI-optimized servers ship with Nvidia accelerators. How are you managing GPU allocation and firmware validation at scale? Outline the qualification process step by step, include typical lead times and failure rates, and share one lesson that sped up rack-level deliveries.
The flow is allocation confirmation, board-level qualification, platform integration, firmware validation with Nvidia, then rack burn-in and site acceptance. Lead times depend on accelerator supply and memory availability; we stage firmware validation early to de-risk. Hardware failure rates are low but non-zero, so we keep hot spares and parallelize burn-in to avoid serial bottlenecks. A key lesson: pre-building standardized racks and late-binding specific firmware images cut days from deliveries without compromising stability.
You secured DOE and G42 deals and count xAI and CoreWeave as customers. What distinguishes these deployments? Compare architecture choices, networking topologies, and cooling, and tell a story about overcoming a site constraint to hit go-live.
DOE leans toward research-grade flexibility with compartmentalized clusters; G42 prioritizes scale and regional compliance. xAI and CoreWeave push for rapid expansion and predictable cost envelopes. Networking shifts from leaf-spine for modular growth to higher-radix fabrics for ultra-dense clusters; cooling ranges from advanced air to liquid-assisted where power density demands. At one site with limited chilled water capacity, we rebalanced racks, adjusted airflow containment, and scheduled phased activation; the team hit go-live by sequencing workloads while additional cooling came online.
With demand outpacing supply, you may gain pricing power. Where are you actually taking price, and where are you holding? Provide examples by configuration, include realized ASP changes, and explain how you protect long-term relationships while meeting near-term targets.
We take price on configurations with constrained accelerators and high-density memory, where value is clear and alternatives are limited. We hold on more standard nodes, or where long-term partnerships such as public-sector commitments matter. ASPs move up where services are bundled and deployment windows are tight; we protect relationships by offering roadmap visibility and capacity reservations in lieu of aggressive discounting. The goal is durable revenue quality, not one-quarter wins.
With David Kennedy now permanent CFO, what changes in cadence, KPIs, or capital allocation should stakeholders expect? Share the first 90-day agenda, the decision gates for big AI capacity bets, and one metric he challenged the team to improve.
Expect faster, crisper cadences: a tighter S&OP link to cash conversion and clearer hurdle rates for capacity. The first 90 days focus on backlog monetization, component coverage, and services growth. Decision gates for AI capacity include committed backlog, secured supply, and margin thresholds. He challenged the team to improve order-to-revenue cycle time, which directly supports the raised annual revenue outlook of roughly $111.2–$112.2B and the adjusted EPS target of $9.92.
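Expressed as a check, the three gates look like this; only the gate categories are from the answer, and the threshold values are assumptions.

```python
# Decision gates for a large AI capacity bet: committed backlog,
# secured supply, and margin thresholds. Only the categories are
# disclosed above; the threshold values are illustrative assumptions.
GATES = {
    "committed_backlog_ratio": 0.60,  # share of new capacity pre-sold
    "supply_coverage_weeks": 10,      # secured component coverage
    "gross_margin_floor": 0.18,       # minimum modeled gross margin
}

def capacity_bet_approved(backlog_ratio, coverage_weeks, modeled_margin):
    return (backlog_ratio >= GATES["committed_backlog_ratio"]
            and coverage_weeks >= GATES["supply_coverage_weeks"]
            and modeled_margin >= GATES["gross_margin_floor"])

print(capacity_bet_approved(backlog_ratio=0.72, coverage_weeks=12,
                            modeled_margin=0.21))  # True: clears all gates
```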
You shipped $5.6B of AI servers last quarter. What were average lead times by SKU, and how will you compress them? Walk me through factory throughput, critical path components, and one bottleneck you recently cleared.
Lead times vary by accelerator and memory configuration; the gating factor is often GPU allocation and DRAM availability. We compress timelines by pre-building common subassemblies, expanding factory throughput on final integration, and streamlining acceptance testing. The critical path runs through accelerator receipt, firmware validation, and rack burn-in. A recent bottleneck—late-stage firmware re-spins—was cleared by moving validation earlier and creating a rapid rollback path to keep racks flowing.
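A sketch of the critical-path arithmetic behind that fix: moving firmware validation earlier overlaps it with accelerator receipt, so it drops off the serial chain. Stage durations are hypothetical placeholders.

```python
# Delivery lead time as the serial critical path through final stages.
# Stage durations (days) are hypothetical placeholders.
serial_path = {
    "accelerator_receipt": 14,
    "firmware_validation": 7,
    "final_integration": 5,
    "rack_burn_in": 4,
}
baseline = sum(serial_path.values())

# Moving firmware validation earlier overlaps it with accelerator receipt
# (the fix described above), so only its overhang, if any, stays serial.
overhang = max(0, serial_path["firmware_validation"] - serial_path["accelerator_receipt"])
compressed = baseline - serial_path["firmware_validation"] + overhang
print(f"Baseline: {baseline} days, compressed: {compressed} days")  # 30 -> 23
```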
Third-quarter revenue of $27.01B slightly missed, while adjusted EPS of $2.59 beat. What surprised you most, and what did you fix immediately? Share the metrics that moved, the root cause analysis, and the corrective actions with dates.
The surprise was timing, not demand: revenue slipped on shipment phasing, but margins held thanks to richer mix and services. Root cause traced to site-readiness delays and component timing. Corrective actions included earlier readiness checks, firmer delivery windows, and tighter coordination between procurement and deployment teams. Those changes kicked in immediately post-quarter to support the stronger Q4 outlook of $31–$32B and $3.50 EPS.
As you target $25B in AI server shipments by fiscal 2026, what are the top execution risks ahead? Detail contingency playbooks for supply, logistics, and regulatory hurdles, and cite leading indicators that would make you reset guidance.
Risks are supply constraints, logistics snarls, and shifting regulatory frameworks. Contingencies include multi-sourcing critical components, alternate freight lanes with buffered transit times, and pre-cleared export pathways. Leading indicators we watch: slip in allocation confirmations, rising lead-time volatility, or unexpected approval delays. If those trends persist, guidance gets revisited; discipline beats bravado.
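That "persistent trend" discipline can be made mechanical: an indicator only escalates after breaching its threshold several weeks running. The window length, threshold, and series below are all assumptions.

```python
# Guidance-reset watch: escalate only if an indicator breaches its
# threshold for N consecutive weeks. All values here are assumed.
def persistent_breach(weekly_values, threshold, weeks=3):
    """True if the last `weeks` observations all exceed the threshold."""
    recent = weekly_values[-weeks:]
    return len(recent) == weeks and all(v > threshold for v in recent)

# Hypothetical lead-time volatility series (std dev of quoted days).
lead_time_volatility = [2.1, 2.4, 3.8, 4.2, 4.6]
if persistent_breach(lead_time_volatility, threshold=3.5):
    print("Escalate: revisit guidance assumptions")
```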
How are services shaping the AI server P&L beyond hardware? Break down attach rates for deployment, managed services, and financing, and give a customer example where services pulled through extra hardware or expanded the footprint.
Services add resilience: deployment accelerates time-to-value, managed services deepen stickiness, and financing smooths budgets. While attach specifics vary by deal, the trajectory is up, and the margin contribution is meaningful. We saw a customer expand footprint after a managed services pilot delivered faster model iteration; they added nodes and extended support to a second site. Services turn a single rack into a roadmap.
In a tight market, how do you choose between booking new marquee logos and deepening existing accounts like xAI and CoreWeave? Describe the account scoring, deal review cadence, and a case where you traded short-term revenue for long-term capacity or margin.
We score accounts on growth runway, margin integrity, services pull-through, and reference value. Deal reviews run weekly, with executive attention on trade-offs. In one case, we deferred a smaller, near-term logo win to preserve capacity for an existing customer’s expansion that maintained pricing and bundled services; that decision favored durable economics over headlines. The backlog strength gives us the leverage to choose wisely.
Do you have any advice for our readers?
Treat AI infrastructure like a portfolio: diversify supply, price your risk, and invest where you have operating leverage. Lock flexible terms when costs are rising and favor configurations you can actually receive, not just spec. Build services muscle—it’s the ballast when components get choppy. And above all, align deployments with real workloads; the best hedge against volatility is delivering value the day the racks go live.
