True3D Labs

AI models for 3D video creation and playback

Research Engineer - Graphics

$150K - $250K
0.10% - 4.00% equity
New York, NY, US
Job type
Full-time
Role
Engineering, Machine learning
Experience
3+ years
Visa
US citizen/visa only
Daniel Habib
Founder

About the role

Location: New York City Office HQ

Employment Type: Full-time

Department: Graphics Research

Overview
We are hiring a principal level Research Engineer with deep strength in computer graphics, rendering, and GPU systems. You will bridge frontier graphics research with production engines and ship rendering technology used in real products. The work spans exploration, rapid prototyping, rigorous visual evaluation, and dependable production deployment. Expect to push the limits of image quality, spatial intelligence, and interactive performance while keeping systems robust, scalable, and cost efficient.

Role
You will partner with research, engine, and product teams to design, build, and operate high performance graphics systems. You will define rendering architecture end to end, set technical standards, mentor others, and raise the bar for visual fidelity, frame time, code quality, and reproducibility.

Key responsibilities

• Architect real time rendering pipelines across rasterization, ray tracing, neural rendering, and volumetric or voxel based representations
• Design and implement scene representations including triangle meshes, voxel brickmaps, surfels or splats, signed distance fields, and implicit neural fields
• Develop high performance shaders and kernels in HLSL, GLSL, WGSL, CUDA, OptiX, Metal Shading Language, and Triton when appropriate
• Implement advanced techniques such as mesh shading, work graphs, bindless resources, descriptor heaps, asynchronous compute, sparse residency, transient memory allocators, tiled and clustered lighting, and GPU driven pipelines
• Build state of the art reconstruction and quality systems including temporal reprojection, spatiotemporal denoising, super resolution, upsampling, and reservoir resampling methods such as ReSTIR for direct and global illumination
• Own profiling and optimization using Nsight, PIX, RenderDoc, GPU counters, flame graphs, and hardware specific tools to reduce divergence, improve occupancy, and hit strict frame budgets
• Integrate rendering with content pipelines and engines including Unreal, Unity, Blender, and WebGPU runtimes and deliver production ready SDKs and services
• Build capture, playback, and evaluation infrastructure with strong observability, deterministic replays, and golden image tests
• Translate research insights into reliable production components that meet latency and throughput targets for interactive and streaming scenarios
• Share expertise through design reviews, mentoring, documentation, and reproducible research artifacts

Minimum qualifications
• PhD in Computer Graphics, Computer Science, or related field, or equivalent research track record with production impact
• Seven or more years building real time rendering or graphics systems, including significant time in fast paced or startup settings
• Strong publication record in top venues such as SIGGRAPH, TOG, HPG, EGSR, or equivalent impactful artifacts that are widely used in production
• Proven experience shipping high performance rendering technology in engines or products with strict frame budgets for desktop, mobile, or XR
• Mastery of C++20 and GPU programming with deep understanding of memory hierarchies, synchronization, explicit graphics APIs such as DirectX 12, Vulkan, or Metal, and modern shader toolchains
• Demonstrated ability to take ideas from paper to production with measurable wins in image quality and frame time
• Strong systems skills including profiling, performance tuning, reliability engineering, and cost awareness across CPU, GPU, and network boundaries
• Excellent communication and the ability to work across research, engine, and product teams

Preferred qualifications
• Contributions that are widely used in the community such as open source renderers, libraries, datasets, or benchmarks with visible adoption
• Experience in neural and differentiable rendering, 3D reconstruction, volumetric video, SLAM, geometric deep learning, or simulation
• Experience building and operating large scale rendering or training jobs on Kubernetes, Slurm, or Ray across public cloud environments and modern GPU hardware
• Experience with WebGPU and high performance graphics on the web
• Experience with compiler or IR work such as SPIR-V, DXIL, PTX, graph capture, or custom scheduling and code generation for GPUs
• Track record of mentoring teams and setting best practices for rendering quality, performance, testing, and reproducibility
• Patents or awards that recognize technical contributions

Nice to have
• Shipped interactive graphics or 3D systems with strict real time constraints for VR or AR including foveated rendering and eye tracking integration
• Experience building remote or cloud rendering with hardware encoders, low latency transport, and content adaptive streaming
• Prior leadership in cross functional initiatives spanning content, data, infrastructure, and product

How to apply

Please include a CV, links to publications and code, and a brief summary of two projects that best represent your impact. For each project, include target platforms, frame time budgets and the numbers you actually achieved at 60 fps, 90 fps, or 120 fps, triangle or voxel counts, GPU memory footprint and bandwidth constraints, key algorithms used, and measurable quality metrics. If relevant, include streaming bitrates, end to end latency, and user facing results.
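As a quick reference when quoting those budgets, the per-frame allowance is simply 1000 ms divided by the target refresh rate; a minimal sketch in Python:

```python
# Per-frame time budget (in milliseconds) implied by each target refresh rate.
for fps in (60, 90, 120):
    print(f"{fps} fps -> {1000 / fps:.2f} ms per frame")
# 60 fps -> 16.67 ms, 90 fps -> 11.11 ms, 120 fps -> 8.33 ms
```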

This role is on site in New York City. Relocation support is available.

About the interview

  • Recruiter screen 20 minutes
    Goal: motivation, location in NYC, work authorization, compensation range, start date.
    Quick bar: publication record or equivalent artifacts, experience training or serving large models, Python and C++ comfort.
  • Hiring manager deep dive 45 minutes
    Goal: end to end ownership.
    Prompts: walk through one system or model you built from idea to prod. Detail data scale, model scale, infra, metrics, cost, failures, and what you would change now.
    Signals: clarity, decision quality, tradeoffs, actual impact.
  • Portfolio and publications review 45 minutes
    Prework: panel reads two of the candidate’s papers or equivalent artifacts.
    In session: candidate defends novelty, ablations, limits, reproducibility, and lessons.
    Signals: research rigor, originality, evaluation hygiene, ability to ship.
  • Live coding 70 minutes
    Format: one practical problem, candidate chooses Python or C++.
    Options to pick from (illustrative sketches of both options appear after the interview outline)
    a. Build a minimal training loop with correct seeding, mixed precision, gradient clipping, and a tiny metric dashboard.
    b. Write a performant kernel style function for a fused op or a small 3D primitive, then profile and optimize memory and throughput.
    Scoring: correctness first, then profiling, then maintainability and tests.
  • Systems design and scalability 60 minutes
    Prompt: design a training and evaluation pipeline for a spatial intelligence model used in production with daily model refreshes. Include data curation, feature stores, versioning, distributed training, eval slices, rollout, guardrails, on call plan, and cost model.
    Signals: architecture under real constraints, reliability, cost awareness, observability.
  • Research talk 30 minutes plus 15 minutes Q and A
    Candidate presents recent work of their own.
    Signals: depth, taste, problem framing, ability to teach complex ideas.
  • Graphics or 3D focus round 45 minutes
    Candidate chooses one
    a. Neural rendering and differentiable rendering fundamentals
    b. 3D reconstruction and spatial perception
    c. GPU performance engineering and memory models
    Signals: genuine expertise in at least one of these areas.
  • Product and collaboration case 40 minutes
    Prompt: a PM and infra engineer describe a product goal with strict latency and cost targets. Candidate proposes a research and delivery plan with milestones, offline to online metrics, success criteria, and de-risking experiments.
    Signals: translation from research to product, prioritization, stakeholder management.
  • Leadership and values 30 minutes
    Topics: raising the bar, mentoring, code and research standards, handling setbacks, authorship ethics, open source posture.
    Signals: team builder, owner mindset, integrity.
  • Writing assessment async 45 to 60 minutes
    Prompt: write a one page plan to improve quality or latency for a spatial model. Include hypothesis, experiment design, metrics, risks, and a rollback plan.
    Signals: crisp written communication, experimental rigor.
  • Reference checks three calls
    Who: former manager, senior peer, cross functional partner.
    Focus: independence, technical bar, collaboration, mentoring, delivery under ambiguity, reliability in production.
  • Decision and calibration
    A single rubric with four outcomes for each dimension: strong hire, hire, no hire, strong no hire.
    Dimensions
    a. Research excellence and publications or equivalent artifacts
    b. Systems and performance engineering
    c. Training and serving large models at scale
    d. Graphics or 3D depth
    e. Product impact and judgment
    f. Communication and leadership

Hire requires strong hire on at least two technical dimensions and no lower than hire on the rest.
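
For candidates preparing for the live coding round, here are two small, purely illustrative sketches of how the options above might be approached in Python. Nothing here is a prescribed template: the function names, the MSE loss, the AdamW optimizer, the block size, and every hyperparameter are placeholder choices.

A minimal training loop for option (a), assuming PyTorch, with seeding, mixed precision, gradient clipping, and a tiny console metric dashboard:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Seed Python, NumPy, and torch (CPU and all CUDA devices) for reproducibility.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

def train(model, loader, steps: int = 100, clip_norm: float = 1.0) -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
    for step, (x, y) in zip(range(steps), loader):
        x, y = x.to(device), y.to(device)
        opt.zero_grad(set_to_none=True)
        # Mixed-precision forward pass and loss.
        with torch.autocast(device_type=device, dtype=torch.float16, enabled=(device == "cuda")):
            loss = torch.nn.functional.mse_loss(model(x), y)
        scaler.scale(loss).backward()
        # Unscale before clipping so the norm threshold applies to true gradient values.
        scaler.unscale_(opt)
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
        scaler.step(opt)
        scaler.update()
        # Tiny metric dashboard: periodic console logging.
        if step % 10 == 0:
            print(f"step {step:4d}  loss {loss.item():.4f}")
```

For option (b), one possible shape is a fused elementwise kernel written in Triton (named in the responsibilities above), which keeps the whole example in Python; the fused add-plus-ReLU below is only a stand-in for whatever fused op or 3D primitive the interviewer picks:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # Fuse the add and the ReLU into a single pass over memory.
    tl.store(out_ptr + offsets, tl.maximum(x + y, 0.0), mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

From there, the profiling half of the exercise would focus on measured memory traffic and achieved throughput rather than on the code itself.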

About True3D Labs

At True3D we are building the next medium after film. Our team works at the edge of graphics, compression, and AI to turn moving pictures into experiences you can stand inside. This is not incremental work. It is a reinvention of how video is captured, streamed, and remembered.

We are a small focused crew with roots at places like Meta and TikTok. Our compass points forward. We build with curiosity, intensity, and craft, and we share our experiments in public at splats.com. If you join us, you will be expected to do the best work of your career and to shape both the research frontier and the systems that bring it to life.

You will work with peers who set a high technical bar and who care about storytelling as much as they care about code. You will ship quickly, push past what is thought possible, and see your work ripple across research, media, and culture.

If you want to help create the medium that will replace flat video and you thrive when the challenge is steep and the impact is lasting, you will feel at home here.

True3D Labs
Founded: 2020
Batch: W21
Team Size: 4
Status: Active
Location: New York
Founders
Daniel Habib
Founder