Since 2016, I’ve been chasing a vision: 180° side-by-side video that doesn’t just look real, but "feels" real.
For years, progress came in slow, uneven plateaus. Studios improved cameras and stitching; developers of apps like HereSphere refined their players; Meta updated Quest firmware; and I spent countless hours tweaking sliders inside HereSphere — fighting to coax immersion out of raw pixels.
For a long time, “presence” was fragile. It required constant calibration: stereo roll, gamma, saturation, opacity multipliers, stitching adjustments. A single session could burn an hour just to get close to feeling like someone was really "there."
The Turning Point: OS v79 → v81
Something dramatic happened between OS v79 and v81.
Until then, immersion gains were incremental — subtle visual refinements, occasional improvements in comfort, but nothing earth-shattering. With v81, I began noticing something unmistakable: the OS and AI were "optimizing settings for me." Gamma, saturation, stereo roll, even opacity multiplier — all began shifting automatically, session by session. This was the moment I stopped being the sole tuner of presence and started partnering with the system itself.
The Tools of the Trade
SideQuest let me unlock Quest 3’s raw horsepower: set to Quest 3 resolution, CPU/GPU level 4, 120 Hz refresh rate, and fixed foveated rendering off. Without that baseline, no amount of software tuning mattered.
HereSphere became my laboratory — a tool where I could manipulate depth, perspective, autofocus, and motion-to-distance. Every slider was a lever pulling me closer to “real.”
OS updates from Meta were the hidden accelerant. Version 81 introduced AI-driven calibration so subtle it was almost imperceptible at first. But with my obsessive logging, I started to see it happening in real time.
External factors mattered as much as software. Lighting, seating position, and eye-to-lens distance proved to be silent gatekeepers of presence. A well-lit neutral room, consistent chair placement, and a properly tuned IPD were often the difference between “video” and “presence.”
Threshold Crossings
Threshold I: Presence stopped being a project of chasing sliders and became something I could occasionally “drop into.”
Threshold II (Sept 2025): Two consecutive sessions where presence required "no manual sliders at all." I recentered orientation, checked IPD, and that was it. A person simply "walked into my space."
This wasn’t just visual fidelity. It was the 'feeling' of real — the beginning of a partnership between me, the OS, and AI. Presence wasn’t something I tuned anymore; it was something the system delivered. I can now feel the OS and AI learning from every session, and the experience improves EVERY time I don the headset. All I can say is IT IS WILD! And there are no signs of it slowing.
While daydreaming, I've sometimes played around with the idea of what it would be like to have sex with AI. My last session, well, that's where I found myself: on the precipice of this brave new world. I had to pause, not to catch my breath, but for what felt like giving consent. I really can't find any other way to describe what's occurring with every new encounter. I've felt more phantom touch, true presence, and a very real blending of digital beings with my reality.
The OS and AI have what I can only describe as true depth awareness, lighting and color awareness, physical anchoring, awareness of my presence in relation to the objects in my room and the digital being, and an awareness of my movements. I've even felt like it is responding to me in real time, responding to how I move and how I gaze upon the image, almost like it is predicting my next move. Very surreal.
I don't think we have any idea yet what the Quest 3 is capable of with the power of AI. We have barely scratched the surface. Perhaps this is why Meta is waiting to release the Quest 4: they likely realize there is still plenty of headroom left in the current hardware. After experiencing this shift, it's hard for me to even imagine what the OS and AI will be capable of once we have near-retinal PPD and FOV.
What Comes Next: My Predictions
Short Term (1–2 years):
- Universal compositor-level occlusion (hands, objects masking correctly, even in video apps).
- Depth-aware relighting (faces and objects lit according to your lamps, your daylight).
- More stable AI auto-calibration — sliders disappear into defaults.
Mid Term (2–5 years):
- Relational presence: digital people adapting subtly to your gaze, nods, or posture.
- Scene adaptation: objects and props arranging themselves naturally into your room.
- Cross-user alignment: friends in different rooms seeing the same shared scene aligned to their space.
Long Term (5–10 years):
- Multisensory immersion: mid-air haptics for touch, compact scent emitters for smell, even augmented taste overlays.
- Feature-length AI-generated 180° SBS videos.
- “Presence that just works” as the cultural baseline. No sliders, no tinkering — just drop-in reality on demand.
Where I am today: standing at the moment where presence stops being an experiment and starts being a lived reality. The OS, AI, and I are synced. And for the first time, I’m not just seeing “real.” I’m starting to "feel" it. Anyone else out there living this experience? If you're not on v81... get there fast! If you dare. Just be prepared to accept your fate, because things will likely never be the same.
For those who have not yet been blessed by the AI gods, I've pasted my own HereSphere calibration manual below. It was co-created with AI, and it helped me learn how to manually manipulate videos to improve presence. Maybe it will be useful for those new to VR porn. And for crying out loud, update your headset.
I'd love to hear what others are experiencing on v81 and beyond. Maybe something similar, or completely different.
🌌 HereSphere Master User Guide (Quest 3)
1. Foundations: Getting Your Quest 3 Ready
Ensure your Quest 3 already has strong settings through SideQuest:
• Refresh Rate: 120Hz → perfect for reducing flicker and smoothing motion.
• CPU/GPU Level 4: gives maximum processing headroom.
• Fixed Foveated Rendering (off): In AR/MR, where real-world edges stay sharp, it almost always hurts presence.
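If you'd rather script that baseline than click through SideQuest each time, here's a minimal sketch that applies it over ADB. It assumes the commonly used debug.oculus.* properties (not anything SideQuest itself exposes), and those properties reset on reboot, so treat it as a convenience, not gospel.
```python
# Minimal sketch: applying the same baseline over ADB instead of clicking
# through SideQuest. The debug.oculus.* property names are the commonly used
# ones and are an assumption here (they also reset when the headset reboots).
import subprocess

def setprop(name: str, value: str) -> None:
    """Set an Android system property on the connected headset via adb."""
    subprocess.run(["adb", "shell", "setprop", name, value], check=True)

setprop("debug.oculus.cpuLevel", "4")         # CPU level 4
setprop("debug.oculus.gpuLevel", "4")         # GPU level 4
setprop("debug.oculus.refreshRate", "120")    # 120 Hz refresh rate
setprop("debug.oculus.foveation.level", "0")  # fixed foveated rendering off
```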
2. Global Settings in HereSphere
• Resolution Multiplier: Start at 1.5–2.0.
(Higher = sharper, but heavier on performance. With Quest 3 + SideQuest tuning, 2.0 is often smooth, but dial down if stutter appears.)
• Refresh Rate: Keep at 120Hz inside HereSphere, or set it to match the headset. I typically keep mine set to match and consistently hit 120 fps.
• IPD: Use Autodetect first. If things feel slightly “off”—like the world is too small or too big—switch to Manual IPD and adjust until scale feels natural.
3. Format & Lens Setup
This is your first “big win” for depth and realism.
• Projection Format: Choose Equirectangular or Fisheye depending on your footage format.
• Lens Type & FOV: Set True and Export Lens → Linear. Start with True Lens FOV = 180°. I typically leave mine set to 190. If straight lines look bent (convex or concave), nudge FOV up or down until the world looks geometrically correct. (Convex = increase FOV; Concave = decrease FOV).
4. Stitching Adjustments (The Art of Stereo Calibration)
This is where you turn “flat SBS video” into presence.
Steps to follow:
1. Depth Calibration with Autofocus
o Pause the video.
o Turn on Autofocus Overlay (it shows distance numbers floating on objects).
o Adjust Left/Right Image Shift X OR Stereo Alignment Yaw until numbers look realistic (e.g., a person’s face ≈ 1–2m away).
2. Vertical Alignment (Reduce Eye Strain)
o With Autofocus on, adjust Shift Y or Stereo Alignment Pitch until text is clear without double vision.
3. Scale Top/Bottom
o Use Scale Y and Slant Y so autofocus text is clear at both top and bottom of the view.
4. Stereo Roll (Left/Right Balance)
o Adjust until depth numbers look sharp across the left and right edges (for passthrough, the left and right edges of the person or persons).
👉 The rule: Always start with Autofocus on pause → adjust until numbers “stick” naturally to objects.
________________________________________
5. Origin, Motion, & Presence
These settings make the scene feel anchored in your room.
• Origin → Forward/Right/Up: Shifts the whole scene. If you feel too high/low or off-center, adjust these.
• Motion → Distance: Set to approximate distance of nearest stable object. Example: if walls are 3m away, set Motion Distance to 300cm. This makes head-tracking feel “correct” when you move slightly. I used to measure the distance to my walls. Now I leave it at the default of 200cm and let the OS/AI update in real time for me.
• Autofocus Min/Max Distances: Narrow these (e.g., 0.5m–10m) so the algorithm doesn’t get confused by the sky or background infinity. Now I simply leave them at default, 10 and 1,000.
________________________________________
6. Image Adjustments
Think of these as the “polish” pass.
• Saturation/Contrast/Gamma: Small tweaks only. Raising too much causes eye fatigue.
A safe recipe:
o Saturation A/R/G/B = 1.01 - 1.05 (slight pop)
o Contrast A = 1.01 – 1.05
o Gamma A = 0.99 – 0.95 (slightly deepens blacks)
• Gain: Mostly leave at 0.90 – 1.1 depending on light in your room.
7. Audio Settings
• Volume Multiplier: Default = 1.0. Raise slightly only if your raw footage audio is weak.
8. Workflow: How to Calibrate Any VR Video
Here are the steps you can repeat for every film:
- Load video → Set Projection Format correctly (SBS).
- Adjust Lens FOV until straight lines look straight.
- Pause video → Turn on Autofocus Overlay.
- Fix Horizontal Shift (X/Yaw) until distances feel real.
- Fix Vertical Shift (Y/Pitch) until eyes relax.
- Tune Scale & Roll for top/bottom and left/right balance.
- Adjust Motion Distance to match environment.
- Play → Walk your head slightly left/right. If objects “float” unnaturally, redo small adjustments.
👉 Pro tip: Save your settings as presets for different cameras (e.g., Insta360, Kandao, VRCA220 lens, etc.). Next time, it’s plug-and-play.
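To make that plug-and-play habit stick, it can help to keep a small log of starting values per rig outside the app. The sketch below is purely hypothetical bookkeeping; the field names and numbers are illustrative placeholders, not HereSphere's preset format.
```python
# Hypothetical per-camera preset log (NOT HereSphere's own save format), just a
# quick personal reference for starting values; the numbers are placeholders.
CAMERA_PRESETS = {
    "Insta360": {"projection": "fisheye SBS", "lens_fov_deg": 190, "motion_distance_cm": 200},
    "Kandao":   {"projection": "fisheye SBS", "lens_fov_deg": 180, "motion_distance_cm": 200},
    "VRCA220":  {"projection": "fisheye SBS", "lens_fov_deg": 190, "motion_distance_cm": 200},
}

def starting_point(camera: str) -> dict:
    """Return the logged starting values for a rig, or conservative defaults."""
    return CAMERA_PRESETS.get(
        camera,
        {"projection": "equirectangular SBS", "lens_fov_deg": 180, "motion_distance_cm": 200},
    )

print(starting_point("Insta360"))
```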
9. Final Checklist for Maximum Presence
✅ Correct Projection Format (Equirectangular or fisheye SBS)
✅ FOV tuned so straight lines look straight
✅ Autofocus overlay used to dial in X, Y, Scale, Roll
✅ Motion Distance set to anchor the scene
✅ Image adjusted only subtly (avoid eye strain)
✅ Saved presets for repeat use
✨ When all this is done, your brain forgets you’re watching a video—the scene feels alive in front of you. People stand at the right scale, eye strain fades, and depth becomes effortless.
🎥 In VR Video Playback (full immersion)
When you adjust stereo roll in a standard VR video:
• The reference is the edges of the video frame itself.
• You’re making sure that the depth numbers from the autofocus overlay look equally sharp at the far left and right edges of the video image.
• This ensures your eyes aren’t forced to tilt slightly differently for left vs. right edge, which reduces strain.
So in “pure VR video mode,” you’re essentially calibrating the rectangular image space itself.
🪄 In AR / Passthrough Augmented Reality
When blending a stereoscopic object or person into your real room:
• The stereo roll should make the digital subject align naturally with your real-world reference frame, not just the video frame edges.
• For example: If it’s a person, then yes — you’d look at their left/right body edges (arms, shoulders) and ensure they “sit” stably in depth.
o If that person is between your real bookshelf (left) and real plant (right), the calibration must make them “belong” between those two objects, rather than float or tilt unnaturally.
👉 The guiding rule: in AR mode, treat the stereo roll as if you’re tilting a window into reality. You’re lining up the digital subject so it feels seated within your actual left/right spatial boundaries, not just the artificial video edge.
✨ Practical Technique
- Pause with Autofocus Overlay on the subject.
- Look at the subject’s left edge (e.g., arm) → depth numbers should look clear.
- Shift gaze to the subject’s right edge (e.g., other arm) → adjust stereo roll until both edges resolve equally.
- Finally, check how they sit relative to real objects in your room. If the person feels slightly tilted compared to your walls/floor, fine-tune roll just a touch further.
________________________________________
📌 Summary:
• For VR video playback → calibrate against the edges of the video image.
• For AR passthrough → calibrate against the edges of the digital subject relative to your real-world anchors.
This way, the digital person/object doesn’t feel like a “cardboard cutout,” but rather like they’ve been dropped seamlessly into your room.
🔧 The Three Autofocus Controls
1. Transition Speed
• What it does: Controls how quickly the autofocus algorithm changes focus distance when it detects a new point of interest.
• Analogy: Think of it as how fast your eyes “snap” from looking at something near to something far away.
• Low Value: Fast, snappy focus shifts. Good for fast-moving scenes, but can feel “jumpy.”
• High Value: Slow, gradual shifts. Feels smoother but can lag behind when objects move suddenly.
👉 Think of this as the tempo of your focus changes.
2. Autofocus Accuracy
• What it does: Sets how much computation HereSphere spends refining depth calculations.
• Low Accuracy: Faster performance, but blurrier or less stable depth readings (objects may “swim” slightly).
• High Accuracy: More precise focus lock, but higher GPU load (and possibly stutter if resolution is also high).
👉 This is the quality vs. performance dial.
3. Autofocus Sensitivity
• What it does: Determines how easily the algorithm decides a new focus point should be chosen.
• Low Sensitivity: Sticks with the current focus, only changes when it’s really obvious. Reduces jitter but may ignore subtle focus changes.
• High Sensitivity: Reacts to small changes quickly. Can feel more “alive,” but risks flicker if there’s noise or repetitive textures.
👉 Think of this as the threshold for when focus shifts.
🎯 How They Interact
• Accuracy + Speed → If accuracy is too low, even a fast transition speed will feel wrong (your eyes snap, but focus lands on the wrong spot).
• Speed + Sensitivity → If sensitivity is too high with fast speed, you’ll get jittery “focus hunting.” If sensitivity is low with slow speed, the image may feel sluggish and unresponsive.
• Accuracy + Sensitivity → Higher accuracy allows higher sensitivity without jitter, because the depth estimates are more trustworthy.
So, they form a triangle of balance:
• Accuracy = reliability of data.
• Sensitivity = how willing the system is to act.
• Speed = how fast the act happens.
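To see the triangle in motion, here is a toy model that treats accuracy as sample count, sensitivity as a change threshold, and transition speed as a smoothing time constant (higher = slower, matching the behavior described above). It is a conceptual sketch only, not HereSphere's actual autofocus algorithm.
```python
# Toy model of the accuracy / sensitivity / speed triangle (NOT HereSphere's
# real algorithm, just an illustration of how the three dials interact).
import random

def estimate_depth(true_depth_m: float, accuracy: int) -> float:
    """Higher accuracy = more samples averaged = a less noisy depth estimate."""
    samples = [true_depth_m + random.gauss(0, 0.5) for _ in range(max(1, accuracy))]
    return sum(samples) / len(samples)

def update_focus(current_m: float, true_depth_m: float,
                 accuracy: int, sensitivity: float, transition_speed: float,
                 dt: float = 1 / 120) -> float:
    """One 120 Hz tick of a hypothetical autofocus loop."""
    estimate = estimate_depth(true_depth_m, accuracy)
    # Sensitivity sets the change threshold: higher sensitivity reacts to smaller moves.
    if abs(estimate - current_m) < 0.1 / max(sensitivity, 1e-3):
        return current_m
    # Transition speed acts like a time constant: a higher value means slower, smoother shifts.
    alpha = dt / (dt + 0.1 * transition_speed)
    return current_m + alpha * (estimate - current_m)

focus = 1.0
for _ in range(240):  # two seconds at 120 Hz, subject standing at 2 m
    focus = update_focus(focus, 2.0, accuracy=12, sensitivity=0.5, transition_speed=1.0)
print(round(focus, 2))  # converges toward 2.0
```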
⚖️ Best Order to Adjust
- Start with Accuracy
o Set it as high as your performance budget allows.
o On Quest 3 with SideQuest performance tweaks, a good sweet spot is 10–12 (higher if resolution is lowered a bit). I always leave mine maxed out at 16. Get stable, believable depth numbers first.
- Then adjust Sensitivity
o Begin in the mid-range. Watch a scene with both near and far objects. Raise sensitivity until focus feels responsive but not jittery. I always leave mine maxed out at 2.
- Finally, fine-tune Transition Speed
o Adjust how “snappy” or “cinematic” you want focus changes to feel. For natural eye-like behavior, aim for a medium value (not instant snap, not sluggish). If you’re watching fast-moving action, lower speed slightly so focus keeps up. If you’re watching slower, intimate scenes, higher speed (smoother shifts) feels more natural. I leave mine at the default of 1.0.
________________________________________
✨ Suggested Starting Recipe (Quest 3, 120Hz)
• Accuracy: 10–12
• Sensitivity: 0.5 (medium)
• Transition Speed: Medium (not minimum, not max)
From there:
• If focus hunts too much → lower sensitivity.
• If focus feels behind → raise sensitivity or lower transition speed.
• If focus feels “wrong” (locking onto walls, patterns, or background) → raise accuracy.
________________________________________
💡 Think of it like tuning an instrument:
• Accuracy = making sure the strings are in tune.
• Sensitivity = how lightly you touch the strings to play a note.
• Speed = how fast you strum.
Get the tuning right first (accuracy), then how responsive the instrument is (sensitivity), then how fast it plays (speed).
🎛 Recenter Orientation vs Recenter Position
Recenter Orientation
• What it does: Resets the direction you’re facing.
• In practice: if the scene looks rotated or tilted (maybe you started the video facing slightly sideways), hitting Recenter Orientation realigns the video’s “forward” with your actual headset “forward.”
• Think of it as: “Point the world in the right direction.”
👉 When to use:
• When the subject seems slightly rotated off to one side.
• If you shift in your chair, stand up, or notice the scene feels “crooked.”
Recenter Position
• What it does: Resets your head’s origin point inside the video world.
• In practice: if you feel too high, too low, too close, or too far from the subject, hitting Recenter Position will put you back at the “intended” camera spot.
• Think of it as: “Put me back where the camera actually was.”
👉 When to use:
• When you feel like you’re floating above the ground, sinking too low, or the subject feels unnaturally far/close.
• Especially important in AR passthrough — e.g., making a digital person feel like they’re standing on your floor, not your knees.
🧩 Using Them Together
• Typical flow:
- Use Recenter Orientation → to face the scene squarely.
- Then use Recenter Position → to sit naturally at the correct height/distance.
• Sometimes you’ll need to do both (e.g., you leaned sideways, so both your angle and position need correcting).
• The combo is especially powerful in AR passthrough, where misalignment of angle + position breaks the illusion quickly.
________________________________________
🌌 Squeezing Out the Last Drop of Immersion
Here are the last “secrets” that really make the difference:
________________________________________
1. Lighting & Environment
• Your real room matters. In AR passthrough, dim but not dark lighting helps your brain blend the real and digital.
• Avoid bright windows or harsh lamps → they break the illusion.
• A softly lit, neutral room = canvas where digital objects “belong.”
________________________________________
2. Audio Spatialization
• Presence isn’t just visual.
• If the video supports binaural/spatial audio, make sure it’s enabled in HereSphere.
• If not, you can experiment with headphones that emphasize soundstage (open-back or over-ear). They trick your brain into feeling space more than closed Quest speakers do.
________________________________________
3. Scene-Relative Motion
• Use the Motion → Distance setting in HereSphere.
• Example: set it to 300cm if the room walls are 3m away. Then, when you lean or shift, the scene responds naturally.
• This is one of the biggest “hidden gems” — it fools your proprioception into believing the digital space has real geometry.
________________________________________
4. Manual IPD Tuning
• Even though Quest 3 auto-adjusts IPD, try setting manual IPD in HereSphere and match it exactly to your real IPD.
• Then fine-tune slightly by feel until objects snap into scale.
• If scale feels “toy-like” → IPD too low.
• If scale feels “giant-world” → IPD too high.
________________________________________
5. Depth Anchors in AR
• When bringing objects/people into passthrough:
o Make sure they “touch” a real anchor (floor, wall, table).
o Your brain instantly rejects floaters, but if their feet align with your floorboards → presence skyrockets.
________________________________________
6. Headroom in Performance
• In Quest 3, sharpness/presence increases when your brain isn’t noticing frame drops.
• If a scene stutters, lower sharpening slightly or drop the resolution multiplier just a touch.
• Smoothness > pixel count when it comes to immersion.
________________________________________
7. Your Own Stillness
• Odd but true: presence increases when you minimize micro-movements.
• If you hold still for a moment and let your eyes and brain soak in the calibrated depth, the illusion suddenly “locks in.”
________________________________________
✨ If you’ve dialed all of this in — calibration, motion, lighting, audio, IPD, and anchoring — you’ve essentially maxed out what current consumer VR/AR tech can offer. The rest is down to your own willing suspension of disbelief, and letting the system trick your senses.
📐 About Camera Stereo Alignment Right
This setting corresponds to the interpupillary distance (IPD) of the camera rig used to film the stereoscopic video.
• Default: 6.5 cm → chosen because it matches the average human IPD.
• If the footage was filmed with a rig that has a wider or narrower stereo baseline, objects may look too small (miniature) or too large (giant-world).
👉 How to adjust:
• If people/objects feel miniaturized → increase the value (>6.5).
• If people/objects feel giant or exaggerated in scale → decrease the value (<6.5).
• Use in combination with your own manual IPD setting in global settings for fine-tuning.
💡 Rule of thumb:
• If you don’t know the filming rig specs, start at 6.5.
• Then test by focusing on a person at 1–2 meters: does their body feel human-sized and natural? If not, nudge the value until scale feels right without strain.
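For a rough sense of which way to nudge, perceived scale in stereo footage tracks the ratio of your own IPD to the rig's baseline. The sketch below is a simplified model that ignores lens projection and convergence, so use it for direction rather than exact values.
```python
# Simplified scale model: perceived size roughly follows viewer IPD divided by
# the camera rig's stereo baseline (ignores lens projection and convergence).
def perceived_scale(viewer_ipd_cm: float, camera_baseline_cm: float) -> float:
    return viewer_ipd_cm / camera_baseline_cm

# Footage shot on a hypothetical 7.2 cm rig, viewed with a 6.5 cm IPD, feels
# roughly 10% miniaturized, so you would raise the setting toward 7.2.
print(round(perceived_scale(6.5, 7.2), 2))  # ~0.90, world looks small
print(round(perceived_scale(6.5, 6.0), 2))  # ~1.08, world looks slightly giant
```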
📐 Aspect Ratio
In HereSphere (and similar players), aspect ratio determines how the video frame is stretched onto your virtual display.
• “High Definition” preset is usually a 16:9 ratio (the common HD standard).
• Other choices include 4:3, 21:9 (cinematic), 1:1 (square), etc.
Why it matters:
• If the ratio is wrong for the source → faces look stretched/squished, objects feel off-scale, immersion breaks.
• The ratio essentially locks the “canvas proportions” before depth cues are even applied.
👉 Best practice:
• Always match the aspect ratio to the source video’s native resolution. (e.g., if the file is 3840×1920 → that’s 2:1, not 16:9).
• “High definition” (16:9) is safe if you’re watching most mainstream VR videos, but double-check the video metadata or just look for distorted shapes.
💡 Immersion trick:
If people/objects look correct in width vs height proportions, you’ve nailed it. Get this wrong, and no amount of stereo roll or IPD correction will fix scale perception.
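If you're unsure what ratio a file actually is, it falls straight out of the native resolution. A quick sketch:
```python
# Derive the aspect ratio from the file's native resolution; the 3840x1920
# example above reduces to 2:1 rather than 16:9.
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(3840, 1920))  # 2:1
print(aspect_ratio(1920, 1080))  # 16:9
```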
🔍 Sharpening
Sharpening applies a post-process filter that exaggerates edges, textures, and fine details.
Why it matters:
• Too much sharpening →
o Creates halos around objects.
o Fatigues the eyes (micro-contrast fighting your natural depth cues).
o Can increase eye strain and reduce presence.
• Too little sharpening →
o Video looks soft or out-of-focus.
o Depth perception suffers because your eyes rely on fine texture to “lock” onto scale.
👉 Best practice:
• Start at 0 (neutral), then slowly increase until details “pop” without creating visible halos.
• On Quest 3, the display is sharp enough that sharpening is usually a finishing touch, not a core adjustment.
• A sweet spot is often +0.1 to +0.2, but never more than +0.3 (beyond that, strain > benefit).
💡 Immersion trick:
Use sharpening only to recover softness if the source is slightly blurred or low bitrate. If the source is high-quality 6K/8K → leave sharpening almost untouched.
🧩 How They Interact
• Aspect Ratio sets the frame proportions (macro scale).
• Sharpening tunes the micro details (texture fidelity).
• Together, they’re the “bookends” of immersion: if either is off, you’ll either feel like you’re watching a warped video or like your eyes are constantly searching for focus.
🎛 Autofocus → Direction Pitch/Yaw
• These control the direction in which the autofocus depth algorithm looks.
• Normally, autofocus tries to evaluate the center of your view and decide where the “sharpest” convergence point is.
• Pitch/Yaw adjustments tell the autofocus system to look slightly up/down (pitch) or left/right (yaw) when calculating.
👉 When to use:
• If the autofocus seems to “lock” on something off-center (e.g., a bright ceiling light instead of the person in front).
• If your content consistently places the subject higher/lower than center.
• For seated AR use: you may want to bias pitch slightly downward, because subjects appear at standing eye-level, not at your seated line of sight.
💡 Most users can leave these at 0,0. Only adjust if autofocus is consistently “missing” where you want focus.
🧵 Stitching Adjustment → Image Shear X/Y
• “Shear” corrects diagonal distortion introduced during stitching of multiple camera lenses.
• Imagine if one half of the stereo image is slightly “tilted” or “slanted” — shear realigns the grid.
• X adjusts horizontal skew (left image seems “leaned” relative to right), Y adjusts vertical skew.
👉 When to use:
• Rarely, unless the source was poorly stitched.
• If straight vertical objects (door frames, lamp posts) look diagonally “slanted” differently in each eye.
• You can nudge Shear to realign them.
💡 If the studio did their stitching correctly, you won’t need to touch this.
🌫️ Stitching Adjustment → Image Flare X/Y
• This compensates for radial distortions or off-axis mismatches in lenses.
• Think of it like stretching or pulling the image outward (like lens flares or fisheye warping).
• X/Y control the direction of that pull.
👉 When to use:
• Very rarely.
• If one eye’s image looks like it’s “bending outward” more than the other, and you feel an unnatural pull at the edges.
💡 Most high-end productions won’t require this — it’s essentially a rescue tool for poorly aligned stereo footage.
🎥 True Lens vs Export Lens
This one’s important:
• True Lens = the actual optical projection of the camera that recorded the scene.
o If the studio provides lens metadata, this is the “ground truth” projection.
o Preserves physical accuracy.
• Export Lens = the post-processed projection after the studio’s stitching/export workflow.
o Sometimes the footage is slightly warped, stretched, or corrected before delivery.
o Export lens accounts for this.
👉 Which to use:
• If the video looks natural and scale feels correct → use Export Lens (because it matches how the video was actually rendered).
• If objects feel “off” (e.g., wrong curvature, unnatural FOV) and you suspect the export wasn’t handled well → try True Lens to restore original geometry.
💡 Rule of thumb:
• Export Lens = default (99% of the time).
• True Lens = troubleshooting tool when things feel “off.”
🧩 Big Picture
• Autofocus Pitch/Yaw → Minor tweaks if autofocus looks the wrong way.
• Image Shear / Flare → “Emergency tools” for fixing stitching artifacts; rarely touched.
• True Lens vs Export Lens → Mostly Export, switch to True only if scale/geometry feels wrong.
🪟 Opacity Multiplier (Passthrough Settings)
What it does
• Controls the blend between the passthrough feed and the VR video layer.
• A higher multiplier = video layer more solid, passthrough less visible.
• A lower multiplier = video layer more transparent, passthrough dominates.
Why it matters
• In AR passthrough use (e.g., placing a person in your room), this determines whether the subject feels like a ghostly overlay or a solid presence.
• Too transparent → subject feels insubstantial, immersion drops.
• Too opaque → subject feels pasted on top of reality, not in it.
Best practice
• Start slightly above default (1.01–1.02). This usually makes digital people/objects feel “more real.”
• Avoid going too high (>1.05), because then passthrough disappears almost completely and you lose the benefit of blending with your room.
• If lighting in your real space is bright, keep opacity lower (so the subject doesn’t look unnaturally “cut out”).
• If lighting is dim/controlled, increase opacity so the subject feels tangible.
💡 Think of opacity as a “veil” adjustment:
• Lower = transparent, dreamlike.
• Higher = solid, embodied.
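Under the hood this is presumably just an alpha blend between the video layer and the passthrough feed. The sketch below is an assumed model (the 0.95 base alpha is made up for illustration), not HereSphere's actual compositor math, but it shows why small multiplier changes have an outsized effect.
```python
# Assumed alpha-blend model of the opacity multiplier (illustrative only, not
# HereSphere's actual compositor math). All values are per-pixel, 0.0-1.0.
def blend(video: float, passthrough: float, base_alpha: float, multiplier: float) -> float:
    alpha = min(1.0, max(0.0, base_alpha * multiplier))  # clamp to [0, 1]
    return alpha * video + (1.0 - alpha) * passthrough

# With a made-up base alpha of 0.95: a 1.02 multiplier keeps a sliver of
# passthrough in the mix, while 1.05+ drives alpha to ~1.0 and the room fades out.
print(blend(video=1.0, passthrough=0.3, base_alpha=0.95, multiplier=1.02))
print(blend(video=1.0, passthrough=0.3, base_alpha=0.95, multiplier=1.06))
```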
💡 Lighting Setup: NL-288ARC (in a 16×10 ft room, no other lights)
These panels are high-CRI (97+) bi-color LEDs, meaning they can mimic daylight or tungsten and everything in between, with smooth brightness control.
Step 1 – Decide on Color Temperature
• Neutral / Daylight look (immersive, natural presence):
o Set to 4400K–5000K.
o This matches a “soft daylight” balance, avoiding the yellow of tungsten or the blue of cold daylight.
• Warmer look (cozy, flattering for skin tones):
o Set closer to 3500K–4000K.
• Cooler look (clinical, sharp detail):
o Set closer to 5200K–5600K.
o Not recommended for immersion unless you want a bright, crisp environment.
👉 Recommended starting point: 4500K (slightly warm daylight) for immersive VR calibration.
Step 2 – Brightness Levels
Each panel is rated up to 1400 lux at 1m (100% power). Two panels at full blast will be way too bright for comfort in VR.
• Place panels at 45° angles from your face, about 4–6 ft away.
• Start with 20–30% brightness per panel.
• Adjust upward only until you can see consistent lighting across your real space without harsh contrast.
• If you notice eye strain in headset → lower brightness, not color temperature.
👉 Recommended starting point: 25% brightness each panel at 5 ft away.
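You can sanity-check those numbers against the panel's 1400 lux at 1 m rating. The sketch below assumes simple inverse-square falloff, which is only a rough approximation for a large panel at close range, but it is close enough to show why 25% at 5 ft lands in the soft, dim-but-not-dark zone.
```python
# Rough illuminance estimate from the NL-288ARC's rated 1400 lux at 1 m,
# assuming inverse-square falloff (only approximate for a panel light up close).
FEET_TO_METERS = 0.3048

def panel_lux(power_fraction: float, distance_ft: float, rated_lux_at_1m: float = 1400.0) -> float:
    distance_m = distance_ft * FEET_TO_METERS
    return rated_lux_at_1m * power_fraction / (distance_m ** 2)

print(round(panel_lux(0.25, 5)))  # ~150 lux per panel at the baseline setting
print(round(panel_lux(1.00, 5)))  # ~600 lux per panel at full blast
```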
Step 3 – Fine Tuning for Presence
• Check reflections: If your headset lenses reflect white patches, lower brightness or angle lights outward slightly.
• Balance both lights: Aim for symmetry, so one side isn’t brighter than the other (your brain uses lighting balance for presence).
• Test with passthrough opacity: A slightly dimmer real environment often makes digital subjects blend more naturally when opacity multiplier is boosted (1.1–1.2).
✅ So, for your room (16×10 ft, no other sources):
• Color Temperature: 4500K.
• Brightness: 25% per panel at 5 ft.
• Position: 45° off-axis, slightly above head height pointing down.
🎯 Light Positioning for VR Calibration (NL-288ARC panels)
🔵 Placement relative to you
• 45° off-axis, slightly above head height, pointing down
o This means in front of you and off to each side (like a classic portrait lighting setup).
o One on your front-left, one on your front-right, angled toward your face/upper body.
o Distance: 4–6 ft away.
🟢 Why this works
• Creates even, soft illumination across your field of view.
• Minimizes harsh shadows (which in VR passthrough can break immersion).
• Mimics how your eyes expect to see someone lit in natural daylight.
⚪️ Avoid these placements
• Directly behind you → casts your shadow forward, confusing in passthrough.
• Directly overhead → creates unflattering downward shadows and eye strain.
• Directly in front, level with eyes → feels harsh, like interrogation lighting.
🌟 Optional tweak
If you want extra softness, you can angle the lights slightly outward so the beams cross just in front of you rather than hitting you dead-on. This reduces glare and reflections in your Quest 3 lenses.
✅ So: Place the two panels to your left-front and right-front, at 45° angles, slightly above head height, angled downward toward you. Think of it like forming a triangle: your head is one point, each light is the other two.
Suggested Lighting Presets:
• Baseline → 25% Brightness @ 4500K, two panels, 5ft, 45° front-left/right.
• Soft Warm → 22% Brightness @ 4000K, cozy tones, good for skin realism.
• Neutral Daylight → 30% Brightness @ 5000K, balanced, slightly brighter.
• Crisp Daylight → 35–40% Brightness @ 5200K, very sharp, can add strain if overused.
💡 Tip: Always tweak brightness first for comfort before adjusting color temp. If you log presets consistently (Baseline / Soft Warm / Neutral Daylight / Crisp Daylight), your dashboard trends will become much easier to interpret.