Princeton Lightwave Review
The machines that teach
machines to see.
Independent analysis of LiDAR, photonic sensing, and the detection systems powering autonomous vehicles, defence platforms, and industrial automation.
$14.2B
Global LiDAR Market by 2028
340+
Active LiDAR Manufacturers Tracked
6 Modes
Detection Architectures Compared
Photons Are the New Data
Every autonomous vehicle, every precision-guided defense system, every robotic warehouse — they all begin with the same fundamental problem: a machine needs to understand the three-dimensional world around it in real time. Cameras can see, but they cannot measure. Radar can measure, but it cannot resolve. LiDAR does both. It fires photons into the environment and measures the time it takes for each one to return, building a millimetre-accurate point cloud of everything in its field of view.
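In code, that core time-of-flight relationship is a one-liner. A minimal sketch in Python (function name is illustrative):

```python
# Time-of-flight ranging: the pulse travels out to the target and back,
# so range is half the round-trip distance at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_s: float) -> float:
    """Convert a round-trip pulse time (seconds) to target range (metres)."""
    return C * round_trip_s / 2

# A return arriving 1 microsecond after emission puts the target ~150 m out.
print(tof_to_range(1e-6))  # 149.896... m
```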
For decades, LiDAR was confined to defence labs and government survey aircraft — bulky, expensive, and fragile. That era is over. The convergence of solid-state photonics, single-photon avalanche diode (SPAD) arrays, and silicon photomultiplier (SiPM) technology has made it possible to put a high-resolution 3D sensor on a consumer vehicle, a delivery drone, or a handheld mapping device for a fraction of what it cost ten years ago.
Princeton Lightwave Review exists to cover this shift in depth. We track the technology, the companies, the supply chains, and the engineering trade-offs that define modern photonic sensing. From the physics of Geiger-mode avalanche photodiodes to the production economics of VCSEL arrays, from the integration challenges facing Tier 1 automotive suppliers to the defence procurement cycles driving next-generation targeting systems — we write about the intersection of photonics, computation, and the physical world.
The next wave of LiDAR won’t just be smaller and cheaper. It will be intelligent — integrated with on-chip processing, running real-time classification at the sensor level, and consuming a fraction of the cloud compute that today’s perception stacks require. The companies that master this integration will define the next decade of autonomous systems. We’re here to document the race.
01. Detection Architectures
Time-of-flight, frequency-modulated continuous wave (FMCW), flash LiDAR, Geiger-mode, scanning, and solid-state — we compare every detection modality head-to-head on range, resolution, cost, and manufacturability for each target application.
02. Automotive LiDAR
The autonomous vehicle industry has consumed more LiDAR investment than any other sector. We cover the perception stack integration, the Tier 1 supplier landscape, and the ongoing debate between camera-only and LiDAR-based autonomy approaches.
03. Defence & ISR Sensing
Military applications drove the first generation of high-performance LiDAR. We track defence procurement, DARPA programs, and intelligence, surveillance, and reconnaissance (ISR) sensor development, as well as the crossover technologies flowing between defence and commercial markets.
04. Photonic Components
VCSELs, edge-emitting lasers, InGaAs photodiodes, SPAD arrays, silicon photomultipliers, and micro-optical assemblies. We cover the component supply chain that determines what’s possible at the system level — who makes what, for whom, and at what cost.
05. Industrial & Mapping
Beyond cars and defence, LiDAR is transforming surveying, mining, agriculture, construction, and forestry. We cover the industrial sensor market — where ruggedness, calibration stability, and long-range performance matter more than unit cost.
06. Compute & Perception
Raw point clouds are useless without processing. We cover the compute layer — edge inference, sensor fusion algorithms, cloud-based point cloud analysis, and the AI models that turn photon data into actionable decisions for autonomous systems.
The Core Technology
Geiger-Mode LiDAR: detecting single photons at kilometre range.
Geiger-mode avalanche photodiode (GmAPD) technology represents the most sensitive class of LiDAR detector ever built. Originally developed for defence imaging and long-range reconnaissance, GmAPD arrays can detect individual photons returning from targets at distances exceeding 10 kilometres. Each pixel in the focal plane array operates as an independent single-photon counter, enabling three-dimensional imaging through obscurants like fog, rain, and foliage that would blind conventional sensors. As manufacturing costs decrease and array sizes grow, Geiger-mode technology is moving from classified military programs into commercial mapping, autonomous navigation, and space-based Earth observation.
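The statistics behind that sensitivity are straightforward. Under a simplified model that assumes Poisson-distributed photon arrivals and ignores dark counts and detector dead time, the probability that a Geiger-mode pixel fires is a simple function of the mean photon number and the photon detection efficiency (PDE). The function name below is illustrative:

```python
import math

def gm_detection_probability(mean_photons: float, pde: float) -> float:
    """Probability that a Geiger-mode pixel fires at least once,
    assuming Poisson-distributed photon arrivals with a given
    photon detection efficiency; dark counts and dead time ignored."""
    return 1.0 - math.exp(-pde * mean_photons)

# Even a mean return of 2 photons at 30% PDE fires the pixel ~45% of
# the time, which is why long-range systems accumulate over many pulses.
print(gm_detection_probability(2.0, 0.30))  # ~0.451
```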
How a photon becomes a decision
Stage 1: Emission
A laser source — typically a VCSEL array, edge-emitting diode, or fibre laser — fires a pulse of near-infrared light into the scene. The wavelength, pulse width, and beam steering mechanism determine the sensor’s range, eye safety class, and angular resolution.
Stage 2: Detection
Reflected photons return to the receiver — an InGaAs photodiode, SPAD array, or silicon photomultiplier. The detector converts each photon arrival into an electrical signal, timestamped with picosecond precision. This timing data is the raw material of every point cloud.
Stage 3: Point Cloud
Millions of time-of-flight measurements are assembled into a three-dimensional point cloud — a spatial representation of the environment accurate to within millimetres. Intensity, reflectivity, and multi-return data add layers of information beyond geometry.
Stage 4: Perception
AI models and sensor fusion algorithms interpret the point cloud — classifying objects, predicting trajectories, and generating actionable outputs for the autonomous system. This is where photonics meets computation, and where the biggest performance gains are happening today.
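To make stages 2 through 4 concrete, here is a minimal sketch in Python. The function names and the height threshold are illustrative, and the "perception" step is a deliberately naive stand-in for the learned models real stacks use:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def returns_to_points(t_round_trip, azimuth, elevation):
    """Stages 2-3: convert round-trip times (s) and beam angles (rad)
    into an (N, 3) Cartesian point cloud in metres."""
    r = C * np.asarray(t_round_trip) / 2          # time of flight -> range
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=1)

def naive_obstacle_mask(points, ground_z=0.0, min_height=0.3):
    """Stage 4 (toy version): flag points more than min_height metres
    above an assumed flat ground plane as potential obstacles."""
    return points[:, 2] > ground_z + min_height

# Three returns: targets near 75 m, 150 m, and 30 m at various angles.
t = [0.5e-6, 1.0e-6, 0.2e-6]
az = np.radians([0.0, 10.0, -5.0])
el = np.radians([0.0, 1.0, 2.0])
cloud = returns_to_points(t, az, el)
print(cloud.round(2))
print(naive_obstacle_mask(cloud))
```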
LiDAR Technology Landscape: 2026 Comparison
| Architecture | Effective Range | Key Advantage |
|---|---|---|
| Mechanical Scanning (ToF) | 200–300m | Proven reliability; largest installed base in AV testing |
| Solid-State Flash | 30–150m | No moving parts; lowest unit cost at automotive scale |
| MEMS Mirror Scanning | 150–250m | Balance of range and compact form factor |
| FMCW Coherent | 300m+ | Instantaneous velocity measurement; interference immunity |
| Geiger-Mode (GmAPD) | 1–10+ km | Single-photon sensitivity; through-obscurant imaging |
| OPA (Optical Phased Array) | 100–200m | Fully electronic beam steering; no mechanical components |
| Application Segment | 2026 Market Share | Growth Driver |
|---|---|---|
| Autonomous Vehicles (L3–L5) | 38% | Regulatory mandates for ADAS; robotaxi deployment |
| Topographic Mapping & Survey | 22% | Infrastructure spending; drone-based aerial survey |
| Defence & ISR | 18% | Next-gen targeting; long-range reconnaissance |
| Industrial Automation | 12% | Warehouse robotics; quality inspection; logistics |
| Smart Cities & Traffic | 6% | Intersection monitoring; pedestrian safety systems |
| Agriculture & Forestry | 4% | Precision agriculture; canopy analysis; yield estimation |
| Component | Primary Material | Supply Status |
|---|---|---|
| VCSEL Arrays (905nm) | GaAs / AlGaAs | Mature supply; automotive-qualified sources available |
| Edge-Emitting Lasers (1550nm) | InGaAsP / InP | Limited high-power sources; growing demand pressure |
| InGaAs SPAD Arrays | InGaAs / InP | Specialised foundries; defence-grade lead times 6–12 months |
| Silicon Photomultipliers | Silicon CMOS | Scaling rapidly; multiple automotive-qualified suppliers |
| Micro-Optic Assemblies | Glass / Polymer | Custom designs; precision alignment is the bottleneck |
- The Sensor Has Eaten the City: Why Urban Photonics Needs a Better Story Than “Smart” (Urban Photonics · Sensing · Essay)
- The Quiet Repositioning of 3D Sensing in Consumer Electronics: Where ToF Actually Stands in 2026 (Consumer Electronics · Depth Sensing · 2026 Outlook)
- The State of Global Photonics 2025–2026: A €50 Billion German Industry, Quantum Momentum, and the Geopolitics of Light (Industry Report · Global Photonics · 2025–2026)
Analysis, Reviews & Technical Briefings
Photonics & LiDAR: 12 Technical Questions Answered
**What is LiDAR and how does it work?**
LiDAR stands for Light Detection and Ranging. It works by emitting laser pulses and measuring the time it takes for each pulse to reflect back from objects in the environment. By recording millions of these time-of-flight measurements per second, a LiDAR sensor builds a three-dimensional point cloud — a precise spatial map of everything in its field of view.
**What is the difference between Time-of-Flight and FMCW LiDAR?**
Time-of-Flight (ToF) LiDAR sends discrete pulses and measures the round-trip time. FMCW (Frequency-Modulated Continuous Wave) LiDAR sends a continuous beam with a frequency chirp and measures the beat frequency of the returned signal. FMCW provides instantaneous velocity data alongside range and is inherently resistant to interference from other LiDAR sensors — a critical advantage in dense traffic.
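For the triangular-chirp case, range and velocity recovery can be sketched in a few lines. The equations are the standard FMCW beat-frequency relations; the names, the 1550nm operating wavelength, and the sign convention (positive velocity means receding) are assumptions of this sketch:

```python
C = 299_792_458.0     # speed of light, m/s
WAVELENGTH = 1550e-9  # assumed FMCW operating wavelength, m

def fmcw_range_velocity(f_beat_up, f_beat_down, bandwidth, chirp_time):
    """Recover range and radial velocity from the beat frequencies of a
    triangular FMCW chirp. Doppler shifts the up- and down-ramp beats in
    opposite directions, so their sum encodes range and their
    difference encodes velocity."""
    f_range = (f_beat_up + f_beat_down) / 2     # Hz, range-induced beat
    f_doppler = (f_beat_down - f_beat_up) / 2   # Hz, Doppler shift
    rng = C * f_range * chirp_time / (2 * bandwidth)
    vel = f_doppler * WAVELENGTH / 2            # positive = receding
    return rng, vel

# 1 GHz chirp over 10 microseconds; up/down beats of 9.5 and 10.5 MHz.
r, v = fmcw_range_velocity(9.5e6, 10.5e6, 1e9, 10e-6)
print(round(r, 1), round(v, 3))  # ~15.0 m, ~0.388 m/s
```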
**What is Geiger-mode LiDAR?**
Geiger-mode LiDAR uses avalanche photodiodes biased above their breakdown voltage, making them sensitive enough to detect individual photons. This enables imaging at extreme ranges — beyond 10 kilometres — and through obscurants like fog, rain, and vegetation canopy. Originally developed for military reconnaissance, the technology is now moving into commercial mapping and autonomous navigation.
**Why use LiDAR instead of cameras?**
Cameras provide rich colour and texture information but cannot directly measure distance. They also struggle in low light, direct sunlight, and adverse weather. LiDAR provides precise 3D geometry regardless of lighting conditions. Most advanced autonomous systems fuse data from cameras, LiDAR, and radar together to create a robust perception stack that compensates for the weaknesses of each individual sensor.
**What is a VCSEL?**
A Vertical-Cavity Surface-Emitting Laser (VCSEL) is a type of semiconductor laser that emits light vertically from its surface. VCSELs can be manufactured in dense arrays on a single wafer, making them ideal for high-volume, low-cost LiDAR illumination. Most 905nm solid-state LiDAR systems use VCSEL arrays as their light source because they can be produced at automotive scale using existing semiconductor fabrication processes.
**Should a LiDAR system use 905nm or 1550nm lasers?**
905nm systems use cheaper silicon-based detectors and mature VCSEL sources, but are limited in power by eye safety regulations. 1550nm systems can transmit significantly more power (light at 1550nm is absorbed before it reaches the retina, so eye-safety limits permit far higher output), enabling longer range, but require more expensive InGaAs-based detectors. The choice depends on the application — automotive tends toward 905nm for cost, while defence and long-range mapping favour 1550nm for performance.
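The power-versus-range trade-off follows from a simplified link budget. For a diffuse (Lambertian) target that fills the beam, and ignoring atmospheric and system losses, received power falls as 1/R², so doubling range requires roughly four times the transmit power. A sketch under those assumptions, with illustrative numbers:

```python
import math

def received_power(p_tx, reflectivity, aperture_area_m2, range_m):
    """Simplified direct-detection LiDAR link budget for a diffuse
    (Lambertian) target filling the beam; atmospheric and system
    losses ignored. Received power falls as 1/R^2."""
    return p_tx * reflectivity * aperture_area_m2 / (math.pi * range_m**2)

# Quadrupling transmit power doubles the achievable range for a fixed
# detector sensitivity threshold: these two cases return equal power.
print(received_power(1.0, 0.1, 1e-4, 100.0))
print(received_power(4.0, 0.1, 1e-4, 200.0))
```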
**What is a SPAD array?**
A Single-Photon Avalanche Diode (SPAD) array is a detector that can register individual photons with picosecond timing accuracy. By arranging thousands of SPADs into a focal plane array, you create a sensor that can build a 3D image from an extremely small number of returned photons — enabling LiDAR performance at ranges and in conditions that would be impossible for conventional detectors.
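In practice, SPAD LiDAR systems usually accumulate arrival times over many pulses into a time-correlated histogram and take the peak bin; with 100 ps bins, the range quantisation is c·Δt/2, about 1.5 cm. A minimal sketch with synthetic data (the single-return assumption and all numbers are illustrative):

```python
import numpy as np

C = 299_792_458.0
BIN_WIDTH = 100e-12  # 100 ps timing bins -> ~1.5 cm range quantisation

def range_from_histogram(timestamps_s):
    """Estimate target range by histogramming single-photon arrival
    times over many pulses (time-correlated single-photon counting)
    and taking the peak bin. Assumes one dominant return."""
    edges = np.arange(0, np.max(timestamps_s) + BIN_WIDTH, BIN_WIDTH)
    counts, _ = np.histogram(timestamps_s, bins=edges)
    peak_time = edges[np.argmax(counts)] + BIN_WIDTH / 2
    return C * peak_time / 2

rng = np.random.default_rng(0)
true_tof = 2 * 50.0 / C                       # target at 50 m
signal = rng.normal(true_tof, 50e-12, 200)    # jittered signal photons
noise = rng.uniform(0, 1e-6, 400)             # uncorrelated background
print(range_from_histogram(np.concatenate([signal, noise])))  # ~50 m
```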
**How much does a LiDAR sensor cost?**
Costs vary dramatically by type. Mechanical spinning units that cost $75,000 a decade ago are now under $1,000. Solid-state flash LiDAR modules for automotive ADAS are targeting $200–500 at volume. Defence-grade Geiger-mode systems still run into the hundreds of thousands. The cost trajectory is downward across every category, driven by semiconductor integration and manufacturing scale.
**What is sensor fusion?**
Sensor fusion is the process of combining data from multiple sensor types — typically cameras, LiDAR, radar, and ultrasonic — into a single coherent perception model. No individual sensor is reliable enough for safety-critical autonomous systems. Fusion algorithms weight each sensor’s strengths and compensate for its weaknesses, producing a perception output that is more robust than any sensor alone.
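The simplest textbook form is inverse-variance weighting: each sensor's estimate contributes in proportion to its confidence. Real perception stacks use Kalman filters and learned fusion models, but this sketch (names and numbers are illustrative) shows the principle:

```python
def fuse_estimates(measurements):
    """Fuse independent range estimates by inverse-variance weighting,
    the simplest textbook form of sensor fusion. Each sensor
    contributes in proportion to its confidence, and the fused
    variance is smaller than any individual sensor's."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * value for w, (value, _) in zip(weights, measurements))
    fused /= sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# LiDAR: 42.1 m with 0.02 m^2 variance; radar: 42.9 m with 0.5 m^2.
# The fused estimate sits close to the more confident LiDAR reading.
print(fuse_estimates([(42.1, 0.02), (42.9, 0.5)]))  # (~42.13, ~0.019)
```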
**Does LiDAR processing happen at the edge or in the cloud?**
Real-time LiDAR processing happens on edge hardware inside the vehicle or platform. But training the AI models that interpret point clouds, processing large-scale mapping datasets, and running simulation environments for autonomous vehicle testing all require massive cloud compute. Companies in this space consume significant volumes of GPU compute from providers like AWS, Azure, and Google Cloud.
**What was Princeton Lightwave Inc.?**
Princeton Lightwave Inc. was a New Jersey-based manufacturer of InGaAs-based single-photon detectors and Geiger-mode LiDAR systems. The company was acquired by Argo AI (an autonomous-driving company backed by Ford) in 2017 to develop advanced LiDAR for autonomous vehicles. After Argo AI shut down in 2022, elements of the technology were absorbed by Ford and other entities. This publication is not affiliated with the original company; we are an independent review that takes its name from the domain.
**What will LiDAR look like in the next five years?**
Expect full solid-state integration — no moving parts, on-chip processing, and sensor-level AI classification running at the detector. FMCW architectures will mature for automotive. Geiger-mode will move further into commercial mapping. Costs will continue to fall as semiconductor foundries dedicate more capacity to photonic integrated circuits. The biggest shift will be from standalone sensors to integrated perception modules that combine emission, detection, and computation in a single package.
