Self-assessment guide: Work each section in order. Write your answer before expanding the details block. This chapter is the core of daily bag diagnosis — spend extra time on Section B (numerical). If you can compute d by hand in under 2 minutes, you can diagnose silent rejections in the field without a script.
AMR context: Every time you see a robot drift off a lane line while the line-sensor is active,
one of the gates in update() is silently dropping measurements. This exercise set
builds the muscle to compute which gate, and why.
A1. Explain the difference between an orthogonal and a parallel line-sensor measurement in the navigation estimator. Which component of the state vector does each update? Why can a single line crossing only update one state component, not two?
A2. The Mahalanobis distance gate rejects a measurement when d > 2. Explain why a
tighter state covariance P can paradoxically cause MORE measurements to be rejected.
Give a concrete scenario where this leads to navigation failure.
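A quick numeric sketch of the effect A2 asks about, using the chapter's constants (R = 0.005 m², gate at d = 2) and an illustrative fixed innovation of 20 cm; only the covariance values change:

```python
import math

R = 0.005   # measurement noise variance (m^2), per the chapter constants
nu = 0.20   # fixed 20 cm innovation, held constant across cases

for P in (0.025, 0.008, 0.0003):     # loose -> tight state covariance
    S = P + R                        # innovation variance
    d = abs(nu) / math.sqrt(S)       # scalar Mahalanobis distance
    verdict = "ACCEPTED" if d <= 2.0 else "REJECTED"
    print(f"P = {P:.4f}  d = {d:.2f}  {verdict}")
```

The same 20 cm innovation passes the gate at loose covariance but is rejected once P tightens to 0.0003 m², which is exactly the paradox: the more confident the filter, the smaller S, the larger d.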
A3. AMR uses a clamp update (x̂⁺ = clamp(x̂⁻, min, max)) instead of the standard
Kalman gain update (x̂⁺ = x̂⁻ + K ν). For what physical reason is the clamp model more
honest for line-sensor XY measurements? What probability distribution does the clamp represent?
A4. What is the purpose of the offset ambiguity check in update()?
Describe the scenario where it would reject an otherwise valid line-sensor measurement.
Under what state-covariance condition is the ambiguity check most likely to trigger?
A5. The is_reliable flag in line-sensor messages is checked at lines 876 and 1027 in
the estimator source. Explain what `is_reliable=False` indicates and why it only catches
~1% of problematic messages. What type of problem does it miss entirely?
For each scenario below, compute the Mahalanobis distance d and state whether the measurement is ACCEPTED (d ≤ 2) or REJECTED (d > 2).
Given constants for all scenarios:
- Measurement noise: R = 0.005 m²
- Mahalanobis threshold: d_max = 2.0
B1. Scenario: Normal operation (should be accepted)
State estimate: x̂⁻(x) = 1.520m
State covariance: P⁻(x,x) = 0.008 m²
Line-Sensor reading: z = 1.500m (line at X = 1.5m)
Innovation: ν = z - x̂⁻ = 1.500 - 1.520 = -0.020m
Innovation var: S = P⁻(x,x) + R = 0.008 + 0.005 = 0.013 m²
Innovation std: √S = 0.1140m
Mahalanobis d: d = |ν| / √S = 0.020 / 0.1140 = 0.175
Decision: ACCEPTED ✓ (d = 0.18 << 2.0)
B2. Scenario: Borderline case — robot drifted after long corridor (accepted, just)
State estimate: x̂⁻(x) = 1.270m
State covariance: P⁻(x,x) = 0.025 m² (large — long dead-reckoning stretch)
Line-Sensor reading: z = 1.500m (line at X = 1.5m, robot drifted ~23cm)
Innovation: ν = 1.500 - 1.270 = +0.230m
Innovation var: S = 0.025 + 0.005 = 0.030 m²
Innovation std: √S = 0.1732m
Mahalanobis d: d = 0.230 / 0.1732 = 1.328
Decision: ACCEPTED ✓ (d = 1.33 < 2.0)
B3. Scenario: Rejection — genuine outlier (wrong lane or sensor fault)
State estimate: x̂⁻(x) = 1.490m
State covariance: P⁻(x,x) = 0.010 m²
Line-Sensor reading: z = 3.000m (robot at X=1.5m seeing X=3.0m line — wrong lane)
Innovation: ν = 3.000 - 1.490 = +1.510m
Innovation var: S = 0.010 + 0.005 = 0.015 m²
Innovation std: √S = 0.1225m
Mahalanobis d: d = 1.510 / 0.1225 = 12.33
Decision: REJECTED ✗ (d = 12.3 >> 2.0)
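The three hand computations above can be checked with a few lines of Python, using the scalar gate formula d = |ν| / √(P + R) exactly as worked in B1–B3:

```python
import math

R = 0.005  # shared measurement noise variance (m^2)

def mahal(x_prior: float, P: float, z: float) -> float:
    """Scalar Mahalanobis distance for a 1-D position measurement."""
    nu = z - x_prior
    return abs(nu) / math.sqrt(P + R)

for name, x, P, z in [
    ("B1", 1.520, 0.008, 1.500),
    ("B2", 1.270, 0.025, 1.500),
    ("B3", 1.490, 0.010, 3.000),
]:
    d = mahal(x, P, z)
    print(f"{name}: d = {d:.3f} -> {'ACCEPTED' if d <= 2.0 else 'REJECTED'}")
```

If any of your hand results disagree with this by more than rounding, recheck the √S step first; that is where most slips happen.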
Task: Implement the standard Kalman filter update step as a Python function. This is the standard update (not the robot clamp variant) — it forms the mathematical foundation for understanding why AMR deviated from it.
Function signature:
import numpy as np

def ekf_update(
    x: np.ndarray,  # Prior state estimate, shape (n,)
    P: np.ndarray,  # Prior covariance, shape (n, n)
    z: np.ndarray,  # Measurement, shape (m,)
    H: np.ndarray,  # Measurement matrix, shape (m, n)
    R: np.ndarray,  # Measurement noise covariance, shape (m, m)
) -> tuple[np.ndarray, np.ndarray, float]:
    """
    Standard Kalman filter update step.

    Returns:
        x_post: Posterior state estimate, shape (n,)
        P_post: Posterior covariance, shape (n, n)
        mahal_d: Mahalanobis distance (scalar) — for gating decisions
    """
Expected behaviour:
- Innovation: nu = z - H @ x
- Innovation covariance: S = H @ P @ H.T + R
- Kalman gain: K = P @ H.T @ np.linalg.inv(S)
- State update: x_post = x + K @ nu
- Covariance update: P_post = (I - K @ H) @ P
- Mahalanobis distance: sqrt(nu.T @ inv(S) @ nu)
import numpy as np

def ekf_update(
    x: np.ndarray,
    P: np.ndarray,
    z: np.ndarray,
    H: np.ndarray,
    R: np.ndarray,
) -> tuple[np.ndarray, np.ndarray, float]:
    n = x.shape[0]
    nu = z - H @ x                          # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_post = x + K @ nu                     # state update
    P_post = (np.eye(n) - K @ H) @ P        # covariance update (Joseph form safer)
    mahal_sq = float(nu.T @ np.linalg.inv(S) @ nu)
    mahal_d = np.sqrt(max(mahal_sq, 0.0))   # numerical guard
    return x_post, P_post, mahal_d
# --- Test cases ---

def test_identity_measurement():
    """H selects X and Y; R small: posterior should move toward the measurement."""
    x = np.array([2.05, 1.00, 0.1])
    P = np.diag([0.01, 0.01, 0.01])
    z = np.array([2.00, 1.00])
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    R = np.diag([0.005, 0.005])
    x_post, P_post, d = ekf_update(x, P, z, H, R)
    assert x_post[0] < x[0], "X should move toward measurement"
    assert P_post[0, 0] < P[0, 0], "Covariance should shrink"
    assert d < 2.0, f"Expected accepted measurement, got d={d:.3f}"
    print(f"x_post[0] = {x_post[0]:.4f} (expected ~2.017)")
    print(f"P_post[0,0] = {P_post[0,0]:.5f} (expected ~0.00333)")
    print(f"Mahal d = {d:.3f} (expected ~0.41)")

def test_large_innovation_rejected():
    """Innovation far from prediction: d should exceed 2."""
    x = np.array([1.49])
    P = np.array([[0.010]])
    z = np.array([3.00])
    H = np.array([[1.0]])
    R = np.array([[0.005]])
    _, _, d = ekf_update(x, P, z, H, R)
    assert d > 2.0, f"Expected rejection, got d={d:.3f}"
    print(f"Mahal d = {d:.3f} (expected ~12.3 → REJECTED)")

def test_covariance_shrinks():
    """After update, covariance must be smaller than prior."""
    x = np.array([0.0, 0.0])
    P = np.diag([1.0, 1.0])
    z = np.array([0.1])
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.1]])
    _, P_post, _ = ekf_update(x, P, z, H, R)
    assert P_post[0, 0] < P[0, 0], "X covariance must shrink after measurement"
    assert abs(P_post[1, 1] - P[1, 1]) < 1e-10, "Y covariance unchanged (H only observes X)"
    print(f"P_post = {np.diag(P_post)}")

if __name__ == "__main__":
    test_identity_measurement()
    test_large_innovation_rejected()
    test_covariance_shrinks()
    print("All tests passed.")
Expected output:
x_post[0] = 2.0167 (weighted average: x moved 2/3 of the way toward the measurement)
P_post[0,0] = 0.00333 (1/3 of the original P — the sensor reduces uncertainty)
Mahal d = 0.408 (well within the 2σ gate)
Mahal d = 12.329 (REJECTED)
P_post diag ≈ [0.0909, 1.0] (X tightened, Y unchanged)
All tests passed.
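The reference solution's comment mentions the Joseph form. As a side note, a sketch of that standard textbook variant is below (the helper name `joseph_covariance_update` is illustrative, not from the AMR code); for the optimal gain it matches the simple (I − KH)P form, but it stays symmetric and positive-semidefinite even when the gain is suboptimal:

```python
import numpy as np

def joseph_covariance_update(P: np.ndarray, K: np.ndarray,
                             H: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Joseph-form covariance update: (I - KH) P (I - KH)^T + K R K^T."""
    n = P.shape[0]
    A = np.eye(n) - K @ H
    return A @ P @ A.T + K @ R @ K.T

# Quick 1-D check against the simple form using the optimal gain.
P = np.array([[0.010]])
H = np.array([[1.0]])
R = np.array([[0.005]])
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
P_simple = (np.eye(1) - K @ H) @ P
P_joseph = joseph_covariance_update(P, K, H, R)
print(P_simple[0, 0], P_joseph[0, 0])  # both ~0.00333 for the optimal K
```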
Problem:
A line-sensor measurement has a range of 0.06m (the IR array span covers 6cm — 3cm either
side of the detected line centroid):
- meas.min = 2.470m
- meas.max = 2.530m
- meas.range = 0.060m
The current state estimate and covariance are:
- x̂⁻(x) = 2.510m (state estimate is inside the range)
- Two cases: P⁻(x,x) = 0.010 m² and P⁻(x,x) = 0.0001 m²
Answer these questions:
- What variance σ²_uniform does AMR assign to this measurement range?
- What posterior covariance does P⁺ = min(P⁻, σ²_uniform) produce in each P case?
- Suppose instead x̂⁻(x) = 2.450m (state is OUTSIDE the range). What happens?

σ²_uniform = (meas.range)² / 12
           = (0.060)² / 12
           = 0.0036 / 12
           = 0.0003 m²
σ_uniform = √0.0003 = 0.01732m ≈ 1.73cm
Case A: P⁻ = 0.010 m² (large — 10x bigger than σ²_uniform)
P⁺ = min(0.010, 0.0003) = 0.0003 m²
→ Covariance tightened to the line-sensor range variance
→ After this update the filter is "as confident as the line-sensor allows"
Case B: P⁻ = 0.0001 m² (very tight — 3x smaller than σ²_uniform)
P⁺ = min(0.0001, 0.0003) = 0.0001 m²
→ Covariance UNCHANGED — the state is already MORE precise than the measurement
→ The line-sensor cannot improve on what the filter already "knows"
x̂⁻ = 2.510m (inside [2.470, 2.530])
x̂⁺ = clamp(2.510, 2.470, 2.530) = 2.510m
→ No change to state position in either P case.
→ The state is already consistent with the measurement range; no correction needed.
x̂⁻ = 2.450m (below meas.min = 2.470m)
x̂⁺ = clamp(2.450, 2.470, 2.530) = 2.470m
→ State SNAPPED to the nearest edge of the range.
→ The innovation here would be:
ν = (meas centre) - x̂⁻ = 2.500 - 2.450 = 0.050m
Check Mahalanobis gate first (using Case A, P = 0.010):
d = 0.050 / sqrt(0.010 + 0.0003) = 0.050 / 0.1015 = 0.493
→ ACCEPTED ✓ (d < 2)
After clamp: x̂⁺ = 2.470m (snapped from 2.450m to range boundary)
Covariance: P⁺ = min(0.010, 0.0003) = 0.0003 m²
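The clamp update worked through above can be sketched as a small function (the helper name `clamp_update` and the argument names are placeholders, not the actual AMR API):

```python
def clamp_update(x_prior: float, P_prior: float,
                 meas_min: float, meas_max: float) -> tuple[float, float]:
    """Clamp-style line-sensor update: snap the state into the measured
    range and floor the covariance at the uniform-distribution variance."""
    rng = meas_max - meas_min
    var_uniform = rng ** 2 / 12.0                    # variance of U(min, max)
    x_post = min(max(x_prior, meas_min), meas_max)   # clamp into [min, max]
    P_post = min(P_prior, var_uniform)
    return x_post, P_post

# The three cases worked above:
print(clamp_update(2.510, 0.010, 2.470, 2.530))    # inside range, loose P -> P floors at 0.0003
print(clamp_update(2.510, 0.0001, 2.470, 2.530))   # inside range, tight P -> P unchanged
print(clamp_update(2.450, 0.010, 2.470, 2.530))    # outside range -> state snaps to 2.470
```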
When you see a P_xx trace in a bag that goes:
0.010 → 0.0030 → 0.0010 → 0.0003 → 0.0003 → 0.0003 → [flat]
…and the flat region spans 3+ seconds, that is the duplicate flood signature. The covariance
has reached the σ²_uniform floor and is being confirmed there by stale readings. The robot’s
position belief is locked at one value while the physical robot may have moved. The next
fresh line-sensor reading that reports a different position will be evaluated against
d = ν / sqrt(0.0003 + 0.005) = ν / 0.0728. Any drift above 2 × 0.0728 ≈ 14.6cm
causes silent rejection — and the robot continues drifting.
This is why line-sensor duplicate detection upstream of the estimator (sequence numbers, content hashing, or timestamp monotonicity checks) would fix the silent-rejection failure class without changing any estimator math.
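A minimal sketch of such an upstream duplicate gate, assuming the message carries a sequence number and timestamp (field names `seq`, `stamp`, `value` are hypothetical, not the actual message schema):

```python
from dataclasses import dataclass

@dataclass
class LineSensorMsg:
    seq: int        # hypothetical per-message sequence number
    stamp: float    # timestamp in seconds
    value: float    # detected line position (m)

class DuplicateGate:
    """Drops repeated or out-of-order line-sensor messages before they
    reach the estimator, so stale readings cannot pin the covariance."""
    def __init__(self) -> None:
        self._last_seq = -1
        self._last_stamp = -float("inf")

    def accept(self, msg: LineSensorMsg) -> bool:
        if msg.seq <= self._last_seq:        # replayed or duplicated sequence
            return False
        if msg.stamp <= self._last_stamp:    # non-monotonic timestamp
            return False
        self._last_seq = msg.seq
        self._last_stamp = msg.stamp
        return True

gate = DuplicateGate()
msgs = [
    LineSensorMsg(1, 10.00, 1.500),
    LineSensorMsg(1, 10.00, 1.500),   # exact duplicate -> dropped
    LineSensorMsg(2, 10.05, 1.502),
    LineSensorMsg(2, 10.05, 1.502),   # duplicate flood -> dropped
]
print([gate.accept(m) for m in msgs])  # [True, False, True, False]
```

Content hashing would catch duplicates even when a publisher resets its sequence counter; the monotonic-timestamp check alone already stops the flood pattern described above.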