Given the prior error covariance $\Phi^i_{t|t-1}$, the previous estimate $\hat{x}^i_{t|t-1}$, the consensus update parameter $\varepsilon$, and the window size $\Delta$:

1. Obtain the measurement $y^i_t = C(\gamma^i_t)\,x_t + \upsilon^i_t + z^i_t$, $i = 1,\dots,N$.
2. For each measurement, solve the $\ell_1$-norm optimization problem, reject outliers as given in (3.5), and obtain the trimmed measurement $\hat{y}^i_t = y^i_t - \hat{z}^i_t$.
3. Calculate the mode probability $\Pr(\gamma^i_t \mid \hat{y}^i_{t-\Delta:t})$:
   Given $\Pr(\gamma^i_{t-\Delta} \mid \hat{y}^i_{t-\Delta})$,
   For $s = t-\Delta : t$
      Evaluate the measurement likelihood for $\hat{y}^i_s$.
      Evaluate the Bayesian recursion (3.8)--(3.9).
   End
   Decide the channel mode $\hat{\gamma}^i_t$ using threshold testing.
4. Compute the contribution terms of the information state and matrix:
   $u^i_t = \big(C^i_t(\hat{\gamma}^i_t)\big)^T (R^i)^{-1} \hat{y}^i_t$, \quad $U^i_t = \big(C^i_t(\hat{\gamma}^i_t)\big)^T (R^i)^{-1} C^i_t(\hat{\gamma}^i_t)$.
5. Broadcast the message $m^i_t = (u^i_t, U^i_t, \hat{x}^i_{t|t-1})$ to the neighbors in $L_i$.
6. Collect the messages $m^r_t = (u^r_t, U^r_t, \hat{x}^r_{t|t-1})$ from the neighbors.
7. Aggregate the information states and matrices over the neighbors including node $i$, $J_i = L_i \cup \{i\}$:
   $g^i_t = \sum_{r \in J_i} u^r_t$, \quad $S^i_t = \sum_{r \in J_i} U^r_t$.
8. Compute the Kalman-Consensus estimate:
   $(M^i_t)^{-1} = (\Phi^i_{t|t-1})^{-1} + S^i_t$,
   $\hat{x}^i_{t|t} = \hat{x}^i_{t|t-1} + M^i_t \big(g^i_t - S^i_t\,\hat{x}^i_{t|t-1}\big) + \varepsilon\,\frac{M^i_t}{1 + \|M^i_t\|} \sum_{r \in J_i} \big(\hat{x}^r_{t|t-1} - \hat{x}^i_{t|t-1}\big)$.

Prediction stage:
$\Phi^i_{t+1|t} \longleftarrow A M^i_t A^T + Q$, \quad $\hat{x}^i_{t+1|t} \longleftarrow A \hat{x}^i_{t|t}$.
Algorithm 1: Robust distributed fusion algorithm for node $i$.
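The consensus update in steps 4--8 and the prediction stage can be sketched in code. The sketch below assumes a scalar state so the matrix inverses reduce to divisions, and assumes the trimmed measurement $\hat{y}^i_t$ and the detected mode (hence the effective $C$) from steps 1--3 are already available; the function names, the message layout, and all numeric values in the usage example are illustrative, not part of the original algorithm.

```python
# Minimal sketch of the Kalman-Consensus update (steps 4-8) and the
# prediction stage of Algorithm 1, for a scalar state. Each neighbor
# message is a tuple (u_r, U_r, x_prior_r), as broadcast in step 5.

def kcf_update(x_prior, phi_prior, y_hat, C, R, neighbor_msgs, eps):
    """One Kalman-Consensus update at node i.

    neighbor_msgs: messages (u_r, U_r, x_prior_r) from the neighbors L_i.
    Returns the posterior estimate x_post and the posterior variance M.
    """
    # Step 4: local information-state and information-matrix contributions
    u = C / R * y_hat
    U = C / R * C
    # Step 7: aggregate over J_i = L_i union {i}
    g = u + sum(m[0] for m in neighbor_msgs)
    S = U + sum(m[1] for m in neighbor_msgs)
    # Step 8: information-form variance, then consensus-corrected estimate
    M = 1.0 / (1.0 / phi_prior + S)
    consensus = sum(m[2] - x_prior for m in neighbor_msgs)
    x_post = (x_prior
              + M * (g - S * x_prior)
              + eps * M / (1.0 + abs(M)) * consensus)
    return x_post, M

def predict(x_post, M, A, Q):
    """Prediction stage: x_{t+1|t} = A x_{t|t}, Phi_{t+1|t} = A M A^T + Q."""
    return A * x_post, A * M * A + Q
```

For instance, with one neighbor message `(1.0, 1.0, 0.5)`, prior `x_prior = 0.0`, `phi_prior = 1.0`, `y_hat = C = R = 1.0`, and `eps = 0.1`, the aggregated quantities are `g = 2`, `S = 2`, so `M = 1/3` and the consensus term adds a small pull toward the neighbor's prior estimate.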