
5.5 TrioSys for detecting location forgery attacks

5.5.4 Assistive signal-based verification

The local tracking engine may perform poorly in some cases. For example, as illustrated in Fig. 5.5.4, the vehicles in the red area are invisible to the Rx. In another case, if the Tx and the Rx are far from each other, their communication suffers from high noise.

Thus, if the position offset is small [98], the detection performance of the local tracking will be poor. Moreover, in a non-LOS area, the host vehicle cannot know what type of object sends the messages. We assume that a smart attacker may install the OBU on a custom device, e.g., a motorcycle. Consequently, the motion models used in vehicle trajectory prediction, e.g., [98], are no longer accurate (and thus give poor detection accuracy) when applied to such objects. This is where the assistive verification can help: we design the RSUs to perform extensive verification and improve the system performance in such cases.

The Rx may only need to take care of objects approaching at a dangerous distance (regardless of their type), particularly at street junctions. Because the RSUs are fixed at given locations (attached to street/traffic lights), we propose to set up a pre-defined grid of RSSI signal patterns in the road area near each RSU (e.g., the asterisk symbol and the red area of Road 2 in Fig. 5.5.4). The values of the RSSI patterns may vary with the environment.

When a vehicle is approaching the area, given the received RSSI, the RSU can estimate the distance to the Tx from the trained values in the pre-defined map. Note that we can pre-build this map by collecting the RSSI patterns from several training/known vehicles running through the area at given times of day, in various weather conditions, and at various vehicle speeds. Also, due to the low accuracy of RSSI-based localization over long distances, this verification should be applied only to an area within 50 m around the RSUs. However, a significant advantage of this RSSI-based approach is that it works independently of the motion models. According to signal propagation theory,

RSSI = −10 n log10(d) + C,

in which n = 2 or 4 is the path loss exponent that varies with the environment, d is the distance between the Tx and the receiving device (RSU), and C is a fixed constant that accounts for system losses at a reference distance of, say, 1 m. The distance d can then be expressed as

d = 10^((C − RSSI)/(10 n)). (5.5.29)
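As a minimal sketch of how Eq. (5.5.29) could be applied on an RSU, the following Python snippet estimates the Tx distance from a received RSSI value; the function name and the default values for n and C are illustrative assumptions, not parameters from TrioSys.

```python
def estimate_distance(rssi_dbm: float, path_loss_exp: float = 2.0,
                      c_ref_dbm: float = -40.0) -> float:
    """Distance estimate from the log-distance path-loss model (Eq. 5.5.29):
        d = 10 ** ((C - RSSI) / (10 * n))
    path_loss_exp (n) and c_ref_dbm (C) are environment dependent and would
    be calibrated from the pre-built RSSI map around the RSU."""
    return 10 ** ((c_ref_dbm - rssi_dbm) / (10 * path_loss_exp))
```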

In this work, we design the RSSI-based localization to verify whether there are vehicles approaching at a dangerous distance (e.g., within 50 m of a street junction); therefore, a marginal error (e.g., 5 m) in the distance estimation is acceptable. We also note that, if the RSUs can sniff V2V communications, or if the Tx has a V2I connection (e.g., infotainment) with the RSUs, the tracking-based local detection can also be used on these RSU devices.
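Continuing the sketch above (and reusing the hypothetical estimate_distance helper), the assistive check could be written roughly as follows; the 50 m radius and 5 m margin come from the text, while the function name and decision logic are illustrative assumptions.

```python
def assistive_verify(rssi_dbm: float, claimed_distance_m: float,
                     danger_radius_m: float = 50.0,
                     tolerance_m: float = 5.0) -> bool | None:
    """Inside the danger radius around the RSU, accept the BSM-claimed
    distance if it matches the RSSI-based estimate within the margin;
    beyond that radius the RSSI estimate is too coarse, so no decision
    is made and the local tracking engine is relied upon instead."""
    estimated = estimate_distance(rssi_dbm)
    if estimated > danger_radius_m and claimed_distance_m > danger_radius_m:
        return None  # outside the verified area
    return abs(estimated - claimed_distance_m) <= tolerance_m
```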

Fusing the verification results

It is impractical to expect a single source to have perfect knowledge about whether a vehicle exists at the location it claims. The system can instead refer to information from multiple sources (with various verification methods) that provide imperfect and incomplete knowledge, and then infer the correct information. In this work, we use probability-based fusion to infer the detection result from the outputs of two independent sources: the tracking-based local detection and the signal-based assistive verification.

Assume that a hypothesis H denotes the state of the system after the fusion, which can be one of two mutually exclusive states (Attack/NoAttack). Similar to [121], H = NoAttack represents the hypothesis that no attack vehicle is found, i.e., the vehicle is at the location claimed in the BSM; H̄ = Attack represents the hypothesis that the system classifies the target vehicle as an attacker, i.e., no vehicle is found at the location claimed in the BSM; and U denotes the hypothesis that the system cannot identify whether Attack or NoAttack is observed. Each hypothesis pertains to a certain degree of belief that a specific state occurs. Accordingly, the probability of an event (Attack or NoAttack) represents the degree of belief in hypothesis H, with zero degree of belief in its absence.

Let the degree of belief in NoAttack reported by the local engine (i.e., that the vehicle is trustworthy) be α, and the degree of belief in the presence of the vehicle reported by the assistive verification be β. Then the basic belief assignment for H is m1(H) = α, m1(H̄) = 0, and m1(U) = 1 − α. If the local detection engine claims the vehicle appearance is untrustworthy, its basic belief assignment is m1(H) = 0, m1(H̄) = α, and m1(U) = 1 − α. The belief value of the local detection engine for H is then belief1(H) = m1(H). The calculation is the same for the assistive verification. Table 5.5.2 summarizes the belief values for the detection results from the two sources.
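A minimal Python sketch of this belief assignment, with the frame of discernment encoded as sets so that hypothesis intersections can be computed later; the names H, H_BAR, U, and basic_belief_assignment are illustrative, not from the original implementation.

```python
# Focal elements over the frame of discernment {Attack, NoAttack}.
H = frozenset({"NoAttack"})            # hypothesis H: no attack vehicle found
H_BAR = frozenset({"Attack"})          # hypothesis H̄: attack detected
U = frozenset({"Attack", "NoAttack"})  # uncertainty: cannot decide

def basic_belief_assignment(decision: str, confidence: float) -> dict:
    """Basic belief assignment of one source (cf. Table 5.5.2): the source's
    confidence goes to the hypothesis it supports, the remainder to U."""
    if decision == "NoAttack":
        return {H: confidence, H_BAR: 0.0, U: 1.0 - confidence}
    return {H: 0.0, H_BAR: confidence, U: 1.0 - confidence}
```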

Table 5.5.2: Data fusion in our misbehavior detection (basic belief assignments mi)

Source                          | Trustworthiness decision logic | mi(H)          | mi(H̄)          | mi(U)
Local detection engine (i = 1)  | NoAttack                       | α              | 0               | 1 − α
                                | Attack                         | 0              | α               | 1 − α
Assistive verification (i = 2)  | NoAttack                       | β              | 0               | 1 − β
                                | Attack                         | 0              | β               | 1 − β
Fusion                          | NoAttack                       | m1(H) ⊕ m2(H)  | –               | –
                                | Attack                         | –              | m1(H̄) ⊕ m2(H̄) | –

Given the detection state of each source, the degree of belief of the fusion result is calculated as follows:

belief_NoAttack = m1(H) ⊕ m2(H),
belief_Attack = m1(H̄) ⊕ m2(H̄), (5.5.30)

where ⊕ is the orthogonal sum [121]:

m1(H) ⊕ m2(H) = ( Σ_{i,j: Hi ∩ Hj = H} m1(Hi) m2(Hj) ) / ( 1 − Σ_{i,j: Hi ∩ Hj = ∅} m1(Hi) m2(Hj) ).
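Continuing the earlier sketch (reusing the hypothetical H, H_BAR, U encodings and basic_belief_assignment), the orthogonal sum can be implemented roughly as follows:

```python
from itertools import product

def orthogonal_sum(m1: dict, m2: dict) -> dict:
    """Orthogonal sum of two basic belief assignments (Dempster's rule):
    mass from non-conflicting pairs accumulates on their intersection and
    is renormalized by the non-conflicting total, as in Eq. (5.5.30)."""
    combined = {H: 0.0, H_BAR: 0.0, U: 0.0}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] += ma * mb
        else:
            conflict += ma * mb  # pairs with Hi ∩ Hj = ∅
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```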

For example, suppose the detection result of the NoAttack state at the local engine is trustworthy with probability 0.8, and the RSU-based engine confirms with probability 0.6 that the vehicle is present at the location claimed in the BSM. Also, suppose that the vehicle V is actually trustworthy. If the local engine and the RSU-based engine agree that the vehicle V is trustworthy, then their combined degree of belief in V's trustworthiness is

m1(H) ⊕ m2(H) = ( m1(H) m2(H) + m1(H) m2(U) + m1(U) m2(H) ) / ( 1 − m1(H̄) m2(H) − m1(H) m2(H̄) )
             = (0.48 + 0.32 + 0.12) / (1 − 0) = 0.92.
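For reference, this worked example can be reproduced with the illustrative sketches above (not the authors' implementation):

```python
m1 = basic_belief_assignment("NoAttack", 0.8)  # tracking-based local detection
m2 = basic_belief_assignment("NoAttack", 0.6)  # signal-based assistive verification
fused = orthogonal_sum(m1, m2)
print(fused[H])  # ≈ 0.92, the fused degree of belief in NoAttack
```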

Following this, the degree of belief in NoAttack given by the fusion is 0.92, clearly higher than that obtained by the local detection engine alone. Consequently, the host vehicle will regard the target tracking vehicle as trustworthy rather than as an attacker. The fusion significantly increases the reliability when both engines have high trustworthiness for the same hypothesis.
