The sensors available on a vehicle determine whether a misbehavior detection system can operate with the existing hardware configuration. This section presents our assessment of the vehicle configuration, the reliable measurement sources, the attack model, and the other assumptions that are important for understanding the approach proposed in the next section.
5.3.1 Vehicle configuration & source information
Due to cost constraints, it is common to assume that a vehicle is not equipped with a full suite of high-end facilities, but does have a critical communication system such as an on-board unit (OBU).
According to [86], [103], [104], an ego vehicle is expected to be equipped with one or several of the following sensors: a high-resolution camera, mmWave radar, an infrared system, gyroscopes and accelerometers, LIght Detection And Ranging (LIDAR), and so on. V2V communication modules using 5G multi-array antennas [103], [105] and short-range interfaces such as visible light communication (VLC) are also expected to be supported. Notably, our system still works well on vehicles equipped with only a 5G V2V communication system.
The reliability of the input information is crucial for any verification system, but it depends heavily on the sensors available in the vehicles. Data from physical signals, radar/LIDAR, and HD cameras can be considered reliable, although their reliability is also subject to specific contexts [12]. For example, one may point out that there are many ways to attack such input sources, e.g., drone or GPS spoofing (deception) and even jamming (disruption). Although the reliability of individual sensors is beyond the scope of this work, we assume for these sources that our verification can cooperate with existing countermeasures and resilience approaches, such as Differential Correction GNSS and antenna enhancements [101], [106], to prevent physical-signal spoofing. Against disruption attacks, i.e., jamming, a resilience module based on multi-channel hopping or spatial retreats can be implemented, for example.
In contrast, data at the application layer, e.g., Basic Safety Messages (BSMs) or Cooperative Awareness Messages (CAMs), cannot be trusted by default, because a compromised vehicle can manipulate such data while holding legitimate credentials, i.e., insider attacks [90], [91].
Without loss of generality, this work primarily targets helping the host vehicle detect insider attackers that intentionally spread false data through V2V applications, e.g., Cooperative Collision Warning or Lane Merge [107].
5.3.2 Assumptions
In this work, we assume that all vehicles are equipped with V2V communication antennas by default [108]. We also assume that the antenna array is placed at the four corners of a vehicle, and possibly also on the vehicle's roof, to maximize the communication azimuth. Each vehicle is assumed to broadcast its position, velocity, and acceleration periodically (at least 10 messages per second). Moreover, we assume that a PKI is in place so that entity authentication and message integrity are guaranteed by cryptographic means. However, a vehicle's credentials can be compromised; thus, no vehicle should be trusted by default. The V2V communication range can be up to 300 meters (IEEE 802.11p) or 1 km (5G V2V) on highways, and shorter in urban regions [24]. We also consider only the one-hop neighborhood in this work. Furthermore, the ADAS firmware of the host vehicle is assumed to be well protected, e.g., by a hardware security module (HSM); thus, the attacker cannot manipulate the workflow of the sensors or of the detection framework presented in this work. The localization systems of the host vehicle, e.g., GPS, are also assumed to be reliable.
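For concreteness, the following sketch illustrates the kind of state report each vehicle is assumed to broadcast at roughly 10 Hz under the assumptions above. It is an illustration only: the `StateReport` container, its field names, and the `sign`/`send` helpers are ours and do not correspond to the ASN.1 encodings of the BSM/CAM standards.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names below are ours, not the
# ASN.1 encoding mandated by the BSM/CAM standards.
@dataclass
class StateReport:
    sender_id: str        # pseudonym certificate identifier
    timestamp: float      # seconds (e.g., GPS time)
    position: tuple       # (x, y) in meters, map-referenced
    velocity: tuple       # (vx, vy) in m/s
    acceleration: tuple   # (ax, ay) in m/s^2

BROADCAST_PERIOD = 0.1    # at least 10 messages per second

def broadcast_loop(vehicle, sign, send):
    """Periodically sign and broadcast the vehicle's kinematic state."""
    while True:
        report = StateReport(
            sender_id=vehicle.pseudonym,
            timestamp=vehicle.clock(),
            position=vehicle.position(),
            velocity=vehicle.velocity(),
            acceleration=vehicle.acceleration(),
        )
        send(report, signature=sign(report))  # PKI-backed authenticity/integrity
        vehicle.sleep(BROADCAST_PERIOD)
```

The signature attached by `sign` reflects the PKI assumption: receivers can verify the sender's identity and the message integrity, but not the truthfulness of the reported values.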
5.3.3 Attack model
As mentioned above, this work focuses on insider attacks, in which an attacker compromises or gains control over one or more vehicles on the road and then broadcasts false messages. The false data can be diverse, e.g., a combination of false event reports, false location, velocity, and acceleration, and possibly even false measurements from defective sensors. Note that in this attack model the received messages pass signature verification, as they originate from vehicles compromised by the attacker. In summary, the three attack scenarios considered in this work are:
1. Single attack: The attacker manipulates the data in the BSM/CAM and broadcasts it to the surrounding vehicles. For example, a malicious vehicle may report that it is accelerating while it actually runs at constant speed. The vehicle may falsify its location randomly, or offset the reported values consistently from its real trajectory, e.g., always several meters behind.
2. Sybil attack, also known as Ghost attack [92], [93]: An attacker with multiple identities intentionally broadcasts false data as in the first attack. This attack is quite common in VANETs, particularly in network models that require high privacy [26], [109].
3. Collusion attack: A malicious vehicle can also collude with other vehicles near the victim to report false data. Such collusion aims at evading detection by honest vehicles and maximizing the damage to the receivers.
In practice, the first (single) and second (Sybil) attacks are common in vehicular communication [91]–[93]. A collusion attack may occur when well-equipped or well-funded attackers target an important victim (e.g., a VIP); this case is explicitly mentioned in the survey [91]. Specifically, the attacker may direct several vehicles moving along near the victim. By spreading the false data together, with each colluding vehicle consistently reporting false data, the attack becomes much more challenging to detect.
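To make the falsified-but-plausible data of the single and collusion attacks concrete, the following sketch (ours, for illustration only, reusing the `StateReport` container sketched above) shows how an attacker could shift every reported position by a constant offset while reporting its velocity and acceleration truthfully; the resulting trajectory remains kinematically self-consistent and therefore cannot be rejected by examining one sender's reports in isolation.

```python
# Illustrative attacker behavior (not taken from the cited works): a
# constant positional offset keeps the falsified trajectory self-consistent.
OFFSET = (-5.0, 0.0)   # e.g., always report a position 5 m behind the real one

def falsify(report, sybil_id=None):
    """Return a copy of the report with the position shifted by OFFSET.

    Velocity and acceleration are left untouched, so consecutive falsified
    reports still satisfy the kinematic relations between them.
    """
    return StateReport(
        sender_id=sybil_id or report.sender_id,  # a Sybil attacker swaps pseudonyms
        timestamp=report.timestamp,
        position=(report.position[0] + OFFSET[0],
                  report.position[1] + OFFSET[1]),
        velocity=report.velocity,
        acceleration=report.acceleration,
    )
```

Colluding vehicles would apply transformations of this kind in a coordinated way, so that their falsified reports also corroborate one another.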
Note that common trust-based misbehavior detection approaches relying on an honest majority are no longer as reliable as they used to be, since the reported data from the surrounding vehicles are intentionally manipulated.
We assume that a lying vehicle's false data are self-consistent, i.e., the reported position and velocity over time are consistent with the reported acceleration; otherwise, the false data could easily be detected by merely examining the sender's reports as a whole. Fig. 5.3.1 illustrates the attack model and several potential threats (accidents) to benign vehicles in the 5G network model. In Fig. 5.3.1, an attacker (Tx1) broadcasts BSM/CAM messages claiming that it is braking (marker 1) or has suddenly stopped (marker 4), while in fact it has stopped at the side of LANE 2 of Road 1. Another attacker (Tx2) on Road 2 broadcasts that it is approaching the street junction at high speed (90 km/h), but it has actually stopped at the roadside. These attacks may cause the Rx vehicles serious accidents due to the reduced rear-end distance (markers 1, 2), and could even result in a pileup (marker 4) if many cars are moving at high speed and tailgating.
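The self-consistency assumption above can be stated operationally: between two consecutive reports from the same sender, the newly reported position should follow from the previously reported position, velocity, and acceleration under a constant-acceleration motion model. The sketch below is our illustration of such a per-sender check; the tolerance value is an assumed tuning parameter (covering measurement noise and model error), not a figure taken from the cited works.

```python
import math

def is_self_consistent(prev, curr, tol=1.0):
    """Check that curr follows from prev under a constant-acceleration model.

    prev, curr: consecutive StateReport objects from the same sender.
    tol: assumed tolerance in meters for measurement noise and model error.
    """
    dt = curr.timestamp - prev.timestamp
    expected = tuple(
        p + v * dt + 0.5 * a * dt ** 2
        for p, v, a in zip(prev.position, prev.velocity, prev.acceleration)
    )
    return math.dist(expected, curr.position) <= tol
```

False data that fails such a check would be flagged by examining a single sender's reports alone, which is precisely why the attacker is assumed to keep its reports self-consistent.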