Layer 8 Tarpits:

Overwhelming malicious actors with distracting information

Authors

DOI:

https://doi.org/10.34190/eccws.21.1.252

Keywords:

tarpit, honeypot, information security, disinformation, decision making

Abstract

This paper presents a concept for utilising falsified documents and disinformation as a security measure that diminishes the utility of stolen information for the attacker. The classical tarpitting honeypot creates virtual servers that appear attractive to worms and other malware and answer their connection attempts in such a way that the machine on the other end becomes stuck. A common extension to the OSI model refers to the user as layer 8, above the application layer. By generating attractive-looking but falsified documents and datasets within our secured network alongside the real information, we may be able to force the malicious user on the other end to become similarly 'stuck', as they need to dig through and verify all the information they have managed to steal. This in effect slows the opponent's decision-making, can make their activity in the network more visible, and may even mislead them. The concept has similarities to Canary trap or Barium meal tests, and to using honey tokens to help identify who the leaker might be or from which database the data was stolen. In our concept, however, the amount of falsified data or fake database entries is significantly larger, and the main purpose is to diminish the utility of the stolen data or otherwise leaked information. The need to verify the information and scan through piles of documents in search of the real information among them can give the defender more time to react if the attack is noticed. It also reduces the value of the information if it is simply dumped in the open, as its contents and authenticity can be more easily questioned. AI-powered methods such as GPT-3, which can generate massive amounts of realistic-looking text that is hard to differentiate from human-generated text, could make this type of concept more feasible for the defender to utilise.
A shortcoming of this concept is the risk that legitimate end-users could also confuse the real and falsified information unless this is somehow prevented.
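The mechanism described above — flooding a data store with many falsified documents per real one, each carrying a honey-token-style identifier that lets the defender later recognise a leaked decoy — could be sketched as follows. This is only a toy illustration under assumed names (`make_decoys`, `is_decoy`) and an assumed 10:1 decoy ratio; the paper envisions a language model such as GPT-3 producing the realistic-looking decoy text, whereas the snippet uses a trivial template.

```python
import hashlib
import random


def make_decoys(real_docs, decoy_ratio=10, seed=0):
    """Mix each real document with decoy_ratio falsified variants.

    Each decoy carries a deterministic honey token derived from a
    secret-free hash here (a real system would key this), so the
    defender can later recognise which decoy, and hence which
    store, was leaked.
    """
    rng = random.Random(seed)
    corpus = []
    for doc_id, text in real_docs.items():
        corpus.append({"id": doc_id, "text": text, "decoy": False})
        for i in range(decoy_ratio):
            token = hashlib.sha256(f"{doc_id}:{i}".encode()).hexdigest()[:12]
            # Placeholder for LLM-generated plausible text.
            fake = f"{text.split()[0]} report draft {i} ref {token}"
            corpus.append({"id": f"{doc_id}-d{i}", "text": fake, "decoy": True})
    rng.shuffle(corpus)  # real and fake documents are interleaved
    return corpus


def is_decoy(doc_id, token):
    """Recompute the honey token to verify that leaked text is a decoy."""
    base, _, idx = doc_id.rpartition("-d")
    return hashlib.sha256(f"{base}:{idx}".encode()).hexdigest()[:12] == token
```

The attacker now faces eleven documents for every genuine one and must verify each, while the defender can cheaply check any leaked fragment against its token.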

Author Biography

Petteri Simola, Finnish Defence Research Agency (FDRA)

Research manager, PhD (Psych), adjunct professor

Military psychology research group

Human Performance Division

Published

2022-06-08