Category: "Social and Behavioral Sciences" with 298 Results

Computer users have access to computer security information from many different sources, but few people receive explicit computer security training. Despite this lack of formal education, users regularly make many important security decisions, such as “Should I click on this potentially shady link?” or “Should I enter my password into this form?” For these decisions, much knowledge comes from incidental and informal learning. To better understand differences in the security-related information available to users for such learning, we compared three informal sources of computer security information: news articles, web pages containing computer security advice, and stories about the experiences of friends and family. Using a Latent Dirichlet Allocation topic model, we found that security information from peers usually focuses on who conducts attacks, information containing expertise focuses instead on how attacks are conducted, and information from the news focuses on the consequences of attacks. These differences may prevent users from understanding the persistence and frequency of seemingly mundane threats (viruses, phishing), or from associating protective measures with the generalized threats the users are concerned about (hackers). Our findings highlight the potential for sources of informal security education to create patterns in user knowledge that affect their ability to make good security decisions.
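
As a hedged illustration of the method this abstract mentions, the sketch below fits a small Latent Dirichlet Allocation topic model with scikit-learn and prints the top words per inferred topic. The toy corpus, topic count, and parameters are hypothetical stand-ins, not the study's actual data or configuration.

```python
# Minimal illustrative sketch of fitting an LDA topic model to short
# security-related texts with scikit-learn. Corpus and parameters are
# hypothetical; they do not reproduce the study's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "my cousin clicked a link and a hacker stole her password",
    "phishing emails trick users into entering credentials on fake forms",
    "the breach exposed millions of customer records, the company said",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Print the highest-weight words for each inferred topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

In a study like this one, the per-document topic distributions (via `lda.transform(X)`) would then be compared across the three source types to surface differences in emphasis.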

Security is a critical concern around the world. In many domains, from cybersecurity to sustainability, limited security resources prevent complete security coverage at all times. Instead, these limited resources must be scheduled (or allocated, or deployed) while simultaneously taking into account the importance of different targets, the responses of adversaries to the security posture, and potential uncertainties in adversary payoffs and observations. Computational game theory can help generate such security schedules. Indeed, casting the problem as a Stackelberg game, we have developed new algorithms that have now been deployed over multiple years in multiple applications for scheduling security resources. These applications are leading to real-world, use-inspired research in the emerging research area of “security games.” The research challenges posed by these applications include scaling up security games to real-world-sized problems, handling multiple types of uncertainty, and dealing with the bounded rationality of human adversaries. In the cybersecurity domain, the interaction between the defender and the adversary is quite complicated, with a high degree of incomplete information and uncertainty. While solutions have been proposed for parts of the problem space in cybersecurity, the need of the hour is a comprehensive understanding of the whole space, including the interaction with the adversary. We highlight the innovations in security games that could be used to tackle the game problem in cybersecurity.
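
To make the Stackelberg framing concrete, here is a minimal, hypothetical sketch: a two-target security game in which the defender commits to a coverage distribution, the adversary observes it and best-responds, and the defender's commitment is found by brute-force grid search. All payoffs are invented; the deployed systems the abstract describes instead solve far larger games with LP/MILP formulations.

```python
# Toy two-target Stackelberg security game, solved by grid search over
# the defender's coverage probability. Payoffs are hypothetical.
# Each target maps to ((covered_def, covered_att), (uncovered_def, uncovered_att)).
REWARD = {
    "airport": ((5, -3), (-10, 8)),
    "harbor":  ((3, -1), (-4, 4)),
}

def attacker_best_response(cov):
    # The attacker observes coverage and attacks the target with the
    # highest expected attacker utility (strong-Stackelberg tie-breaking
    # in the defender's favor is ignored for simplicity).
    def a_util(t):
        (c_d, c_a), (u_d, u_a) = REWARD[t]
        return cov[t] * c_a + (1 - cov[t]) * u_a
    return max(REWARD, key=a_util)

def defender_utility(cov, target):
    (c_d, c_a), (u_d, u_a) = REWARD[target]
    return cov[target] * c_d + (1 - cov[target]) * u_d

# One unit of security resource split across the two targets.
best = None
for i in range(101):
    cov = {"airport": i / 100, "harbor": 1 - i / 100}
    t = attacker_best_response(cov)
    u = defender_utility(cov, t)
    if best is None or u > best[0]:
        best = (u, cov, t)

print(f"defender utility {best[0]:.2f} with coverage {best[1]}, attack on {best[2]}")
```

The key Stackelberg feature is the order of play: the defender's mixed strategy is chosen first, anticipating the adversary's observed best response, rather than solving for a simultaneous-move equilibrium.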

When should states publicly attribute cyber intrusions? Whilst this is a question governments increasingly grapple with, academia has hardly helped in providing answers. This article describes the stages of public attribution and provides a Public Attribution Framework designed to explain, guide, and improve states' decision making on public attribution. Our general argument is that public attribution is a highly complex process which requires trade-offs across multiple considerations. Effective public attribution necessitates a clear understanding not only of the attributed cyber operation and the cyber threat actor, but also of the broader geopolitical environment, allied positions and activities, and the legal context. This also implies that more public attribution is not always better. Public attribution carries significant risks, which are often badly understood. We propose that the decision maker's attitude towards public attribution should be one of 'strategic, coordinated pragmatism'. Public attribution – as part of a strategy – can only be successful if there is a consistent goal, whilst the avenues for potential negative counter-effects are assessed on a case-by-case basis.

While much focus has remained on the concept of cyberwar, what we have been observing in actual cyber behaviour are campaigns composed of linked cyber operations, with the specific objective of achieving strategic outcomes without the need for armed attack. These campaigns are not simply transitory clever tactics but are strategic in intent. This article examines strategic cyber competition and reveals how the adoption of a different construct can pivot both explanation and policy prescription. Strategy must be unshackled from the presumption that it deals only with the realm of coercion, militarised crisis, and war in cyberspace.

The rapid developments in Artificial Intelligence (AI) and the intensifying adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics, and the like pose serious challenges to governments, which must manage the scale and speed of the socio-technical transitions now occurring. While considerable literature is emerging on various aspects of AI, the governance of AI remains a significantly underdeveloped area. New applications of AI offer opportunities for increasing economic efficiency and quality of life, but they also generate unexpected and unintended consequences and pose new forms of risk that need to be addressed. To enhance the benefits of AI while minimising its adverse risks, governments worldwide need to better understand the scope and depth of the risks posed and to develop the regulatory and governance processes and structures needed to address these challenges. This introductory article unpacks AI and describes why its governance should be receiving far more attention given the myriad challenges it presents. It then summarises the special issue articles and highlights their key contributions. This special issue introduces the multifaceted challenges of the governance of AI, including emerging governance approaches to AI, policy capacity building, the legal and regulatory challenges of AI and robotics, and outstanding issues and gaps that need attention. The special issue showcases the state of the art in the governance of AI, aiming to enable researchers and practitioners to appreciate the challenges and complexities of AI governance and to highlight future avenues for exploration.
