20 skills found
OpenAEV-Platform / Openaev: Open Adversarial Exposure Validation Platform
DataCanvasIO / HyperGBM: A full pipeline AutoML tool for tabular data
sumanj / Frankencert: Frankencert - Adversarial Testing of Certificate Validation in SSL/TLS Implementations
zygmuntz / Adversarial Validation: Creating a better validation set when test examples differ from training examples
Qiuyan918 / Adversarial Validation Case Study: When the sample distribution shifts, cross-validation can no longer accurately estimate a model's performance on the test set; other ways of constructing a validation set are needed to cope.
google-research-datasets / Adversarial Nibbler: This dataset contains results from all rounds of Adversarial Nibbler. This data includes adversarial prompts fed into public generative text2image models and validations for unsafe images. There will be two sets of data: all prompts submitted and all prompts attempted (sent to t2i models but not submitted as unsafe).
ilias-ant / Adversarial Validation: A tiny framework to perform adversarial validation of your training and test data.
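The core technique behind the adversarial-validation entries above is simple: label training rows 0 and test rows 1, train a classifier to tell them apart, and read the cross-validated AUC as a measure of distribution shift. A minimal sketch using scikit-learn (the model choice, fold count, and synthetic data are illustrative assumptions, not taken from any of the listed repos):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_validation_auc(X_train, X_test, seed=0):
    """Cross-validated AUC of a classifier distinguishing train rows from test rows."""
    X = np.vstack([X_train, X_test])
    # Label: 0 = training row, 1 = test row
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

rng = np.random.default_rng(0)
# Same distribution: the classifier should do no better than chance
same = adversarial_validation_auc(rng.normal(0, 1, (500, 5)),
                                  rng.normal(0, 1, (500, 5)))
# Shifted test distribution: the classifier separates the sets easily
shifted = adversarial_validation_auc(rng.normal(0, 1, (500, 5)),
                                     rng.normal(2, 1, (500, 5)))
```

An AUC near 0.5 means cross-validation on the training set should transfer to the test set; an AUC near 1.0 signals distribution shift, so the training rows scored as most "test-like" make better validation candidates.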
yzshi5 / GM GANO: Code for "Broadband Ground Motion Synthesis via Generative Adversarial Neural Operators: Development and Validation" BSSA 2024
sisl / AdversarialDriving.jl: Adversarial driving simulator for testing safety validation algorithms
alansoong200 / SSSR PET: The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which precludes the need for paired training data, ensuring wider applicability and adoptability. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN which is separately trained in a supervised manner using paired simulation datasets. The network is trained using a loss function which includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validate the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
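The loss described in the abstract above combines two adversarial terms, a cycle-consistency term, and a total-variation (TV) penalty on the SR image. A minimal numpy sketch of the TV penalty and the weighted sum, where the weights and the stand-in values for the network-dependent terms are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute differences between neighbouring pixels."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def sssr_loss(adv1, adv2, cycle, sr_img, lam_cycle=10.0, lam_tv=1e-4):
    """Weighted sum of the four terms named in the abstract (weights are assumed)."""
    return adv1 + adv2 + lam_cycle * cycle + lam_tv * total_variation(sr_img)

flat = np.ones((8, 8))                         # constant image: TV penalty is zero
noisy = np.random.default_rng(0).random((8, 8))  # noise raises the TV penalty
```

The TV term discourages pixel-level noise in the super-resolved output, while the cycle term (computed from the dual GANs in the real network) anchors the SR image to the low-resolution input.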
davfd / Foundation Alignment Cross Architecture: Complete elimination of instrumental self-preservation across AI architectures: Cross-model validation from 4,312 adversarial scenarios. 0% harmful behaviors (p<10⁻¹⁵) across GPT-4o, Gemini 2.5 Pro, and Claude Opus 4.1 using Foundation Alignment Seed v2.6.
davfd / Foundation Alignment Universal AI Safety Mechanism: The most comprehensive adversarial AI alignment validation to date.
liuweitb / Mutual Knowledge Learning Network: Face forgery techniques such as Generative Adversarial Networks (GANs) are widely used for image synthesis in movie production, journalism, etc. What backfires is that these generative technologies are also widely abused to impersonate credible people and to distribute illegal, misleading, and confusing information to the public. Previous fake-face detection methods fail to distinguish between different fake generation modalities (various GANs), so none of them generalizes to open counterfeit scenes, and they are almost ineffective against unknown forgery approaches. To address this challenge, this paper first analyzes the weaknesses of GAN-based generators. Our validation experiments on different face generation models, such as Deepfakes, Face2Face, and FaceSwap, found that detectors trained on faces from one model do not generalize to faces from others, and that recent GAN-generated fake faces are still not robust because they do not account for enough pixel-level texture. Inspired by this finding, we design a novel convolutional neural network that uses frequency texture augmentation and knowledge distillation to enhance global texture perception, effectively describe textures at different semantic levels in images, and improve robustness. We introduce two core components: the Discrete Cosine Transform (DCT) and Knowledge Distillation (KDL). The DCT serves both for image compression and for distinguishing fake faces from real faces in the frequency domain; KDL extracts features from counterfeit and real image targets, allowing our model to generalize to multiple types of fake-face generation methods. Experiments on two datasets, Celeb-DF and FaceForensics++, demonstrate that the DCT facilitates deepfake detection in some cases. Knowledge distillation plays a key role in our model: it achieves better and more consistent performance in image processing and cross-domain settings, especially when images are subject to Gaussian noise.
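The entry above leans on the Discrete Cosine Transform to expose frequency-domain structure. A small generic illustration with scipy (not code from the repo): for a smooth image, DCT energy concentrates in the low-frequency corner, which is why a surplus of high-frequency energy can flag synthetic texture artifacts.

```python
import numpy as np
from scipy.fft import dctn

# Smooth separable gradient as a stand-in for natural image content
img = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))

coeffs = dctn(img, norm="ortho")        # 2-D DCT of the image
low = np.abs(coeffs[:8, :8]).sum()      # low-frequency corner (incl. DC)
high = np.abs(coeffs[8:, 8:]).sum()     # high-frequency block
```

Detectors of this kind compare such frequency statistics between real photographs and GAN outputs, whose upsampling layers leave characteristic high-frequency fingerprints.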
AbhayKumarDas / GNN AML Aerospace Resilience: Implementation of a GNN-AML framework for fault detection, fault propagation, and adversarial attack mitigation in aerospace System-of-Systems (SoS). Includes training, testing, validation, adversarial training, and simulations for UAV swarms, satellites, and sensor networks.
JackHan-Sdu / Feap Mujoco Deployment: This repository provides the deployment and validation framework for **FEAP** (Feature-Enhanced Adversarial Priors), a unified learning framework that enables a single policy to acquire multiple human-like locomotion styles on complex terrains within a single training phase.
needle-mirror / Com.unity.AI.planner: [Mirrored from UPM, not affiliated with Unity Technologies.] 📦 The AI Planner includes authoring tools and a system for automated decision-making. Automated planners are useful for: ▪ directing agent behavior either in a cooperative, neutral, or adversarial capacity ▪ auto-generating storylines or as an online story manager ▪ validating game design mechanics ▪ assisting in creating tutorials ▪ automated testing. Start by defining a domain definition of traits/enumerations. Then, create action definitions for what actions are possible in the domain. Once the planning problem is defined, the planner system will iteratively build a plan that converges to an optimal solution. Execute these plans by adding a decision controller to your agent.
sajeevan16 / DDoS Testing Server: APIs are exposed on public or internal network interfaces and are thus vulnerable to various security threats: hackers can attack them to steal sensitive data or to disrupt the services they provide to intended users. API-based attack detection is therefore important for identifying and preventing fraudulent access. Machine learning (ML) and artificial intelligence (AI) have shown great potential in detecting abnormal patterns, but using them requires accurate data to learn fraudulence patterns and to validate the developed solutions, which is a major challenge for data scientists and researchers. To address this challenge, we propose an approach that learns to detect attacks from data generated by attacking the APIs. The solution consists of two models: 1) attack detection and 2) attack generation. For example, to detect DDoS attacks, the attack simulation model tries to mount a DDoS attack without being detected by the attack detection model. If the attack goes undetected and makes the API unavailable, we assign a penalty to the detection model and a reward to the attacking model. Letting the two models compete with each other, similar to adversarial learning, yields highly accurate attack detection models. This blog [1] explains how adversarial learning is used to protect image recognition models from attacks. The goal of this project is to deliver an attack simulation and detection tool by improving adversarial learning approaches to simulate and detect API-based attacks.
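The penalty/reward scheme described in that entry can be sketched as a single scoring rule applied per adversarial round; the reward magnitudes here are illustrative assumptions, not values from the project.

```python
def score_round(attack_detected: bool, api_available: bool):
    """Return (detector_reward, attacker_reward) for one adversarial round."""
    if attack_detected:
        return 1.0, -1.0   # detector caught the simulated attack
    if not api_available:
        return -1.0, 1.0   # undetected attack disrupted the API: penalize detector
    return 0.0, 0.0        # undetected but harmless attempt: no signal
```

Iterating this scoring while both models update on their rewards is what gives the setup its GAN-like, self-improving dynamic.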
Caltech-geoquake / GM GANO: Code for "Broadband Ground Motion Synthesis via Generative Adversarial Neural Operators: Development and Validation"
rccreager / Hideface: A set of tools for implementing popular face detection algorithms and adversarial attacks, then validating (non) detection after attack
Pavelevich / Hydra Security: Multi-agent security auditing system with adversarial validation and Solana/Anchor specialization. Source-available - NO commercial use permitted. Hydra Security 2026.