add README and LICENSE
Munkhtenger19 committed Jun 1, 2024
1 parent ecb9c68 commit e0cea37
Showing 2 changed files with 44 additions and 0 deletions.
7 changes: 7 additions & 0 deletions LICENSE
@@ -0,0 +1,7 @@
Copyright 2024 Munkhtenger Munkh-Aldar

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
37 changes: 37 additions & 0 deletions README.md
@@ -0,0 +1,37 @@
## GNN Robustness Toolbox (GRT)

The GNN Robustness Toolbox (GRT) is a Python framework for evaluating the robustness of Graph Neural Network (GNN) models against adversarial attacks. GRT provides a flexible and extensible platform for conducting robustness experiments, enabling researchers and practitioners to:

* Systematically evaluate the robustness of different GNN architectures.
* Compare the effectiveness of various adversarial attack strategies.
* Develop and benchmark defense mechanisms against adversarial attacks.

**Key Features:**

* **Extensible Architecture:** Easily integrate custom models, attacks, datasets, transforms, optimizers, and loss functions.
* **Flexible Configuration:** Define experiments using a user-friendly YAML configuration file.
* **Model Caching:** Cache trained models and results to avoid redundant computations.
* **Comprehensive Output:** Generate detailed experiment results in JSON and CSV formats.
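
To illustrate the configuration-driven workflow, an experiment file might look something like the sketch below. All field names here are hypothetical placeholders, not GRT's actual schema; consult the examples under `configs/` for the real structure:

```yaml
# Hypothetical experiment configuration -- field names are illustrative only;
# see the files under configs/ for the actual schema.
experiment:
  name: gcn_robustness_demo
  seed: 42
model:
  name: GCN
  params:
    hidden_channels: 64
dataset:
  name: Cora
attack:
  name: PRBCD
  budget: 0.05
output:
  dir: ./results
```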

**Installation:**

```bash
git clone https://github.com/Munkhtenger19/Toolbox.git
cd Toolbox
pip install -r requirements.txt
```

**Usage:**

1. **Define Custom Components (Optional):** Create and register custom models, attacks, datasets, etc., in the `custom_components` directory.
2. **Configure Experiments:** Create a YAML configuration file specifying the experiment settings (see `configs/` for examples).
3. **Run Experiments:** Execute the `main.py` script with the configuration file path:

```bash
python main.py --cfg path/to/config.yaml
```
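
GRT's actual registration interface is not documented in this README, so the following is only a generic, self-contained sketch of the registry pattern such toolboxes commonly use for custom components. The `MODEL_REGISTRY` dictionary and `register` decorator names are assumptions for illustration, not GRT's real API:

```python
# Generic registry-pattern sketch; MODEL_REGISTRY and register are
# illustrative assumptions, not GRT's actual API.
from typing import Callable, Dict, Type

MODEL_REGISTRY: Dict[str, Type] = {}

def register(name: str) -> Callable[[Type], Type]:
    """Decorator that records a class under a string key, so a YAML
    config can later refer to it by name."""
    def wrapper(cls: Type) -> Type:
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper

@register("MyGNN")
class MyGNN:
    def __init__(self, hidden_channels: int = 64):
        self.hidden_channels = hidden_channels

# A config loader can then instantiate the model from its registered name:
model = MODEL_REGISTRY["MyGNN"](hidden_channels=32)
```

The benefit of this pattern is that experiment configs stay plain data: they name components as strings, and the framework resolves those names through the registry at run time.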


**License:**

GRT is released under the [MIT License](LICENSE).
