EvalNE: A Python library for evaluating Network Embedding methods

<div id="top"></div>


<div align="center"> <a href="https://evalne.readthedocs.io/en/latest/"> <img src="docs/source/EvalNE-logo.jpg" alt="Logo" height="80"> </a> <br /> <a href="https://evalne.readthedocs.io/en/latest/"><strong>Read The Docs »</strong></a> </div> <!-- TABLE OF CONTENTS --> <details> <summary>Table of Contents</summary> <ol> <li> <a href="#about-evalne">About EvalNE</a> <ul> <li><a href="#for-methodologists">For methodologists</a></li> <li><a href="#for-practitioners">For practitioners</a></li> </ul> </li> <li><a href="#installation">Installation</a></li> <li> <a href="#usage">Usage</a> <ul> <li><a href="#as-a-command-line-tool">As a command line tool</a></li> <li><a href="#as-an-api">As an API</a></li> <li><a href="#output">Output</a></li> </ul> </li> <li><a href="#contributing">Contributing</a></li> <li><a href="#license">License</a></li> <li><a href="#citation">Citation</a></li> </ol> </details>

About EvalNE

This repository provides the source code for EvalNE, an open-source Python library designed for assessing and comparing the performance of Network Embedding (NE) methods on Link Prediction (LP), Sign Prediction (SP), Network Reconstruction (NR) and Node Classification (NC) tasks. The library aims to simplify these complex and time-consuming evaluation processes by automating and abstracting tasks such as hyper-parameter tuning and model validation, node and edge sampling, node-pair embedding computation, results reporting and data visualization.

The library can be used both as a command line tool and an API. In its current version, EvalNE can evaluate unweighted directed and undirected simple networks.

A Graphical User Interface based on Plotly Dash has recently been added to EvalNE. The interface allows users to set up and execute EvalNE evaluations in an intuitive and interactive way, monitor system resources, and browse previous evaluations. Check out the project here -> EvalNE-gui.

Interested in robustness evaluation? That can also be done using EvalNE! Check out the following project (we will port it into the main library very soon): EvalNE-robustness.

The library is maintained by Alexandru Mara (alexandru.mara(at)ugent.be). The full documentation of EvalNE is hosted by Read the Docs and can be found here.

For Methodologists

A command line interface in combination with a configuration file (describing datasets, methods and evaluation setup) allows the user to evaluate any embedding method and compare it to the state of the art or replicate the experimental setup of existing papers without the need to write additional code. EvalNE does not provide implementations of any NE methods but offers the necessary environment to evaluate any off-the-shelf algorithm. Implementations of NE methods can be obtained from libraries such as OpenNE or GEM as well as directly from the web pages of the authors e.g. Deepwalk, Node2vec, LINE, PRUNE, Metapath2vec, CNE.

EvalNE does, however, include the following LP heuristics for both directed and undirected networks (in and out node neighbourhoods), which can be used as baselines for different downstream tasks:

  • Random Prediction
  • Common Neighbours
  • Jaccard Coefficient
  • Adamic Adar Index
  • Preferential Attachment
  • Resource Allocation Index
  • Cosine Similarity
  • Leicht-Holme-Newman index
  • Topological Overlap
  • Katz similarity
  • All baselines (a combination of the first 5 heuristics in a 5-dim embedding)
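Most of these heuristics score a candidate node pair purely from neighbourhood overlap. As an illustration of the idea (not EvalNE's own implementation; the toy graph and function names below are made up for this sketch), a few of them can be written in a handful of lines:

```python
import math

# Toy undirected graph as adjacency sets (illustrative data only).
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}

def common_neighbours(u, v):
    # |N(u) ∩ N(v)|
    return len(adj[u] & adj[v])

def jaccard_coefficient(u, v):
    # |N(u) ∩ N(v)| / |N(u) ∪ N(v)|
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def adamic_adar(u, v):
    # Sum of 1/log(deg(w)) over common neighbours w with degree > 1.
    return sum(1.0 / math.log(len(adj[w]))
               for w in adj[u] & adj[v] if len(adj[w]) > 1)

def preferential_attachment(u, v):
    # deg(u) * deg(v)
    return len(adj[u]) * len(adj[v])

print(common_neighbours(1, 3), jaccard_coefficient(1, 3))
```

For directed networks, EvalNE additionally distinguishes in- and out-neighbourhoods; the sketch above only covers the undirected case.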

For practitioners

When used as an API, EvalNE provides functions to:

  • Load and preprocess graphs
  • Obtain general graph statistics
  • Conveniently read node/edge embeddings from files
  • Sample nodes/edges to form train/test/validation sets
  • Different approaches for edge sampling:
    • Timestamp based sampling: the most recent edges are used for testing
    • Random sampling: random split of edges in train and test sets
    • Spanning tree sampling: train set will contain a spanning tree of the graph
    • Fast depth first search sampling: similar to spanning tree sampling but based on DFS
  • Negative sampling or generation of non-edge pairs using:
    • Open world assumption: train non-edges do not overlap with train edges
    • Closed world assumption: train non-edges do not overlap with either train or test edges
  • Evaluate LP, SP and NR for methods that output:
    • Node Embeddings
    • Node-pair Embeddings
    • Similarity scores (e.g. the ones given by LP heuristics)
  • Implement simple visualization routines for embeddings and graphs
  • Include NC evaluation for node embedding methods
  • Provide binary operators to compute edge embeddings from node feature vectors:
    • Average
    • Hadamard
    • Weighted L1
    • Weighted L2
  • Use any scikit-learn classifier for LP/SP/NR/NC tasks
  • Provide routines to run command line commands or functions with a given timeout
  • Include hyperparameter tuning based on grid search
  • Implement over 10 evaluation metrics such as AUC and F-score
  • Output AUC and PR curves
  • Include routines to generate tabular outputs and parse them directly into LaTeX tables
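The four binary operators listed above follow the usual node2vec convention for turning two node embeddings into a single node-pair embedding. A minimal pure-Python sketch of what each operator computes (illustrative only; EvalNE's own implementation may differ in signature and vector types):

```python
# Each operator maps two node embeddings x, y (same dimension)
# to one node-pair (edge) embedding of that dimension.

def average(x, y):
    # Component-wise mean: (x_i + y_i) / 2
    return [(a + b) / 2.0 for a, b in zip(x, y)]

def hadamard(x, y):
    # Component-wise product: x_i * y_i
    return [a * b for a, b in zip(x, y)]

def weighted_l1(x, y):
    # Component-wise absolute difference: |x_i - y_i|
    return [abs(a - b) for a, b in zip(x, y)]

def weighted_l2(x, y):
    # Component-wise squared difference: (x_i - y_i)^2
    return [(a - b) ** 2 for a, b in zip(x, y)]

emb_u, emb_v = [1.0, 2.0], [3.0, 6.0]
edge_emb = hadamard(emb_u, emb_v)  # [3.0, 12.0]
```

Note that all four operators are symmetric in their arguments, which is why directed evaluations rely on the edge sampling strategy rather than on the operator to capture direction.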
<p align="right">(<a href="#top">back to top</a>)</p>

Installation

The latest version of the library (v0.4.0) has been tested on Python 3.8.

EvalNE depends on the following packages:

  • Numpy
  • Scipy
  • Scikit-learn
  • Matplotlib
  • NetworkX
  • Pandas
  • tqdm
  • kiwisolver

Before installing EvalNE, make sure that the pip and python-tk packages are installed on your system. This can be done by running:

sudo apt-get install python3-pip
sudo apt-get install python3-tk

Option 1: Install the library using pip:

pip install evalne

Option 2: Clone the code and install it:

  • Clone the EvalNE repository:

    git clone https://github.com/Dru-Mara/EvalNE.git
    cd EvalNE
    
  • Download dependencies and install the library:

    # System-wide install
    sudo python setup.py install
    
    # Alternative install for a single user
    python setup.py install --user
    

Check the installation by running simple_example.py or functions_example.py as shown below. If you installed the package using pip, you will need to download the examples folder from the GitHub repository first.

cd examples/
python simple_example.py

NOTE: In order to run the evaluator_example.py script, the OpenNE library, PRUNE and Metapath2Vec are required. The instructions for installing them are available here, here, and here, respectively. The instructions on how to run evaluations using .ini files are provided in the next section.

<p align="right">(<a href="#top">back to top</a>)</p>

Usage

As a command line tool

The library takes as input an .ini configuration file. This file allows the user to specify the evaluation settings: the task to perform, the networks to use, data preprocessing, the methods and baselines to evaluate, and the types of output to provide.

An example conf.ini file is provided describing the available options for each parameter. This file can be either modified to simulate different evaluation settings or used as a template to generate other .ini files.
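For orientation, a configuration file groups related options into sections. The fragment below is only an approximation written for illustration (the section and option names are assumptions and may not match the library exactly); treat the conf.ini shipped with EvalNE as the authoritative reference for the available parameters:

```ini
; Hypothetical fragment -- consult the conf.ini shipped with EvalNE
; for the exact section and option names.
[GENERAL]
TASK = lp
EMBED_DIM = 128

[NETWORKS]
NAMES = StudentDB
INPATHS = ../data/StudentDB/studentdb.edgelist

[BASELINES]
LP_BASELINES = common_neighbours
               jaccard_coefficient
```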

Additional configuration (.ini) files are provided replicating the experimental sections of different papers in the NE literature. These can be found in different folders under examples/replicated_setups. One such configuration file is examples/replicated_setups/node2vec/conf_node2vec.ini. This file simulates the link prediction experiments of the paper "node2vec: Scalable Feature Learning for Networks" by A. Grover and J. Leskovec.

Once the configuration is set, the evaluation can be run as indicated in the next subsection.

Running the conf examples

In order to run the evaluations using the provided conf.ini or any other .ini file, the following steps are necessary:

  1. Download/Install the methods you want to test:

  2. Download t
