XKT
Multiple Knowledge Tracing models implemented by mxnet-gluon.
Those who prefer pytorch can visit the sister projects,
where the former is easy to understand and the latter shares the same architecture as XKT.
For convenient dataset downloading and preprocessing for the knowledge tracing task, visit Edudata for handy APIs.
Tutorial
Installation
- First get the repo onto your computer via `git` or any other way you like.
- Suppose you put the project under your `home` directory; then you can either run `pip install -e .` to install the package, or `export PYTHONPATH=$PYTHONPATH:~/XKT`.
Quick Start
To learn how to use XKT, readers are encouraged to see
- examples containing script usage and notebook demo and
- scripts containing command-line interfaces which can be used to conduct hyper-parameters searching.
Data Format
In XKT, all sequences are stored in json format, such as:
[[419, 1], [419, 1], [419, 1], [665, 0], [665, 0]]
Each item in the sequence represents one interaction. The first element of the item is the exercise id,
and the second indicates whether the learner answered the exercise correctly: 0 for wrong and 1 for correct.
Each line is one json record, corresponding to one learner's interaction sequence.
A demo loading program is presented as follows:
```python
import json
from tqdm import tqdm

def extract(data_src):
    responses = []
    step = 200
    with open(data_src) as f:
        for line in tqdm(f, "reading data from %s" % data_src):
            data = json.loads(line)
            # split long sequences into segments of at most `step` interactions,
            # dropping segments shorter than 2
            for i in range(0, len(data), step):
                if len(data[i: i + step]) < 2:
                    continue
                responses.append(data[i: i + step])
    return responses
```
The above program can be found in XKT/utils/etl.py.
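As a quick sanity check, the loader above can be exercised on a one-record file. It is re-implemented here without tqdm so the snippet is self-contained; the temporary file stands in for a real dataset and is not part of XKT:

```python
import json
import tempfile

# a tiny re-implementation of extract() from above, without the tqdm
# progress bar, so this snippet runs on its own
def extract(data_src, step=200):
    responses = []
    with open(data_src) as f:
        for line in f:
            data = json.loads(line)
            for i in range(0, len(data), step):
                if len(data[i: i + step]) < 2:
                    continue
                responses.append(data[i: i + step])
    return responses

# write one learner's sequence in the json format shown above
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([[419, 1], [419, 1], [419, 1], [665, 0], [665, 0]], f)
    f.write("\n")
    path = f.name

responses = extract(path)
print(responses)  # → [[[419, 1], [419, 1], [419, 1], [665, 0], [665, 0]]]
```

Since the demo sequence has only 5 interactions, it fits into a single segment of at most 200 steps.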
To deal with a dataset stored in the triple-line (tl) format, such as:

```
5
419,419,419,665,665
1,1,1,0,0
```

where the three lines are the sequence length, the exercise ids and the correctness labels, refer to the Edudata Documentation.
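To illustrate the mapping between the two formats, here is a minimal conversion sketch. The function name `tl2json_lines` is hypothetical; EduData ships ready-made converters and should be preferred in practice:

```python
import json

# hypothetical helper: turn triple-line (tl) records into json-sequence
# records, one json string per learner
def tl2json_lines(tl_lines):
    records = []
    # each tl record occupies three lines: length, exercise ids, responses
    for i in range(0, len(tl_lines), 3):
        length = int(tl_lines[i])
        exercises = [int(e) for e in tl_lines[i + 1].split(",")]
        answers = [int(a) for a in tl_lines[i + 2].split(",")]
        # pair each exercise with its response, truncated to the stated length
        pairs = [list(p) for p in zip(exercises[:length], answers[:length])]
        records.append(json.dumps(pairs))
    return records

demo = ["5", "419,419,419,665,665", "1,1,1,0,0"]
print(tl2json_lines(demo))
# → ['[[419, 1], [419, 1], [419, 1], [665, 0], [665, 0]]']
```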
Citation
If this repository is helpful for you, please cite our work:
@inproceedings{tong2020structure,
title={Structure-based Knowledge Tracing: An Influence Propagation View},
author={Tong, Shiwei and Liu, Qi and Huang, Wei and Huang, Zhenya and Chen, Enhong and Liu, Chuanren and Ma, Haiping and Wang, Shijin},
booktitle={2020 IEEE International Conference on Data Mining (ICDM)},
pages={541--550},
year={2020},
organization={IEEE}
}
Appendix
Model
Many knowledge tracing models have been implemented in different frameworks. The following are implementations in Python (starred entries are the authors' versions):

- DKT [tensorflow]
- DKT+ [tensorflow*]
- DKVMN [mxnet*]
- KTM [libfm]
- EKT [pytorch*]

More models can be found here.
Dataset
Some datasets are suitable for this task; refer to the BaseData ktbd doc for details on them.
