simglucose
A Type-1 Diabetes simulator implemented in Python for Reinforcement Learning purpose
This simulator is a Python implementation of the FDA-approved UVa/Padova Simulator (2008 version) for research purposes only. The simulator includes 30 virtual patients: 10 adolescents, 10 adults, and 10 children. Documentation of the virtual patients' parameters is available.
HOW TO CITE: Jinyu Xie. Simglucose v0.2.1 (2018) [Online]. Available: https://github.com/jxx123/simglucose. Accessed on: Month-Date-Year.
Notice: simglucose no longer supports Python 3.7 and 3.8; please upgrade to Python >= 3.9. Thanks!
Announcement (08/20/2023): simglucose now supports gymnasium! Check examples/run_gymnasium.py for usage.
(Figures omitted: Animation, CVGA Plot, BG Trace Plot, Risk Index Stats.)
Main Features
- Simulation environment follows OpenAI gym and rllab APIs. It returns observation, reward, done, info at each step, which means the simulator is "reinforcement-learning-ready".
- Supports customized reward function. The reward function is a function of the blood glucose measurements in the last hour. By default, the reward at each step is `risk[t-1] - risk[t]`, where `risk[t]` is the risk index at time `t` defined in this paper.
- Supports parallel computing. The simulator simulates multiple patients in parallel using the pathos multiprocessing package (you are free to turn parallelism off by setting `parallel=False`).
- The simulator provides a random scenario generator (`from simglucose.simulation.scenario_gen import RandomScenario`) and a customized scenario generator (`from simglucose.simulation.scenario import CustomScenario`). The command-line user interface will guide you through the scenario settings.
- The simulator provides the most basic basal-bolus controller for now. It provides very simple syntax to implement your own controller, such as Model Predictive Control, PID control, or reinforcement learning control.
- You can specify random seed in case you want to repeat your experiments.
- The simulator will generate several plots for performance analysis after simulation. The plots include a blood glucose trace plot, a Control Variability Grid Analysis (CVGA) plot, a statistics plot of blood glucose in different zones, and a risk indices statistics plot.
- NOTE: `animate` and `parallel` cannot both be set to `True` on macOS, because most matplotlib backends on macOS are not thread-safe. Windows has not been tested; let me know the results if anybody has tested it out.
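The default reward can be sketched in plain Python. This is an illustrative re-implementation, not the simulator's own code: it assumes the risk index follows the Kovatchev symmetrization formula that blood-glucose risk indices are commonly based on, and the function names here are made up for the example.

```python
import math

def risk(bg):
    # Kovatchev-style risk index for a single BG reading (mg/dL);
    # zero near 112.5 mg/dL and growing toward hypo- and hyperglycemia.
    # Illustrative only -- simglucose has its own risk computation.
    f = 1.509 * (math.log(bg) ** 1.084 - 5.381)
    return 10 * f * f

def default_style_reward(bg_prev, bg_now):
    # Reward = risk[t-1] - risk[t]: positive when risk decreases.
    return risk(bg_prev) - risk(bg_now)
```

Under this shaping, bringing BG from 180 mg/dL toward the low-risk region yields a positive reward, while drifting toward hypoglycemia is penalized.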
Installation
It is highly recommended to use pip to install simglucose; follow this link to install pip.
Auto installation:
pip install simglucose
Manual installation:
git clone https://github.com/jxx123/simglucose.git
cd simglucose
If you have pip installed, then
pip install -e .
If you do not have pip, then
python setup.py install
If rllab (optional) is installed, the package will utilize some functionalities in rllab.
Note: there might be some minor differences between the auto-installed and the manually installed versions. Use git clone and manual installation to get the latest version.
Quick Start
Use simglucose as a simulator and test controllers
Run the simulator user interface
from simglucose.simulation.user_interface import simulate
simulate()
You are free to implement your own controller, and test it in the simulator. For example,
from simglucose.simulation.user_interface import simulate
from simglucose.controller.base import Controller, Action
class MyController(Controller):
    def __init__(self, init_state):
        self.init_state = init_state
        self.state = init_state

    def policy(self, observation, reward, done, **info):
        '''
        Every controller must have this implementation!
        ----
        Inputs:
        observation - a namedtuple defined in simglucose.simulation.env. For
                      now, it only has one entry: blood glucose level measured
                      by CGM sensor.
        reward      - current reward returned by environment
        done        - True, game over. False, game continues
        info        - additional information as key word arguments,
                      simglucose.simulation.env.T1DSimEnv returns patient_name
                      and sample_time
        ----
        Output:
        action - a namedtuple defined at the beginning of this file. The
                 controller action contains two entries: basal, bolus
        '''
        self.state = observation
        action = Action(basal=0, bolus=0)
        return action

    def reset(self):
        '''
        Reset the controller state to initial state, must be implemented
        '''
        self.state = self.init_state
ctrller = MyController(0)
simulate(controller=ctrller)
These two examples can also be found in the examples/ folder.
In fact, you can specify many more simulation parameters through simulate:
simulate(sim_time=my_sim_time,
scenario=my_scenario,
controller=my_controller,
start_time=my_start_time,
save_path=my_save_path,
animate=False,
parallel=True)
OpenAI Gym usage
- Using default reward
import gym
# Register gym environment. By specifying kwargs,
# you are able to choose which patient or patients to simulate.
# patient_name must be 'adolescent#001' to 'adolescent#010',
# or 'adult#001' to 'adult#010', or 'child#001' to 'child#010'
# It can also be a list of patient names
# You can also specify a custom scenario or a list of custom scenarios
# If you chose a list of patient names or a list of custom scenarios,
# every time the environment is reset, a random patient and scenario will be
# chosen from the list
from gym.envs.registration import register
from simglucose.simulation.scenario import CustomScenario
from datetime import datetime
start_time = datetime(2018, 1, 1, 0, 0, 0)
meal_scenario = CustomScenario(start_time=start_time, scenario=[(1,20)])
register(
id='simglucose-adolescent2-v0',
entry_point='simglucose.envs:T1DSimEnv',
kwargs={'patient_name': 'adolescent#002',
'custom_scenario': meal_scenario}
)
env = gym.make('simglucose-adolescent2-v0')
observation = env.reset()
for t in range(100):
    env.render(mode='human')
    print(observation)
    # Action in the gym environment is a scalar
    # representing the basal insulin, which differs from
    # the regular controller action outside the gym
    # environment (a tuple (basal, bolus)).
    # In the perfect situation, the agent should be able
    # to control the glucose only through basal instead
    # of asking the patient to take bolus
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
- Customized reward function
import gym
from gym.envs.registration import register
def custom_reward(BG_last_hour):
    if BG_last_hour[-1] > 180:
        return -1
    elif BG_last_hour[-1] < 70:
        return -2
    else:
        return 1
register(
id='simglucose-adolescent2-v0',
entry_point='simglucose.envs:T1DSimEnv',
kwargs={'patient_name': 'adolescent#002',
'reward_fun': custom_reward}
)
env = gym.make('simglucose-adolescent2-v0')
reward = 1
done = False
observation = env.reset()
for t in range(200):
    env.render(mode='human')
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    print(observation)
    print("Reward = {}".format(reward))
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
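Any function of the last hour of CGM readings can serve as reward_fun. As a variation on the threshold reward above, here is a smooth, hypothetical shaping that peaks at 1.0 near a 112.5 mg/dL target; the constants are illustrative choices, not values from simglucose.

```python
import math

def smooth_reward(BG_last_hour):
    # Gaussian-in-log-space shaping around a 112.5 mg/dL target;
    # the width constant 10 is an arbitrary choice for illustration.
    bg = BG_last_hour[-1]
    return math.exp(-10 * math.log(bg / 112.5) ** 2)
```

It would be registered exactly like custom_reward above, via kwargs={'reward_fun': smooth_reward}.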
rllab usage
from rllab.algos.ddpg import DDPG
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction
from rllab.envs.gym_env import GymEnv
from gym.envs.registration import register
register(
id='simglucose-adolescent2-v0',
entry_point='simglucose.envs:T1DSimEnv',
kwargs={'patient_name': 'adolescent#002'}
)
env = GymEnv('simglucose-adolescent2-v0')
env = normalize(env)
policy = DeterministicMLPPolicy(
env_spec=env.spec,
# The neural network policy should have two hidden layers, each with 32 hidden units.
hidden_sizes=(32, 32)
)
es = OUStrategy(env_spec=env.spec)
qf = ContinuousMLPQFunction(env_spec=env.spec)
algo = DDPG(
    env=env,
    policy=policy,
    es=es,
    qf=qf,
    batch_size=32,
    max_path_length=100,
    epoch_length=1000,
    min_pool_size=10000,
    n_epochs=1000,
    discount=0.99,
    scale_reward=0.01,
    qf_learning_rate=1e-3,
    policy_learning_rate=1e-4
    # hyperparameters above follow the standard rllab DDPG example;
    # tune them for your own experiments
)
algo.train()