timeseries
Time Series package for fastai2
timeseries is a Time Series Classification and Regression package for fastai2.
<a href="https://colab.research.google.com/github/ai-fast-track/timeseries/blob/master/nbs/index.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
timeseries package documentation
Installation
There are many ways to install the timeseries package. Since timeseries is built using fastai2, there are also different ways to install fastai2. We will show two different ways to install them and explain the motivation behind each one.
Method 1 : Editable Version
1A - Installing fastai2
Important: If you have not already installed fastai2, install it by following the steps described in the fastai2 repository.
1B - Installing timeseries on a local machine
Note: Installing an editable version of a package means installing it from its corresponding GitHub repository on your local machine. By doing so, you can pull the latest version whenever a new one is pushed. To install the timeseries editable package, follow the instructions below:
git clone https://github.com/ai-fast-track/timeseries.git
cd timeseries
pip install -e .
Method 2 : Non-Editable Version
Note: Every time you run !pip install git+https://..., you install the latest version of the package stored on GitHub.
Important: As both fastai2 and timeseries are still under development, this is an easy way to use them in Google Colab or any other online platform. You can also use it on your local machine.
2A - Installing fastai2 from its github repository
# Run this cell to install the latest version of fastai shared on github
!pip install git+https://github.com/fastai/fastai2.git
# Run this cell to install the latest version of fastcore shared on github
!pip install git+https://github.com/fastai/fastcore.git
2B - Installing timeseries from its github repository
# Run this cell to install the latest version of timeseries shared on github
!pip install git+https://github.com/ai-fast-track/timeseries.git
Usage
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai2.basics import *
from timeseries.all import *
Tutorial on timeseries package for fastai2
Example : NATOPS dataset
Description
The data are generated by sensors on the hands, elbows, wrists, and thumbs, and consist of the x, y, z coordinates for each of the eight locations. The order of the data is as follows:

Figure: Right Arm vs. Left Arm time series for the 'Not clear' command (class 3).

Channels (24)
| Hand tip | Elbow | Wrist | Thumb |
|:-------------------- |:------------------ |:------------------- |:-------------------- |
| 0. Hand tip left, X | 6. Elbow left, X | 12. Wrist left, X | 18. Thumb left, X |
| 1. Hand tip left, Y | 7. Elbow left, Y | 13. Wrist left, Y | 19. Thumb left, Y |
| 2. Hand tip left, Z | 8. Elbow left, Z | 14. Wrist left, Z | 20. Thumb left, Z |
| 3. Hand tip right, X | 9. Elbow right, X | 15. Wrist right, X | 21. Thumb right, X |
| 4. Hand tip right, Y | 10. Elbow right, Y | 16. Wrist right, Y | 22. Thumb right, Y |
| 5. Hand tip right, Z | 11. Elbow right, Z | 17. Wrist right, Z | 23. Thumb right, Z |
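Given this grouped layout (three consecutive channels per location), a channel index can be computed as 3 * location + axis. A small helper for illustration only (the names below are hypothetical and not part of the package):

```python
# Locations in the order they appear in the channel table above.
LOCATIONS = ['hand tip left', 'hand tip right', 'elbow left', 'elbow right',
             'wrist left', 'wrist right', 'thumb left', 'thumb right']
AXES = ['X', 'Y', 'Z']

def channel(location, axis):
    """Map a (location, axis) pair to its channel index (0..23)."""
    return 3 * LOCATIONS.index(location) + AXES.index(axis)

print(channel('hand tip left', 'X'))   # → 0  (first channel)
print(channel('thumb right', 'Z'))     # → 23 (last channel)
```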
Classes (6)
The six classes are separate actions, with the following meaning:
| 1: I have command | 2: All clear | 3: Not clear | 4: Spread wings | 5: Fold wings | 6: Lock wings |
|:----------------- |:-------------- |:-------------- |:--------------- |:-------------- |:-------------- |
Downloading and unzipping a time series dataset
dsname = 'NATOPS' #'NATOPS', 'LSST', 'Wine', 'Epilepsy', 'HandMovementDirection'
# url = 'http://www.timeseriesclassification.com/Downloads/NATOPS.zip'
path = unzip_data(URLs_TS.NATOPS)
path
Path('/home/farid/.fastai/data/NATOPS')
Why do I have to concatenate train and test data?
Both the train and test datasets contain 180 samples each. We concatenate them in order to have one big dataset and then split it into train and validation sets using our own split percentage (20%, 30%, or whatever number you see fit).
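The concatenate-then-split idea can be sketched in plain NumPy (a minimal illustration of the idea, not the package's internal code; the arrays here are dummy stand-ins with the NATOPS shapes described above):

```python
import numpy as np

# Dummy stand-ins for the parsed TRAIN/TEST arrays:
# 180 samples each, 24 channels, sequence length 51.
x_train = np.zeros((180, 24, 51))
x_test = np.ones((180, 24, 51))

# Concatenate into one big dataset of 360 samples...
x_all = np.concatenate([x_train, x_test], axis=0)

# ...then split with our own percentage (here 20% validation).
rng = np.random.default_rng(42)
idxs = rng.permutation(len(x_all))
n_valid = int(0.2 * len(x_all))
valid_idx, train_idx = idxs[:n_valid], idxs[n_valid:]

print(x_all.shape, len(train_idx), len(valid_idx))
```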
fname_train = f'{dsname}_TRAIN.arff'
fname_test = f'{dsname}_TEST.arff'
fnames = [path/fname_train, path/fname_test]
fnames
[Path('/home/farid/.fastai/data/NATOPS/NATOPS_TRAIN.arff'),
Path('/home/farid/.fastai/data/NATOPS/NATOPS_TEST.arff')]
data = TSData.from_arff(fnames)
print(data)
TSData:
Datasets names (concatenated): ['NATOPS_TRAIN', 'NATOPS_TEST']
Filenames: [Path('/home/farid/.fastai/data/NATOPS/NATOPS_TRAIN.arff'), Path('/home/farid/.fastai/data/NATOPS/NATOPS_TEST.arff')]
Data shape: (360, 24, 51)
Targets shape: (360,)
Nb Samples: 360
Nb Channels: 24
Sequence Length: 51
items = data.get_items()
idx = 1
x1, y1 = data.x[idx], data.y[idx]
y1
'3.0'
# You can display any channels by supplying a list of channels to the `chs` argument
# LEFT ARM
# show_timeseries(x1, title=y1, chs=[0,1,2,6,7,8,12,13,14,18,19,20])
# RIGHT ARM
# show_timeseries(x1, title=y1, chs=[3,4,5,9,10,11,15,16,17,21,22,23])
# ?show_timeseries(x1, title=y1, chs=range(0,24,3)) # Only the x axis coordinates
seed = 42
splits = RandomSplitter(seed=seed)(range_of(items)) #by default 80% for train split and 20% for valid split are chosen
splits
((#288) [304,281,114,329,115,130,338,294,94,310...],
(#72) [222,27,96,253,274,35,160,172,302,146...])
Using Datasets class
Creating a Datasets object
lbl_dict = dict([
('1.0', 'I have command'),
('2.0', 'All clear'),
('3.0', 'Not clear'),
('4.0', 'Spread wings'),
('5.0', 'Fold wings'),
('6.0', 'Lock wings')]
)
tfms = [[ItemGetter(0), ToTensorTS()], [ItemGetter(1), lbl_dict.get, Categorize()]]
# Create a dataset
ds = Datasets(items, tfms, splits=splits)
ax = show_at(ds, 2, figsize=(1,1))
Not clear
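The label pipeline above chains ItemGetter(1), lbl_dict.get, and Categorize. Its effect can be mimicked in plain Python (a sketch of the idea, not fastai2 internals; it assumes Categorize builds its vocab from the sorted unique labels, which is consistent with '2.0' → 'All clear' → TensorCategory(0) in the DataBlock summary shown later):

```python
lbl_dict = {'1.0': 'I have command', '2.0': 'All clear', '3.0': 'Not clear',
            '4.0': 'Spread wings', '5.0': 'Fold wings', '6.0': 'Lock wings'}

item = ('<24x51 array>', '3.0')        # (x, y) pair as returned by get_items
label = lbl_dict.get(item[1])          # ItemGetter(1), then dict.get
vocab = sorted(lbl_dict.values())      # Categorize: sorted unique labels
cat = vocab.index(label)               # label mapped to its category index
print(label, cat)
```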
Creating a Dataloaders object
1st method : using Datasets object
bs = 128
# Normalize at batch time
tfm_norm = Normalize(scale_subtype = 'per_sample_per_channel', scale_range=(0, 1)) # per_sample , per_sample_per_channel
# tfm_norm = Standardize(scale_subtype = 'per_sample')
batch_tfms = [tfm_norm]
dls1 = ds.dataloaders(bs=bs, val_bs=bs * 2, after_batch=batch_tfms, num_workers=0, device=default_device())
dls1.show_batch(max_n=9, chs=range(0,12,3))
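The per_sample_per_channel scaling used above can be sketched in NumPy (an illustration of the idea under the assumption that each channel of each sample is min-max rescaled independently to scale_range; this is not the package's exact code):

```python
import numpy as np

def normalize_per_sample_per_channel(x, scale_range=(0, 1)):
    """Min-max scale each channel of each sample independently.

    x: array of shape (batch, channels, seq_len).
    """
    lo, hi = scale_range
    xmin = x.min(axis=-1, keepdims=True)   # per-sample, per-channel min
    xmax = x.max(axis=-1, keepdims=True)   # per-sample, per-channel max
    scaled = (x - xmin) / (xmax - xmin)    # rescale to [0, 1]
    return scaled * (hi - lo) + lo         # then to [lo, hi]

batch = np.random.default_rng(0).normal(size=(8, 24, 51))
out = normalize_per_sample_per_channel(batch)
print(out.shape, out.min(), out.max())
```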
Using DataBlock class
2nd method : using DataBlock and DataBlock.get_items()
tsdb = DataBlock(blocks=(TSBlock, CategoryBlock),
get_items=get_ts_items,
get_x = ItemGetter(0),
get_y = Pipeline([ItemGetter(1), lbl_dict.get]),
splitter=RandomSplitter(seed=seed),
batch_tfms = batch_tfms)
tsdb.summary(fnames)
Setting-up type transforms pipelines
Collecting items from [Path('/home/farid/.fastai/data/NATOPS/NATOPS_TRAIN.arff'), Path('/home/farid/.fastai/data/NATOPS/NATOPS_TEST.arff')]
Found 360 items
2 datasets of sizes 288,72
Setting up Pipeline: ItemGetter -> ToTensorTS
Setting up Pipeline: ItemGetter -> dict.get -> Categorize
Building one sample
Pipeline: ItemGetter -> ToTensorTS
starting from
([[-0.540579 -0.54101 -0.540603 ... -0.56305 -0.566314 -0.553712]
[-1.539567 -1.540042 -1.538992 ... -1.532014 -1.534645 -1.536015]
[-0.608539 -0.604609 -0.607679 ... -0.593769 -0.592854 -0.599014]
...
[ 0.454542 0.449924 0.453195 ... 0.480281 0.45537 0.457275]
[-1.411445 -1.363464 -1.390869 ... -1.468123 -1.368706 -1.386574]
[-0.473406 -0.453322 -0.463813 ... -0.440582 -0.427211 -0.435581]], 2.0)
applying ItemGetter gives
[[-0.540579 -0.54101 -0.540603 ... -0.56305 -0.566314 -0.553712]
[-1.539567 -1.540042 -1.538992 ... -1.532014 -1.534645 -1.536015]
[-0.608539 -0.604609 -0.607679 ... -0.593769 -0.592854 -0.599014]
...
[ 0.454542 0.449924 0.453195 ... 0.480281 0.45537 0.457275]
[-1.411445 -1.363464 -1.390869 ... -1.468123 -1.368706 -1.386574]
[-0.473406 -0.453322 -0.463813 ... -0.440582 -0.427211 -0.435581]]
applying ToTensorTS gives
TensorTS of size 24x51
Pipeline: ItemGetter -> dict.get -> Categorize
starting from
([[-0.540579 -0.54101 -0.540603 ... -0.56305 -0.566314 -0.553712]
[-1.539567 -1.540042 -1.538992 ... -1.532014 -1.534645 -1.536015]
[-0.608539 -0.604609 -0.607679 ... -0.593769 -0.592854 -0.599014]
...
[ 0.454542 0.449924 0.453195 ... 0.480281 0.45537 0.457275]
[-1.411445 -1.363464 -1.390869 ... -1.468123 -1.368706 -1.386574]
[-0.473406 -0.453322 -0.463813 ... -0.440582 -0.427211 -0.435581]], 2.0)
applying ItemGetter gives
2.0
applying dict.get gives
All clear
applying Categorize gives
TensorCategory(0)
Final sample: (TensorTS([[-0.5406, -0.5410, -0.5406, ..., -0.5630, -0.5663, -0.5537],
[-1.5396, -1.5400, -1.5390, ..., -1.5320,