424 skills found · Page 1 of 15
jeelabs / esp-link · esp8266 wifi-serial bridge, outbound TCP, and arduino/AVR/LPC/NXP programmer
xiph / LPCNet · Efficient neural speech synthesis
Character Generator based on Universal-LPC-Spritesheet
ErichStyger / McuOnEclipse · McuOnEclipse Processor Expert components and example projects
stacksmashing / pico-tpmsniffer · A simple, very experimental TPM sniffer for the LPC bus
ar1st0crat / NWaves · .NET DSP library with a lot of audio processing functions
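Several entries above (NWaves, SPTK, LPCNet) revolve around linear predictive coding of audio. As a purely illustrative sketch unrelated to any specific repository, LPC analysis models each sample as a weighted sum of the previous `order` samples; the helper names below are assumptions for the example:

```python
def autocorrelate(frame, max_lag):
    """Autocorrelation r[0..max_lag] of one analysis frame."""
    n = len(frame)
    return [sum(frame[i] * frame[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

def lpc(frame, order):
    """Levinson-Durbin recursion over the frame's autocorrelation.

    Returns (a, err): coefficients a[0..order-1] for the predictor
    s[n] ~ sum_k a[k] * s[n-1-k], plus the residual energy err.
    Assumes err stays nonzero (non-degenerate input).
    """
    r = autocorrelate(frame, order)
    a = [0.0] * (order + 1)   # a[0] unused; a[i] filled at step i
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the prediction error so far.
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)
    return a[1:], err
```

For a decaying exponential `s[n] = 0.9**n`, which exactly satisfies `s[n] = 0.9 * s[n-1]`, an order-1 fit recovers a coefficient close to 0.9.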
huailiang / LipSync · LipSync for Unity3D: generates lip-sync (mouth-shape) animation from speech audio; supports FMOD
fluffos / fluffos · Actively maintained LPMud driver (LPC interpreter, MudOS fork)
shalxmva / Modxo · Xbox LPC port modchip using a Raspberry Pi Pico
makrohn / Universal-LPC-spritesheet · An attempt to merge most character assets generated by the Liberated Pixel Cup into a single .xcf, where they can be mixed and matched.
kmilo17pet / QuarkTS · An open-source OS for embedded applications that supports prioritized cooperative scheduling, time control, inter-task communication primitives, hierarchical state machines and co-routines.
gionanide / Speech Signal Processing And Classification · Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: that is, developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject. The mathematical modeling of the human speech production system suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients are guaranteed to discriminate between the system (e.g., vocal tract) contribution and that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4]. The pattern recognition step will be based on Gaussian mixture model classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary dataset (MEEI dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
sp-nitech / SPTK · A suite of speech signal processing tools
gillesdemey / node-record-lpcm16 · Records a 16-bit signed-integer linear pulse-code modulation (LPCM) encoded audio file.
insane-adding-machines / frosted · Frosted: Free POSIX OS for tiny embedded devices
sp-nitech / diffsptk · A differentiable version of SPTK
ElizaWy / LPC · Curated collection of the Liberated Pixel Cup art set
sobjornstad / AnkiLPCG · Add-on for dae/anki for studying lyrics and poetry
denandz / lpc_sniffer_tpm · A low pin count (LPC) sniffer for the iCEstick, targeting TPM chips
stacksmashing / LPCClocklessAnalyzer · A Saleae analyzer for TPM traffic that requires only the LADD & LFRAME signals, no clock.