17 repositories found
dverite / Permuteseq: PostgreSQL extension for scalable pseudo-random permutations of sequences
simonheb / Ritest: Stata command to perform randomization inference and permutation tests, allowing for arbitrary randomization procedures with (almost) any Stata command.
SWFSC / RfPermute: Estimate Permutation p-Values for Random Forest Importance Metrics
lrkrol / PermutationTest: A permutation test (aka randomization test) for MATLAB.
Thiru-kumaran-R / Aptitude API: A REST API that serves a random or topic-based aptitude question on each call. Each topic contains more than 100 questions. Available topics: Mixture and Alligation, Profit and Loss, Pipes and Cisterns, Age, Permutation and Combination, Speed Time Distance, Simple Interest, Calendars.
usnistgov / PasswordMetrics: Python code for 1) permuting randomly-generated passwords for easier entry on mobile devices, and 2) estimating the entropy lost as a result of said permutation.
asimihsan / Permutation Iterator Rs: A Rust library for iterating over random permutations.
bbc2 / Shuffled: Random permutations of large integer ranges in Python.
maxmouchet / Gfc: Implementation of a Generalized-Feistel Cipher for generating random permutations.
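Gfc's actual API isn't shown here, but the underlying technique shared by this kind of library can be sketched: a keyed Feistel network is a bijection over a fixed-size index domain, and cycle-walking keeps outputs inside an arbitrary range [0, n). The sketch below is a minimal toy, assuming SHA-256 as an arbitrary round function; it is not the Gfc implementation.

```python
import hashlib

def feistel_permute(i, n, key, rounds=4):
    """Map index i to a pseudo-random position in [0, n) via a toy
    balanced Feistel network. Illustrative only, not the Gfc API."""
    assert 0 <= i < n
    # half-width in bits, chosen so the Feistel domain covers [0, n)
    half_bits = (max(n - 1, 1).bit_length() + 1) // 2
    mask = (1 << half_bits) - 1

    def round_fn(r, x):
        # keyed round function; SHA-256 is an arbitrary illustrative choice
        h = hashlib.sha256(f"{key}:{r}:{x}".encode()).digest()
        return int.from_bytes(h[:4], "big") & mask

    def encrypt(v):
        left, right = v >> half_bits, v & mask
        for r in range(rounds):
            left, right = right, left ^ round_fn(r, right)
        return (left << half_bits) | right

    j = encrypt(i)
    while j >= n:          # cycle-walking: re-encrypt until back in range
        j = encrypt(j)
    return j
```

Because each Feistel round is invertible and cycle-walking only follows the bijection's own cycles, `[feistel_permute(i, n, key) for i in range(n)]` is a permutation of `range(n)` that can be evaluated at any single index in O(1), without materializing the whole sequence.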
scijs / Random Permutation: Generates a random permutation
ekhiru / PerMallows: Functions for working with the Mallows and Generalized Mallows models under the Kendall's tau, Cayley, Hamming, and Ulam distances, including inference, sampling, and learning of such distributions, some of which are novel in the literature. As a by-product, PerMallows also includes operations on permutations, paying special attention to those related to these distances. It can also generate random permutations at a given distance, with a given number of inversions, cycles, or fixed points, or with a given length of the longest increasing subsequence (LIS).
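PerMallows is an R package, so its API is not reproduced here, but one of the features it describes, sampling a uniformly random permutation with exactly k inversions, can be sketched in pure Python: sample a Lehmer code digit by digit, weighting each digit by the Mahonian count of completions, then decode. The function names below are hypothetical stand-ins.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def inv_count(n, k):
    """Mahonian number: permutations of n elements with exactly k inversions."""
    if k < 0 or k > n * (n - 1) // 2:
        return 0
    if n <= 1:
        return 1 if k == 0 else 0
    # adding an nth element can contribute 0..n-1 new inversions
    return sum(inv_count(n - 1, k - j) for j in range(min(k, n - 1) + 1))

def random_perm_with_inversions(n, k, rng=random):
    """Uniform random permutation of range(n) with exactly k inversions."""
    assert 0 <= k <= n * (n - 1) // 2, "k is infeasible for this n"
    code = []
    for pos in range(n, 0, -1):
        # weight each Lehmer digit by how many valid completions remain
        weights = [inv_count(pos - 1, k - j) for j in range(min(k, pos - 1) + 1)]
        j = rng.choices(range(len(weights)), weights=weights)[0]
        code.append(j)
        k -= j
    # decode the Lehmer code: digit c picks the (c+1)-th smallest remaining item
    items = list(range(n))
    return [items.pop(c) for c in code]
```

Each Lehmer digit `c` at a position creates exactly `c` inversions with the elements to its right, so the digits sum to k and the weighted sampling makes every such permutation equally likely.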
radrumond / Chameleon: Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. In most current practice, this is accomplished by some form of random initialization. Recent work shows instead that a specific initial parameter set can be learned from a population of tasks, i.e., dataset and target variable for supervised learning tasks; using this initial parameter set leads to faster convergence for new tasks (model-agnostic meta-learning). Currently, methods for learning model initializations are limited to populations of tasks sharing the same schema, i.e., the same number, order, type, and semantics of predictor and target variables. In this paper, we address meta-learning parameter initialization across tasks with different schemas, i.e., when the number of predictors varies across tasks while the tasks still share some variables. We propose Chameleon, a model that learns to align different predictor schemas to a common representation, using permutations and masks of the predictors of the training tasks at hand. In experiments on real-life data sets, we show that Chameleon can successfully learn parameter initializations across tasks with different schemas, providing an average accuracy lift of 26% over random initialization and of 5% over a state-of-the-art method for fixed-schema learned model initializations. To the best of our knowledge, this is the first work on learning model initializations across tasks with different schemas.
drtconway / Permutation Rs: A Rust library for creating random-access permutations.
echenim / FisherYatesShuffle: The Fisher–Yates shuffle is an algorithm for generating a random permutation of a finite sequence—in plain terms, the algorithm shuffles the sequence. The algorithm effectively puts all the elements into a hat; it continually determines the next element by randomly drawing an element from the hat until no elements remain. The algorithm produces an unbiased permutation: every permutation is equally likely. The modern version of the algorithm is efficient: it takes time proportional to the number of items being shuffled and shuffles them in place. The Fisher–Yates shuffle is named after Ronald Fisher and Frank Yates, who first described it, and is also known as the Knuth shuffle after Donald Knuth. A variant of the Fisher–Yates shuffle, known as Sattolo's algorithm, may be used to generate random cyclic permutations of length n instead of random permutations.
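The modern in-place algorithm and the Sattolo variant described above differ by a single character in the swap-index bound. A minimal stdlib sketch (not taken from any of the listed repos):

```python
import random

def fisher_yates(seq, rng=random):
    """Modern (Durstenfeld) Fisher-Yates shuffle: O(n), unbiased."""
    a = list(seq)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)   # 0 <= j <= i: element may stay in place
        a[i], a[j] = a[j], a[i]
    return a

def sattolo(seq, rng=random):
    """Sattolo's variant: j < i strictly, so the result is one n-cycle."""
    a = list(seq)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i)       # 0 <= j < i: element can never stay put
        a[i], a[j] = a[j], a[i]
    return a
```

Because Sattolo's variant never lets an element remain at its own index, reading the output as a mapping i -> a[i] always yields a single cycle covering all n elements, whereas plain Fisher–Yates yields every permutation (cyclic or not) with equal probability.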
RazaTheLegend / QuasiMonteCarlo: Quasi-Monte Carlo methods are variations on the standard Monte Carlo method that employ highly uniform quasirandom numbers in place of Monte Carlo's pseudorandom numbers. This thesis investigates the application of quasi-Monte Carlo methods to the Heston model. Our main focus is the Broadie-Kaya scheme, on which our main algorithms are based. Monte Carlo methods provide statistical error estimates; these are lost in quasi-Monte Carlo, which in return converges faster than standard Monte Carlo. A recent result shows that randomized quasi-Monte Carlo preserves the speed of quasi-Monte Carlo while reintroducing the error estimates of Monte Carlo methods. For our investigation, we compare the Euler discretization with Full Truncation and the Broadie-Kaya scheme using pseudorandom sequences, then accelerate convergence using quasirandom sequences, and finally apply a randomized quasi-Monte Carlo.
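The quasirandom-versus-pseudorandom contrast at the heart of the thesis can be seen in a toy example unrelated to the Heston model: estimating pi by counting points inside the unit quarter-circle. A pure-Python Halton sequence stands in here for the low-discrepancy sequences typically used; the function names are illustrative, not taken from the repository.

```python
import random

def halton(i, base):
    """van der Corput radical inverse of integer i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate_pi(points):
    """4 * (fraction of points falling inside the unit quarter-circle)."""
    inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
    return 4.0 * inside / len(points)

n = 10_000
rng = random.Random(0)
# pseudorandom (plain Monte Carlo) points
mc = [(rng.random(), rng.random()) for _ in range(n)]
# quasirandom (Halton, bases 2 and 3) points, skipping index 0
qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
```

At equal sample counts the Halton estimate is typically much closer to pi, reflecting the roughly O(log^d(n)/n) quasi-Monte Carlo error versus the O(1/sqrt(n)) Monte Carlo error; randomizing the quasirandom points (e.g. by random digit scrambling) is what restores the statistical error estimates mentioned above.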
nattstack / Fisher Yates Shuffle: Fisher–Yates Shuffle: Random Permutation Algorithm (Java). The program shuffles the song names in the input text file and outputs the shuffled names to a new text file.
GarlGuo / CD GraB: CD-GraB is a distributed gradient balancing framework that aims to find distributed data permutations with provably better convergence guarantees than Distributed Random Reshuffling (D-RR). https://arxiv.org/pdf/2302.00845.pdf