350 skills found · Page 9 of 12
requix / Kiro Team: Multi-agent orchestration for Kiro CLI - coordinate specialized AI agents to tackle complex tasks through planning, parallel execution, and validation.
nibzard / Labruno Agent: Labruno is an agent coordinator that creates multiple AI solutions to your coding tasks using parallel sandboxes and evaluates them to find the best implementation.
owebeeone / Sdax: sdax is a lightweight, high-performance, in-process micro-orchestrator for Python's asyncio. It is designed to manage complex, tiered, parallel asynchronous tasks with a declarative API, guaranteeing a correct and predictable order of execution.
GroupAYECS765P / BDP 05 Large Scale Clustering: BDP 05: CLUSTERING OF LARGE UNLABELED DATASETS

OVERVIEW
Real-world data is frequently unlabeled and can seem completely random. In these sorts of situations, unsupervised learning techniques are a great way to find underlying patterns. This project looks at one such algorithm, k-means clustering, which searches for boundaries separating groups of points based on their differences in some features. The goal of the project is to implement an unsupervised clustering algorithm using a distributed computing platform. You will apply this algorithm to the Stack Overflow user base to find different ways the community can be divided, and investigate what causes these groupings. The clustering algorithm must be designed in a way that is appropriate for data-intensive parallel computing frameworks. Spark would be the primary choice for this project, but it could also be implemented in Hadoop MapReduce. Algorithm implementations from external libraries such as Spark MLlib may not be utilised; the code must be the students' own original work. However, once the algorithm is completed, a comparison between your own results and those generated by MLlib could be interesting and aid your investigation. Stack Overflow is the main dataset for this project, but alternative datasets can be adopted after consultation with the module organiser. Additionally, different clustering algorithms may be utilised, but this must be discussed and approved by the module organiser.

DATASET
The project will use the Stack Overflow dataset, located in HDFS at /data/stackoverflow. The dataset for Stack Overflow is a set of files containing Posts, Users, Votes, Comments, PostHistory and PostLinks. Each file contains one XML record per line. For complete schema information: Click here. In order to define the clustering use case, you must define which features of each post will be used to cluster the data.
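The feature-definition step above could be sketched as follows. This is a minimal, hypothetical example, assuming the one-XML-record-per-line layout described in the brief; the attribute names follow the public Stack Overflow data dump schema for Users rows, and should be checked against the schema linked above. The function name `user_features` is illustrative, not part of any required API.

```python
import xml.etree.ElementTree as ET

def user_features(line):
    """Parse one <row .../> record from Users.xml into a numeric
    feature vector: (reputation, up-votes, down-votes, profile views).
    Attribute names are assumed from the public Stack Overflow dump
    schema; verify them against the schema linked in the brief."""
    row = ET.fromstring(line)
    return [
        float(row.get("Reputation", 0)),
        float(row.get("UpVotes", 0)),
        float(row.get("DownVotes", 0)),
        float(row.get("Views", 0)),
    ]

line = '<row Id="1" Reputation="5070" UpVotes="3" DownVotes="0" Views="649" />'
print(user_features(line))  # [5070.0, 3.0, 0.0, 649.0]
```

In Spark, a function like this would be the argument to a map over the raw text file, turning each line into a feature vector before clustering.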
Have a look at the different fields to define your use case.

ALGORITHM
The project will implement the k-means algorithm for clustering. This algorithm iteratively recomputes the locations of k centroids (k is the number of clusters, defined beforehand) that aim to classify the data. Points are assigned to the closest centroid, and each iteration updates each centroid's location based on all the points assigned to it. Spark and MapReduce can be utilised for implementing this problem. Spark is recommended for this task, due to its performance benefits for iterative, in-memory workloads. However, note that the MLlib extension of Spark is not allowed to be used as the primary implementation: the group must code its own original implementation of the algorithm. It is, however, possible to also use the MLlib implementation, in order to evaluate the results from each clustering implementation.

Report Contents
- Brief literature survey on clustering algorithms, including the challenges of implementing them at scale on parallel frameworks. The report should then justify the chosen algorithm (if changed) and the implementation.
- Definition of the project use case, where the implemented project will be part of the solution.
- Implementation in MapReduce or Spark of a clustering algorithm (k-means). Must take into account the potentially enormous size of the dataset, and develop sensible code that will scale and efficiently use additional computing nodes. The code will also need to convert the dataset from its storage format to an in-memory representation. Source code should not be included in the report; however, the algorithms should be explained in the report.
- Results section. Adequate figures and tables should be used to present the results. The effectiveness of the algorithm should also be shown, including performance indications (where these can be defined for clustering). Critical evaluation of the results should be provided.
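The assign/update loop described above splits naturally into a map step (label each point with its nearest centroid) and a reduce step (average the points carrying each label). Below is a minimal single-machine sketch in plain Python, assuming Euclidean distance and fixed iteration count; all function names are illustrative. In Spark, the labelling would become a map over an RDD of points and the averaging a reduceByKey on the cluster id.

```python
import random

def closest(point, centroids):
    # "Map" step: index of the nearest centroid by squared Euclidean distance.
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centroids[i])))

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)  # initialise from k random points
    for _ in range(iterations):
        # "Map": emit (cluster_id, point) pairs.
        labelled = [(closest(p, centroids), p) for p in points]
        # "Reduce": per cluster id, average the member points coordinate-wise.
        for i in range(k):
            members = [p for c, p in labelled if c == i]
            if members:
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids

pts = [[0.0, 0.0], [1.0, 1.0], [9.0, 9.0], [10.0, 10.0]]
print(sorted(kmeans(pts, 2)))  # [[0.5, 0.5], [9.5, 9.5]]
```

Note that the per-cluster averaging only needs the coordinate sums and member counts, which is what makes the update step expressible as an associative reduce at scale.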
- Experiments demonstrating the technique can successfully group users in the dataset.
- Representation of the results, and discussion of the findings in a critical manner.

ASSESSMENT
The project according to the specification has a base difficulty of 85/100; a perfect implementation and report would receive an 85. Additional technical features and experimentation raise the difficulty in order to compete for the full 100/100 mark.

Report presentation: 20%
- Appropriate motivation for the work. Lack of typos/grammar errors, adequate format. Clear flow and style. Related work section including adequate referencing.

Technical merit: 50%
- Completeness of the implementation. [25%]
- Provided source code. Code is documented. [10%]
- Design rationale of the code is provided. [10%]
- Efficient and appropriate implementation for the chosen platform. [5%]

Results/Analysis: 30%
- Experiments have been carried out on the full dataset. [10%]
- Adequate plots/tables are provided, with captions. [10%]
- Results are not only presented but discussed appropriately. [10%]

Additional project goals: Implementation of additional functions beyond the base specification can raise the base mark up to 100. A non-exhaustive list of expansion ideas includes:
- Exploration and discussion of hyperparameter tuning (e.g. the number of k groups to cluster the data into) [up to 10 marks]
- Comparative evaluation of the clustering technique against existing implementations (e.g. MLlib) [up to 10 marks]
- Bringing in additional datasets from Stack Overflow, such as user badges, to aid in clustering [up to 5 marks]
- Clustering additional datasets (such as posts) [up to 10 marks]

LEAD DEMONSTRATOR
For specific queries related to this coursework topic, please liaise with Mr/Ms TBD, who will be the lead demonstrator for this project, as well as with the module organiser.

SUBMISSION GUIDELINES
The report will have a maximum length of 8 pages, not counting the cover page and table of contents. The report must include motivation of the problem, a brief literature survey, an explanation of the selected technique, implementation details and discussion of the obtained results, and references used in the work. Additionally, the source code must be included as a separate compressed file in the submission.
Gembal77 / Script Hack: pkg install borgbackup pkg install coreutils pkg install nodejs pkg install nodejs-lts pkg install plotutils pkg install tidy pkg install mailutils pkg install nmh pkg install texlive-bin pkg install file pkg install secure-delete pkg install socat pkg install ant pkg install dnsutils pkg install entr pkg install finch pkg install findutils pkg install flatbuffers pkg install frotz pkg install gotty pkg install graphviz pkg install inetutils pkg install llvm pkg install most pkg install net-tools pkg install pforth pkg install sleuthkit pkg install mount pkg install termux-tools pkg install omfonts pkg install texlive-bin pkg install tint2 pkg install parallel pkg install ack-grep pkg install acr pkg install age pkg install android-tools pkg install apache2 pkg install apt pkg install arj pkg install asciidoc pkg install at pkg install bat pkg install bc pkg install beanshell pkg install binutils pkg install bk pkg install bvi pkg install clang pkg install codecrypt pkg install containerd pkg install cscope pkg install cups pkg install cvs pkg install d8 pkg install dar pkg install dart pkg install dash pkg install delve pkg install diffutils pkg install djvulibre pkg install dnsutils pkg install dog pkg install dropbear pkg install dtc pkg install dte pkg install duc pkg install duf pkg install duktape pkg install dwm pkg install dx pkg install ecj pkg install ecl pkg install ed pkg install eja pkg install electric-fence pkg install elixir pkg install erlang pkg install et pkg install exa pkg install fd pkg install feh pkg install fzf pkg install fzy pkg install gap pkg install gatling pkg install gawk pkg install gbt pkg install gdb pkg install geth-utils pkg install gh pkg install ghostscript pkg install git pkg install glib-bin pkg install gn pkg install gnupg pkg install golang pkg install graphicsmagick pkg install graphviz pkg install groff pkg install helix pkg install hfsutils pkg install hub pkg install i3 pkg install inetutils pkg
install iproute2 pkg install iverilog pkg install iw pkg install jo pkg install joe pkg install jq pkg install jupp pkg install k9s pkg install kakoune pkg install kona pkg install krb5 pkg install ldc pkg install lf pkg install lhasa pkg install libgnustep-base pkg install libpoco pkg install libpsl pkg install lld pkg install llvm pkg install lnd pkg install loksh pkg install lr pkg install lrzsz pkg install lsd pkg install lua pkg install lyx pkg install lz4 pkg install m4 pkg install man pkg install maven pkg install mc pkg install mdp pkg install mg pkg install mpc pkg install mpd pkg install mpv pkg install mpv-x pkg install mtools pkg install mtr pkg install mu pkg install myrepos pkg install ncurses-utils pkg install ne pkg install net-tools pkg install netcat pkg install netcat-openbsd pkg install nim pkg install nmh pkg install nnn pkg install no-more-secrets pkg install nodejs pkg install nodejs-lts pkg install nushell pkg install nxengine pkg install o-editor pkg install openjdk-17 pkg install openssh pkg install p7zip pkg install pari pkg install pathpicker pkg install php pkg install php7 pkg install picolisp pkg install plotutils pkg install procps pkg install proj pkg install pup pkg install pv pkg install qt5-declarative-dev pkg install qt5-qtbase pkg install qt5-qtdeclarative pkg install qt5-qttools pkg install quickjs pkg install radare2 pkg install rcs pkg install rcshell pkg install remind pkg install renameutils pkg install ripgrep pkg install ripgrep-all pkg install rq pkg install ruby pkg install ruby-ri pkg install runit pkg install rust pkg install samba pkg install sc pkg install secure-delete pkg install sed pkg install shc pkg install silversearcher-ag pkg install sl pkg install sleuthkit pkg install smalltalk pkg install sox pkg install st pkg install subversion pkg install sun pkg install surfraw pkg install tar pkg install task-spooler pkg install teleport-tsh pkg install termux-am pkg install texlive-bin pkg install tig pkg install
tin-summer pkg install tinyfugue pkg install tor pkg install tsu pkg install util-linux pkg install uucp pkg install vim pkg install vim-gtk pkg install vim-python pkg install virustotal-cli pkg install vis pkg install vtm pkg install w3m pkg install wireguard-tools pkg install wol pkg install wrk pkg install x2x pkg install xmlstarlet pkg install xorg-server pkg install xorg-twm pkg install xorg-xev
roidrage / Cap Ext Parallelize: A Capistrano extension that allows execution of tasks in parallel.
bep / Workers: Set up tasks to be executed in parallel.
EdwardALuke / Loci: The Loci framework is a sophisticated auto-parallelizing framework that simplifies the task of constructing complex simulation software.
mleoking / LeoTask: Lightweight, productive, reliable parallel task running and results aggregation (MapReduce on multicore).
randombet / Bodhi Realtime Agent: Real-time voice agents with parallel async background sub-agents. Conversations continue naturally while tasks run. Join the builders → https://discord.gg/mqxKaN3UKC
aniket-work / How To Build AI Agents To Decompose Tasks Execute Parallel Via Map Reduce: How to build AI agents to decompose tasks and execute them in parallel via MapReduce.
rutgers-apl / TaskProf2: A parallelism profiler and an adviser for task parallel programs.
gunnarmorling / Run Detached: A shell script for running tasks on a git repo in a detached branch, allowing you to continue with other tasks in parallel.
Vedant020000 / Letta Teams: A CLI interface for Letta Code and LettaBot agents to orchestrate teams of stateful AI agents. Spawn specialized teammates, dispatch parallel tasks, and coordinate work across multiple agents with persistent memory.
montenegronyc / Backporcher: Parallel Claude Code agent dispatcher: GitHub Issues as task queue, sandboxed worktrees, coordinator review, CI gating, auto-merge.
mccutchen / Speculatively: Package speculatively provides a simple mechanism to re-execute a task in parallel only after some initial timeout has elapsed.
az9713 / Claude Cowork Content Plugin: 6 Claude Opus 4.6 AI agents built a complete Claude Cowork plugin in 5 minutes, then 6 more agents wrote 5,298 lines of documentation in 4 minutes. An educational showcase of Agent Team orchestration, task dependency management, and parallel AI coordination.
benjha / Sight FrameServer: Sight_FrameServer is the delivery method used by SIGHT for remote visualization. SIGHT is an exploratory visualization tool for large-scale datasets supporting manycore and multicore advanced shading, remote and interactive scientific visualization, parallel I/O and large-scale displays. SIGHT is currently deployed on the OLCF systems to support TITAN's users in their visualization and analysis tasks.
aabenoja / Cake.Parallel: Run your Cake tasks in parallel.
linusnorton / Grunt Parallel Behat: Grunt task for running parallel Behat features.