34 skills found · Page 2 of 2
MeigenChou / Min2phase: An implementation of Kociemba's two-phase algorithm for solving the Rubik's Cube (adapted from Shuang Chen's Java code).
raymondtruong / Cv Cube Solver: A computer vision Rubik's Cube solver implementing Kociemba's two-phase algorithm.
luckasRanarison / Kewb: A Rubik's Cube solver using Kociemba's two-phase algorithm.
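The two-phase idea these solvers share can be sketched on a hypothetical toy puzzle (plain integers rather than a real cube model, so every name below is illustrative): phase 1 searches with all moves until the state lies in a restricted subgroup, and phase 2 finishes using only moves that stay inside that subgroup.

```python
from collections import deque

def bfs(start, moves, goal):
    """Breadth-first search; returns (reached state, move-name sequence)."""
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if goal(state):
            return state, path
        for name, step in moves.items():
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    raise ValueError("no solution")

def two_phase_solve(start):
    # Toy move set: the "cube" is an integer, moves shift it by 1 or 3.
    all_moves = {"+1": lambda s: s + 1, "-1": lambda s: s - 1,
                 "+3": lambda s: s + 3, "-3": lambda s: s - 3}
    sub_moves = {"+3": all_moves["+3"], "-3": all_moves["-3"]}
    # Phase 1: reach the subgroup {s : s % 3 == 0} (analogue of G1).
    mid, phase1 = bfs(start, all_moves, lambda s: s % 3 == 0)
    # Phase 2: solve to 0 without leaving the subgroup.
    _, phase2 = bfs(mid, sub_moves, lambda s: s == 0)
    return phase1 + phase2

print(two_phase_solve(7))   # ['-1', '-3', '-3']
```

The payoff, as in the real algorithm, is that two short searches replace one long one: each phase explores a much smaller space than a direct search to the solved state.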
AyberkCemAksoy / Aircraft Wing Structural Analysis Tool With Python For Aerospace And Mechanical Engineers: A half-wing structural sizing tool for aircraft structures. Two-phase optimizer: Phase 1 screens spar/skin/rib combinations for strength; Phase 2 adaptively places ribs via sweep-based buckling analysis. Outputs the minimum-weight configuration with stress reports, buckling tables, and planform plots. Supports grid search and a genetic algorithm.
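The screen-then-refine structure of such a two-phase optimizer can be sketched as follows; the stress and weight formulas, the allowable value, and the grids here are toy stand-ins, not the tool's real structural models.

```python
def stress(t):                 # hypothetical stress model: thinner -> higher stress
    return 100.0 / t

def weight(t):                 # hypothetical weight model: linear in thickness
    return 2.5 * t

ALLOWABLE = 40.0               # assumed allowable stress

# Phase 1: coarse grid screening for strength.
coarse = [t / 2 for t in range(1, 21)]                     # sizes 0.5 .. 10.0
feasible = [t for t in coarse if stress(t) <= ALLOWABLE]

# Phase 2: finer sweep around the lightest feasible candidate.
best = min(feasible, key=weight)
fine = [best + d / 100 for d in range(-49, 50)]
refined = min((t for t in fine if t > 0 and stress(t) <= ALLOWABLE),
              key=weight)

print(f"min-weight size: {refined:.2f} (weight {weight(refined):.2f})")
```

The cheap coarse phase discards most of the design space so the expensive fine phase (buckling sweeps, in the tool's case) only runs near promising candidates.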
Wangxs404 / Projection Algorithm For Two Phase Fluid Structure Interaction: A non-staggered projection algorithm for two-phase fluid-structure interaction simulation using the phase-field/immersed-boundary method.
satyanugraha / Classifying Twitter User As Resident Or Tourist: Research confirms that social media provides good insight into what people think, feel, and care about, and insights mined from Twitter data have the potential to support better decision-making, especially in the public sector. Public-sector bodies want to gauge local sentiment, so they need to make sure they use conversations from residents; the ground truth, however, shows that tweets come from a mix of residents and tourists. This study investigates the best model for automatically classifying tweets as posted by residents or tourists in NTB (Nusa Tenggara Barat), Indonesia. Several consecutive phases were conducted: pre-processing, training, classification, testing, accuracy comparison, and result visualization. First, a Twitter dataset of 700,000 tweets posted by approximately 26,000 users in Nusa Tenggara Barat, Indonesia was prepared. The dataset was divided into two sets: tweets from 4,000 users for training and from 22,000 users for testing. Three popular classification algorithms were then applied: Multinomial Naïve Bayes, Support Vector Machines, and Decision Tree. After that, seven features were created: Bag of Words, Normalizer Location, Total Tweet, Total Day, Tweet per Day, Total Location, and Location per Day. Experiments show that Multinomial Naïve Bayes with the Bag of Words feature reaches 86% accuracy, while the remaining features give less than 65%. The Support Vector Machine and Decision Tree results differ: these two algorithms produce better accuracy with the features other than Bag of Words, which implies they are more powerful when processing numerical values. Among all the classification systems, however, Multinomial Naïve Bayes remains the most accurate algorithm for this task.
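The winning combination, Multinomial Naïve Bayes over bag-of-words counts, can be sketched with the standard library alone; the tiny "tweets" below are invented stand-ins, not the study's real NTB dataset.

```python
from collections import Counter
from math import log

# Toy labelled tweets (hypothetical, for illustration only).
train = [
    ("berangkat kerja macet lagi pagi ini",    "resident"),
    ("antar anak sekolah lalu ke kantor",      "resident"),
    ("pantai senggigi indah sekali liburan",   "tourist"),
    ("hotel bagus dekat pantai untuk liburan", "tourist"),
]

# Bag of words: per-class word counts and class priors.
word_counts = {c: Counter() for _, c in train}
class_counts = Counter()
for text, c in train:
    class_counts[c] += 1
    word_counts[c].update(text.split())

vocab = {w for wc in word_counts.values() for w in wc}

def predict(text):
    n = sum(class_counts.values())
    scores = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        score = log(class_counts[c] / n)          # log prior
        for w in text.split():
            # Laplace smoothing so unseen words don't zero the likelihood.
            score += log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("macet di jalan ke kantor"))   # 'resident'
```

Words like "macet" (traffic jam) pull the score toward the resident class, which is exactly the signal a bag-of-words representation captures.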
lokyGit / Ionosphere Signals Prediction: This project analyzes ionosphere data and measures classification accuracy on the electromagnetic signal data. The radar measurements were gathered by a system in Goose Bay, Labrador, consisting of a phased array of 16 high-frequency transmitters with a total transmitted power on the order of 6.4 kilowatts. Received signals were processed with an autocorrelation function whose arguments are the time of a pulse and the pulse number; there were 17 pulse numbers for the Goose Bay system, and two attributes per pulse number describe each instance in the database. The dataset describes high-frequency antenna returns from high-energy particles in the atmosphere, and whether the return shows structure or not. The problem is a binary classification with 351 instances and 35 attributes. Most of the data are continuous values ranging between -1 and 1, plus one binomial variable that defines the type of the electromagnetic signal. The objective of the project is to measure how accurately 'good' and 'bad' instances are identified by feeding the dataset to the machine learning models mentioned below, and to report some measures to improve the models' overall performance. Predicting good and bad signals is important because these signals propagate over long distances and contribute to better communication and navigation. We predict the good and bad signals using three methods - KNN, GLM, and decision tree - and then use an ensemble technique, stacking, to improve the model's accuracy. We observed that the generalized linear model has the best classification rate of the three, and after implementing stacking we were able to improve the overall performance of the stacked models.
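As a baseline for the three methods, k-nearest neighbours is simple enough to sketch in plain Python; the two-feature rows below are invented for illustration, not the real 34-attribute ionosphere data.

```python
from collections import Counter

# Hypothetical labelled signal features (toy 2-D points, not the dataset).
train = [
    ((0.9, 0.8), "good"), ((0.85, 0.9), "good"), ((0.95, 0.7), "good"),
    ((-0.8, 0.1), "bad"), ((-0.9, -0.2), "bad"), ((-0.7, 0.0), "bad"),
]

def knn_predict(x, k=3):
    # Squared Euclidean distance is enough for ranking neighbours.
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda row: dist(row[0], x))[:k]
    # Majority vote among the k nearest labelled points.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((0.8, 0.6)))   # 'good'
```

In the actual project the same idea runs through `knn()` from R's `class` package, with the full feature set in place of these toy coordinates.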
Introduction. Source information: Donor: Vince Sigillito (vgs@aplcen.apl.jhu.edu); Date: 1989; Source: Space Physics Group, Applied Physics Laboratory, Johns Hopkins University, MD 20723. The first 34 columns are continuous numerical data representing 17 pulse numbers of the received electromagnetic signals, with two attributes per pulse number: the time of the pulse and the pulse number. The 35th column is the categorical label "good" or "bad": "good" means the radar return shows evidence of some type of structure in the ionosphere, while "bad" means the return does not indicate that the signal passed through the ionosphere. Implementation of the project: First, we install the necessary packages, load the required libraries as mentioned below, and read the dataset into R, converting the last column from character to factor. Next, to identify the important features, we fitted a Boruta model to the data and found that column two, i.e., V2, is not important; we therefore removed V2 and created a reduced dataset containing the important variables only. We then split the dataset into training and test sets. With these in hand, we used knn() from the class library for the KNN algorithm, glm() for logistic regression, and rpart() for the decision tree. We chose these methods for our prediction and analysis because we have binomial variables with a binomial output, and the algorithms above perform well on categorical data. After completing the modelling, we improved the resulting accuracies by applying an ensemble technique; we chose stacking because it is designed to combine the outputs of models of different types.
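The stacking step described above - base-model predictions on held-out data becoming features for a meta-model - can be sketched with stand-in classifiers. Everything here is a toy (simple threshold rules in place of KNN/GLM/tree, a frequency table as the meta-learner, invented 1-D data); a real setup would also use out-of-fold predictions to avoid leakage.

```python
from collections import Counter

# Toy 1-D features and binary labels ("good"/"bad" signals).
X = [0.1, 0.2, 0.25, 0.8, 0.9, 0.95, 0.4, 0.6]
y = ["bad", "bad", "bad", "good", "good", "good", "bad", "good"]

# Three hypothetical base classifiers (stand-ins for KNN, GLM, tree).
base = [
    lambda v: "good" if v > 0.5 else "bad",
    lambda v: "good" if v > 0.3 else "bad",
    lambda v: "good" if v > 0.7 else "bad",
]

# Meta-model: for each tuple of base predictions, remember which true
# label it was seen with during training.
table = {}
for v, label in zip(X, y):
    key = tuple(m(v) for m in base)
    table.setdefault(key, Counter())[label] += 1

def stacked_predict(v):
    key = tuple(m(v) for m in base)
    votes = table.get(key)
    if votes is None:                       # unseen combination: fall back
        return Counter(m(v) for m in base).most_common(1)[0][0]
    return votes.most_common(1)[0][0]

print(stacked_predict(0.85))   # 'good'
```

The meta-model can learn that a particular disagreement pattern among the base models (say, only the middle threshold firing) still maps reliably to one class, which is what lets stacking beat any single base model.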
Nabeel-105 / Covid 19 And Pneumonia Detection Using Chest Xray Images Full Desktop Application: "Detection of Covid-19 & Pneumonia" is a desktop application that runs on a moderate desktop system. It uses machine learning to detect Covid-19 and pneumonia from chest X-ray images, as a real-time disease detection system. Given the current situation around the world, many people are suffering from Covid-19 and pneumonia, and the system is used to identify affected patients. The main purpose of our project is to develop a system that identifies whether a patient is suffering from Covid-19 or pneumonia. Current detection practices for Covid-19 and pneumonia rely on hardware that is quite expensive, out of reach for ordinary use, and operable only by qualified personnel, and knowledge and deployment in this scenario are quite limited. Our system, by contrast, is within reach of an ordinary person and requires no expertise to operate. It detects the two diseases in real time by recognizing Covid-19 and pneumonia in chest X-ray images. Being hardware-free makes it useful for every patient, accessible, and easily available, with no specially qualified operator required. The system is developed in Spyder using Python and a Convolutional Neural Network (CNN). System testing, load testing, compatibility testing, and integration testing were performed, checking quality, accuracy, performance, and consistency. All modules - image loading, model saving, detection, and report generation - work correctly, and no significant errors were found during the testing phase.
Input is limited to digital chest X-ray images, and as a desktop application it works only in a desktop Windows environment. As future work, we can overcome these limitations by supporting additional input types and deploying the system on other platforms such as the web and Android.
kotarot / Chample: A Rubik's Cube (3x3x3) scrambler/solver based on Kociemba's two-phase algorithm; C code converted from the Java of Cube Explorer.
aishwaryarajan75 / Two Phase Locking: Demonstrates rigorous two-phase locking for transaction processing by implementing the wound-wait algorithm using HashMap, PriorityQueue, and List data structures. Environment: MySQL, Java, Eclipse.
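The two rules at play, two-phase locking (no lock may be acquired after the first unlock) and wound-wait deadlock prevention (an older transaction aborts a younger lock holder; a younger requester waits), can be sketched as a single-threaded simulation. The class and method names below are illustrative Python, not the repository's Java code.

```python
class Transaction:
    def __init__(self, ts):
        self.ts = ts               # timestamp: smaller means older
        self.locks = set()
        self.aborted = False
        self.shrinking = False     # True once the transaction starts unlocking

class LockManager:
    def __init__(self):
        self.owner = {}            # item -> transaction currently holding it

    def lock(self, txn, item):
        assert not txn.shrinking, "2PL violated: lock requested after unlock"
        holder = self.owner.get(item)
        if holder is None or holder is txn:
            self.owner[item] = txn
            txn.locks.add(item)
            return "granted"
        if txn.ts < holder.ts:     # wound-wait: older wounds younger holder
            self.abort(holder)
            self.owner[item] = txn
            txn.locks.add(item)
            return "granted-after-wound"
        return "wait"              # younger requester waits for older holder

    def abort(self, txn):
        txn.aborted = True
        for item in txn.locks:
            self.owner.pop(item, None)
        txn.locks.clear()

    def unlock_all(self, txn):     # shrinking phase: release every lock
        txn.shrinking = True
        for item in txn.locks:
            self.owner.pop(item, None)
        txn.locks.clear()

lm = LockManager()
t1, t2, t3 = Transaction(ts=1), Transaction(ts=2), Transaction(ts=3)
r1 = lm.lock(t2, "A")    # 'granted'
r2 = lm.lock(t1, "A")    # older t1 wounds younger t2: 'granted-after-wound'
r3 = lm.lock(t3, "A")    # younger t3 must wait for older t1: 'wait'
print(r1, r2, r3, t2.aborted)
```

Because a transaction only ever waits for older transactions, the waits-for graph cannot contain a cycle, which is why wound-wait prevents deadlock by construction.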
rutiannnn / HexSimulation: A Python implementation of an algorithm that obtains the phase diagram of two-dimensional interacting lattice bosons using cluster mean-field theory.
0x1DA9430 / CubeSolver: A 3x3x3 Rubik's Cube solver for Android.
csbebetter / RFID Dynamic Tracking: A ROS package that reads an RFID tag through two antennas to obtain the signal phase, and outputs the car's speed and angular velocity via a PID algorithm.
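The PID part of such a pipeline can be sketched in a few lines; the gains, timestep, and the first-order toy plant below are illustrative assumptions, and the package's actual ROS topics and tuning are not shown in the description.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt            # accumulated error (I term)
        deriv = (err - self.prev_err) / self.dt   # error rate of change (D term)
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy plant toward a target phase difference of zero: the
# controller output is used as the commanded speed.
pid = PID(kp=1.2, ki=0.01, kd=0.05, dt=0.1)
phase = 1.0
for _ in range(200):
    speed = pid.step(0.0, phase)
    phase += speed * pid.dt        # toy plant: phase changes with speed

print(f"residual phase error: {phase:.4f}")
```

Feeding the measured antenna phase difference in as `measured` and taking the output as a velocity command is the standard closed-loop shape such a tracking node would use.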
Harsh188 / GSoC RedHenLab MTVSS 2022: This proposal describes a multi-modal, multi-phase pipeline for television show segmentation on the Rosenthal videotape collection. The two-stage pipeline begins with feature filtering using pre-trained classifiers and heuristic-based approaches. This stage produces noisy title-sequence-segmented data containing audio, video, and possibly text. These extracted multimedia snippets are then passed to the second pipeline stage, where their extracted features are clustered using RNN-DBSCAN. Title sequence detection is possibly the most efficient path to high-precision segmentation for the first and second tiers of the Rosenthal collection, which have fairly structured recordings. This detection approach may not fare as well on the more unstructured V8+ and V4 VCR tapes in the collection; the goal is therefore to produce accurate video cuts and split metadata for the first and second tiers.
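The clustering stage can be sketched with classic DBSCAN on toy 1-D features standing in for snippet embeddings; RNN-DBSCAN itself differs by replacing the fixed eps-neighbourhood with reverse-nearest-neighbour counts, so treat this as a simplified relative, not the proposal's exact algorithm.

```python
def dbscan(points, eps, min_pts):
    def neighbours(i):
        return [j for j, q in enumerate(points) if abs(points[i] - q) <= eps]

    labels = {}                        # index -> cluster id, -1 means noise
    cluster = 0
    for i in range(len(points)):
        if i in labels:
            continue
        neigh = neighbours(i)
        if len(neigh) < min_pts:
            labels[i] = -1             # noise (may later become a border point)
            continue
        cluster += 1                   # i is a core point: start a cluster
        labels[i] = cluster
        stack = [j for j in neigh if j != i]
        while stack:
            j = stack.pop()
            if labels.get(j) == -1:
                labels[j] = cluster    # promote noise to border point
            if j in labels:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:     # j is also core: keep expanding
                stack.extend(nj)
    return labels

# Toy features: two dense groups of snippets and one outlier.
pts = [0.0, 0.1, 0.2, 0.15, 5.0, 5.1, 5.2, 9.9]
print(dbscan(pts, eps=0.3, min_pts=3))
```

Density-based clustering suits this pipeline because the number of shows per tape is unknown in advance, and off-sequence snippets naturally fall out as noise rather than being forced into a cluster.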