12 skills found
mandeep147 / Amazon Product Recommender System
Sentiment analysis on the Amazon Review Dataset available at http://snap.stanford.edu/data/web-Amazon.html
shaungt1 / Open Source Datasets For Data Science
Best free, open-source datasets for data science and machine learning projects: top government data including census, economic, financial, agricultural, and image datasets (labeled and unlabeled), autonomous-car datasets, and much more.
- Data.gov
- NOAA - https://www.ncdc.noaa.gov/cdo-web/ (atmospheric, ocean)
- Bureau of Labor Statistics - https://www.bls.gov/data/ (employment, inflation)
- US Census Data - https://www.census.gov/data.html (demographics, income, geo, time series)
- Bureau of Economic Analysis - http://www.bea.gov/data/gdp/gross-dom... (GDP, corporate profits, savings rates)
- Federal Reserve - https://fred.stlouisfed.org/ (currency, interest rates, payroll)
- Quandl - https://www.quandl.com/ (financial and economic)
- Data.gov.uk
- UK Data Service - https://www.ukdataservice.ac.uk (census data and much more)
- World Bank - https://datacatalog.worldbank.org (census, demographics, geographic, health, income, GDP)
- IMF - https://www.imf.org/en/Data (economic, currency, finance, commodities, time series)
- OpenData.go.ke (Kenya government data on agriculture, education, water, health, finance, …)
- https://data.world/
- Open Data for Africa - http://dataportal.opendataforafrica.org/ (agriculture, energy, environment, industry, …)
- Kaggle - https://www.kaggle.com/datasets (a huge variety of datasets)
- Amazon Reviews - https://snap.stanford.edu/data/web-Am... (35M product reviews from 6.6M users)
- GroupLens - https://grouplens.org/datasets/moviel... (20M movie ratings)
- Yelp Reviews - https://www.yelp.com/dataset (6.7M reviews, pictures, businesses)
- IMDB Reviews - http://ai.stanford.edu/~amaas/data/se... (25k movie reviews)
- Twitter Sentiment140 - http://help.sentiment140.com/for-stud... (160k tweets)
- Airbnb - http://insideairbnb.com/get-the-data.... (a ton of data by geography)
- UCI ML Datasets - http://mlr.cs.umass.edu/ml/ (iris, wine, abalone, heart disease, poker hands, …)
- Enron Email Dataset - http://www.cs.cmu.edu/~enron/ (500k emails from 150 people, from the 2001 energy scandal; see the movie "The Smartest Guys in the Room")
- Spambase - https://archive.ics.uci.edu/ml/datase... (emails)
- Jeopardy Questions - https://www.reddit.com/r/datasets/com... (200k questions and answers in JSON)
- Gutenberg Ebooks - http://www.gutenberg.org/wiki/Gutenbe... (large collection of books)
rsreetech / LDATopicModelling
In this notebook I demonstrate Latent Dirichlet Allocation (LDA) for topic modelling. I use the Amazon Fine Food Reviews dataset from Kaggle (https://www.kaggle.com/snap/amazon-fine-food-reviews), the gensim package for LDA topic modelling, and pyLDAvis for visualizing the LDA topic model.
jcatw / Snap Facebook
A Python script that parses the SNAP Facebook dataset into a single NetworkX network.
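SNAP distributes its graphs as plain edge lists ('#'-prefixed comment lines followed by whitespace-separated node pairs), a format NetworkX can parse directly; a minimal sketch with an inline sample in that format:

```python
# Parse a SNAP-style edge list into a NetworkX graph.
import networkx as nx

lines = [
    "# SNAP-style edge list: comment lines, then 'u v' pairs",
    "0 1",
    "0 2",
    "1 2",
    "2 3",
]
G = nx.parse_edgelist(lines, comments="#", nodetype=int)
print(G.number_of_nodes(), G.number_of_edges())  # 4 4
```

For an on-disk file, `nx.read_edgelist(path, comments="#", nodetype=int)` does the same job.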
JuliaGraphs / SNAPDatasets.jl
Graphs.jl-formatted graph files taken from the SNAP Datasets collection.
lorismat / Snap 3d Network
A Python script to generate a ready-to-visualize 3D network from any SNAP dataset via the NetworkX library.
alexaverbuch / Shortestpath Bench
Runs a shortest-path algorithm in Neo4j against publicly available datasets from http://snap.stanford.edu/data
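The benchmark itself runs inside Neo4j, but the computation being benchmarked can be sketched with NetworkX on a toy graph (the edges below are hypothetical, not from the benchmark's datasets):

```python
# Shortest path on a small undirected graph; a real run would load
# a SNAP edge list instead of this toy example.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (0, 3), (3, 4), (4, 2)])
path = nx.shortest_path(G, source=0, target=2)
print(path)  # [0, 1, 2]
```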
surajnakka / Graph Data Analysis
• Worked with a number of real-world datasets available at http://snap.stanford.edu/ to identify the importance of certain nodes in terms of their degree, betweenness, and closeness centralities and their clustering coefficient.
• Studied different random-graph generator models to identify the synthetic generator that best reflects real-world datasets, and experimented with it on large social networks such as Facebook.
• Visualized the graphs using tools such as Gephi, Cytoscape, and GraphViz, and experimented with different layouts to identify the visualization that best distinguishes the critical characteristics of the network.
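The node-importance measures listed above can be sketched in a few lines with NetworkX (the built-in karate-club graph is a stand-in for a real SNAP dataset):

```python
import networkx as nx

G = nx.karate_club_graph()  # small built-in social network, 34 nodes

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)
clustering = nx.clustering(G)  # per-node clustering coefficient

# Rank nodes by degree centrality to pick out the most connected ones.
top = sorted(degree, key=degree.get, reverse=True)[:3]
print(top)
```

The same calls work unchanged on a graph loaded from a SNAP edge list, though betweenness centrality gets expensive on large networks.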
nidhisridhar / Fuzzy Community Detection
A Python implementation of the paper "Fuzzy-rough community in social networks" by Sankar K. Pal. The dataset used is the Facebook graph of 4038 nodes from Stanford's SNAP.
blurred-machine / Amazon Fine Food Review Analysis Using NLP Techniques
This repository consists of analysis of Amazon fine food purchase reviews by customers. The data was collected by the Stanford Network Analysis Project (SNAP). The dataset consists of reviews of fine foods from Amazon, spanning a period of more than 10 years and including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain-text review. It also includes reviews from all other Amazon categories.
RohithM191 / TSNE On Amazon Fine Food Reviews Dataset
Amazon Fine Food Reviews analysis and modelling using various machine learning models. Performed exploratory data analysis, data cleaning, data visualization, and text featurization (BOW, TF-IDF, Word2Vec). Built several ML models such as KNN, Naive Bayes, Logistic Regression, SVM, Random Forest, GBDT, and LSTM (RNNs).
Objective: given a text review, determine whether its sentiment is positive or negative.
Data source: https://www.kaggle.com/snap/amazon-fine-food-reviews

About the dataset
The Amazon Fine Food Reviews dataset consists of reviews of fine foods from Amazon.
Number of reviews: 568,454
Number of users: 256,059
Number of products: 74,258
Timespan: Oct 1999 - Oct 2012
Number of attributes/columns: 10

Attribute information:
Id
ProductId - unique identifier for the product
UserId - unique identifier for the user
ProfileName
HelpfulnessNumerator - number of users who found the review helpful
HelpfulnessDenominator - number of users who indicated whether they found the review helpful or not
Score - rating between 1 and 5
Time - timestamp of the review
Summary - brief summary of the review
Text - text of the review

1. EDA, NLP, text preprocessing, and visualization using TSNE
Defined the problem statement. Performed exploratory data analysis (EDA) on the Amazon Fine Food Reviews dataset and plotted word clouds, distplots, histograms, etc. Performed data cleaning and preprocessing by removing unnecessary and duplicate rows; for the text reviews, removed HTML tags, punctuation, and stopwords, and stemmed the words using the Porter stemmer. Documented the concepts clearly. Plotted TSNE plots for the different featurizations of the data: BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec.

2. KNN
Applied K-Nearest Neighbours on the different featurizations of the data: BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec. Used both the brute-force and kd-tree implementations of KNN. Evaluated the test data on performance metrics such as accuracy, and plotted the confusion matrix using seaborn.
Conclusions: KNN is a very slow algorithm and takes a very long time to train. The best accuracy, 89.38%, is achieved with Avg-Word2Vec featurization. The kd-tree and brute-force variants of KNN give comparable results. Overall, KNN was not a good fit for this dataset.

3. Naive Bayes
Applied Naive Bayes using Bernoulli NB and Multinomial NB on the BOW (uni-gram) and TF-IDF featurizations. Evaluated the test data on performance metrics such as accuracy, F1-score, precision, and recall, and plotted the confusion matrix using seaborn. Printed the top 25 important features for both negative and positive reviews.
Conclusions: Naive Bayes is a much faster algorithm than KNN. Bernoulli Naive Bayes performs much better than Multinomial Naive Bayes. The best F1-score, 0.9342, is achieved with BOW featurization.

4. Logistic Regression
Applied Logistic Regression on the BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec featurizations. Used both grid-search and randomized-search cross-validation. Evaluated the test data on performance metrics such as accuracy, F1-score, precision, and recall, and plotted the confusion matrix using seaborn. Showed how sparsity increases as we increase lambda (decrease C) when the L1 regularizer is used, for each featurization. Ran a perturbation test to check whether the features are multicollinear.
Conclusions: Sparsity increases as we decrease C (increase lambda) with the L1 regularizer. TF-IDF featurization performs best, with an F1-score of 0.967 and accuracy of 91.39%. Features are multicollinear across featurizations. Logistic Regression is a fast algorithm.

5. SVM
Applied SVM with the RBF (radial basis function) kernel on the BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec featurizations. Used both grid-search and randomized-search cross-validation. Evaluated the test data on performance metrics such as accuracy, F1-score, precision, and recall, and plotted the confusion matrix using seaborn. Evaluated SGDClassifier on the best-performing featurization.
Conclusions: BOW featurization with a linear kernel and grid search gave the best results, with an F1-score of 0.9201. SGDClassifier takes very little time to train.

6. Decision Trees
Applied Decision Trees on the BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec featurizations. Used grid search with 30 random points to find the best max_depth. Evaluated the test data on performance metrics such as accuracy, F1-score, precision, and recall, and plotted the confusion matrix using seaborn. Plotted the feature importances from the decision-tree classifier.
Conclusions: BOW featurization (max_depth=8) gave the best results, with an accuracy of 85.8% and an F1-score of 0.858. Decision trees on BOW and TF-IDF would have taken forever with all dimensions, as the dimensionality was huge, so max_depth was capped at 8.

7. Ensembles (RF & GBDT)
Applied Random Forest and GBDT on the BOW (uni-gram), TF-IDF, Avg-Word2Vec, and TF-IDF-Word2Vec featurizations. Used grid search with 30 random points to find the best max_depth, learning rate, and n_estimators. Evaluated the test data on performance metrics such as accuracy, F1-score, precision, and recall, and plotted the confusion matrix using seaborn. Plotted a word cloud of the feature importances from the RF and GBDT classifiers.
Conclusions: TF-IDF featurization with Random Forest (base learners = 10) and random search gave the best results, with an F1-score of 0.857. TF-IDF featurization with GBDT (base learners = 275, depth = 10) gave the best results, with an F1-score of 0.8708.
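One step of the pipeline described above, TF-IDF featurization followed by an L1-regularized logistic regression with a small grid search over C, can be sketched with scikit-learn (the toy reviews, labels, and C grid are placeholders, not data or settings from the repository):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

reviews = [
    "loved this taffy, wonderful flavor",
    "stale and awful, terrible taste",
    "delicious snack, would buy again",
    "bitter and bland, would not buy again",
    "great value and great taste",
    "arrived broken and tasted old",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# L1 penalty so that shrinking C zeroes out more coefficients (sparsity),
# mirroring the sparsity-vs-C observation in the write-up above.
pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear")),
])
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=2)
grid.fit(reviews, labels)
print(grid.best_params_)
```

The same Pipeline/GridSearchCV pattern extends to the other featurizations and classifiers mentioned, by swapping the vectorizer and estimator steps.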
gamleksi / SNAP Community Detection
We compared community detection methods on the Stanford Large Network Dataset Collection (SNAP).