EcoSnap
https://user-images.githubusercontent.com/7581348/208559445-a449cef6-0ae1-4c08-b9a5-c591062c3a3e.mp4
Recycle your plastic better with Artificial Intelligence ♻️
EcoSnap tells you how and where to recycle your items from a simple picture, with advice tailored to your location. We built this product in a week for Ben's Bites AI Hackathon.
👉 Try it now - it's free with no sign in needed
You can support this project (and many others) through GitHub Sponsors! ❤️
Made by Alyssa X & Leo. Read more about how we built this here.
<a href="https://www.producthunt.com/posts/ecosnap?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-ecosnap" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=374164&theme=neutral" alt="EcoSnap - Recycle your plastic better with Artificial Intelligence | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>
Features
📸 Snap or upload a picture of a plastic code<br> 📱 Install the PWA on your phone for easy access<br> 🔍 Search for a specific item to learn how to dispose of it<br> ♻️ Learn how to recycle effectively using AI<br> 🥤 Keep track of how many plastic items you've recycled<br> 🌍 Change your location for specific advice<br> ✨...and much more to come - all for free & no sign in needed!
Installation
You can deploy to Vercel directly by clicking here.
Important: Make sure to update the environment variable for NEXT_PUBLIC_MODEL_URL in the .env file, and set it to an absolute URL where you host the model.json (make sure to include the other shard bin files alongside the JSON).
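For reference, a minimal .env might look like this (the URL below is a placeholder; point it at wherever you actually host the converted model files):

```shell
# .env — NEXT_PUBLIC_MODEL_URL must be an absolute URL to model.json;
# the shard .bin files must be hosted alongside it at the same path.
NEXT_PUBLIC_MODEL_URL=https://example.com/models/ecosnap/model.json
```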
The AI Model
Data
The model was trained on image examples of the seven different resin codes; the data can be found in ml/seven_plastics. It combines the following Kaggle Dataset with images collected by the authors and contributors.
Training
The final model was trained using TensorFlow's EfficientNet implementation. The pretrained weights were frozen for transfer learning, so the model could learn the resin codes faster. Training was done in Python on a GPU-powered machine. You can find the training script in ml/train.py and try it yourself; there you'll see that different meta-architectures and parameters were experimented with before arriving at the final model.
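As a rough sketch of the frozen-base setup (this is not the actual ml/train.py; the EfficientNet variant, input size, and head layers here are illustrative):

```python
# Hypothetical transfer-learning sketch: frozen EfficientNet base + small head.
import tensorflow as tf

NUM_CLASSES = 7  # one class per plastic resin code

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the pretrained weights for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

Only the pooling and dense head are trainable here, which is what makes training fast on a small, specialized dataset.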
Prediction
To predict the plastic resin code, the model had to be integrated with the front-end app for real-time results. To do this, we converted the model to a format compatible with TensorFlow.js. We used Web Workers to keep the main thread from being blocked while the prediction runs in the client.
The app passes the image tensor to the model, which returns a probability for each plastic resin code; the code with the highest probability is shown to the user, along with bespoke advice!
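The final step is a plain argmax over the seven class probabilities. A minimal Python sketch of that mapping (the labels follow the standard resin codes; the probability values below are made up):

```python
# Map a model's probability vector to the winning resin code (1-7).
RESIN_CODES = ["PET", "HDPE", "PVC", "LDPE", "PP", "PS", "OTHER"]

def top_prediction(probs):
    """Return (resin code number, label, probability) for the most likely class."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best + 1, RESIN_CODES[best], probs[best]

# Example: the model is most confident in class 5 (PP, polypropylene).
code, label, p = top_prediction([0.02, 0.05, 0.01, 0.10, 0.70, 0.07, 0.05])
```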
Feedback
Training a specialized model is hard, and the model will sometimes get things wrong. When it does, we give the user an opportunity to tell us what the right code was. This helps in several ways:
- The user gets the information they need on how to recycle their item
- We can see how the model is performing in production
- We get new data (if the user lets us) to train the model with and improve it for everyone
While we implemented the front end for the feedback loop, we ended up not connecting it to a backend, as that added complexity and cost, and we wanted the app to be very lightweight and to run entirely on the client. We'd also have had to communicate clearly to the user how exactly their images would be used, and set up either an opt-in or opt-out system, which felt a bit cumbersome.
Credit
- Kaggle Dataset - for the plastic codes
- Collletttivo - for the Mattone font
- Stubborn - for some of the illustrations
- Unsplash - for the images
Libraries used
- TensorFlow - for training the model and running predictions
- React Camera Pro - for the camera
Feel free to reach out at hi@alyssax.com, or to Alyssa or Leo directly, if you have any questions or feedback! Hope you find this useful 💜