Open source · MIT
Upload a CSV, profile and chart it in the UI, configure preprocessing, then train one or many scikit-learn pipelines and compare results. Download each model as a zip with metadata and local prediction scripts, supervised or unsupervised, all open source.
Everything here is in the repo: a guided UI plus a real training API, not a slide deck. Go from spreadsheet to compared models to portable bundles without writing training code yourself.
Step by step flow: choose a goal (labels or not), upload CSV, set target or exclusions, tune features, then train. Clustering, decomposition, and anomaly modes included.
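The unsupervised modes mentioned above map directly onto standard scikit-learn estimators. A minimal sketch (not the app's actual code) of clustering, decomposition, and anomaly detection on the same feature matrix:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.randn(100, 4)  # stand-in for the preprocessed feature matrix

# Clustering: assign each row to one of k groups
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Decomposition: project rows onto the top 2 principal components
components = PCA(n_components=2).fit_transform(X)

# Anomaly detection: -1 marks rows flagged as outliers
anomalies = IsolationForest(random_state=0).fit_predict(X)
```

The app wraps these behind the same upload-and-train flow it uses for supervised jobs, so no target column is required.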
Row/column stats, inferred classification vs regression, missing values, correlations, distribution charts, and light data quality hints before you commit to a model.
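The profiling step above boils down to a handful of pandas calls. A rough sketch of the same checks, with a plausible (but assumed, not the app's real) heuristic for suggesting classification vs regression:

```python
import io
import pandas as pd

# Tiny in-memory stand-in for an uploaded CSV
csv = "area,bedrooms,bathrooms,price\n1420,3,2,285000\n980,2,1,198000\n2100,4,3,410000\n"
df = pd.read_csv(io.StringIO(csv))

print(df.shape)                           # row/column counts
print(df.dtypes)                          # inferred column types
print(df.isna().sum())                    # missing values per column
print(df.select_dtypes("number").corr())  # pairwise correlations

# One plausible task-type heuristic: text targets or targets with few
# distinct values suggest classification; otherwise regression.
def infer_task(target: pd.Series, max_classes: int = 20) -> str:
    if target.dtype == object or target.nunique() <= max_classes:
        return "classification"
    return "regression"
```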
Pick feature columns, scaling (auto or fixed), optional IQR clipping, SMOTE for imbalanced classification, and numeric binning. It all ships inside the saved pipeline.
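Because these options ship inside the saved pipeline, preprocessing travels with the model. A sketch of how such a pipeline could be assembled in scikit-learn; the clipping helper and step names here are illustrative, not the app's internals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

def iqr_clip(X, factor=1.5):
    """Clip each column to [Q1 - factor*IQR, Q3 + factor*IQR]."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    return np.clip(X, q1 - factor * iqr, q3 + factor * iqr)

pipe = Pipeline([
    ("clip", FunctionTransformer(iqr_clip)),  # optional IQR outlier clipping
    ("scale", StandardScaler()),              # scaling
    ("model", LogisticRegression()),
])
# For SMOTE on imbalanced classification, swap in imblearn.pipeline.Pipeline
# and add an imblearn.over_sampling.SMOTE step before the model.
```

Anything fitted this way applies the identical transforms at predict time, which is what makes the downloaded bundles portable.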
Select several algorithms in one go (supervised or unsupervised). Each run is its own job: compare metrics side by side, download separate bundles, and predict with whichever model you choose.
Call the API from the dashboard with JSON rows, or unzip the bundle and use predict_local.py with the same encoders and transforms the server trained.
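Calling the API from your own code is just an HTTP POST with JSON rows. A stdlib-only sketch; the `/predict` route and `rows` payload shape are assumptions here, so check the server's OpenAPI docs for the real contract:

```python
import json
from urllib import request

def build_payload(rows):
    # Keys must match the feature columns used at training time
    return {"rows": rows}

def predict(base_url, rows, timeout=30):
    # Route and payload shape are illustrative; see the OpenAPI docs (/docs)
    req = request.Request(
        f"{base_url}/predict",
        data=json.dumps(build_payload(rows)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

payload = build_payload([{"area": 1500, "bedrooms": 3, "bathrooms": 2}])
```

The offline path is even simpler: unzip the bundle and run `predict_local.py` against a CSV or JSON of new rows.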
MIT license. FastAPI backend with job artifacts and OpenAPI docs, Next.js dashboard, pandas and scikit-learn (plus imbalanced-learn where needed). Clone and extend.
The in-app guided flow matches what you see below, from choosing a goal through training, results, and optional in-browser predictions.
Step 1 / 9
Supervised with a target column, or unsupervised for clustering, PCA, and anomalies.
Step 2 / 9
Drop a file or resume with a dataset id. Column names live in the first row.
Step 3 / 9
See row count, dtypes, and missing values. The app suggests classification vs regression.
Step 4 / 9
Column overview, EDA modules, and a clear target selector with smart defaults.
Step 5 / 9
Toggle feature columns, scaling, outliers, and optional binning — no notebooks.
Step 6 / 9
Choose from supported estimators and set the holdout fraction for test metrics.
Step 7 / 9
One summary of mode, target, features, and scaling before you start the job.
Step 8 / 9
Metrics in plain language, downloadable pipeline bundle, and optional live predict.
Step 9 / 9
Send JSON rows that match your training features and read predictions instantly.
Imagine a CSV with columns like area, bedrooms, price (target). You upload the file, choose supervised, set price as the target, leave useful columns as features, pick one algorithm or several to compare (e.g. tree vs linear vs ensemble), and train. The app shows metrics on a holdout split, previews your data (distributions, correlations, quality checks), and lets you download a zip per model with the fitted pipeline plus predict_local.py so you can score new rows offline with the same preprocessing.
Sample rows (illustrative):
area,bedrooms,bathrooms,price
1420,3,2,285000
980,2,1,198000
2100,4,3,410000
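Under the hood, "train several algorithms and compare on a holdout split" is plain scikit-learn. A sketch of that loop on synthetic data standing in for the housing CSV (the app's actual estimator list and metrics may differ):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the housing data above
X, y = make_regression(n_samples=200, n_features=3, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit each candidate and score it on the same holdout split
scores = {}
for name, model in {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(random_state=0),
    "forest": RandomForestRegressor(random_state=0),
}.items():
    model.fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, model.predict(X_te))
print(scores)
```

In the app, each of these fits runs as its own job, so the side-by-side metrics table falls out of comparing job results.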
Clone the repo, install Python + Node dependencies, then npm start launches the API and the dashboard together.
git clone https://github.com/vinit5112/Zero_Code_ML_Training.git
cd zero-code-ml
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
npm run install:all
npm start
On Windows, use py -3 -m venv .venv and .\.venv\Scripts\Activate.ps1 instead; see the repo README for full OS notes.
Full setup (Windows, ports, env) is in the repository README.