This library is written in TypeScript and Rust, with FFI bridging the two.
## Why Classy?

- It's fast.
- It gives you the freedom to experiment with different combinations of loss functions, activation functions, etc.
- It's easy to use.
## Features

- Optimization Algorithms:
  - Gradient Descent
  - Stochastic Average Gradients
  - Ordinary Least Squares
- Optimizers for updating weights:
  - RMSProp
  - ADAM
- Schedulers for learning rate:
  - One-cycle Scheduler
  - Decay
- Regularization
- Activation Functions (see the sketch after this list for how these pair with a loss):
  - Linear (regression, SVM, etc.)
  - Sigmoid (logistic regression)
  - Softmax (multinomial logistic regression)
  - Tanh (it's just there)
- Loss Functions:
  - Mean Squared Error (regression)
  - Mean Absolute Error (regression)
  - Cross-Entropy (multinomial classification)
  - Binary Cross-Entropy / Logistic Loss (binary classification)
  - Hinge Loss (binary classification, SVM)
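To give a feel for how these pieces combine, here is a minimal sketch of a binary classifier. Only `GradientDescentSolver`, `adamOptimizer`, `Matrix`, and the `train`/`predict` calls are taken from the regression example in the next section; the `binaryCrossEntropy` and `sigmoid` names and the `activation` option are assumptions about the API, not confirmed exports, so check the package docs for the real identifiers.

```ts
import { Matrix } from "jsr:@lala/appraisal@0.7.5";
import {
  GradientDescentSolver,
  adamOptimizer,
  // NOTE: hypothetical export names, used here only for illustration.
  binaryCrossEntropy,
  sigmoid,
} from "jsr:@lala/classy@1.2.1";

// Toy binary classification problem: small inputs map to 0, large to 1.
const x = [[0], [1], [2], [3], [8], [9], [10], [11]];
const y = [[0], [0], [0], [0], [1], [1], [1], [1]];

const solver = new GradientDescentSolver({
  loss: binaryCrossEntropy(), // hypothetical: the logistic loss listed above
  activation: sigmoid(), // hypothetical: squashes raw outputs into (0, 1)
  optimizer: adamOptimizer(2, 1), // 1 input + 1 intercept column, 1 output
});

solver.train(new Matrix(x, "f32"), new Matrix(y, "f32"), {
  silent: true,
  fit_intercept: true,
  epochs: 300,
  n_batches: 1,
});

// Predictions should land near 0 for the first half and near 1 for the rest.
console.log(solver.predict(new Matrix(x, "f32")));
```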
## Quick Example

### Regression
```ts
import { Matrix } from "jsr:@lala/appraisal@0.7.5";
import {
  GradientDescentSolver,
  adamOptimizer,
  huber,
} from "jsr:@lala/classy@1.2.1";

const x = [100, 23, 53, 56, 12, 98, 75];
const y = x.map((a) => [a * 6 + 13, a * 4 + 2]);

const solver = new GradientDescentSolver({
  // Huber loss is a mix of MSE and MAE
  loss: huber(),
  // ADAM optimizer with 1 + 1 input for intercept, 2 outputs.
  optimizer: adamOptimizer(2, 2),
});

// Train for 700 epochs in 2 minibatches
solver.train(new Matrix(x.map((n) => [n]), "f32"), new Matrix(y, "f32"), {
  silent: false,
  fit_intercept: true,
  epochs: 700,
  n_batches: 2,
});

const res = solver.predict(new Matrix(x.map((n) => [n]), "f32"));
for (let i = 0; i < res.nRows; i += 1) {
  console.log(Array.from(res.row(i)), y[i]);
}
```
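Each printed row pairs the model's two predictions with the true `[a * 6 + 13, a * 4 + 2]` targets, so after 700 epochs the two arrays should be close. For reference, the Huber loss used above is quadratic for small residuals (like MSE) and linear for large ones (like MAE):

$$
L_\delta(r) =
\begin{cases}
  \frac{1}{2} r^2 & \text{if } |r| \le \delta, \\
  \delta \left( |r| - \frac{1}{2} \delta \right) & \text{otherwise,}
\end{cases}
$$

where $r$ is the residual and $\delta$ is the threshold between the two regimes (whether `huber()` accepts a custom $\delta$ is not shown here).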