
dydx


Overview

dydx is a library implementing automatic differentiation, gradient-based optimisation techniques and linear algebra routines from scratch, i.e. using only Python's built-in libraries and avoiding third-party packages such as numpy, pytorch, tensorflow, scipy and pandas.

For demonstration purposes, the library is applied to three problems in numerical linear algebra and machine learning, especially deep learning, found in the examples/ folder: singular value decomposition of a random non-square matrix, modelling the non-linearities of the XOR gate function and, finally, a supervised learning problem on an industry dataset of insurance claims.
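To give a flavour of the core technique, below is a minimal sketch of reverse-mode automatic differentiation in pure Python. It is illustrative only: the Var class and its methods are hypothetical and do not reflect dydx's actual API.

class Var:
    def __init__(self, value, parents=()):
        self.value = value      # forward-pass result
        self.grad = 0.0         # accumulated gradient of the output w.r.t. self
        self.parents = parents  # (parent, local_gradient) pairs

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, grad=1.0):
        # Propagate the upstream gradient to each parent via the chain rule.
        self.grad += grad
        for parent, local_grad in self.parents:
            parent.backward(grad * local_grad)

x = Var(2.0)
y = Var(3.0)
z = x * y + x          # dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0

Reverse-mode differentiation of this kind is what makes it cheap to obtain gradients of a single scalar loss with respect to many parameters, which is exactly the setting of deep learning.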

Installation

Create a virtual environment, clone the repository and install the package by running:

pip install .

You can also install the package without cloning this repository by running:

pip install git+https://github.com/akinwilson/dydx

Usage

Check out the examples/ folder to see how the library is used. From the root of this repository, you can run the examples via:

python examples/xor_gate.py

Note: you may alter fitting parameters from the command line, for example:

python examples/xor_gate.py --epochs 250 --learning-rate 0.01 --layer-seeds 636915800,29155285,01355285

The rest of the examples can be run via:

python examples/singular_value_decomposition.py

And

python examples/insurance_claims.py

The preset argument values for each optimisation problem in the examples should yield desirable results.

Running tests

To run the tests locally, first install the developer requirements:

pip install -r requirements_dev.txt

then run the test suite with:

python -m pytest

Further improvements

Hardware optimisation

To further improve the speed and efficiency of the library, utilising the computational parallelism of a GPU, where available, is paramount. In the spirit of doing everything from scratch, I have looked at Python's bindings to CUDA, which is what libraries such as numba use under the hood.
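As a rough illustration of the direction this could take, the sketch below uses numba's CUDA support to run an element-wise vector addition on the GPU. Note that numba and numpy are third-party packages, so this sits outside the library's dependency-free core; the kernel is a generic example, not part of dydx.

from numba import cuda
import numpy as np

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)   # global thread index across all blocks
    if i < out.size:   # guard against threads beyond the array bounds
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)  # numba handles host/device copies

Launching enough blocks to cover the array lets thousands of GPU threads each handle one element; this is the same parallelism pattern that the element-wise operations in a computational graph could exploit.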

Citation

Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering, pp. 204-221. Springer, 2009.

@book{nocedalWright2009NumericalOptimization,
  title={Numerical Optimization},
  author={Nocedal, Jorge and Wright, Stephen},
  series={Springer Series in Operations Research and Financial Engineering},
  pages={204--221},
  year={2009},
  publisher={Springer}
}
