Commit 5e10b3c4 authored by Cem Anil

1) Add .gitignore.

2) Update README and config.
parent 81b9d5ed
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
# Misc.
.idea
out
/data
@@ -187,6 +187,12 @@ estimation.
One can then modify the model to see which Lipschitz architectures obtain a tighter lower bound on the Wasserstein
distance between the generator and empirical data distribution.
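For context, the estimated quantity is the Kantorovich-Rubinstein dual form of the Wasserstein distance, which any 1-Lipschitz critic $f$ lower-bounds:

$$ W(\mathbb{P}_{\text{data}}, \mathbb{P}_g) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim \mathbb{P}_{\text{data}}}[f(x)] - \mathbb{E}_{x \sim \mathbb{P}_g}[f(x)] $$

A more expressive class of Lipschitz-constrained critics therefore yields a tighter lower bound on the true distance.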
**Warning:** Unless the training conditions are exactly the same, the GANs obtained in the GAN training step
might differ slightly (the training dynamics are highly sensitive to initial conditions). Although the estimated
Wasserstein distances will differ in this case, the relative ordering and approximate performance ratios of
the Lipschitz architectures should be the same as reported in the paper. We will remedy this by uploading a
trained GAN checkpoint in a future commit.
### Training LWGAN (Lipschitz WGANs)
We can use the same WGAN training methodology, but build a discriminator network composed of our methods (i.e. Bjorck
orthonormalized linear transformations and GroupSort activations).
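As a rough illustration of the two building blocks (a minimal NumPy sketch under our own naming, not the repository's implementation): Bjorck orthonormalization iteratively pushes a weight matrix toward the nearest orthonormal matrix, and GroupSort sorts activations within fixed-size groups.

```python
import numpy as np

def bjorck_orthonormalize(w, iters=60, beta=0.5):
    """First-order Bjorck iteration: w <- (1 + beta) * w - beta * w @ w.T @ w.

    Converges to the nearest orthonormal matrix when the singular values of
    the input lie in (0, sqrt(3)); pre-scaling by the spectral norm
    guarantees they are at most 1.
    """
    w = w / np.linalg.norm(w, ord=2)  # spectral-norm scaling
    for _ in range(iters):
        w = (1 + beta) * w - beta * (w @ w.T @ w)
    return w

def group_sort(x, group_size=2):
    """Sort entries within contiguous groups along the last axis.

    With group_size=2 this is the MaxMin activation: 1-Lipschitz and
    gradient-norm-preserving, unlike ReLU.
    """
    n = x.shape[-1]
    assert n % group_size == 0
    grouped = x.reshape(x.shape[:-1] + (n // group_size, group_size))
    return np.sort(grouped, axis=-1).reshape(x.shape)
```

A Lipschitz discriminator layer in this sketch would then compute `group_sort(x @ bjorck_orthonormalize(w))`, keeping each layer (and hence the whole critic) 1-Lipschitz.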
@@ -11,7 +11,7 @@
     "dataset": "mnist",
     "split": "",
-    "epoch": 256,
+    "epoch": 50,
     "batch_size": 64,
     "input_size": 28,