In a Training Loop
AbstractPhil (AbstractEyes on Civitai: https://civitai.com/user/AbstractPhila)
AI & ML interests
datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.
Recent Activity
- updated a model about 20 hours ago: AbstractPhil/geolip-scene-classifier-proto
- replied to their post about 23 hours ago:
GLIP - Geometric Linear Interpolative Patchwork, aka geolip. https://github.com/AbstractEyes/glip-autoencoder

To tinker with the topology directly you can play with it here, though I admit it's imperfect in this form - it's quite the tinker toy for seeing the effects of patching. https://claude.ai/public/artifacts/697287e4-fa18-4753-8b57-904d5e2022ed

This is the repo that will contain the next experimental stage, based entirely on the research and the structural boundaries that research established. It'll be a little rigid while I get Claude set up.

To directly train these layered topological response patchworks you must install and use the geovocab2, geofractal, and wide_compiler repos: wide_compiler provides wide_linear's high-speed ensemble processing; geovocab2 provides the factory structure with multiple formulas, including highly efficient designs meant for kernel compilation; and geofractal provides a series of reusable utilities, including some of the more complex losses and the hard-to-tune gate structures surrounding them. Many of the underlying formulas are outlined here: https://huggingface.co/AbstractPhil/geometric-experiment-history/blob/main/FORMULAS.md

Utilization and training USING the pretrained or untrained geolip patchwork will be as simple as loading the model in PyTorch, and will not require external dependencies on the geolip package, numpy, or pytorch, depending on the task. It will come packaged with recommended losses, but I encourage experimentation because I simply cannot cover every case.

More details to come as development progresses. The system is coming together, and a usable state of the autoencoder should be ready within a couple of weeks. The entire system is built for convenience and reusability, so the structure will be built similarly to autoencoder systems that already exist, with a few tweaks here and there for important elements - the interface will be familiar to those who use it.
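The workflow described above - load the model in PyTorch, attach a loss, and run an ordinary training loop - can be sketched generically. Since the geolip package and its API are not yet released, the `GeolipAutoencoder` class, its constructor arguments, and the MSE loss below are all placeholders, not the actual interface:

```python
import torch
import torch.nn as nn

class GeolipAutoencoder(nn.Module):
    """Hypothetical stand-in for the not-yet-released geolip autoencoder."""
    def __init__(self, dim: int = 64, latent: int = 16):
        super().__init__()
        self.encoder = nn.Linear(dim, latent)
        self.decoder = nn.Linear(latent, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Encode to the latent space, then reconstruct the input.
        return self.decoder(self.encoder(x))

model = GeolipAutoencoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # placeholder for the packaged recommended losses

x = torch.randn(32, 64)  # dummy batch standing in for real data
for step in range(10):
    optimizer.zero_grad()
    recon = model(x)          # forward pass: reconstruct the batch
    loss = loss_fn(recon, x)  # reconstruction error
    loss.backward()           # backprop through encoder and decoder
    optimizer.step()
```

The point of the sketch is only that no geolip-specific machinery appears in the loop itself; swapping in the real pretrained patchwork and its recommended losses should leave this structure unchanged.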
AbstractPhil's models (127), sorted by recently updated:
- AbstractPhil/PONY-SIM-V4 · Updated Mar 28, 2025
- AbstractPhil/SIM-V5 · Updated Mar 27, 2025
- AbstractPhil/SDXL-SIM-REFINER · Updated Mar 16, 2025
- AbstractPhil/SDXL-SIM_NAI-VPRED · Updated Mar 16, 2025
- AbstractPhil/SDXL-Simulacrum-V3-1 · 0.2B params · Updated Mar 3, 2025
- AbstractPhil/sdxl-interpolated · Text-to-Image · Updated Feb 10, 2025
- AbstractPhil/sdxl-interpolated-nai-xl-11 · Text-to-Image · Updated Feb 9, 2025