A project for scalable hierarchical clustering, thanks to a Flexible,
Incremental, Scalable, Hierarchical Density-Based Clustering
algorithm (FISHDBC, to its friends).
This package lets you cluster your data with an arbitrary dissimilarity
function that you write (or reuse from somebody else's work!).
There are plenty of configuration options, inherited from HNSWs and HDBSCAN,
but the only compulsory argument is a dissimilarity function between arbitrary
data elements:
    import flexible_clustering

    clusterer = flexible_clustering.FISHDBC(my_dissimilarity)
    for elem in my_data:
        clusterer.add(elem)
    labels, probs, stabilities, condensed_tree, slt, mst = clusterer.cluster()

    for elem in some_new_data:  # supports cheap incremental clustering
        clusterer.add(elem)
    # new clustering taking the newly available data into account
    labels, probs, stabilities, condensed_tree, slt, mst = clusterer.cluster()
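To make the usage above concrete, here is a minimal sketch of a dissimilarity function one could pass to FISHDBC. The Euclidean distance on 2-D points is an assumption chosen for illustration, not a requirement of the library; any function taking two data elements and returning a non-negative number will do.

```python
import math

def euclidean(a, b):
    """Hypothetical dissimilarity: Euclidean distance between 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Example data elements; tuples here, but elements can be anything
# your dissimilarity function understands.
points = [(0.0, 0.0), (3.0, 4.0), (5.0, 5.0)]
print(euclidean(points[0], points[1]))  # classic 3-4-5 triangle: 5.0

# With the package installed, the function plugs straight in:
# clusterer = flexible_clustering.FISHDBC(euclidean)
# for p in points:
#     clusterer.add(p)
```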
Make sure to run everything from outside the source directory, to
avoid confusing Python's import path.
The cluster() method returns the following values:

labels :ndarray, shape (n_samples, )
    Cluster labels for each point. Noisy samples are given the label -1.
probabilities :ndarray, shape (n_samples, )
    Cluster membership strengths for each point. Noisy samples are assigned
    0.
cluster_persistence :array, shape (n_clusters, )
    A score of how persistent each cluster is. A score of 1.0 represents
    a perfectly stable cluster that persists over all distance scales,
    while a score of 0.0 represents a perfectly ephemeral cluster. These
    scores can be used to gauge the relative coherence of the clusters
    output by the algorithm.
condensed_tree :record array
    The condensed cluster hierarchy used to generate clusters.
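As a small sketch of how the labels might be post-processed, the snippet below groups point indices by cluster while skipping noise. The labels list is made up for illustration; in practice it would come from clusterer.cluster().

```python
from collections import defaultdict

# Hypothetical labels, in the format cluster() returns: -1 marks noise,
# other integers are cluster ids.
labels = [0, 0, 1, -1, 1, 0]

clusters = defaultdict(list)
for idx, label in enumerate(labels):
    if label != -1:  # noisy samples belong to no cluster
        clusters[label].append(idx)

print(dict(clusters))  # {0: [0, 1, 5], 1: [2, 4]}
```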