.. _statistics:

=====================
Repository statistics
=====================

Kallithea has a *repository statistics* feature, disabled by default. When
enabled, the number of commits per committer is visualized in a timeline. This
feature can be enabled using the ``Enable statistics`` checkbox on the
repository ``Settings`` page.

The statistics system makes heavy demands on server resources, so to keep a
balance between usability and performance, statistics are cached in the
database and gathered incrementally.

When Celery is disabled:

  On the first visit to the summary page, a set of 250 commits is parsed and
  added to the statistics cache. This incremental gathering continues on each
  visit to the statistics page until all commits have been parsed.

  Statistics are kept cached until additional commits are added to the
  repository. In that case, Kallithea fetches only the new commits when
  updating its statistics cache.
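  The incremental gathering described above can be sketched roughly as follows.
  This is a minimal illustration, not Kallithea's actual code: the
  ``update_stats_cache`` function, the commit dictionaries, and the cache
  layout are all invented for the example.

  ```python
  # Illustrative sketch of incremental, cached gathering of per-committer
  # commit counts in 250-commit batches (names and data shapes are
  # hypothetical, not Kallithea internals).

  BATCH_SIZE = 250  # commits parsed per visit, as described above

  def update_stats_cache(commits, cache):
      """Parse at most BATCH_SIZE commits that are not yet in the cache.

      `commits` is the full, ordered commit list; `cache` remembers how far
      parsing has progressed and maps each committer to a commit count.
      Returns True once every commit has been gathered.
      """
      start = cache.setdefault("parsed", 0)
      counts = cache.setdefault("counts", {})
      for commit in commits[start:start + BATCH_SIZE]:
          counts[commit["author"]] = counts.get(commit["author"], 0) + 1
      cache["parsed"] = min(start + BATCH_SIZE, len(commits))
      return cache["parsed"] >= len(commits)
  ```

  Each page visit would call this once; when new commits are appended to the
  repository later, only the unparsed tail beyond ``cache["parsed"]`` is
  processed, which is why the cache stays cheap to refresh.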

When Celery is enabled:

  On the first visit to the summary page, Kallithea creates tasks that execute
  on Celery workers. These tasks gather statistics until all commits have been
  parsed: each task parses 250 commits, then launches a new task.
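
  The task-chaining pattern above can be sketched like this. It is a
  simplified stand-in, not Kallithea's actual Celery tasks: a plain queue
  plays the role of the Celery workers, and ``gather_batch`` and
  ``run_stats_tasks`` are hypothetical names for the example.

  ```python
  # Illustrative sketch: each "task" parses one 250-commit batch, then
  # enqueues a follow-up task if commits remain, mimicking the chained
  # Celery tasks described above (names and shapes are hypothetical).
  from collections import deque

  BATCH_SIZE = 250

  def gather_batch(commits, cache, schedule):
      """Parse one batch of commits, then hand off the rest via `schedule`."""
      start = cache.setdefault("parsed", 0)
      counts = cache.setdefault("counts", {})
      for commit in commits[start:start + BATCH_SIZE]:
          counts[commit["author"]] = counts.get(commit["author"], 0) + 1
      cache["parsed"] = min(start + BATCH_SIZE, len(commits))
      if cache["parsed"] < len(commits):
          # In Kallithea this would enqueue a new Celery task; here the
          # stand-in `schedule` just records the next callable to run.
          schedule(lambda: gather_batch(commits, cache, schedule))

  def run_stats_tasks(commits, cache):
      """Drive the task chain; a deque stands in for the Celery workers."""
      queue = deque()
      queue.append(lambda: gather_batch(commits, cache, queue.append))
      while queue:
          queue.popleft()()
  ```

  The benefit of chaining small tasks instead of running one long job is that
  no single worker invocation holds resources for long, and progress survives
  in the cache between tasks.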