GitStar

Berri AI

@berriai • The fastest way to take your LLM app to production.

Use this page to separate flagship concentration from portfolio breadth before you treat a publisher as broadly strong.

Portfolio concentration

63%

Top three share

Shows whether the organization is driven by one breakout repo or several visible projects.

Breadth

30 repos

Visible snapshot

12 repositories updated in the last 90 days.

Leading language

Python

Portfolio mix

Python (13), Unknown (5), Go (3)

Average size

5

Stars per repository

Useful for distinguishing one flagship-heavy publisher from a repeatable portfolio.

Updated: 2026-04-28 (1d ago) • GitHub API fallback • 30 repositories

Portfolio Shape

63%

of the visible star count comes from this organization's top three repositories.

Average Repository Size

5

stars per repository in this same snapshot.

Current Mix

Python

is the most common language here; 12 of the 30 visible repositories were updated in the last 90 days.

Why this rank

This organization stands out because its public portfolio spans 30 visible repositories, even though the top three still account for about 63% of the stars.

Balanced portfolio across 30 repos • Top 3 share 63%

Organization pages work best when you separate portfolio breadth from flagship concentration. In Berri AI's case, the visible top three repositories account for about 63% of total stars in this snapshot, which helps explain whether the organization is known for one breakout project or for a broader repeatable portfolio.
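As an illustration, the two headline numbers on this page (top-three share and average repository size) can be reproduced from the per-repository star counts in the Top Repositories list. This is a minimal sketch with the star counts hand-copied from this snapshot, not fetched live:

```python
# Star counts from this snapshot's 30 visible repositories,
# copied by hand from the Top Repositories list on this page.
stars = [66, 22, 15, 8, 7, 6, 6, 6, 5, 4, 3, 3, 2, 2] + [1] * 9 + [0] * 7

# Share of all visible stars held by the three largest repositories.
top3_share = sum(sorted(stars, reverse=True)[:3]) / sum(stars)

# Mean stars per repository across the snapshot.
avg_size = sum(stars) / len(stars)

print(f"Top three share: {top3_share:.0%}")   # Top three share: 63%
print(f"Average size: {avg_size:.0f} stars")  # Average size: 5 stars
```

The 63% figure comes out of 103 of 164 total visible stars sitting in the top three repositories.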

The dominant language mix here is Python (13), Unknown (5), Go (3). That makes this page useful not just for popularity checks, but also for seeing what technical shape an organization's public ecosystem actually has.
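A language-mix tally like the one above can be sketched with a standard `Counter`. The list below mirrors the per-repository primary languages shown in the Top Repositories list in this snapshot, with `None` standing in for repositories GitHub reports no language for; this is an illustrative sketch, not the site's actual pipeline:

```python
from collections import Counter

# Primary language per repository, mirroring this snapshot's table;
# None marks repos with no detected language.
langs = (["Python"] * 13 + [None] * 5 + ["Go"] * 3 + ["Dockerfile"] * 3
         + ["TypeScript"] * 2 + ["Shell", "Svelte", "HCL", "JavaScript"])

# Fold undetected languages into an explicit "Unknown" bucket, as the page does.
mix = Counter("Unknown" if lang is None else lang for lang in langs)

print(mix.most_common(3))  # [('Python', 13), ('Unknown', 5), ('Go', 3)]
```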

Source: GitHub API fallback. This is the same cache-first snapshot used by the organization ranking list, so the summary view and the detail view should stay aligned.

Top Repositories

 #  Repository                                Language    ⭐ Stars
 1  berriai/litellm-pgvector                  Python      66
 2  berriai/terraform-provider-litellm        Go          22
      litellm terraform provider
 3  berriai/litellm-skills                    Shell       15
      Agent Skills for managing live LiteLLM proxy deployments — users, teams, keys, orgs, models, MCP servers, agents
 4  berriai/example_litellm_gcp_cloud_run     Dockerfile  8
      Example Repo to deploy LiteLLM Proxy (AI Gateway) on GCP Cloud Run
 5  berriai/litellm-security-wg               Unknown     7
      LiteLLM Ecosystem Security Working Group
 6  berriai/litellm-guardrails                Svelte      6
      Registry of public custom code guardrails for the litellm proxy server
 7  berriai/litellm-ecs-deployment            HCL         6
 8  berriai/litellm-backstage                 TypeScript  6
 9  berriai/litellm-observatory               Python      5
      End-to-end testing suite for LiteLLM deployments - provider tests, performance metrics, and API validation
10  berriai/example_anthropic_endpoint        Python      4
      An example anthropic API Endpoint
11  berriai/Automated_Perf_Tests              Python      3
12  berriai/provider-litellm-http             Go          3
      Crossplane Provider designed to facilitate sending LiteLLM HTTP requests as resources.
13  berriai/litellm-performance-benchmarks    Python      2
      A reproducible benchmarking suite for measuring LiteLLM latency, throughput, and scalability under real-world workloads.
14  berriai/provider-litellm                  Go          2
      LiteLLM Gateway (Proxy) crossplane provider
15  berriai/litellm-docs                      JavaScript  1
      The official LiteLLM docs repository
16  berriai/fake_custom_provider              Python      1
      mock provider for backend eng tests
17  berriai/mock-token-exchange-server        Python      1
      Mock OAuth 2.0 Token Exchange Server (RFC 8693) for testing LiteLLM OBO flow
18  berriai/mock-oauth2-mcp-server            Python      1
      A mock OAuth2 + MCP (Model Context Protocol) server for testing client_credentials flows. Useful for E2E testing LiteLLM proxy MCP OAuth2 M2M authentication.
19  berriai/serxng-deployment                 Dockerfile  1
20  berriai/cloudzero-litellm-etl             Unknown     1
      LiteLLM data analysis, transformation and transmission to CloudZero utility
21  berriai/locust-load-tester                Python      1
22  berriai/prometheus-deploy                 Dockerfile  1
      A Blueprint for deploying Prometheus to render.com
23  berriai/proxy_load_tester_2               Unknown     1
24  berriai/oss-pr-review-agent               Python      0
      Separate PR Review Agent for OSS users
25  berriai/pr-review-agent-skills            Python      0
      Skills for LiteLLM's pr review agent
26  berriai/LiteLLM-Performance               Unknown     0
      Performance tracking and benchmarks for LiteLLM.
27  berriai/litellm-aws-installer             TypeScript  0
28  berriai/assing_instance_server            Python      0
29  berriai/proxy_load_tester_4               Python      0
30  berriai/proxy_load_tester_3               Unknown     0

Next step after the organization read

Open a flagship repository, compare a couple of portfolio leaders, or return to the organization map when you want a broader concentration read.

Learn and methodology

Keep trust-building context reachable, but behind the first data read instead of ahead of it.

How to Read This Snapshot

Total stars are useful as a discovery signal, but they do not tell you whether a team maintains every repository equally. Pair this page with release cadence, maintainer activity, and the flagship concentration shown above before making adoption decisions.
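One way to pair star counts with maintainer activity, as suggested above, is to filter on each repository's last-push timestamp. This is a hypothetical helper, not this site's implementation; `pushed_at` is the ISO-8601 timestamp field the GitHub API returns for a repository:

```python
from datetime import datetime, timedelta, timezone

def recently_active(pushed_at: str, days: int = 90) -> bool:
    """Return True if the repo was pushed to within the last `days` days.

    `pushed_at` is an ISO-8601 timestamp, e.g. the GitHub API's
    `pushed_at` field such as "2026-04-27T12:00:00Z".
    """
    # GitHub uses a trailing "Z"; normalize it for fromisoformat.
    pushed = datetime.fromisoformat(pushed_at.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - pushed <= timedelta(days=days)
```

A check like this, run across the snapshot, is the kind of filter behind a figure such as "12 repositories updated in the last 90 days."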

For broader background on GitStar's ranking logic and editorial guidance, see Methodology & Editorial Standards.