47 blog posts with this tag.
Announcing our induction into the JPMorganChase Hall of Innovation.
Why should we care about analytics? What could go wrong?
Introducing Supercomplete, a new modality that predicts your next intent.
Codeium is named a Challenger in the Gartner Magic Quadrant.
Announcing Codeium on VMware Private AI by Broadcom.
Zillow uses Codeium to accelerate their developers.
An analysis of Codeium's Hybrid deployment, a first-of-its-kind offering.
A deeper dive into recent enterprise readiness capabilities: indexing access controls, subteam analytics, and audit logging.
WWT uses Codeium to accelerate their developers.
We break down the value of context awareness and fine-tuning in personalizing Codeium for individual enterprises.
A feature-by-feature analysis of the new GitHub Copilot tier.
Developers can now specify and persist known relevant context for Codeium to emphasize in results.
Announcing expanded and improved context awareness capabilities in Codeium.
Breaking down why on-premise deployment ends up being the more cost-effective option for generative AI applications.
Vector Informatik uses Codeium to accelerate their developers.
Clearwater Analytics uses Codeium to accelerate their developers.
Announcing our partnership and work with Dell.
Announcing our integrations and work with Atlassian.
How latency constraints let us apply our infrastructure expertise to build better products.
Announcing our integration and partnership with MongoDB. Get started with MongoDB and Codeium in under 5 minutes using this tutorial.
We introduce two metrics, Characters per Opportunity (CPO) and Percentage Code Written (PCW), which we believe should be the gold standards for benchmarking AI code assistants and assessing the end value they drive, respectively (a rough sketch of both metrics appears at the end of this list).
Announcing our SOC 2 Type II Report.
We have built state-of-the-art post-generation attribution to further our compliance story.
Anduril uses Codeium to accelerate their developers.
Dispelling common misconceptions about the difficulty of running Codeium self-hosted.
Our take on industry benchmarks and how to actually evaluate AI code LLMs and tools.
Our approach to building AI products that developers trust and love.
Dell and Codeium are working together to bring generative AI applications on-premise.
Announcing our SOC 2 Type I Report.
Clarifying the GitHub Copilot for Business offering.
Why it is hard to create a context reasoning engine for code LLMs that consistently works.
A deep dive into Codeium's context awareness and how it stacks up against GitHub Copilot and Copilot X.
Why real-time context for AI code assistants is a meaningful and tricky problem.
An analysis of the capabilities and performance of GitLab's AI code assistant.
How we built fine-tuning into our Enterprise offering for the highest quality at the lowest cost.
Proof that Codeium fine-tuned on a repository significantly outperforms GitHub Copilot.
Codeium for Enterprises is purpose-built to run on-prem or in your VPC - no data or telemetry ever leaves your environment.
In-line Fill-in-the-Middle suggestions, a valuable class of completions produced only by Codeium.
Generative AI poses risks for companies that do not have strict data governance.
An in-depth analysis of the capabilities and performance of Amazon CodeWhisperer following its general access release.
Demonstrating that GitHub Copilot trains on non-permissively licensed code and cannot properly filter out the resulting suggestions, while Codeium does not expose users to legal risk.
More security incidents, this time with ChatGPT, further strengthen the case for self-hosted solutions.
Codeium for Enterprises is HIPAA compliant due to full self-hosting.
How tuning the model layer of LLM applications creates the highest quality experiences for enterprises.
Analyzing how fill-in-the-middle allows Codeium to make better suggestions.
Looking into the likelihood of security incidents and why self-hosting is the solution.
Codeium for Enterprises is the only AI acceleration offering to provide code security, fine-tuning, and state-of-the-art quality.
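A rough sketch of the two metrics introduced above, CPO and PCW. The formulations below are illustrative assumptions in our own shorthand, not necessarily the exact definitions given in the post:

$$
\text{CPO} = \frac{\text{characters of accepted suggestions}}{\text{number of suggestion opportunities}},
\qquad
\text{PCW} = \frac{\text{characters of accepted suggestions}}{\text{total characters of code written}}
$$

Intuitively, CPO normalizes assistant output by how often it had a chance to help, while PCW measures the share of shipped code that the assistant actually wrote.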