Topic: llm

GraphRAG – SD Times Open Source Project of the Week

GraphRAG is an open source research project out of Microsoft for creating knowledge graphs from datasets that can be used in retrieval-augmented generation (RAG). RAG is an approach in which data is fed into an LLM to give more accurate responses. For instance, a company might use RAG to be able to use its own … continue reading
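
As a rough illustration of the RAG pattern described above (not GraphRAG itself), the sketch below retrieves the documents most relevant to a question and folds them into the prompt sent to an LLM. The sample documents, the toy word-overlap scorer, and the function names are placeholders; a real system would use vector embeddings or, in GraphRAG's case, a knowledge graph.

```python
# Minimal sketch of retrieval-augmented generation: pick the most relevant
# documents for a question, then include them as context in the LLM prompt.
from collections import Counter

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "The enterprise plan includes single sign-on and audit logs.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: number of words the query and document share.
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def build_prompt(question: str, top_k: int = 2) -> str:
    # Rank documents by relevance and prepend the best matches as context.
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the refund policy?"))
# The resulting prompt would then be sent to an LLM of your choice.
```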

Elastic launches low-code interface for experimenting with RAG implementation

Elastic has just released a new tool called Playground that will enable users to experiment with retrieval-augmented generation (RAG) more easily. RAG is a practice in which local data is added to an LLM, such as private company data or data that is more up-to-date than the LLM's training set. This allows it to give … continue reading

RAG is the next exciting advancement for LLMs

One of the challenges with generative AI models has been that they tend to hallucinate responses. In other words, they will present an answer that is factually incorrect, but will be confident in doing so, sometimes even doubling down when you point out that what they’re saying is wrong. “[Large language models] can be inconsistent … continue reading

Safe AI development: Integrating explainability and monitoring from the start

As artificial intelligence advances at breakneck speed, using it safely while also increasing its workload is a critical concern. Traditional methods of training safe AI have focused on filtering training data or fine-tuning models post-training to mitigate risks. However, in late May, Anthropic created a detailed map of the inner workings of its Claude 3 … continue reading

Apple releases eight new open LLMs

Apple has released eight new small LLMs as part of CoreNet, which is the company's library for training deep neural networks. The models, called OpenELM (Open-source Efficient Language Models), come in eight different options: four pretrained models and four instruction-tuned models, each in sizes of 270M, 450M, 1.1B, and 3B … continue reading

Snowflake releases Arctic, a cost-effective LLM for enterprise intelligence

The database company Snowflake is adding another large language model (LLM) into the AI ecosystem. Snowflake Arctic is an LLM designed for complex enterprise workloads, with cost-effectiveness as a key highlight.  It can efficiently complete enterprise intelligence tasks like SQL generation, coding, and instruction following, meeting or exceeding benchmarks in those areas when compared to … continue reading

Databricks releases new open LLM

Databricks has just launched a new LLM designed to enable customers to build and fine-tune their own custom LLMs. The company hopes that by releasing this model, it will further democratize access to AI and enable its customers to build their own models based on their own data.  According to Databricks, the new model, DBRX, … continue reading

Predibase launches 25 fine-tuned LLMs

Predibase has just announced a new collection of fine-tuned LLMs in a suite called LoRA Land. LoRA Land contains just over 25 LLMs that have been optimized for specific purposes and that perform as well as or better than GPT-4. Examples of the tasks these models cover include code generation, customer support automation, SQL generation, and more. … continue reading
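
The suite's name refers to LoRA (low-rank adaptation), a common way to produce many fine-tuned variants cheaply by training a small update on top of frozen base weights. The PyTorch sketch below shows the general idea only; it is a generic illustration, not Predibase's implementation, and the layer sizes, rank, and scaling are arbitrary.

```python
# Generic sketch of a LoRA-adapted linear layer: frozen base weights plus a
# small trainable low-rank update (B @ A), scaled by alpha / rank.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = frozen base projection + scaled low-rank update.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(512, 512)
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 512])
```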

Kong introduces new collection of AI plugins in Kong Gateway 3.6

Kong recently unveiled a collection of six open-source AI plugins for Kong Gateway 3.6. The plugins offer integration with multiple LLMs and significantly advance AI technology accessibility for developers and platform teams. According to Kong, the goal of this release is to enhance developer productivity by simplifying the integration process of LLMs into various products. … continue reading

AI2 releases OLMo, an open LLM

The Allen Institute for AI (AI2) today released OLMo, an open large language model designed to provide understanding around what goes on inside AI models and to advance the science of language models. “Open foundation models have been critical in driving a burst of innovation and development around generative AI,” said Yann LeCun, chief AI … continue reading

Integrating customer-centric AI into your products

Fine-tuning was once the sole method by which a model could be adapted to accomplish specific tasks. Today, large language models can be prompt-engineered to achieve similar results. An AI task that would have taken six months in the past can now be accomplished in a matter of minutes or hours. This development … continue reading
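
As a rough sketch of the prompt-engineering approach mentioned above, the example below specifies a task with instructions and a couple of in-prompt examples instead of changing any model weights. The sentiment-classification task and the helper name are illustrative only, not taken from the article.

```python
# Few-shot prompting: describe the task and show worked examples in the
# prompt itself, so a general-purpose LLM can do the job without fine-tuning.
def few_shot_prompt(review: str) -> str:
    examples = [
        ("The app crashes every time I open it.", "negative"),
        ("Setup took five minutes and it just works.", "positive"),
    ]
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as positive or negative.\n\n"
        f"{shots}\n"
        f"Review: {review}\nSentiment:"
    )

print(few_shot_prompt("Support never answered my ticket."))
# The prompt is then sent to an off-the-shelf LLM; no model weights change.
```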

Bugcrowd announces rating taxonomy for LLMs

Bugcrowd has announced updates to its Vulnerability Rating Taxonomy (VRT), which categorizes and prioritizes crowdsourced vulnerabilities.  The new update specifically addresses vulnerabilities in Large Language Models (LLMs) for the first time. The VRT is an open-source initiative aiming to standardize how suspected vulnerabilities reported by hackers are classified.  “This new release of VRT not only … continue reading
