Docker announced a new GenAI Stack in partnership with Neo4j, LangChain, and Ollama during its annual DockerCon developer conference keynote. The stack is designed to help developers build generative AI applications quickly and easily, without having to research and configure each underlying technology themselves.

It includes pre-configured components: large language models (LLMs) served by Ollama, Neo4j's combined vector and graph database, and the LangChain orchestration framework. Docker also introduced its first AI-powered product, Docker AI.

The GenAI Stack addresses popular generative AI use cases and is available in the Docker Learning Center and on GitHub. It offers pre-configured open-source LLMs, help from Ollama in setting them up, Neo4j as the default database for improved AI/ML model performance, knowledge graphs that enhance GenAI predictions, LangChain orchestration for context-aware reasoning applications, and a range of supporting tools and resources. The initiative aims to let developers add AI/ML capabilities to their applications efficiently and securely.
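For a sense of how these components fit together, here is a minimal sketch that wires them up with LangChain in Python. The model name, hosts, ports, and credentials are illustrative placeholders rather than the stack's actual configuration:

```python
# Minimal sketch: the GenAI Stack's three components wired together in
# LangChain. All connection details are illustrative defaults, not the
# stack's bundled configuration.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Neo4jVector

# Ollama serves a local open-source LLM and its embeddings.
llm = Ollama(base_url="http://localhost:11434", model="llama2")
embeddings = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")

# Neo4j acts as both the graph database and the vector store.
store = Neo4jVector(
    embedding=embeddings,
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)
```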

“Developers are excited by the possibilities of GenAI, but the rate of change, number of vendors, and wide variation in technology stacks makes it challenging to know where and how to start,” said Docker CEO Scott Johnston. “Today’s announcement eliminates this dilemma by enabling developers to get started quickly and safely using the Docker tools, content, and services they already know and love together with partner technologies on the cutting edge of GenAI app development.”

The stack's setup options cover data loading and vector index creation out of the box: developers can import data, create a vector index, add questions and answers, and store them in that index, as sketched below.
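A hypothetical sketch of that import-and-index step, using LangChain's Neo4jVector wrapper; the sample question-and-answer texts and the index name are invented for illustration:

```python
# Hypothetical sketch of loading Q&A data and creating a vector index in
# Neo4j. The texts, credentials, and index name are placeholders.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Neo4jVector

qa_pairs = [
    "Q: What is the GenAI Stack? A: A pre-configured bundle of Ollama, Neo4j, and LangChain.",
    "Q: Where is it available? A: In the Docker Learning Center and on GitHub.",
]

# from_texts embeds each string and stores it under a vector index in Neo4j,
# creating the index if it does not already exist.
store = Neo4jVector.from_texts(
    qa_pairs,
    embedding=OllamaEmbeddings(model="llama2"),
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    index_name="qa_index",  # hypothetical index name
)
```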

This setup enables enhanced querying, result enrichment, and the creation of flexible knowledge graphs. Developers can generate responses in a variety of formats, such as bulleted lists, chains of thought, GitHub issues, PDFs, and poems. They can also compare the results from different configurations: an LLM on its own, an LLM with a vector index, and an LLM with both vector and knowledge graph integration, as illustrated in the sketch below.
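As a rough illustration of that comparison, the following sketch contrasts an LLM answering on its own with the same LLM grounded by vector retrieval (the knowledge graph variant is omitted for brevity); the connection details and index name carry over from the earlier hypothetical sketches:

```python
# Sketch: compare an LLM on its own against the same LLM grounded by vector
# retrieval from Neo4j. Settings are illustrative placeholders.
from langchain.chains import RetrievalQA
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Neo4jVector

llm = Ollama(model="llama2")

# Reconnect to the hypothetical vector index created earlier.
store = Neo4jVector.from_existing_index(
    OllamaEmbeddings(model="llama2"),
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    index_name="qa_index",
)

question = "What does the GenAI Stack include?"

# 1. LLM on its own: answers purely from its training data.
print(llm.invoke(question))

# 2. LLM plus vector index: retrieves stored passages before answering.
chain = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(chain.invoke({"query": question})["result"])
```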