DeepLearning.AI and AWS unveiled a new course called Generative AI with Large Language Models on Coursera. 

This hands-on course aims to equip data scientists and engineers with the skills needed to use large language models (LLMs) in practical applications. Participants will learn to select appropriate models, train them effectively, fine-tune their performance, and deploy them in real-world scenarios.

The course offers a comprehensive exploration of LLMs in the context of generative AI projects, covering the entire lifecycle of a typical project: problem scoping, LLM selection, domain adaptation, model optimization for deployment, and integration into business applications. It emphasizes practical skills while also delving into the scientific foundations behind LLMs and their effectiveness.

The course is designed to be flexible and self-paced, divided into three weeks of content that totals approximately 16 hours. It includes a variety of learning materials, such as videos, quizzes, labs, and supplementary readings. The hands-on labs, facilitated by AWS Partner Vocareum, allow participants to directly apply the techniques in an AWS environment specifically provided for the course. All the necessary resources for working with LLMs and exploring their efficacy are included.

Week 1 covers generative AI use cases, the project lifecycle, and model pre-training. Students will examine the transformer architecture that powers many LLMs, see how these models are trained, and consider the compute resources required to develop them.
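The core operation of the transformer architecture mentioned above is scaled dot-product self-attention, which can be sketched in a few lines of NumPy. This is a minimal illustrative toy (random weights, a single head, no masking), not material from the course itself:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq_len, seq_len) attention logits
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # each output is a weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))            # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended vector per input position
```

Real LLMs stack many such layers with multiple attention heads, which is part of why pre-training demands the large compute budgets the course discusses.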

Week 2 covers the options for adapting pre-trained models to specific tasks and datasets through a process called fine-tuning.
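Fine-tuning means continuing training from pretrained weights on task-specific data rather than starting from random initialization. A hypothetical toy version of that idea, using a plain linear model and NumPy in place of an actual LLM (none of the names below come from the course):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" weights, a stand-in for an LLM's parameters.
w_pretrained = rng.normal(size=3)

# A small task-specific dataset the pretrained weights don't fit yet.
X = rng.normal(size=(32, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true

def fine_tune(w, X, y, lr=0.1, steps=200):
    """Continue gradient descent from existing weights w on new data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
        w -= lr * grad
    return w

w_finetuned = fine_tune(w_pretrained, X, y)
loss_before = np.mean((X @ w_pretrained - y) ** 2)
loss_after = np.mean((X @ w_finetuned - y) ** 2)
print(loss_after < loss_before)  # True: the adapted weights fit the new task better
```

The same principle scales up: starting from pretrained parameters lets a model adapt to a new task with far less data and compute than training from scratch.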

Finally, Week 3 shows learners how to make LLM responses more humanlike and align them with human preferences using a technique called reinforcement learning from human feedback (RLHF).
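A key ingredient of RLHF is a reward model trained on human preference pairs: given a "chosen" and a "rejected" response to the same prompt, it learns to score the chosen one higher. A minimal sketch of the standard pairwise preference loss, using made-up scalar scores rather than a real model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected).
    It is small when the chosen response outscores the rejected one."""
    return -np.log(sigmoid(reward_chosen - reward_rejected))

# Hypothetical reward-model scores for two (chosen, rejected) response pairs.
good_margin = preference_loss(2.0, -1.0)  # reward model ranks the pair correctly
bad_margin = preference_loss(-0.5, 1.5)   # reward model ranks the pair the wrong way
print(good_margin < bad_margin)  # True: correct rankings yield lower loss
```

Once trained, the reward model's scores guide reinforcement-learning updates that nudge the LLM toward responses humans prefer.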