Using Large Language Transformer Models for Research in R – August 2023

Event Phone: 1-610-715-0115

Cancellation Policy: If you cancel your registration at least two weeks before the course is scheduled to begin, you are entitled to a full refund (minus a processing fee of $50).
In the unlikely event that Statistical Horizons LLC must cancel a seminar, we will do our best to inform you as soon as possible of the cancellation. You would then have the option of receiving a full refund of the seminar fee or a credit towards another seminar. In no event shall Statistical Horizons LLC be liable for any incidental or consequential damages that you may incur because of the cancellation.
An 8-Hour Livestream Seminar Taught by Hudson Golino, Ph.D. & Alexander Christensen, Ph.D.

This seminar is currently sold out. Please email info@statisticalhorizons.com to be added to the waitlist.

Over the years, we’ve gotten many requests for short introductory courses. Today we are proud to unveil our newest “mini” course: Using Large Language Transformer Models for Research in R. In just 8 hours (over 2 days), you will gain the skills to use natural language processing (NLP) techniques and large language transformer models (LLMs) for research applications using the R programming language. We hope you enjoy the course.

This seminar will introduce you to basic techniques for converting unstructured text data into structured data in R. Because word embeddings are a necessary precursor to LLMs, the course will also cover embeddings and their use, and you will gain hands-on experience implementing them in R.
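As a small illustration of the kind of conversion described above, the base-R sketch below turns a toy corpus of raw text into a document-term matrix. This is a hedged example of the general technique, not course material; the example sentences and tokenization choices are our own.

```r
# Toy corpus of unstructured text (hypothetical example sentences)
docs <- c("transformers power modern NLP",
          "word embeddings power transformers")

# Normalize and tokenize: lowercase, then split on whitespace
tokens <- lapply(tolower(docs), function(d) strsplit(d, "\\s+")[[1]])

# Build the shared vocabulary across all documents
vocab <- sort(unique(unlist(tokens)))

# Count term frequencies per document to form a document-term matrix
dtm <- t(sapply(tokens, function(tk) table(factor(tk, levels = vocab))))
rownames(dtm) <- paste0("doc", seq_along(docs))

dtm  # rows = documents, columns = vocabulary terms
```

Real projects would typically use a package such as tm, tidytext, or text2vec for this step, but the underlying idea is the same: each document becomes a row of counts over a fixed vocabulary.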

Additionally, the course will cover the concept of zero-shot classification, which involves using LLMs for text classification without the need for labeled data. You will learn about Hugging Face Transformers and implement zero-shot classification in R. Finally, the course will cover automatic text classification and summarization using R and pre-trained transformer models.
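To make the zero-shot idea concrete, here is one possible way to call a Hugging Face Transformers pipeline from R via the reticulate package. This is a sketch under stated assumptions, not the course's own code: it assumes a Python environment with the `transformers` library and a backend such as PyTorch installed, and the default model is downloaded on first use. The example sentence and labels are illustrative.

```r
# Sketch: zero-shot classification from R through reticulate and the
# Hugging Face `transformers` Python library (assumed to be installed)
library(reticulate)

transformers <- import("transformers")

# Build a zero-shot classification pipeline with its default model
classifier <- transformers$pipeline("zero-shot-classification")

# Classify a sentence against candidate labels chosen at call time;
# no labeled training data is needed
result <- classifier(
  "The new vaccine showed strong efficacy in clinical trials.",
  candidate_labels = list("medicine", "sports", "finance")
)

result$labels  # labels ordered from most to least likely
result$scores  # corresponding probabilities
```

The key point is that the candidate labels are supplied at prediction time, so the same pre-trained model can be reused for any labeling scheme without retraining.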

Overall, the goal of this course is to provide you with a comprehensive, applied understanding of LLMs for research applications. By the end of the course, you will have the skills to analyze and extract insights from unstructured text data in your own research.

Why are Large Language Transformer Models (LLMs) so popular nowadays?

Large language transformer models, such as GPT-4, have gained popularity for several reasons:

  1. State-of-the-art performance: These models have achieved state-of-the-art performance on a wide range of natural language processing tasks, including language translation, text summarization, question answering, and language generation.
  2. Zero-shot learning: LLMs can perform tasks for which they have not been explicitly trained, a property known as zero-shot learning. This is because they have been trained on a vast amount of diverse text data, allowing them to understand the underlying patterns and relationships in natural language.
  3. Scalability: LLMs are highly scalable and can be fine-tuned for specific tasks with relatively small amounts of task-specific data.
  4. General-purpose: LLMs are designed to be general-purpose, meaning they can be used for a wide variety of natural language processing tasks without the need for specialized models for each task.
  5. Ease of use: Many LLMs are available as pre-trained models, allowing developers and researchers to use them without the need for extensive training or expertise in natural language processing.

Overall, the combination of state-of-the-art performance, zero-shot learning, scalability, general-purpose design, and ease of use makes large language transformer models highly attractive for a wide range of natural language processing applications.

Our course is designed as a first introduction to natural language processing and large language models for research applications, covering some basic concepts and applications of transformer models in R.
