Anthropic's Claude 3 Haiku Now Fine-Tunable in Amazon Bedrock - Blockchain.News

Peter Zhang Jul 11, 2024 04:23

Anthropic introduces fine-tuning for Claude 3 Haiku in Amazon Bedrock, enhancing model performance and customization for specialized business tasks.

Anthropic has announced that customers can now fine-tune Claude 3 Haiku, the company's fastest and most cost-effective model, within Amazon Bedrock. This new capability allows businesses to customize the model’s knowledge and capabilities, making it more effective for specialized tasks, according to Anthropic.

Overview of Fine-Tuning

Fine-tuning is a widely used technique for enhancing model performance by creating a customized version tailored to specific workflows. To fine-tune Claude 3 Haiku, users prepare a set of high-quality prompt-completion pairs demonstrating the ideal outputs for given tasks. The fine-tuning API, currently in preview, uses this data to create a custom Claude 3 Haiku model. Businesses can then test and refine their custom model through the Amazon Bedrock console or API until it meets their performance goals and is ready for deployment.
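As a rough illustration of the data-preparation step, the sketch below serializes prompt-completion pairs as JSON Lines, one example per line. The `"prompt"`/`"completion"` key names simply mirror the pairs described above; the exact schema the Bedrock fine-tuning API expects may differ, and the moderation-style examples are hypothetical.

```python
import json

def write_training_file(pairs, path):
    """Write prompt-completion pairs as JSON Lines, one example per line.

    Assumed schema: {"prompt": ..., "completion": ...} per line, mirroring
    the pairs described in the article; check the Bedrock documentation
    for the exact format the fine-tuning API requires.
    """
    with open(path, "w", encoding="utf-8") as f:
        for prompt, completion in pairs:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

# Hypothetical comment-moderation examples
pairs = [
    ("Classify this comment: 'Great write-up, thanks!'", "ALLOW"),
    ("Classify this comment: 'Buy cheap meds at spam.example'", "BLOCK"),
]
write_training_file(pairs, "train.jsonl")
```

The resulting file would then be uploaded (for example, to Amazon S3) and referenced when creating the fine-tuning job.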

Benefits

Fine-tuning Claude 3 Haiku offers several benefits:

  • Better results on specialized tasks: Enhance performance for domain-specific actions, such as classification and interactions with custom APIs, by encoding company and domain knowledge.
  • Faster speeds at lower cost: Reduce costs for production deployments and achieve faster results compared to other models like Sonnet or Opus.
  • Consistent, brand-aligned formatting: Generate consistently structured outputs tailored to specific requirements, ensuring compliance with regulatory and internal protocols.
  • Easy-to-use API: Enable companies of all sizes to innovate efficiently; fine-tuning is accessible without extensive in-house AI expertise or deep technical knowledge.
  • Safe and secure: Keep proprietary training data within customers’ AWS environment, preserving the Claude 3 model family’s low risk of harmful outputs.

Anthropic has demonstrated the effectiveness of fine-tuning in an internet-forum comment-moderation task, improving classification accuracy from 81.5% to 99.6% and reducing tokens per query by 85%.

Customer Spotlight

SK Telecom, one of South Korea's largest telecommunications operators, has trained a custom Claude model to improve support workflows and enhance customer experiences by leveraging their industry-specific expertise. Eric Davis, Vice President of AI Tech Collaboration Group, noted a 73% increase in positive feedback for agent responses and a 37% improvement in key performance indicators for telecommunications-related tasks.

Thomson Reuters, a global content and technology company, has also seen positive results with Claude 3 Haiku. Joel Hron, Head of AI and Labs at Thomson Reuters, highlighted the company's aim to provide accurate, fast, and consistent user experiences by fine-tuning Claude around their industry expertise and specific requirements. Hron anticipates measurable improvements and faster speeds in AI results.

How to Get Started

Fine-tuning for Claude 3 Haiku in Amazon Bedrock is now available in preview in the US West (Oregon) AWS Region. Initially, text-based fine-tuning with context lengths up to 32K tokens is supported, with plans to introduce vision capabilities in the future. Additional details are available in the AWS launch blog and the documentation.
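Once access is granted, a customization job is typically created programmatically. The sketch below assembles the parameters for Bedrock's `create_model_customization_job` operation (available via the boto3 `bedrock` control-plane client); the role ARN, S3 URIs, base-model identifier placeholder, and hyperparameter names shown are illustrative assumptions, not confirmed values.

```python
def build_customization_job_params(job_name, model_name, role_arn,
                                   train_s3_uri, output_s3_uri):
    """Assemble keyword arguments for a Bedrock model-customization job.

    In practice these would be passed to the control-plane client, e.g.:
        bedrock = boto3.client("bedrock", region_name="us-west-2")
        bedrock.create_model_customization_job(**params)
    The base-model identifier and hyperparameter names below are
    illustrative assumptions; consult the Bedrock console for exact values.
    """
    return {
        "jobName": job_name,
        "customModelName": model_name,
        "roleArn": role_arn,
        # Placeholder; look up the exact Claude 3 Haiku model ID in Bedrock.
        "baseModelIdentifier": "anthropic.claude-3-haiku-<version>",
        "trainingDataConfig": {"s3Uri": train_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "hyperParameters": {"epochCount": "2"},  # assumed parameter name
    }

# Hypothetical account, role, and bucket names
params = build_customization_job_params(
    "haiku-moderation-ft",
    "my-haiku-moderator",
    "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    "s3://my-bucket/train.jsonl",
    "s3://my-bucket/output/",
)
```

The IAM role must grant Bedrock read access to the training data bucket and write access to the output bucket; job progress can then be monitored from the Amazon Bedrock console or API.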

To request access, contact your AWS account team or submit a support ticket in the AWS Management Console.

Image source: Shutterstock