
Llama 3.1 8B

Fast and efficient Llama model on Groq

Context Window: 128K tokens
Max Output: 8K tokens
Input Price: $0.05 / 1M tokens
Output Price: $0.08 / 1M tokens
CAPABILITIES

What This Model Can Do

Chat Completions: Multi-turn conversations with context
Vision: Analyze and understand images
Function Calling: Call external functions and APIs
JSON Mode: Guaranteed valid JSON output
Streaming: Real-time response streaming
System Prompt: Custom system instructions
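A sketch of how function calling typically fits together, assuming this API follows the common OpenAI-style `tools` schema (the `get_weather` tool and its local implementation are made up for illustration):

```typescript
// Hypothetical tool definition in the OpenAI-style `tools` format,
// passed alongside `messages` in the chat completion request.
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
    },
  },
];

// Shape of a tool call as it usually appears in
// response.choices[0].message.tool_calls.
type ToolCall = { function: { name: string; arguments: string } };

// Local implementations that the model's tool calls dispatch to.
const implementations: Record<string, (args: any) => string> = {
  get_weather: ({ city }) => `Sunny in ${city}`,
};

// Execute one tool call: look up the function by name and pass it
// the model's JSON-encoded arguments.
function runToolCall(call: ToolCall): string {
  const fn = implementations[call.function.name];
  return fn(JSON.parse(call.function.arguments));
}
```

The result of `runToolCall` would then be sent back to the model in a follow-up `tool` role message so it can compose its final answer.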
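JSON Mode is usually enabled with an OpenAI-style `response_format` field; the request shape below is an assumption about this API, not confirmed by the docs above:

```typescript
// Illustrative request body: the system prompt describes the desired
// shape, and `response_format` asks for a valid JSON object back.
const jsonRequest = {
  model: 'llama-3.1-8b-instant',
  messages: [
    { role: 'system', content: 'Reply with a JSON object: {"answer": string}.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
  response_format: { type: 'json_object' },
};

// With JSON mode on, the returned message content should always parse.
function parseAnswer(content: string): { answer: string } {
  return JSON.parse(content);
}
```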

CODE

Quick Start

import { Transactional } from '@usetransactional/node';

const client = new Transactional(process.env.TRANSACTIONAL_API_KEY);

const response = await client.ai.chat.completions.create({
  model: 'llama-3.1-8b-instant',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in simple terms.' }
  ],
  temperature: 0.7,
  max_tokens: 1024,
});

console.log(response.choices[0].message.content);
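For streaming, chat-completions APIs of this style typically accept `stream: true` and yield chunks with `choices[0].delta.content`. This is a sketch under that assumption; the fake generator stands in for the network stream so the consuming loop is clear:

```typescript
// Shape of a streamed chunk in the common OpenAI-compatible format.
type Chunk = { choices: { delta: { content?: string } }[] };

// Stand-in for the real stream, e.g.:
// const stream = await client.ai.chat.completions.create({ ...request, stream: true });
async function* fakeStream(): AsyncGenerator<Chunk> {
  for (const piece of ['Hello', ', ', 'world']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Accumulate the streamed deltas into the full response text.
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

In real use you would print each delta as it arrives instead of collecting, which is what gives streaming its responsive feel.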

Ready to Use Llama 3.1 8B?

Get started with a free account. Pay only for what you use.