Getting started with the Llama 3 language model by Meta sounds exciting! Meta introduced the Llama 3 series last week, a significant advancement in language models available in 8-billion- and 70-billion-parameter configurations. The social media giant also plans to release an even larger 400-billion-parameter model in the future.
This release represents one of the most substantial developments of the year, with Meta simultaneously launching a series of open-source AI models, products, and research initiatives. This coordinated rollout underscores Meta’s commitment to leading the field in technological innovation.
Model Training and Data:
Meta will soon publish a paper detailing breakthroughs from the Llama 3 models. Notably, these models use a Tiktoken-based tokenizer with a 128K-token vocabulary and Grouped Query Attention (GQA), boosting efficiency and performance. Both the 8B and 70B models continued to improve beyond expected limits even after training on 15 trillion tokens.
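If you want to sanity-check the new tokenizer yourself, here is a minimal sketch (assuming you have accepted the Llama 3 license for the gated meta-llama repo on the Hugging Face Hub) that loads it with transformers and prints its vocabulary size:

from transformers import AutoTokenizer

# Assumes access to the gated meta-llama repo on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# The Tiktoken-based vocabulary holds roughly 128K entries
print(len(tokenizer))

# Inspect how a short string is split into token IDs
print(tokenizer("Getting started with Llama 3")["input_ids"])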
Performance Benchmarks:
Meta’s commitment to open-source innovation is underscored by the Llama 3 series. The forthcoming 400B model, whose early checkpoints already score around 85 on MMLU, together with upcoming features such as multimodality and longer context windows, promises to reshape the open-source arena.
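To try Llama 3 yourself, the snippet below runs the 8B Instruct model through the Hugging Face transformers text-generation pipeline. It assumes you have accepted Meta’s license on the Hub and have a CUDA GPU with enough memory to hold the model in bfloat16 (roughly 16 GB).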
import transformers
import torch

# Model ID on the Hugging Face Hub (requires accepting Meta's license)
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Build a text-generation pipeline in bfloat16 on the GPU
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# Chat-style messages: a system prompt plus a user turn
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the messages into a single prompt string using Llama 3's chat template
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Llama 3 Instruct marks the end of a turn with <|eot_id|>, so stop on either token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Generate up to 256 new tokens with nucleus sampling
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Print only the newly generated text (strip the prompt prefix)
print(outputs[0]["generated_text"][len(prompt):])
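Running this should produce a pirate-flavored reply along these lines: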
"Arrrr, me hearty! Me name be Captain Chat, the scurviest pirate chatbot to ever sail the Seven Seas! Me be here to swab the decks o' yer mind with me trusty responses, savvy? I be ready to hoist the Jolly Roger and set sail fer a swashbucklin' good time, matey! So, what be bringin' ye to these fair waters?"
Llama 3 as a Pirate Chatbot
You can also play around with the Hugging Face demo.
You can also get started with Meta’s latest AI assistant, which incorporates Llama 3. Here are some of its features:
Meta AI’s integration into Facebook enhances how you interact with your feed. If you stumble upon something intriguing, like a post about the northern lights, you can immediately ask Meta AI for additional details, such as the best times to witness the aurora borealis, directly within your feed.