🌟 INTELLECT-1: Release of the first decentrally trained language model.

Prime Intellect has published INTELLECT-1 (Instruct + Base), the first 10-billion-parameter language model trained collaboratively over 50 days by 30 participants worldwide.

Prime Intellect used its own PRIME platform, designed to address the main problems of decentralized training: network unreliability and dynamic management of compute nodes.

The platform pooled 112 H100 GPUs across three continents and achieved 96% compute utilization under optimal conditions.

The training corpus comprised 1 trillion tokens from public datasets, distributed as follows: 55% fineweb-edu, 10% fineweb, 20% Stack V1, 10% dclm-baseline, 5% open-web-math.
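
As a quick sanity check, the shares sum to 100% and map to absolute token budgets like this (a minimal sketch; dataset names as listed above):

# Token budget per dataset, assuming exactly 1T training tokens
# and the shares quoted above.
TOTAL_TOKENS = 1_000_000_000_000  # 1 trillion

mix = {
    "fineweb-edu": 0.55,
    "fineweb": 0.10,
    "Stack V1": 0.20,
    "dclm-baseline": 0.10,
    "open-web-math": 0.05,
}

assert abs(sum(mix.values()) - 1.0) < 1e-9  # shares must sum to 100%
for name, share in mix.items():
    print(f"{name:>14}: {share * TOTAL_TOKENS / 1e9:,.0f}B tokens")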

▶️ Technical specifications:

🟢 Parameters: 10B;
🟢 Layers: 42;
🟢 Attention Heads: 32;
🟢 Hidden Size: 4096;
🟢 Context Length: 8192;
🟢 Vocabulary Size: 128256.
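
These dimensions are consistent with roughly 10B parameters under a generic dense-transformer estimate (a rough approximation; the exact feed-forward width and attention layout of INTELLECT-1 are assumptions here):

# Rough parameter count from the listed dimensions.
# Assumes a generic dense transformer (~12 * hidden^2 params per layer
# for attention + MLP) and untied input/output embeddings; the actual
# INTELLECT-1 architecture may differ in detail.
layers, hidden, vocab = 42, 4096, 128256

block_params = 12 * hidden ** 2 * layers  # ~8.46B in transformer blocks
embed_params = 2 * vocab * hidden         # ~1.05B embeddings + LM head

print(f"~{(block_params + embed_params) / 1e9:.1f}B parameters")  # ~9.5B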

INTELLECT-1 achieved 37.5% accuracy on MMLU and 72.26% on HellaSwag, and outperformed several other open-source models on WinoGrande with a score of 65.82%.

While these figures lag behind today's leading models, the experiment is a critical step toward democratizing AI development and preventing the consolidation of AI capabilities within a few organizations.

▶️ GGUF quantizations of INTELLECT-1_Instruct, from 3-bit (5.46 GB) to 8-bit (10.9 GB), are available from the LM Studio community.
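
To run one of these GGUF files locally, a minimal sketch with the llama-cpp-python bindings could look like this (the file name below is a hypothetical example; use the quantization you actually downloaded):

# Local inference with a GGUF quantization via llama-cpp-python.
# The model_path is a hypothetical example file name.
from llama_cpp import Llama

llm = Llama(model_path="INTELLECT-1-Instruct-Q8_0.gguf", n_ctx=8192)
out = llm("What is decentralized training?", max_tokens=128)
print(out["choices"][0]["text"])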

▶️ Example of inference with Transformers:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Create new tensors (and the model weights) on the GPU by default.
torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("PrimeIntellect/INTELLECT-1")
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1")

input_text = "%prompt%"  # replace with your prompt
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# max_length counts prompt + generated tokens; raise it for longer outputs.
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output_text)
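
For the Instruct checkpoint, chat-style prompts presumably go through the tokenizer's chat template; a hedged sketch (the repo id and template availability are assumptions, check the model card):

# Chat-style prompting for the Instruct variant. Assumes the checkpoint
# ships a chat template and the repo id below; verify on the model card.
model_id = "PrimeIntellect/INTELLECT-1-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Explain decentralized training."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))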


📌 License: Apache 2.0.


🟡 Article
🟡 Model collection on HF
🟡 GGUF versions
🟡 Technical report
🟡 Demo
🖥 GitHub

@Machine_learn