🚀 Salesforce has just released a new family of code embedding models (SFR-Embedding-Code), taking 1st place on the CoIR benchmark!

The model is available in two sizes: 2B and 400M.

Key features:
1️⃣ 2B model: ranks first on CoIR.
2️⃣ 400M model: best results among models with around 0.5B parameters (a minimal load sketch follows this list).
3️⃣ Supports 12 programming languages, including Python, Java, C++, JavaScript, C#, and more!
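Both checkpoints share the same interface. A minimal load sketch for the smaller checkpoint, assuming it is published on Hugging Face as Salesforce/SFR-Embedding-Code-400M_R (the full example below uses the Salesforce/SFR-Embedding-Code-2B_R id from the original post):

from transformers import AutoModel

# Assumption: the 400M checkpoint lives at 'Salesforce/SFR-Embedding-Code-400M_R'.
# trust_remote_code=True is needed because encode_queries / encode_corpus come from custom model code.
model_400m = AutoModel.from_pretrained('Salesforce/SFR-Embedding-Code-400M_R', trust_remote_code=True)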

Usage example:

import torch.nn.functional as F
from transformers import AutoModel

# Each query needs to be accompanied by a corresponding instruction describing the task.
query_instruction_example = "Given Code or Text, retrieval relevant content"
queries = [
    "how to implement quick sort in Python?"
]

# No instruction is needed for the retrieval passages.
passages = [
    "def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)",
    "def bubble_sort(arr):\n n = len(arr)\n for i in range(n):\n for j in range(0, n-i-1):\n if arr[j] > arr[j+1]:\n arr[j], arr[j+1] = arr[j+1], arr[j]\n return arr"
]

# Load the model; trust_remote_code=True enables the custom encode_queries / encode_corpus methods.
model = AutoModel.from_pretrained('Salesforce/SFR-Embedding-Code-2B_R', trust_remote_code=True)

# Get the embeddings (max_length follows the original example).
max_length = 32768
query_embeddings = model.encode_queries(queries, instruction=query_instruction_example, max_length=max_length)
passage_embeddings = model.encode_corpus(passages, max_length=max_length)

# L2-normalize the embeddings so the dot product below is a cosine similarity.
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1)

# Similarity scores: cosine similarity scaled by 100, one row per query and one column per passage.
scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
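The scores are cosine similarities scaled by 100, so for this query the quick-sort passage should score noticeably higher than the bubble-sort one. Continuing the snippet above, a small illustrative follow-up (not part of the model API) ranks the passages per query:

import torch

# Pick the best-matching passage for each query (illustrative only; reuses `scores` and `queries` from above).
best_idx = torch.argmax(scores, dim=1)
for i, query in enumerate(queries):
    j = best_idx[i].item()
    print(f"best match for {query!r}: passage #{j} (score {scores[i, j].item():.1f})")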



Documentation
400M model
2B model


📌 Model license: CC-BY-NC-SA-4.0.

@ai_machinelearning_big_data


#CodeAI #MLResearch #SOTA #OpenScience #code #llm #ml