🚀 Salesforce has just released SFR-Embedding-Code, a new family of code embedding models that takes 1st place on the CoIR benchmark!

The model is available in two sizes: 2B and 400M.

Key features:
1️⃣ 2B model: ranks first on CoIR.
2️⃣ 400M model: the best results among models of around 0.5B parameters (switching to it is a one-line change, as shown right after this list).
3️⃣ Supports 12 programming languages: Python, Java, C++, JavaScript, C#, and more!
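
Both checkpoints expose the same encode_queries/encode_corpus interface, so swapping sizes only changes the repo ID passed to from_pretrained. A minimal sketch, assuming the 400M checkpoint follows the same Hugging Face naming convention as the 2B one:

from transformers import AutoModel

# same call as for the 2B model, only the repo ID differs (assumed name)
model_400m = AutoModel.from_pretrained('Salesforce/SFR-Embedding-Code-400M_R', trust_remote_code=True)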

Usage example:

import torch.nn.functional as F
from transformers import AutoModel

# Each query needs to be accompanied by a corresponding instruction describing the task.
query_instruction_example = "Given Code or Text, retrieval relevant content"
queries = [
"how to implement quick sort in Python?"
]

# No instruction needed for retrieval passages
passages = [
"def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)",
"def bubble_sort(arr):\n n = len(arr)\n for i in range(n):\n for j in range(0, n-i-1):\n if arr[j] > arr[j+1]:\n arr[j], arr[j+1] = arr[j+1], arr[j]\n return arr"
]

# load the model; trust_remote_code enables the repo's custom encode_queries/encode_corpus helpers
model = AutoModel.from_pretrained('Salesforce/SFR-Embedding-Code-2B_R', trust_remote_code=True)

# get the embeddings
max_length = 32768
query_embeddings = model.encode_queries(queries, instruction=query_instruction_example, max_length=max_length)
passage_embeddings = model.encode_corpus(passages, max_length=max_length)

# normalize embeddings
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1)

scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
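
To turn these similarity scores into an actual retrieval result, take the highest-scoring passage for each query. A minimal follow-up sketch, reusing the scores, queries, and passages defined above:

# pick the best-matching passage per query (row-wise argmax over the score matrix)
best = scores.argmax(dim=1)
for query, idx in zip(queries, best.tolist()):
    print(f"Top match for {query!r}:")
    print(passages[idx])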



Documentation
400M model
2B model


📌 Model licensing: CC-BY-NC-SA-4.0.

@ai_machinelearning_big_data


#CodeAI #MLResearch #SOTA #OpenScience #code #llm #ml


