Manuel de la Peña
Community User · Docker, Inc. · Spain

Displaying 1 to 25 of 50 repositories
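
These repositories appear to hold packaged model images. Assuming they are Docker Model Runner artifacts (an assumption; the page does not say), an entry can be pulled with the Docker CLI, for example "docker model pull mdelapenya/llama3.2", and then queried through Model Runner's OpenAI-compatible API; two Python sketches of that API follow the llama3.2 and nomic-embed-text entries below.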

mdelapenya/deepseek-coder · 103 downloads · 0 stars · updated a month ago
DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.

mdelapenya/moondream · 54 downloads · 0 stars · updated 2 months ago
moondream2 is a small vision language model designed to run efficiently on edge devices.

mdelapenya/qwen2 · 47 downloads · 0 stars · updated 2 months ago
Qwen2 is a new series of large language models from Alibaba Group.

mdelapenya/llama3.2 · 173 downloads · 0 stars · updated 2 months ago
Meta's Llama 3.2 goes small with 1B and 3B models.

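A chat-capable entry like this one would be exercised over a chat-completions API once served locally. A minimal sketch, assuming Docker Model Runner is serving the model with host TCP access enabled on its default port; the URL, port, and model tag are assumptions, not details from this page:

    import json
    import urllib.request

    # Everything named here is an assumption for illustration: the URL follows
    # Docker Model Runner's OpenAI-compatible endpoint convention with host TCP
    # access on its default port, and the model tag mirrors the repository name.
    URL = "http://localhost:12434/engines/v1/chat/completions"
    payload = {
        "model": "mdelapenya/llama3.2",
        "messages": [{"role": "user", "content": "What do 1B models trade away?"}],
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
    # The response follows the OpenAI chat-completions shape.
    print(reply["choices"][0]["message"]["content"])
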
mdelapenya/all-minilm · 50 downloads · 0 stars · updated 2 months ago
Embedding models trained on very large sentence-level datasets.

mdelapenya/gemma2 · 11 downloads · 0 stars · updated 3 months ago
Google Gemma 2 is a high-performing and efficient model available in three sizes: 2B, 9B, and 27B.

mdelapenya/gemma · 9 downloads · 0 stars · updated 3 months ago

mdelapenya/bge-m3 · 18 downloads · 0 stars · updated 4 months ago
BGE-M3 is a new model from BAAI distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.

mdelapenya/llava · 12 downloads · 0 stars · updated 4 months ago
LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding.

mdelapenya/codegemma · 13 downloads · 0 stars · updated 4 months ago
CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks.

mdelapenya/phi3.5 · 8 downloads · 0 stars · updated 4 months ago
A lightweight AI model with 3.8 billion parameters, with performance overtaking similarly and larger sized models.

mdelapenya/starcoder2 · 8 downloads · 0 stars · updated 4 months ago
StarCoder2 is the next generation of transparently trained open code LLMs that comes in three sizes: 3B, 7B, and 15B.

mdelapenya/smollm · 10 downloads · 0 stars · updated 4 months ago
A family of small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset.

mdelapenya/snowflake-arctic-embed · 8 downloads · 0 stars · updated 4 months ago
A suite of text embedding models by Snowflake, optimized for performance.

mdelapenya/llama3.1 · 39 downloads · 0 stars · updated 4 months ago
Llama 3.1 is a new state-of-the-art model from Meta, available in 8B, 70B, and 405B parameter sizes.

mdelapenya/bespoke-minicheck · 6 downloads · 0 stars · updated 4 months ago
A SOTA fact-checking model developed by Bespoke Labs.

mdelapenya/mistral · 7 downloads · 0 stars · updated 4 months ago
The 7B model released by Mistral AI, updated to version 0.3.

mdelapenya/llava-phi3 · 7 downloads · 0 stars · updated 4 months ago
A new small LLaVA model fine-tuned from Phi 3 Mini.

mdelapenya/nemotron-mini · 7 downloads · 0 stars · updated 4 months ago
A commercial-friendly small language model by NVIDIA, optimized for roleplay, RAG QA, and function calling.

mdelapenya/qwen2.5 · 9 downloads · 1 star · updated 4 months ago
Qwen2.5 models are pretrained on Alibaba's latest large-scale dataset, encompassing up to 18 trillion tokens.

mdelapenya/qwen2.5-coder · 6 downloads · 0 stars · updated 4 months ago
The latest series of Code-Specific Qwen models, with significant improvements in code generation, code reasoning, and code fixing.

mdelapenya/nomic-embed-text · 24 downloads · 0 stars · updated 4 months ago
A high-performing open embedding model with a large token context window.

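Embedding entries such as this one, all-minilm, snowflake-arctic-embed, and bge-m3 above would be queried through an embeddings endpoint rather than chat. A minimal sketch against the same assumed OpenAI-compatible server as the earlier example; the path, port, and model tag are again assumptions:

    import json
    import urllib.request

    # Same assumed OpenAI-compatible server as in the earlier sketch; the
    # /embeddings path and response shape follow the OpenAI convention, and
    # the model tag is illustrative.
    URL = "http://localhost:12434/engines/v1/embeddings"
    payload = {
        "model": "mdelapenya/nomic-embed-text",
        "input": "Docker Hub can host model artifacts as OCI images.",
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        embedding = json.load(response)["data"][0]["embedding"]
    print(len(embedding))  # dimensionality of the returned sentence vector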