Forwarded from Machinelearning
🔥 Introducing Würstchen: Fast Diffusion for Image Generation

Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images.

Why does this matter? Compressing the data cuts the computational cost of both training and inference by orders of magnitude.

Training on 1024×1024 images is far more expensive than on 32×32. Other models typically use comparatively mild compression, in the range of 4x to 8x spatial compression.

The new architecture achieves a 42x spatial compression!

🤗 HF: https://huggingface.co/blog/wuertschen

📝 Paper: https://arxiv.org/abs/2306.00637

📕 Docs: https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen

🚀 Demo: https://huggingface.co/spaces/warp-ai/Wuerstchen
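
A minimal sketch of trying Würstchen through the 🤗 diffusers pipeline from the docs above (assuming the warp-ai/wuerstchen checkpoint id; parameters are illustrative):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumed Hub checkpoint id; see the diffusers docs linked above.
pipe = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")

# Despite producing 1024x1024 pixels, diffusion runs in a ~42x spatially
# compressed latent space, which is what keeps inference cheap.
image = pipe(
    "an astronaut riding a horse, photorealistic",
    height=1024,
    width=1024,
).images[0]
image.save("wuerstchen.png")
```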

@ai_machinelearning_big_data
Well, AI can learn that humans might be deceiving it.

Upd: as our readers noted, the post was originally written by Denis here.
But then Yudkowsky retweeted it and it spread on X.
LLMs are still in their childhood years

Source.
Forwarded from ilia.eth | ØxPlasma
Position: Analyst/Researcher for AI Team at Cyber.fund

About Cyber.fund:
Cyber.fund is a pioneering $100mm research-driven fund specializing in the realm of web3, decentralized AI, autonomous agents, and self-sovereign identity. Our legacy is built upon being the architects behind monumental projects such as Lido, p2p.org, =nil; foundation, Neutron, NEON, and early investments in groundbreaking technologies like Solana, Ethereum, and EigenLayer, among 150+ others. We are committed to advancing the frontiers of Fully Homomorphic Encryption (FHE) for Machine Learning, privacy-first ML (Large Language Models), and AI aggregation and routing platforms, alongside decentralized AI solutions.

Who Are We Looking For?
A dynamic individual who straddles the worlds of business acumen and academic rigor with:
- A robust theoretical foundation in Computer Science and a must-have specialization in Machine Learning.
- An educational background from a technical university, with a preference for PhD holders from prestigious institutions like MIT or MIPT.
- A track record of publications in the Machine Learning domain, ideally at the level of NeurIPS.
- Experience working in startups or major tech companies, ideally coupled with a background in angel investing.
- A profound understanding of algorithms, techniques, and models in ML, with an exceptional ability to translate these into innovative products.
- Fluent English, intellectual curiosity, and a fervent passion for keeping abreast of the latest developments in AI/ML.

Responsibilities:
1) Investment Due Diligence: Conduct technical, product, and business analysis of potential AI/ML investments. This includes market analysis, engaging with founders and technical teams, and evaluating the scalability, reliability, risks, and limitations of products.

2) Portcos Support: Provide strategic and technical support to portfolio companies in AI/ML. Assist in crafting technological strategies, hiring, industry networking, identifying potential project challenges, and devising solutions.

3) Market and Technology Research: Stay at the forefront of ML/DL/AI trends (e.g., synthetic data, flash attention, 1bit LLM, FHE for ML, JEPA, etc.). Write publications, whitepapers, and potentially host X spaces/streams/podcasts on these subjects (in English). Identify promising companies and projects for investment opportunities.

How to Apply?
If you find yourself aligning with our requirements and are excited by the opportunity to contribute to our vision, please send your CV to [email protected]. Including a cover letter, links to publications, open-source contributions, and other achievements will be advantageous.

Location:
Location is flexible, but the candidate should be within the time zones ranging from EET to EST (Eastern Europe to the East Coast of the USA).

This is not just a job opportunity; it's a call to be part of a visionary journey reshaping the landscape of AI and decentralized technology. Join us at Cyber.fund and be at the forefront of the technological revolution.
Let’s get back to posting 😌
Forwarded from Machinelearning
⚡️ Awesome CVPR 2024 Papers, Workshops, Challenges, and Tutorials!

The 2024 Conference on Computer Vision and Pattern Recognition (CVPR) received 11,532 paper submissions, of which only 2,719 were accepted, an acceptance rate of about 23.6%.

Below is a list of the best talks, guides, papers, workshops, and datasets from CVPR 2024.

Github

@ai_machinelearning_big_data
⚡️ PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models

PiSSA significantly improves fine-tuning performance by simply changing the initialization of LoRA's A and B matrices from Gaussian/zero to the principal singular components of the pretrained weight matrix.

On GSM8K, Mistral-7B fine-tuned with PiSSA achieves an accuracy of 72.86%, outperforming LoRA's 67.7% by 5.16 percentage points.
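
A minimal sketch of the initialization idea (not the authors' code; names are illustrative):

```python
import torch

def pissa_init(W: torch.Tensor, r: int):
    """Split a frozen weight W into trainable low-rank factors initialized
    from its top-r singular components, plus a frozen residual (instead of
    LoRA's Gaussian A / zero B init)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r].sqrt()                 # (out_dim, r)
    B = S[:r].sqrt().unsqueeze(1) * Vh[:r, :]   # (r, in_dim)
    W_res = W - A @ B                           # frozen residual keeps the rest
    return A, B, W_res                          # forward: x @ (W_res + A @ B).T
```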

Github: https://github.com/GraphPKU/PiSSA
Paper: https://arxiv.org/abs/2404.02948

@opendatascience
🥔 YaART: Yet Another ART Rendering Technology

💚 This study introduces YaART, a novel production-grade text-to-image cascaded diffusion model aligned to human preferences using Reinforcement Learning from Human Feedback (RLHF).

💜 During the development of YaART, Yandex especially focused on the choice of model and training dataset sizes, aspects that had not been systematically investigated for text-to-image cascaded diffusion models before.

💖 In particular, researchers comprehensively analyze how these choices affect both the efficiency of the training process and the quality of the generated images, which are highly important in practice.

Paper page - https://ya.ru/ai/art/paper-yaart-v1
Arxiv - https://arxiv.org/abs/2404.05666
Habr - https://habr.com/ru/companies/yandex/articles/805745/

@opendatascience
🔥 ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback

Proposes an approach that improves controllable generation by explicitly optimizing pixel-level cycle consistency between the input condition and the condition re-extracted from the generated image.
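
A rough sketch of that cycle-consistency idea under assumed, illustrative components (a conditional generator and a discriminative condition extractor, e.g. a segmentation network):

```python
import torch.nn.functional as F

def cycle_consistency_loss(condition, prompt, generator, extractor):
    """Illustrative only: generate an image from the input condition, re-extract
    the condition from the generated image, and penalize the pixel-level mismatch."""
    image = generator(prompt, condition)         # conditional diffusion sample
    condition_hat = extractor(image)             # e.g. a segmentation model
    return F.mse_loss(condition_hat, condition)  # pixel-level cycle consistency
```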

proj: https://liming-ai.github.io/ControlNet_Plus_Plus/
abs: https://arxiv.org/abs/2404.07987

@opendatascience
⚡️ Map-relative Pose Regression 🔥 (#CVPR2024 highlight)

For years, absolute pose regression (APR) did not work well. There was some success by massively synthesizing scene-specific data. We train a scene-agnostic APR model, and it works.

Paper: https://arxiv.org/abs/2404.09884
Page: https://nianticlabs.github.io/marepo


@opendatascience
Forwarded from Machinelearning
👑Llama 3 is here, with a brand new tokenizer! 🦙

Llama 3 is out.

Meta has released the new SOTA Llama 3 in two versions: 8B and 70B parameters.

8K context length, support for 30 languages.

HF: https://huggingface.co/spaces/ysharma/Chat_with_Meta_llama3_8b
Blog: https://ai.meta.com/blog/meta-llama-3/

You can try 🦙 Meta Llama 3 70B and 🦙 Meta Llama 3 8B via a 🔥 free interface: https://llama3.replicate.dev/
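
You can also run the 8B model locally; a minimal sketch with 🤗 transformers (assuming the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint and Hub access):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed Hub id; gated, requires access
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Say hello in three languages."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```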

@ai_machinelearning_big_data
Discover, download, and run local LLMs

LM Studio lets you run an #LLM of your choice locally.

Link: https://lmstudio.ai/
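
LM Studio can also expose a local OpenAI-compatible server (by default on port 1234); a minimal sketch of querying whatever model you have loaded in the app (the model name and port here are assumptions):

```python
from openai import OpenAI

# Point the OpenAI client at LM Studio's local server (default http://localhost:1234/v1).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves the model loaded in the app
    messages=[{"role": "user", "content": "Explain what a local LLM is in one sentence."}],
)
print(response.choices[0].message.content)
```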