Project Starline: Feel like you're there, together
- blogpost
Very impressive press release from Google that pushes the boundaries of telepresence.
Imagine looking through a sort of magic window, and through that window, you see another person, life-size and in three dimensions. You can talk naturally, gesture and make eye contact.
There's not much information about the technical side yet. The project has been under development for a few years, and some trial deployments are planned this year. From the blogpost: "To make this experience possible, we are applying research in computer vision, machine learning, spatial audio and real-time compression. We've also developed a breakthrough light field display system that creates a sense of volume and depth that can be experienced without the need for additional glasses or headsets."
#sdf #tools #implicit_geometry The author of the video, @ephtracy on Twitter, is sharing progress on their own 3D editor that works with signed distance fields. Even in Blender you can model hard surface objects with metaballs, which gives a similar…
Small release for win64
https://ephtracy.github.io/index.html?page=magicacsg
review: https://youtu.be/rgwNsNCpbhg?t=208
#sdf #tools
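If SDFs are new to you, the core trick is that every shape is just a function returning the signed distance from a point to its surface, and shapes are combined with simple per-point math. Below is a tiny NumPy sketch of that idea — purely illustrative, nothing to do with MagicaCSG's internals; the shapes, constants and function names are mine.

import numpy as np

def sd_sphere(p, center, radius):
    # Negative inside the sphere, positive outside, zero exactly on the surface.
    return np.linalg.norm(p - center, axis=-1) - radius

def sd_box(p, center, half_size):
    q = np.abs(p - center) - half_size
    return np.linalg.norm(np.maximum(q, 0.0), axis=-1) + np.minimum(q.max(axis=-1), 0.0)

def smooth_union(d1, d2, k=0.2):
    # Polynomial smooth-min: blends the two shapes, metaball-style.
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

# Evaluate the combined field on a grid; the zero level-set is the surface.
axis = np.linspace(-1.0, 1.0, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
field = smooth_union(
    sd_sphere(grid, np.array([0.25, 0.0, 0.0]), 0.4),
    sd_box(grid, np.array([-0.25, 0.0, 0.0]), np.array([0.3, 0.3, 0.3])),
)
print(field.shape, field.min(), field.max())

The smooth union is what gives the metaball-like blending mentioned above; running marching cubes on the zero level-set of the field would turn it back into a mesh.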
I was quite lazy with the channel, but I'm coming back! 👻✨
Look at this impressive idea — non-adversarial domain adaptation of a pretrained generator using a CLIP loss.
We've already seen image stylization driven by a CLIP loss and a text description, as well as CLIP-guided image generation within a fixed domain. But in this work the generator itself is modified to produce images from a mixed domain, and no data from the new domain is required (a rough sketch of the idea is below).
* github
* colab
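To make the idea concrete, here's a rough PyTorch sketch of a directional CLIP loss driving generator fine-tuning, as I understand the paper. This is not the authors' code: Generator is a toy stand-in for a real pretrained GAN, the source/target texts, learning rate and step count are arbitrary, and real code would also remap the generator output to CLIP's expected range and normalization.

import copy
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep CLIP in fp32 to match the generator's outputs

class Generator(torch.nn.Module):
    # Toy stand-in for a real pretrained generator (e.g. StyleGAN2),
    # only here so the sketch runs end to end.
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(512, 3 * 64 * 64), torch.nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

g_frozen = Generator().to(device).eval()   # stays in the source domain
g_train = copy.deepcopy(g_frozen).train()  # gets adapted toward the target text
opt = torch.optim.Adam(g_train.parameters(), lr=2e-3)

with torch.no_grad():
    src = clip_model.encode_text(clip.tokenize(["photo"]).to(device))
    tgt = clip_model.encode_text(clip.tokenize(["pencil sketch"]).to(device))
    text_dir = F.normalize(tgt - src, dim=-1)  # direction in CLIP text space

def embed(images):
    # CLIP expects 224x224 inputs; real code would also apply CLIP's mean/std normalization.
    return clip_model.encode_image(F.interpolate(images, size=(224, 224), mode="bilinear", align_corners=False))

for step in range(300):
    z = torch.randn(8, 512, device=device)
    img_dir = F.normalize(embed(g_train(z)) - embed(g_frozen(z)).detach(), dim=-1)
    # Directional loss: the shift between frozen and adapted images
    # should align with the shift between source and target texts.
    loss = (1.0 - F.cosine_similarity(img_dir, text_dir)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()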
OpenCV, together with Roboflow, has launched Modelplace, a marketplace for AI models.
I wonder why nobody has done this before. There's also the AWS Marketplace, but it doesn't look convenient for creators.
If I missed a similar marketplace, please let me know in the comments. ☺️
Modelplace doesn't look polished enough yet — you need to contact them to publish a model, and there aren't many models available. But maybe we'll get something like the Blender Market for ML creators at some point. The direction and proposed features of Modelplace look quite promising.
https://modelplace.ai/models
CLIPDraw
There's one well-known work on optimizing curves to match a classifier's predictions, and plenty of other works aim at imitating artists' strokes. Now this idea has been extended to optimizing against CLIP prompts. The style looks completely different, and it also depends on your prompt. For example, adding "watercolor painting" to your prompt seems to improve the final visual look (in my opinion). There's also an option to exclude some keywords if you see something you don't like in the final image 🐸.
I think it's quite fun to play with, and it's a good base for building your own SVG optimization tool (a rough sketch of such a loop is below). It's so nice when you can optimize something in low resolution and then just render it at higher resolution without any quality loss.
Here is the result for "Watercolor painting of sunflower in van Gogh style". I also like the finger and the signature. 🌻
- paper
- colab
- twitter
#art
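For the curious, here's roughly what such a loop looks like. This is not the paper's code: CLIPDraw optimizes real SVG curves through a differentiable vector renderer (diffvg), while this self-contained toy splats soft Gaussian strokes along Bézier curves instead; the prompt, stroke count and hyperparameters are arbitrary, and real code would also apply CLIP's input normalization and random augmentations.

import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()

with torch.no_grad():
    text = clip_model.encode_text(
        clip.tokenize(["Watercolor painting of a sunflower"]).to(device))

n_strokes, n_samples, res = 32, 8, 224
ctrl = torch.rand(n_strokes, 3, 2, device=device, requires_grad=True)   # quadratic Bezier control points in [0, 1]^2
color = torch.rand(n_strokes, 3, device=device, requires_grad=True)
opt = torch.optim.Adam([ctrl, color], lr=0.02)

ys, xs = torch.meshgrid(torch.linspace(0, 1, res, device=device),
                        torch.linspace(0, 1, res, device=device), indexing="ij")
canvas = torch.stack([xs, ys], dim=-1)                                   # (res, res, 2)

def render(ctrl, color, sigma=0.02):
    # Sample points along each Bezier curve and splat them as soft Gaussians.
    t = torch.linspace(0, 1, n_samples, device=device).view(1, -1, 1)
    p0, p1, p2 = ctrl[:, 0:1], ctrl[:, 1:2], ctrl[:, 2:3]
    pts = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2         # (n, s, 2)
    d2 = ((canvas[None, None] - pts[:, :, None, None]) ** 2).sum(-1)     # (n, s, res, res)
    w = torch.exp(-d2 / (2 * sigma ** 2)).amax(dim=1)                    # stroke coverage, (n, res, res)
    img = (w[:, None] * color[:, :, None, None]).sum(0) / (w.sum(0) + 1e-4)
    return img.clamp(0, 1).unsqueeze(0)                                  # (1, 3, res, res)

for step in range(200):
    img = render(ctrl, color)
    loss = (1.0 - F.cosine_similarity(clip_model.encode_image(img), text)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the optimized parameters are curves rather than pixels, the real thing can be re-rendered at any resolution afterwards — that's the "optimize in low resolution, render in high resolution" point above.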
#promo
For Russian speakers
———
A 3-day intensive on modular synthesis in VCV Rack — a free, open-source modular synthesizer emulator. We'll cover the basics of synthesis and learn how to use modulars to make music, from pop to noise.
WHEN:
August 23–29 (dates to be confirmed)
PROGRAM
• General principles of synthesis
• Types of synthesis
• Logic elements and interfaces
• Practice
FORMAT
Online + possibly offline in Moscow
TO PARTICIPATE
• You need — a computer with VCV Rack installed and minimal experience working with audio
• You don't need — a music school diploma or a black belt in Ableton
PRICE
1200 RUB. Payment via @selfoscillation_bot
All proceeds go to support the promo group @LESXXV, which ran into financial trouble after equipment was stolen at a festival
COURSE CHAT, INFO
https://www.tg-me.com/joinchat-TDfLE5JMTLgwYzgy
ABOUT THE LECTURER
@ferluht, member of vk.com/ed9m_8, AI researcher and modular synthesis devotee
Examples of tracks made entirely in VCV Rack:
https://open.spotify.com/album/0z2sluqa7HFYNipwYAugEX
https://youtu.be/Y2RphGohREE
https://vk.com/video-97759962_456239065
If you’re not an attendee but you want to be, Unity have teamed up with SIGGRAPH to offer free Basic-tier access to everyone. Use code UNITY21. More details:
https://unity.com/event/siggraph-2021
Finally, somebody merged NeRF and CLIP ☺️🔥
The author is one of the creators of Diet NeRF.
- original twitter thread
- some other attempts to shape point clouds with CLIP
Text to 3D implemented with CLIP + NeRF (an evolution of DietNeRF!)
Prompt: "a 3d render of a jenga tower in unreal engine"
Large Steps in Inverse Rendering of Geometry 😱😱😱
[EPFL, SIGGRAPH Asia 2021]
- twitter thread
- project page
- pdf
Inverse reconstruction of meshes from images just took a huge step forward.
No code yet, and the examples in the paper are still far from the kind of input images you'd encounter in practice, but it looks very cool nonetheless.
We propose a simple and practical alternative that casts differentiable rendering into the framework of preconditioned gradient descent. Our preconditioner biases gradient steps towards smooth solutions without requiring the final solution to be smooth. In contrast to Jacobi-style iteration, each gradient step propagates information among all variables, enabling convergence using fewer and larger steps.
Our method is not restricted to meshes and can also accelerate the reconstruction of other representations, where smooth solutions are generally expected. We demonstrate its superior performance in the context of geometric optimization and texture reconstruction.
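My reading of the abstract, as a toy sketch: precondition the vertex gradient with a Laplacian-smoothed linear system before stepping, so a single step moves whole smooth regions of the mesh instead of individual vertices. This is not the authors' code — their exact parameterization and preconditioner may differ — and the combinatorial Laplacian, lam and lr here are placeholders; the gradient would come from a differentiable renderer in a real inverse-rendering loop.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def combinatorial_laplacian(n_verts, faces):
    # L = D - A, built from the mesh's edge graph.
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2],
                        faces[:, 1], faces[:, 2], faces[:, 0]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0],
                        faces[:, 0], faces[:, 1], faces[:, 2]])
    A = sp.coo_matrix((np.ones_like(i, dtype=float), (i, j)),
                      shape=(n_verts, n_verts)).tocsr()
    A.data[:] = 1.0  # collapse duplicate edge entries to 0/1
    return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def preconditioned_step(verts, grad, L, lam=20.0, lr=0.05):
    # Plain gradient descent would do verts -= lr * grad. Instead, solve
    # (I + lam * L) d = grad so the step d is spatially smoothed over the
    # mesh, letting the optimizer move large regions at once.
    M = (sp.identity(L.shape[0]) + lam * L).tocsc()
    d = np.column_stack([spla.spsolve(M, grad[:, k]) for k in range(3)])
    return verts - lr * d

# Tiny usage example with a single triangle and a dummy gradient.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
L = combinatorial_laplacian(len(verts), faces)
verts = preconditioned_step(verts, grad=np.ones_like(verts), L=L)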
Forwarded from Gradient Dude
🔥StyleGAN3 by NVIDIA!
Do you remember the awesome smooth results by Alias-Free GAN I wrote about earlier? The authors have finally posted the code and now you can build your amazing projects on it.
I don't know about you, but my hands are already itching to try it out.
🛠 Source code
🌀 Project page
⚙️ Colab
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
[Nvidia]
- code
- project page
- pdf
- twitter
We demonstrate near-instant training of neural graphics primitives on a single GPU for multiple tasks. In gigapixel image we represent an image by a neural network. SDF learns a signed distance function in 3D space whose zero level-set represents a 2D surface. NeRF [Mildenhall et al. 2020] uses 2D images and their camera poses to reconstruct a volumetric radiance-and-density field that is visualized using ray marching. Lastly, neural volume learns a denoised radiance and density field directly from a volumetric path tracer. In all tasks, our encoding and its efficient implementation provide clear benefits: instant training, high quality, and simplicity. Our encoding is task-agnostic: we use the same implementation and hyperparameters across all tasks and only vary the hash table size which trades off quality and performance.
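Here's a toy PyTorch sketch of the multiresolution hash encoding as I read the paper — not NVIDIA's fused CUDA implementation. The table size, level count, growth factor and feature width are illustrative defaults; only the spatial-hash primes are taken from the paper.

import torch

class HashEncoding(torch.nn.Module):
    # Toy multiresolution hash encoding: each level hashes its grid-cell corners
    # into a small trainable table and trilinearly interpolates the stored
    # feature vectors; outputs of all levels are concatenated.
    def __init__(self, levels=8, table_size=2 ** 14, features=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.table_size = table_size
        self.res = [int(base_res * growth ** l) for l in range(levels)]
        self.tables = torch.nn.Parameter(
            torch.empty(levels, table_size, features).uniform_(-1e-4, 1e-4))
        # Spatial-hash primes from the paper, one per input dimension.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def hash(self, idx):  # idx: (N, 3) integer grid corners
        h = ((idx[..., 0] * self.primes[0])
             ^ (idx[..., 1] * self.primes[1])
             ^ (idx[..., 2] * self.primes[2]))
        return h % self.table_size

    def forward(self, x):  # x: (N, 3) positions in [0, 1]^3
        outs = []
        for level, res in enumerate(self.res):
            xl = x * res
            c0 = torch.floor(xl).long()
            f = xl - c0  # fractional position inside the cell
            feat = 0.0
            for corner in range(8):  # the 8 corners of the surrounding voxel
                offs = torch.tensor([(corner >> d) & 1 for d in range(3)], device=x.device)
                weight = torch.where(offs.bool(), f, 1.0 - f).prod(dim=-1, keepdim=True)
                feat = feat + weight * self.tables[level, self.hash(c0 + offs)]
            outs.append(feat)
        return torch.cat(outs, dim=-1)  # (N, levels * features)

enc = HashEncoding()
print(enc(torch.rand(1024, 3)).shape)  # torch.Size([1024, 16])

The concatenated features would then feed a very small MLP; the point of the paper is that the hash tables carry most of the capacity, which is what makes training near-instant.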
I haven't posted in a while, but now I have some news.
Ready Player Me is looking for a Senior Applied Research Engineer, with the option of becoming the R&D team lead (if you're willing to take on the responsibility and, ideally, have the experience). How do you know if this is for you? Check out this video (taken from here).
The position is remote within Europe. Message me at @fogside with any questions.
You can apply directly here.
Why this is a great opportunity
- You'll have a lot of freedom to be creative at the intersection of ML and computer graphics in a team of smart people, plus the chance to learn from and get feedback from super-talented 3D artists, riggers and VFX specialists who have worked on well-known games and films. In other words, it's ideal if you want to grow in this industry.
- You'll have the opportunity (and the obligation) to ship your work to production and see how users actually use it. This is not 100% pure academic research. It's bliss.
- You can submit your work to conferences, and the company will support you in it. We've presented work at SIGGRAPH Asia and CVPR.
- The opportunity and support to file patents.
- A remote-first company: nobody will force you into the office, and there are plenty of nice perks.
Even more context
I now have a much deeper understanding of how 3D content is created and how to build products for creative people.
I think I'll write about all of this in more detail later. But it has led me to the decision to leave the company and go build something of my own.
So I'm looking for someone who could be just as happy in this role and get to do what they love.