Introducing Sora, OpenAI's text-to-video model.

Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.
openai.com/sora
Announcing Stable Diffusion 3, our most capable text-to-image model, utilizing a diffusion transformer architecture for greatly improved performance in multi-subject prompts, image quality, and spelling abilities.

Today, we are opening the waitlist for early preview. This phase is crucial for gathering insights to improve its performance and safety ahead of open release.

You can sign up to join the waitlist and learn more here: bit.ly/3OR2qQF #stablediffusion3
BREAKING: The European Parliament has just APPROVED the AI Act. What everyone should know:

➵ The AI Act follows a risk-based approach. Some AI systems are banned, such as those involving:

- Cognitive behavioral manipulation of people or specific vulnerable groups;
- Social scoring: classifying people based on behavior, socioeconomic status, or personal characteristics;
- Biometric identification and categorization of people;
- Real-time and remote biometric identification systems, such as facial recognition.

➵ Some AI systems fall into the "high-risk" category, such as those involving:

- Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk;
- Educational or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, management of workers, and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
- Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum, and border control management (e.g. automated examination of visa applications);
- Administration of justice and democratic processes (e.g. AI solutions to search for court rulings).

➵ High-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.

➵ Generative AI, like ChatGPT, will not be classified as high-risk but will have to comply with transparency requirements and EU copyright law. Some of the obligations are:

- Disclosing that the content was generated by AI;
- Designing the model to prevent it from generating illegal content;
- Publishing summaries of copyrighted data used for training.

The AI Act is expected to officially become law by May or June, and its provisions will start taking effect in stages:

- 6 months later: countries will be required to ban prohibited AI systems;
- 1 year later: rules for general-purpose AI systems will start applying;
- 2 years later: the whole AI Act will be enforceable.

➵ Fines for non-compliance can be up to €35 million or 7% of worldwide annual turnover, whichever is higher.
Every LLM has a personality

GPT-4 is like a lazy, reticent teenager that refuses to answer questions and needs a lot of pushing to write code and do things

Claude 3 is like an eager and thoughtful adult with a bias for action

Gemini is a holier-than-thou wokist that strives too hard to be politically correct

Source: Twitter
Mind-blowing stuff from Claude 3: even its basic version is somehow close to GPT-4 🔥

And Claude 3 OPUS is surpassing everyone on all criteria 🔥😀

https://www.anthropic.com/news/claude-3-family