livellosegreto.it is one of the many independent Mastodon servers you can use to participate in the fediverse.
Livello Segreto is the ethical social network that respects you and your time.

Server stats: 1.3K active users

#NeuralNetworks

9 posts · 8 participants · 0 posts today
Calin Sandu
In an increasingly connected world, the Internet of Things plays a pivotal role in automating and enhancing various aspects of daily life and industrial operations. However, the complexity and sheer volume of data generated by IoT devices pose a significant challenge for ensuring security and operational efficiency. One crucial solution lies in anomaly detection using neural networks, which can identify unusual…
https://ml-nn.eu/a1/83.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
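To make the idea in the post above concrete, here is a rough sketch of reconstruction-based anomaly detection for IoT sensor readings. It is not code from the linked article; it assumes PyTorch is available, and the data, model size, and threshold rule are invented for illustration.

```python
# A minimal sketch: an autoencoder trained only on "normal" readings, so
# out-of-distribution readings reconstruct poorly and can be flagged.
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(1000, 8)                 # simulated normal sensor data (8 features)

class AutoEncoder(nn.Module):
    def __init__(self, n_features=8, n_hidden=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                          # train on normal traffic only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    errors = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = errors.mean() + 3 * errors.std()   # simple error threshold
    anomaly = torch.randn(1, 8) * 5                # an out-of-distribution reading
    score = ((model(anomaly) - anomaly) ** 2).mean()
    print("threshold:", threshold.item(), "anomaly score:", score.item())
```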
The-14
To understand the future of AI, take a look at the failings of Google Translate
#Tech #AI #MachineLearning #GoogleTranslate #LLMs #ArtificialIntelligence #DeepSeek #ChatGPT #Transformers #MachineTranslation #NeuralNetworks
https://the-14.com/to-understand-the-future-of-ai-take-a-look-at-the-failings-of-google-translate/
Calin Sandu
Check out our Code Store:
https://ml-nn.eu/store.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
Leanpub
The Hundred-Page Language Models Book by Andriy Burkov is on sale on Leanpub! Its suggested price is $50.00; get it for $20.00 with this coupon: https://leanpub.com/sh/kUMoPps8
#Ai #Gpt #NeuralNetworks #DeepLearning #DataScience #ComputerScience #Linguistics
Calin Sandu
An attention mechanism is a key component in artificial neural networks, particularly in sequence modeling and natural language processing. It enables models to focus on specific parts of input data (such as words in a sentence) when making predictions or generating output.
In a nutshell, instead of the model treating all parts of the input sequence equally, the attention mechanism allows it to assign different…
https://ml-nn.eu/a1/39.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
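As a minimal illustration of the mechanism described above (not code from the linked article; the token count and dimensions are invented), scaled dot-product self-attention can be written in a few lines of NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Weight each value by how well its key matches the query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # similarity of each query to each key
    weights = softmax(scores, axis=-1)     # attention distribution over the inputs
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # e.g. 4 tokens in a sentence
x = rng.normal(size=(seq_len, d_model))
out, w = attention(x, x, x)                # self-attention: Q = K = V = x
print(w.round(2))                          # each row is a distribution that sums to 1
```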
Calin Sandu
Markov Decision Processes (MDPs)
Markov Decision Processes (MDPs) are mathematical frameworks used to model decision-making in environments where outcomes are partly random and partly under the control of a decision maker. They are widely used in various fields, including artificial intelligence, robotics, economics, and operations research, to optimize decisions over time.
https://ml-nn.eu/a1/49.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
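For a concrete feel of how an MDP is solved, here is a hedged sketch of value iteration on a toy problem; the states, transition probabilities, and rewards are invented for illustration and are not taken from the linked article.

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

# P[a][s, s'] = probability of moving from s to s' under action a; R[s, a] = reward.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 1.0], [0.0, 2.0], [5.0, 0.0]])

V = np.zeros(n_states)
for _ in range(500):
    # Bellman optimality update: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal state values:", V.round(3))
print("greedy policy:", Q.argmax(axis=1))
```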
Calin Sandu
Top projects:
1. Build your own Mini LLM
https://lnkd.in/dVQWkd4R
2. Traffic Management
https://lnkd.in/dDPrvjC2
3. Fake News Classification
https://lnkd.in/dfpqxXB4
#MachineLearning #NeuralNetworks #Python #AI
Gert :debian: :gnu: :linux:
In case you still haven't managed to implement your neural network for winning the Lotto, this might help 🙂
#neuralnetworks
https://direct.mit.edu/books/oa-monograph/5608/Gradient-ExpectationsStructure-Origins-and
Leanpub
Your First Machine Learning Book: A Gentle Introduction to the Science Behind Modern AI https://leanpub.com/yourfirstmachinelearningbook by Peng Shao is the featured book on the Leanpub homepage! https://leanpub.com
#Ai #MachineLearning #DataScience #DeepLearning #NeuralNetworks #books #ebooks
Ever wonder how search engines seem to read your mind, or why your social media feed feels like it knows you?
Find it on Leanpub!
InterNews112
### Modern text-based neural networks: from theory to practice
The start of 2025 was marked by rapid development of artificial intelligence technologies, with large language models playing the key role. The Chinese company DeepSeek challenged the market leaders by releasing a free, open-source chatbot, which triggered a 10% drop in NVIDIA shares and forced Silicon Valley to rethink its strategies[1]. This breakthrough shows how new approaches to training models and optimizing compute are transforming the AI industry.
## Core concepts: neural networks and tokenization
Artificial neural networks imitate the workings of the human brain, using multilayer structures of interconnected "neurons" to process information. In language models, this shows up as the ability to analyze and generate text by detecting complex patterns in data[1].
**Tokenization** is the process of splitting text into meaningful units. For example, the sentence "ИИ меняет мир" ("AI is changing the world") splits into three tokens: ["ИИ", "меняет", "мир"]. Modern language models work with context windows of 4 thousand to 1 million tokens, which determines how much of previous interactions they can "remember"[1].
## Leading language models
### OpenAI ChatGPT
A pioneer among language models, having introduced GPT-4 and ChatGPT-5 with support for up to 128 thousand tokens of context. Their versatility makes them suitable both for writing fiction and for analyzing legal documents[1]. The commercial API costs $0.03 per 1 thousand input tokens and requires strict content moderation.
**Automotive analogue**: Mercedes-Benz. **Slogan**: "The best or nothing".
**Link**: https://chat.openai.com/
### DeepSeek-V3
A Chinese open-source development that shook the technology market. It uses innovative training methods, cutting the development budget to $6 million compared with competitors' multi-billion-dollar spending[1]. Free access through the R1 app, with a 32-thousand-token context window, makes it popular with researchers.
**Automotive analogue**: Tesla. **Slogan**: "Accelerating the transition to sustainable energy".
**Link**: https://chat.deepseek.com/
### Anthropic Claude 3
A model oriented toward analyzing long texts, with a record 1-million-token context. It is well suited to technical documentation, although API pricing reaches $0.25 per 1 thousand output tokens. It is notable for its strict ethical content filters[2].
**Automotive analogue**: Volvo. **Slogan**: "For life".
**Link**: https://www.anthropic.com/claude
### Qwen2.5
A joint project of Alibaba and Chinese research institutes. It supports 64 thousand tokens, targets multilingual use, and shows the best results for Asian languages[3]. A free version is available through the Aliyun cloud service.
**Automotive analogue**: Toyota. **Slogan**: "Let's Go Places".
**Link**: https://qianwen.aliyun.com/
## Comparing the models
**Depth of analysis**:
- ChatGPT: 9/10 (versatility)
- DeepSeek: 8.5/10 (research focus)
- Claude 3: 9.5/10 (long-text handling)
- Qwen2.5: 8/10 (multilinguality)
**Cost effectiveness**:
- DeepSeek R1: free (32k tokens)
- ChatGPT Plus: $20/month (128k tokens)
- Claude Team: $30/month (1M tokens)
- Qwen2.5: free via Aliyun (64k tokens)
**Limitations**:
- Political censorship in the Chinese models
- High hardware requirements for running locally
- Possible response delays in cloud solutions under heavy load
## The industry's future
The 10% drop in NVIDIA shares after DeepSeek's release signals a shift of focus from hardware power to algorithmic efficiency. Citi forecasts that by 2026, 70% of natural language processing tasks will be handled by open-source models[3].
The growth of local solutions is creating a new market for "personalized AI", in which users can train models for their own needs without depending on cloud platforms. This is especially important for small businesses and independent researchers[4].
## Local neural networks: installation and setup
The **Ollama** platform makes running AI models feasible on personal computers. Requirements:
- A GPU with 8+ GB of memory (RTX 2070/4060)
- 16 GB of RAM
- CUDA (NVIDIA) or ROCm (AMD) support
Installation from the terminal:
```bash
curl -fsSL https://ollama.ai/install.sh | sh
ollama run llama3
```
This launches the LLaMA 3 model with an 8-thousand-token context. Users report convenient integration with Python libraries for building custom solutions, although multilingual responses can be problematic[6].
**Automotive analogue**: Jeep. **Slogan**: "Go Anywhere, Do Anything".
**Link**: https://ollama.ai/
## Conclusion
The choice of language model depends on the task: DeepSeek offers the best value for academic research, ChatGPT remains the leader in versatility, Claude 3 stands out in long-text processing, and Qwen2.5 wins on multilinguality[5]. As the technology matures, tokenization and compute optimization will continue to play a key role in making data processing cheaper and faster.
### Hashtags:
#AI #MachineLearning #NeuralNetworks #DeepLearning #NLP #LLM #ChatGPT #ClaudeAI #DeepSeek #Qwen #Ollama #Tokenization #OpenSourceAI #TechTrends #AIResearch #AIModels #AIInnovation
### References:
1. Bengio Y., Goodfellow I., Courville A. *Deep Learning*. MIT Press, 2016.
2. Vaswani A. et al. *Attention Is All You Need*. NeurIPS, 2017.
3. Brown T. et al. *Language Models are Few-Shot Learners*. NeurIPS, 2020.
4. OpenAI Research. *Scaling Laws for Neural Language Models*, 2020.
5. Hestness J. et al. *Deep Learning Scaling is Predictable, Empirically*. arXiv:1712.00409, 2017.
6. Radford A. et al. *Improving Language Understanding by Generative Pre-Training*. OpenAI, 2018.
7. DeepSeek AI. *Technical Report on DeepSeek-V3*, 2025.
8. Anthropic AI. *Claude Model Architecture and Capabilities*, 2024.
9. Alibaba Cloud Research. *Qwen Model Overview*, 2024.
10. NVIDIA AI Labs. *Future of AI Hardware and Optimization*, 2024.
11. Citigroup AI Analysis. *Market Trends in LLM Development*, 2025.
12. Stanford NLP Group. *Comprehensive Guide to Tokenization*, 2023.
https://bastyon.com/post?s=47273c436dce8b15495f4d1464b26e787967cdf03ace2adf285c579f24f09cf3&ref=PMC55eKCrsxoJNkiB3f71AgFLQC3T9HkWV
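To make the tokenization and context-window ideas above concrete, here is a toy Python sketch. It uses word-level splitting only (real LLMs use subword tokenizers such as BPE), and the example sentences and window size are invented for illustration.

```python
# Toy word-level tokenization of the article's example sentence.
sentence = "ИИ меняет мир"            # "AI is changing the world"
tokens = sentence.split()
print(tokens, "->", len(tokens), "tokens")   # ['ИИ', 'меняет', 'мир'] -> 3 tokens

# A context window simply caps how many of the most recent tokens a model sees.
context_window = 4
conversation = "one two three four five six".split()
visible = conversation[-context_window:]
print(visible)                        # earlier tokens fall out of the window
```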
Calin Sandu
Must-read articles:
1. How Does ChatGPT Work?
https://ml-nn.eu/a1/54.html
2. Random Decision Forests
https://ml-nn.eu/a1/62.html
3. Machine Learning Terminology
https://ml-nn.eu/a1/45.html
4. What Are LLMs?
https://ml-nn.eu/a1/71.html
5. Training a Neural Network
https://ml-nn.eu/a1/35.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
Calin Sandu
Natural Language Processing (NLP) has undergone revolutionary advancements in recent years, largely driven by the adoption of neural networks. These sophisticated computational models have transformed how machines understand, interpret, and generate human language.
https://ml-nn.eu/a1/64.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
datatofu
Consider performing risk assessments using GNNs. Early risk assessments using graphs can mitigate losses and produce results beyond the purview of traditional methods.
E.g.: using a TH-GNN enables viewing companies and their respective investors as individual tribes.
This makes discerning risky companies from normal companies more manageable.
https://datatofu.wordpress.com
Tags: #ai #linux #tech #datascience #opensource #python #rstats #neuralnetworks
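As a loose illustration only, the sketch below shows a generic one-layer graph message-passing step in plain NumPy. It is not the TH-GNN the post refers to, and the graph, features, and "risk score" readout are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_feats = 5, 4
X = rng.normal(size=(n_nodes, n_feats))        # node features (companies / investors)
A = np.array([                                  # adjacency: who is connected to whom
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

A_hat = A + np.eye(n_nodes)                     # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))        # normalize by node degree
W = rng.normal(size=(n_feats, n_feats))         # weights (random here; learned in practice)

H = np.tanh(D_inv @ A_hat @ X @ W)              # each node aggregates its neighbours
risk_score = H @ rng.normal(size=n_feats)       # toy per-node readout
print(risk_score.round(2))
```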
Miguel Afonso Caetano
Tell me about it...
"Artificial intelligence (AI) systems with human-level reasoning are unlikely to be achieved through the approach and technology that have dominated the current boom in AI, according to a survey of hundreds of people working in the field.
More than three-quarters of respondents said that enlarging current AI systems ― an approach that has been hugely successful in enhancing their performance over the past few years ― is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence. And the very pursuit of these capabilities also provokes scepticism: less than one-quarter of respondents said that achieving AGI should be the core mission of the AI research community.
'I don't know if reaching human-level intelligence is the right goal,' says Francesca Rossi, an AI researcher at IBM in Yorktown Heights, New York, who spearheaded the survey in her role as president of the Association for the Advancement of Artificial Intelligence (AAAI) in Washington DC. 'AI should support human growth, learning and improvement, not replace us.'
The survey results were unveiled in Philadelphia, Pennsylvania, on Saturday at the annual meeting of the AAAI. They include responses from more than 475 AAAI members, 67% of them academics."
https://www.nature.com/articles/d41586-025-00649-4
#AI #GenerativeAI #AGI #NeuralNetworks #DeepLearning #LLMs
TU München
To train #neuralnetworks more efficiently and reduce #energyconsumption, researchers developed a method that directly calculates parameters based on probabilities, rather than using an iterative approach: http://go.tum.de/972782
#AI #LLMs #MachineLearning
📷 V. Hohenegger / LRZ
Calin Sandu
Why Are AI Models Not Truly Intelligent?
Artificial Intelligence (AI) has advanced significantly, transforming industries and reshaping how we interact with technology. Despite these impressive capabilities, AI models remain fundamentally different from human intelligence. They excel at pattern recognition...
https://ml-nn.eu/a1/82.html
#Programming #MachineLearning #NeuralNetworks #Python #AI
Timo Kissel
This is still solidly in the #research stage, so who knows whether this will amount to much in the end, but it is very much a necessary direction to explore, with a potentially significant payoff.
#AI #ArtificialIntelligence #NeuralNetworks
https://newatlas.com/brain/cortical-bioengineered-intelligence/
Leanpub
The Hundred-Page Language Models Book by Andriy Burkov is on sale on Leanpub! Its suggested price is $50.00; get it for $20.00 with this coupon: https://leanpub.com/sh/8FyyKnzL
#Ai #Gpt #NeuralNetworks #DeepLearning #DataScience #ComputerScience #Linguistics
Tariq
People seem to really like one of my earlier projects.
It was even translated into 7 other languages!
"Make Your Own Neural Network"
* no previous expertise needed
* introduces basic python and Jupyter notebooks
* explains learning from examples
* builds a simple network to classify handwritten numbers
www.amazon.com/dp/B01EER4Z4G/
All the code is on GitHub:
https://github.com/makeyourownneuralnetwork/makeyourownneuralnetwork
#python #machinelearning #neuralnetworks
Nicola Fabiano :xmpp:
4/4
📝 What’s next?
Tomorrow, I will organize and prepare the submissions and consider the proofs to be corrected, but my mind is already moving forward. The next project? Probably a novel. A story has been knocking on the door of my imagination for some time now. Stay tuned for what’s coming next!
#AI #artificialintelligence #DataProtection #NeuralNetworks