Deep Learning

Latest news headlines about artificial intelligence

Google DeepMind alumni unveil Bioptimus: Aiming to build first universal biology AI model

Feb. 20, 2024, 2:17 p.m. • VentureBeat • (2 Minute Read)

Bioptimus, a Paris-based startup, has emerged from stealth with a $35 million seed funding round, aiming to build the first universal AI foundation model for biology. The project, led by a team of Google DeepMind alumni and Owkin scientists, plans to connect different scales of biology with generative AI, from molecules to whole organisms. Bioptimus will leverage AWS compute, Owkin’s data generation capabilities, and multimodal patient data sourced from leading academic hospitals worldwide. The team asserts that their approach will differentiate them from models trained solely on public datasets, providing a more comprehensive understanding of biology. The project will be released as open source, fostering transparency and collaboration within the community.

Why artificial general intelligence lies beyond deep learning

Feb. 18, 2024, 7:15 p.m. • VentureBeat • (4 Minute Read)

The news story discusses the limitations of deep learning in achieving artificial general intelligence (AGI). It highlights the challenges of using deep learning, which relies on prediction and has difficulty handling unpredictable real-world scenarios. The article proposes the use of decision-making under deep uncertainty (DMDU) methods, such as Robust Decision-Making, as a potential framework for realizing AGI reasoning. It emphasizes the need to pivot towards decision-driven AI methods that can effectively handle uncertainties in the real world. The authors, Swaptik Chowdhury and Steven Popper, advocate for a departure from the deep learning paradigm and emphasize the importance of decision context in advancing towards AGI.

What is Apple's generative AI strategy?

Feb. 15, 2024, 2 p.m. • VentureBeat • (3 Minute Read)

Apple's generative AI strategy is drawing fresh attention because its significant contributions to the field have so far gone largely unnoticed. Amid the buzz surrounding other tech companies, Apple has released papers, models, and programming libraries that signal its growing influence in on-device generative AI. Leveraging its vertical integration and control over its hardware and software stack, Apple is focusing on optimizing generative models for on-device inference. The company's efforts include the release of open-source models such as Ferret and MGIE (MLLM-Guided Image Editing), and the creation of the MLX library for machine learning developers. These developments suggest that Apple is positioning itself as a key player in the future of on-device generative AI, setting the stage for potential advancements in its devices and applications.
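
The mention of MLX is easy to gloss over, but it is the most developer-facing of these releases. As a minimal sketch of what working with it looks like, assuming MLX is installed (`pip install mlx`; Apple silicon required): arrays live in unified memory and operations are lazy until explicitly evaluated.

```python
import mlx.core as mx

# Two random matrices; MLX arrays live in unified memory,
# shared between CPU and GPU on Apple silicon.
a = mx.random.normal(shape=(1024, 1024))
b = mx.random.normal(shape=(1024, 1024))

# Operations are lazy: this line only records a computation graph.
c = a @ b + 1.0

# mx.eval() forces the computation to actually run.
mx.eval(c)
print(c.shape)  # (1024, 1024)
```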

The Evolution of AI: Differentiating Artificial Intelligence and Generative AI

Feb. 15, 2024, 7:16 a.m. • ai2.news • (15 Minute Read)

Roman Rember discusses the emergence of Generative Artificial Intelligence (GenAI) as a subset of AI that goes beyond traditional capabilities. While conventional AI excels at specific tasks like data analysis and pattern prediction, GenAI acts as a creative artist, generating new content such as images, designs, and music. The article highlights the potential impact of GenAI on various industries and the workforce, citing a McKinsey report that anticipates up to 29.5% of work hours in the U.S. economy being automated by AI, including GenAI, by 2030. However, integrating GenAI into teams poses unique challenges, such as potential declines in productivity and resistance to collaboration with AI agents. The article emphasizes the need for collaboration between HR professionals and organizational leaders to address these challenges and establish common practices for successful integration. It also underscores the importance of robust learning programs and a culture of teaching and learning to harness GenAI for growth and innovation, aiming to inform and prepare organizations and individuals for the transformative power of this technology.

Andrej Karpathy confirms departure (again) from OpenAI

Feb. 14, 2024, 3:48 a.m. • VentureBeat • (1 Minute Read)

Andrej Karpathy has announced his departure from OpenAI for the second time, stating that his "immediate plan is to work on my personal projects and see what happens." He reassured his followers that his decision was not influenced by any specific event, issue, or drama. Karpathy first worked at OpenAI from 2015 to 2017 under co-founders Elon Musk and Sam Altman, then left to join Musk at Tesla as Director of AI until 2022. After leaving Tesla, he returned to OpenAI in 2023, only to announce his departure once more. The news of his exit was reported by The Information, marking another transition in Karpathy's distinguished career in artificial intelligence.

Protesters gather outside OpenAI office, opposing military AI and AGI

Feb. 13, 2024, 4:47 a.m. • VentureBeat • (3 Minute Read)

Dozens of protesters gathered outside the OpenAI headquarters in San Francisco on Monday evening to voice their opposition to the company's development of artificial intelligence (AI). The demonstration, organized by two groups—Pause AI and No AGI—urged OpenAI engineers to halt their work on advanced AI systems like the chatbot ChatGPT. The collective demanded an end to the development of artificial general intelligence (AGI) and expressed concerns about further military affiliations following reports that OpenAI had taken on the Pentagon as a client. Protest organizers emphasized the potential risks of AGI, advocating for a global pause on its development until it can be deemed safe. The protest reflects growing public distrust and ethical concerns surrounding AI development, particularly regarding its militarization and implications for society.

OpenAI-backed 1X's Eve humanoids make impressive progress in autonomous work

Feb. 13, 2024, 3:40 a.m. • New Atlas • (5 Minute Read)

1X's OpenAI-backed Eve humanoids are making significant strides in autonomous work, showcasing their abilities in a recent video. While visually less impressive than some competitors, the Eve robots can perform tasks independently under pure neural-network control, with no teleoperation or scripted trajectory playback involved. Lacking the advanced mobility and dexterous hands of other models, these robots are designed for warehouses and factories, where their primary task will be picking up and moving objects. The company employs a training method based on imitation learning from video and teleoperation data, allowing the robots to efficiently learn and execute a range of tasks. With further developments underway, the future integration of these autonomous humanoid robots across industries looks promising.
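
The article doesn't detail 1X's training setup beyond "imitation learning via video and teleoperation," but the supervised core of that approach, behavior cloning, is simple to sketch. Below is a minimal, hypothetical PyTorch version that fits a policy network to (observation, action) pairs; the random tensors stand in for demonstration data that would really come from teleoperation logs.

```python
import torch
import torch.nn as nn

# Stand-in demonstration data: in a real system these would be recorded
# (observation, action) pairs from teleoperated robot sessions.
obs = torch.randn(256, 16)      # e.g., proprioception + camera features
actions = torch.randn(256, 4)   # e.g., end-effector velocity commands

# A small policy network mapping observations to actions.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavior cloning: plain supervised regression onto the demonstrated actions.
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(obs), actions)
    loss.backward()
    opt.step()
print(f"final imitation loss: {loss.item():.4f}")
```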

GenAI's Pace of Development Is Shattering Moore's Law

Feb. 12, 2024, 7:30 p.m. • PYMNTS.com • (3 Minute Read)

In a groundbreaking development, generative AI models have outpaced Moore's Law, with products from major tech firms like Google, OpenAI, Amazon, and Microsoft reaching their second, third, and even fourth versions within just a year. Google recently launched Gemini Advanced, the latest version of its AI chatbot, showcasing the rapid evolution in the AI landscape. These advancements are driven by continuous research and development, leading to improvements in model architecture, training methodologies, and application-specific enhancements. The rise of generative AI tools has sparked a multi-modal movement, enabling AI systems to process visual, verbal, audio, and textual data simultaneously. This rapid progress underscores the transformative potential of AI technology, with major implications for industries ranging from finance to healthcare.

New AI tool discovers realistic 'metamaterials' with unusual properties

Feb. 9, 2024, 7:13 p.m. • Phys.org • (5 Minute Read)

Researchers from TU Delft have developed a new AI tool that can discover and design realistic metamaterials with unusual properties. These metamaterials have the potential to create devices with unprecedented functionalities, such as coatings that can hide objects in plain sight or implants that behave like bone tissue. Unlike traditional materials, whose properties are determined by molecular composition, metamaterials' properties are determined by their unique structures. The AI tool incorporates deep-learning models to solve the inverse problem of finding the geometry that gives rise to desired properties, bypassing previous limitations. Additionally, the research focuses on addressing the durability of metamaterials, a practical problem often neglected in previous studies, resulting in fabrication-ready designs with exceptional functionalities. The potential applications of these metamaterials range from orthopedic implants to soft robots, presenting revolutionary opportunities in various fields. The study opens new possibilities for metamaterial applications, shifting the design process from intuition and trial-and-error to an inverse design approach using AI.
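
The TU Delft models themselves aren't described in detail here, but one common pattern for this kind of inverse problem is to train a differentiable forward surrogate (geometry → properties) and then optimize the geometry by gradient descent through it. A minimal, hypothetical PyTorch sketch of that pattern, with an untrained surrogate and made-up dimensions:

```python
import torch
import torch.nn as nn

# Hypothetical forward surrogate: maps an 8-parameter geometry vector to
# two predicted properties (e.g., stiffness and Poisson ratio). In practice
# it would first be trained on simulated geometry/property pairs.
surrogate = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

target = torch.tensor([1.0, -0.5])         # desired property vector
geometry = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([geometry], lr=0.05)

# Inverse design: adjust the geometry so the surrogate's prediction
# matches the target properties.
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(geometry), target)
    loss.backward()
    opt.step()
print("candidate geometry:", geometry.detach())
```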

Autonomous underwater robot helps reduce biofouling by ships

Feb. 8, 2024, 9:30 p.m. • Interesting Engineering • (2 Minute Read)

The innovative underwater robot developed by ScrubMarine is poised to revolutionize the shipping industry by autonomously combatting biofouling, which is the accumulation of microorganisms, plants, and algae on marine vessels. This poses significant challenges to hull structures and propulsion systems, leading to increased fuel consumption. ScrubMarine's robotic solution promises to reduce fuel costs, maintenance needs, and environmental impact by addressing biofouling issues. The company is taking part in Heriot-Watt University's DeepTech LaunchPad program to further develop its underwater robot and navigate real-world applications. The program aims to foster innovation in various sectors, with plans to expand to other Scottish universities in the future. If successful, ScrubMarine and its cohort members will bring transformative technologies to market, signaling a new era of commercial success and sustainability across critical sectors.

Artificial intelligence framework for heart disease classification from audio signals | Scientific Reports

Feb. 7, 2024, 11:17 a.m. • Nature.com • (46 Minute Read)

In the article, researchers investigate the use of machine learning (ML) and deep learning (DL) techniques to detect heart disease from audio signals. The study utilizes real heart audio datasets and employs data augmentation, introducing synthetic noise into the heart sound signals to improve model robustness. The researchers also develop a feature ensembler that integrates several audio feature extraction techniques. Multiple ML and DL classifiers are evaluated for heart disease detection, and the multilayer perceptron model performs best, with an accuracy of 95.65%. The study demonstrates the potential of this methodology for accurately detecting heart disease from sound signals, presenting promising opportunities for enhancing medical diagnosis and patient care. The article emphasizes the importance of early detection in the fight against cardiovascular disease and highlights the potential of machine learning and artificial intelligence to improve healthcare outcomes. It also addresses the need for broader and more efficient ML and DL models to improve the accuracy and reliability of diagnosing cardiovascular disease, outlining the research gap, the proposed methodology, and future developments in heart disease detection from sound signals.
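
The paper's exact feature ensembler isn't reproduced here, but the overall pipeline, noise-based augmentation, pooled audio features, and an MLP classifier, can be sketched with standard tools. A minimal, hypothetical version using librosa and scikit-learn, with random waveforms standing in for real heart sound recordings:

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def augment_with_noise(signal, noise_level=0.005):
    """Data augmentation: inject synthetic Gaussian noise into the waveform."""
    return signal + noise_level * np.random.randn(len(signal))

def extract_features(signal, sr):
    """A small feature ensemble: time-averaged MFCCs plus chroma."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).mean(axis=1)
    chroma = librosa.feature.chroma_stft(y=signal, sr=sr).mean(axis=1)
    return np.concatenate([mfcc, chroma])

# Random waveforms stand in for labeled heart recordings (0 = normal,
# 1 = abnormal); accuracy on this fake data will hover around chance.
rng = np.random.default_rng(0)
sr = 4000  # assumed phonocardiogram sample rate
signals = [rng.standard_normal(2 * sr) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.array([extract_features(augment_with_noise(s), sr) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```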

OpenAI joins Meta in labeling AI generated images

Feb. 6, 2024, 9:49 p.m. • VentureBeat • (3 Minute Read)

OpenAI has joined Meta in implementing a new system for labeling AI-generated images. This move comes shortly after Meta's announcement of a similar measure to label images generated by its Imagine AI image generator. OpenAI's marquee app, ChatGPT, and its AI image generator model, DALL-E 3, will now include metadata tagging that allows identification of AI-generated imagery. According to OpenAI, the new metadata, which follows the C2PA specifications, will enable anyone, including social platforms and content distributors, to verify that an image was created by its products. OpenAI also points users to the Content Credentials website, where they can confirm whether an image is AI-generated. The Coalition for Content Provenance and Authenticity (C2PA) has developed technical standards for certifying the source and history of media content, aiming to combat disinformation. OpenAI's implementation embeds C2PA metadata in AI image files so platforms can identify AI-generated content, though the company acknowledges this is not foolproof, as the metadata can be removed accidentally or intentionally. By contrast, Meta's approach involves a public-facing labeling scheme using a sparkles emoji to immediately signify AI-generated images. Both companies' strategies rely on C2PA, with Meta also utilizing the IPTC Photo Metadata Standard from the International Press Telecommunications Council.
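
Real C2PA manifests are cryptographically signed structures embedded in the image file, which is more than a short snippet can show. As a loose illustration of metadata-based labeling only (plain PNG text fields via Pillow, not the C2PA format, and with invented field names), and of why such labels are fragile:

```python
from PIL import Image, PngImagePlugin

# Write a provenance tag into a PNG. Illustrative only: C2PA embeds a
# signed manifest, not a plain text chunk, and these field names are made up.
img = Image.new("RGB", (64, 64), "white")
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model")
img.save("labeled.png", pnginfo=meta)

# Read the tag back, as a platform checking for a label might.
print(Image.open("labeled.png").text.get("ai_generated"))  # -> "true"

# The fragility OpenAI concedes: re-saving the pixels drops the metadata.
Image.open("labeled.png").save("stripped.png")
print(Image.open("stripped.png").text)  # -> {} (label gone)
```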

Apple releases 'MGIE', a revolutionary AI model for instruction-based image editing

Feb. 6, 2024, 9:03 p.m. • VentureBeat • (3 Minute Read)

Apple has released a new open-source AI model, called "MGIE," that can edit images based on natural language instructions. This model, developed in collaboration with researchers from the University of California, Santa Barbara, leverages multimodal large language models (MLLMs) to interpret user commands and perform pixel-level manipulations. It uses a novel end-to-end training scheme that jointly optimizes instruction derivation, visual imagination, and image editing modules. MGIE can handle a wide range of editing scenarios and is available as an open-source project on GitHub, allowing for easy use and customization. The release of MGIE demonstrates Apple's growing prowess in AI research and development, marking a significant breakthrough in the field of instruction-based image editing.

Scientists use AI to investigate structure and long-term behavior of galaxies

Feb. 5, 2024, 5:01 p.m. • Phys.org • (5 Minute Read)

In an innovative approach, scientists at Bayreuth University are using artificial intelligence (AI) to investigate the structure and long-term behavior of galaxies. Dr. Sebastian Wolfschmidt and Christopher Straub are employing a deep neural network to quickly predict the stability of galaxy models constructed within Einstein's theory of general relativity. This AI-based method aims to verify or falsify astrophysical hypotheses within seconds, a significant advancement in the field. Their findings have been accepted for publication in the journal Classical and Quantum Gravity. The researchers plan further applications of similar methods and used the supercomputer of the "Keylab HPC" at the University of Bayreuth for their calculations. The study sheds light on the potential of AI in understanding complex astrophysical phenomena, propelling advancements in galaxy research.

Google's Bard AI Chatbot Will Soon Be Called Gemini

Feb. 5, 2024, 7:37 a.m. • Analytics Insight • (5 Minute Read)

Google's AI chatbot Bard will soon be rebranded as Gemini, highlighting its enhanced capabilities, according to a changelog surfaced by developer Dylan Roussel. Gemini is based on a large language model (LLM) and is capable of complex activities such as coding, deductive reasoning, and creative collaboration. It will be integrated into other Google services like YouTube, Gmail, and Maps to improve their functionality. Additionally, Google plans to release a premium membership tier called Gemini Advanced, offering access to Gemini Ultra, the most powerful version of the AI. Furthermore, an Android app for Gemini is in development to allow users to utilize Google's AI for various purposes on their phones. This rebranding and the new features are expected to position Google's Gemini as a competitive AI chatbot in the market.

How to Build an Effective and Engaging AI Healthcare Chatbot

Feb. 3, 2024, 8:37 a.m. • Analytics Insight • (6 Minute Read)

In the dynamic realm of healthcare, Artificial Intelligence (AI) has emerged as a game-changer, bringing innovative solutions to enhance patient engagement and streamline medical services. Among these applications, healthcare chatbots stand out: virtual assistants that engage in natural language conversations and offer services ranging from appointment scheduling and medication reminders to symptom analysis and general health information. This comprehensive guide walks through the pivotal steps in crafting a potent AI healthcare chatbot, navigating the intersection of cutting-edge technology and the nuanced landscape of healthcare. Building an effective chatbot requires a combination of technical prowess, an understanding of healthcare nuances, and a user-friendly experience. The key steps covered include defining the purpose and scope, complying with healthcare regulations, implementing data security and privacy measures, integrating natural language processing (NLP) and vetted medical content, supporting personalization and user profiles, handling appointment scheduling and reminders, performing symptom analysis and triage, enabling continuous learning and updates, and providing multi-channel accessibility. The article also highlights challenges and considerations in building AI healthcare chatbots, such as ethical concerns, potential biases in AI algorithms, and the need for ongoing maintenance and updates.
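
None of the guide's steps require exotic tooling to prototype. As a deliberately minimal sketch of the intent-routing core only, with keyword matching standing in for a real NLP model and none of the required compliance or security layers:

```python
# Minimal intent routing for a healthcare chatbot prototype.
# A production system would replace the keyword table with an NLP intent
# classifier and add authentication, audit logging, and HIPAA/GDPR safeguards.
INTENTS = {
    "appointment": ["appointment", "schedule", "book"],
    "medication": ["medication", "refill", "reminder"],
    "symptoms": ["pain", "fever", "cough", "symptom"],
}

RESPONSES = {
    "appointment": "I can help schedule an appointment. Which day works for you?",
    "medication": "I can set a medication reminder. What is the medication name?",
    "symptoms": "I can run a basic symptom check, but this is not medical advice.",
    None: "I'm not sure I understood. Could you rephrase?",
}

def route(message: str) -> str:
    """Return a canned response for the first matching intent."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return RESPONSES[intent]
    return RESPONSES[None]

print(route("Can I book an appointment for Tuesday?"))
```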

Hugging Face launches open source AI assistant maker to rival OpenAI's custom GPTs

Feb. 2, 2024, 10:51 p.m. • VentureBeat • (3 Minute Read)

Hugging Face, a New York City-based startup known for its popular open source AI repository, has launched third-party customizable Hugging Chat Assistants, a direct competitor to OpenAI's custom GPTs. The new offering allows users to easily create their own AI chatbots with specific capabilities, much like OpenAI's GPT Builder. Unlike OpenAI's paid subscription model, Hugging Chat Assistant is free and lets users choose from several open-source large language models to power their assistants. While users praise the customizability and free access, they note that it lacks certain features present in GPTs. The release demonstrates the open-source community's rapid progress in competing with closed rivals such as OpenAI.
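
Hugging Chat Assistants themselves are configured through the web UI, but the underlying idea, pointing an assistant at an open-source model of your choice, can be approximated with Hugging Face's hosted inference API. A rough sketch using `huggingface_hub` (the model choice and prompt are arbitrary, and some models require an access token):

```python
from huggingface_hub import InferenceClient

# Any hosted open-source chat model could be swapped in here.
client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")

# System-style instructions approximate an "assistant" persona.
prompt = (
    "You are a patient cooking assistant.\n"
    "User: Suggest a quick weeknight dinner.\n"
    "Assistant:"
)
print(client.text_generation(prompt, max_new_tokens=120))
```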

AI-Powered Robot Reads Braille Twice as Fast as Humans

Jan. 29, 2024, 10:51 p.m. • Neuroscience News • (4 Minute Read)

Researchers at the University of Cambridge have developed a robotic sensor that uses artificial intelligence to read braille at a remarkable 315 words per minute with 87% accuracy, more than double the speed of most human braille readers. The sensor employs machine learning algorithms to interpret braille with high sensitivity, mimicking human-like reading behavior. While not designed as assistive technology, the breakthrough has implications for the development of sensitive robotic hands and prosthetics, where replicating the sensitivity of human fingertips remains a difficult engineering problem. The team aims to scale the technology to the size of a humanoid hand or skin, and hopes to broaden its applications beyond reading braille.

OpenAI launches new generation of embedding models and other API updates

Jan. 25, 2024, 8:53 p.m. • VentureBeat • (2 Minute Read)

OpenAI has released a new generation of embedding models and other API updates, aiming to enhance the performance and affordability of its offerings. The new embedding models, text-embedding-3-small and text-embedding-3-large, show significant improvements on multi-language retrieval and English-language tasks, and natively support shortening embeddings so developers can trade some accuracy for lower storage and compute costs. OpenAI has also updated its GPT-4 Turbo and GPT-3.5 Turbo models, introducing features such as improved instruction following and a new 16k-context version of GPT-3.5 Turbo. Furthermore, the company has enhanced its text moderation model to handle more languages and domains. OpenAI's efforts are focused on making its models and services more accessible and useful for developers and customers, with plans to further expand its offerings in the future.
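
A minimal sketch of calling the new models with the `openai` Python SDK (v1-style client; assumes `OPENAI_API_KEY` is set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=["heart murmur detected in phonocardiogram"],
    dimensions=256,  # the new models support shortened embeddings
)
vec = resp.data[0].embedding
print(len(vec))  # -> 256
```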

Google shows off Lumiere, a space-time diffusion model for realistic AI videos

Jan. 24, 2024, 8:57 p.m. • VentureBeat • (2 Minute Read)

Google has unveiled Lumiere, a space-time diffusion model that aims to revolutionize AI video generation. Developed by researchers from Google, the Weizmann Institute of Science, and Tel Aviv University, Lumiere offers a distinctive approach to realistic video synthesis. The model, trained on a dataset of 30 million videos, can generate 80 frames at 16 fps and outperforms existing AI video models, producing higher motion magnitude while maintaining temporal consistency and overall quality. Its Space-Time U-Net architecture generates the entire temporal duration of the video at once, in a single pass through the model, which yields more realistic and coherent motion and addresses a gap in the existing AI video market. Users describe what they want in natural language and the model generates the corresponding video; it also supports inpainting, cinemagraphs, and stylized generation. Although Lumiere is not yet available for testing, it represents an exciting development in the rapidly evolving AI video market.
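
Lumiere's actual Space-Time U-Net is not public, but the generic ingredient it builds on, convolutions that mix information across both space and time in a (batch, channels, time, height, width) video tensor, is easy to illustrate. A hypothetical factorized space-time block in PyTorch, not Google's implementation:

```python
import torch
import torch.nn as nn

# Illustrative space-time block: factorized spatial + temporal convolutions
# over a (batch, channels, time, height, width) video tensor. A generic
# pattern only, not Lumiere's actual STUNet.
class SpaceTimeBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Mixes information within each frame.
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Mixes information across frames at each pixel location.
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.SiLU()

    def forward(self, x):
        x = self.act(self.spatial(x))
        return self.act(self.temporal(x))

video = torch.randn(1, 8, 80, 32, 32)  # 80 frames, as in the article
print(SpaceTimeBlock(8)(video).shape)  # torch.Size([1, 8, 80, 32, 32])
```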