Neural Network

Latest news headlines about artificial intelligence

Can enterprise identities fix Gen AI's flaws? This IAM startup thinks so

Feb. 21, 2024, 2:52 p.m. • ZDNet • (5 Minute Read)

The identity and access management (IAM) startup IndyKite of San Francisco has introduced a new approach to the trust problem in generative AI. By applying cybersecurity techniques to vet sources of data, its software aims to ensure that the data feeding any business or analytics model can be trusted. The system, built on the popular Neo4j graph database, verifies the origins of data before it is used to train models, potentially addressing issues such as bias and data drift in generative AI. With $10.5 million in seed financing from Molten Ventures, Alliance Ventures, and SpeedInvest, IndyKite's efforts to improve trust and accuracy in Gen AI are gaining attention in both the cybersecurity and artificial intelligence fields.
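
The article doesn't describe IndyKite's data model, but a provenance check over a Neo4j-style graph might look like the sketch below. The schema (Dataset and Source nodes linked by DERIVED_FROM relationships, with a verified flag), the Cypher query, and the credentials are all hypothetical illustrations, not IndyKite's actual product.

```python
from neo4j import GraphDatabase

# Hypothetical schema: (:Dataset)-[:DERIVED_FROM]->(:Source {verified: bool}).
# Labels, properties, and credentials are illustrative, not IndyKite's model.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

UNVERIFIED_LINEAGE = """
MATCH (d:Dataset {name: $name})-[:DERIVED_FROM*1..]->(s:Source)
WHERE coalesce(s.verified, false) = false
RETURN DISTINCT s.name AS unverified_source
"""

def unverified_sources(dataset_name: str) -> list[str]:
    """Walk the lineage graph and flag any unverified upstream source."""
    with driver.session() as session:
        result = session.run(UNVERIFIED_LINEAGE, name=dataset_name)
        return [record["unverified_source"] for record in result]

if __name__ == "__main__":
    flagged = unverified_sources("customer_training_corpus")
    if flagged:
        print("Do not train on this dataset; unverified sources:", flagged)
```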

Why artificial general intelligence lies beyond deep learning

Feb. 18, 2024, 7:15 p.m. • VentureBeat • (4 Minute Read)

The story examines the limitations of deep learning on the road to artificial general intelligence (AGI): deep learning relies on prediction and struggles with unpredictable real-world scenarios. The article proposes decision-making under deep uncertainty (DMDU) methods, such as Robust Decision-Making, as a potential framework for AGI reasoning, arguing for a pivot toward decision-driven AI methods that can handle real-world uncertainty. The authors, Swaptik Chowdhury and Steven Popper, advocate a departure from the deep learning paradigm and stress the importance of decision context in advancing toward AGI.

OpenAI's Sora pushes us one mammoth step closer towards the AI abyss

Feb. 16, 2024, 6:15 p.m. • Popular Science • (4 Minute Read)

OpenAI has unveiled the latest development in generative AI technology, prompting concerns about unregulated, consequence-free advancement. The program, named Sora, can produce photorealistic video from simple text inputs, raising issues of legality, privacy, and objective reality. OpenAI's decision to grant limited access to Sora without publishing technical details has drawn criticism over the lack of regulatory oversight. As the company pushes toward Artificial General Intelligence, the potential implications of Sora for the online landscape are vast and warrant attention from policymakers and industry watchdogs to address potential risks and harms.

New chip opens door to AI computing at light speed

Feb. 16, 2024, 10 a.m. • Phys.org • (5 Minute Read)

Engineers at the University of Pennsylvania have developed a groundbreaking chip that uses light waves rather than electricity to perform the complex mathematical operations critical for training artificial intelligence. The silicon-photonic (SiPh) chip, the result of a collaboration between researchers Nader Engheta and Firooz Aflatouni, has the potential to dramatically increase processing speeds while reducing energy consumption. Combining Engheta's expertise in manipulating materials at the nanoscale to compute with light and Aflatouni's pioneering work in nanoscale silicon devices, the team designed a chip that performs vector-matrix multiplication, a core operation in neural networks. Beyond faster processing and lower energy use, the chip offers enhanced privacy, potentially making it unhackable. The breakthrough could reshape AI computing, with significant implications for future computer technologies.
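
For context, vector-matrix multiplication is the operation the chip accelerates: every dense neural-network layer computes its outputs by applying a matrix of weights to a vector of inputs. Here is a minimal NumPy sketch of that step, with arbitrary sizes and values; the SiPh hardware performs the equivalent of the `W @ x` line optically rather than electronically.

```python
import numpy as np

# A dense layer is one vector-matrix multiply plus a bias and a nonlinearity:
# y = activation(W @ x + b). The photonic chip accelerates the W @ x step.
rng = np.random.default_rng(0)

x = rng.normal(size=128)         # input activations (arbitrary size)
W = rng.normal(size=(64, 128))   # layer weights
b = np.zeros(64)                 # biases

y = np.maximum(W @ x + b, 0.0)   # ReLU(Wx + b): one forward layer
print(y.shape)                   # (64,)
```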

The Evolution of AI: Differentiating Artificial Intelligence and Generative AI

Feb. 15, 2024, 7:16 a.m. • ai2.news • (15 Minute Read)

Roman Rember discusses the emergence of Generative Artificial Intelligence (GenAI) as a subset of AI that goes beyond traditional capabilities. While conventional AI excels at specific tasks like data analysis and pattern prediction, GenAI acts as a creative artist, generating new content such as images, designs, and music. The article highlights GenAI's potential impact on industries and the workforce, citing a McKinsey report anticipating that up to 29.5% of work hours in the U.S. economy could be automated by AI, including GenAI, by 2030. Integrating GenAI into teams poses unique challenges, however, such as potential declines in productivity and resistance to collaborating with AI agents. The article calls for collaboration between HR professionals and organizational leaders to address these challenges and establish common practices for successful integration, and it underscores the importance of robust learning programs and a culture of teaching and learning in harnessing GenAI for growth and innovation. Overall, the piece offers a comprehensive overview of GenAI and its implications, aiming to prepare organizations and individuals for the technology's transformative power.

OpenAI's Eve humanoids make impressive progress in autonomous work

Feb. 13, 2024, 3:40 a.m. • New Atlas • (5 Minute Read)

The Eve humanoids from 1X, a robotics company backed by OpenAI, are making significant strides in autonomous work, showcasing their abilities in a recent video. While visually less impressive than some competitors, the Eve robots perform tasks independently under neural-network control, with no teleoperation or scripted trajectory playback involved. Lacking the advanced mobility and dexterous hands of other models, the robots are designed for warehouses and factories, where their primary task will be picking up and moving objects. The company has employed a novel training method involving imitation learning via video and teleoperation, allowing the robots to efficiently learn and execute a range of tasks. With further developments underway, the future integration of these autonomous humanoids across industries looks promising.

GenAI's Pace of Development Is Shattering Moore's Law

Feb. 12, 2024, 7:30 p.m. • PYMNTS.com • (3 Minute Read)

Generative AI models are outpacing Moore's Law, with products from major tech firms like Google, OpenAI, Amazon, and Microsoft reaching their second, third, and even fourth versions within a single year. Google recently launched Gemini Advanced, the latest version of its AI chatbot, illustrating how quickly the landscape is evolving. These advances are driven by continuous research and development, yielding improvements in model architecture, training methodology, and application-specific enhancements. The rise of generative AI tools has also sparked a multimodal movement, enabling AI systems to process visual, verbal, audio, and textual data simultaneously. The pace underscores the transformative potential of AI, with major implications for industries from finance to healthcare.

A Comedian And A Neural Network Walk Into A Bar

Feb. 10, 2024, 7:04 p.m. • Forbes • (4 Minute Read)

In a groundbreaking collaboration, comedian Ana Fabrega and Cristóbal Valenzuela, CEO of AI startup Runway, showcased a video at the Seven on Seven event, demonstrating the unconventional partnership between art and artificial intelligence. Through a series of exchanges, Valenzuela prompted Fabrega with AI-generated content, to which she responded with her own creative interpretations. This experiment highlighted the potential for AI to lead artists in innovative directions and showcased the creative possibilities of human-algorithm collaboration. The event also featured other pioneering partnerships, shedding light on the evolving relationship between artificial intelligence and various artistic disciplines. Ultimately, the collaboration emphasized that art created with AI is enriched by human input, underscoring the role of humans in harnessing the full potential of this technology.

AI as the New Physicist: Forschungszentrum Jülich Pioneers the "Physics of AI"

Feb. 9, 2024, 5:30 a.m. • AiDebrief.com • (2 Minute Read)

Researchers at Forschungszentrum Jülich have developed an artificial intelligence that can formulate physical theories by recognizing patterns in complex data, akin to the accomplishments of historical physicists. This "Physics of AI" approach, explained by Prof. Moritz Helias, simplifies observed interactions through a neural network, offering a novel method to construct theories. Unlike traditional AIs, which internalize theories without explanation, this AI articulates its findings in the language of physics, making it a significant step towards explainable AI. It has been successfully applied to analyze interactions within images, demonstrating its potential to handle complex systems and bridge the gap between AI operations and human-understandable theories.

'Physics of AI': German scientists train AI to think like Albert Einstein

Feb. 8, 2024, 1:39 p.m. • Interesting Engineering • (2 Minute Read)

German scientists at Forschungszentrum Jülich have trained artificial intelligence (AI) to think like famous physicists such as Albert Einstein or Isaac Newton: the AI model recognizes patterns in complex data sets and forms physical theories around them. Crucially, this "physics of AI" approach lets researchers extract the theories the AI has learned, expressed in the language physicists use to describe interactions between a system's components. Lead researcher Moritz Helias said the approach bridges the gap between AI's complex inner workings and theories that humans can comprehend. The findings were published in the journal Physical Review X, marking a significant advance in the integration of AI and physics.

Artificial intelligence framework for heart disease classification from audio signals

Feb. 7, 2024, 11:17 a.m. • Nature.com • (46 Minute Read)

In the article, researchers investigate machine learning (ML) and deep learning (DL) techniques for detecting heart disease from audio signals. The study uses real heart-sound datasets and applies data augmentation, adding synthetic noise to the recordings, to improve model performance. The researchers also develop a feature ensembler that integrates multiple audio feature extraction techniques. Several ML and DL classifiers are evaluated, with a multilayer perceptron performing best at an accuracy of 95.65%. The results demonstrate the methodology's potential for accurately detecting heart disease from sound signals, with promising implications for medical diagnosis and patient care. The article stresses the importance of early detection in the fight against cardiovascular disease, identifies the research gap the work addresses, and outlines the need for broader, more efficient ML and DL models to improve the accuracy and reliability of diagnosis, along with directions for future development in the field.
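
The paper's full pipeline isn't reproduced in the summary, but the two ideas it names, noise-based augmentation and a multilayer perceptron classifier, can be sketched with scikit-learn. The features, labels, noise level, and network sizes below are placeholder assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def augment_with_noise(signals: np.ndarray, noise_level: float = 0.05) -> np.ndarray:
    """Data augmentation: add synthetic Gaussian noise to feature vectors."""
    return signals + noise_level * rng.normal(size=signals.shape)

# Placeholder data: in the study, features come from an ensemble of audio
# feature extractors over real heart-sound recordings.
X = rng.normal(size=(500, 40))        # 500 recordings x 40 features (synthetic)
y = rng.integers(0, 2, size=500)      # 0 = normal, 1 = abnormal (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Augment only the training pool so the test set stays untouched.
X_train = np.vstack([X_train, augment_with_noise(X_train)])
y_train = np.concatenate([y_train, y_train])

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```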

Scientists use AI to investigate structure and long-term behavior of galaxies

Feb. 5, 2024, 5:01 p.m. • Phys.org • (5 Minute Read)

Scientists at the University of Bayreuth are using artificial intelligence (AI) to investigate the structure and long-term behavior of galaxies. Dr. Sebastian Wolfschmidt and Christopher Straub employ a deep neural network to quickly predict the stability of galaxy models built on Einstein's theory of relativity. The AI-based method can verify or falsify astrophysical hypotheses within seconds, a significant advance for the field. Their findings have been accepted for publication in the journal Classical and Quantum Gravity. The researchers plan further applications of similar methods and ran their calculations on the supercomputer of the "Keylab HPC" at the University of Bayreuth. The study highlights AI's potential for understanding complex astrophysical phenomena and propelling advances in galaxy research.

A Camera-Wearing Baby Taught an AI to Learn Words

Feb. 1, 2024, 7 p.m. • Scientific American • (3 Minute Read)

In a recent study published in Science, a team of cognitive and computer scientists trained an artificial intelligence (AI) model to match images to words using just 61 hours of naturalistic footage and sound captured from the perspective of a child named Sam. The research challenges the traditional belief that humans are born with built-in expectations and logical constraints that enable language acquisition; the AI's ability to recognize and understand words from minimal data suggests the process might be simpler than previously thought. The study also indicates that machines can learn in human-like ways and may not need as much input as today's AI models typically consume. While the model showed promising results in matching words with corresponding images, the study highlights important limitations and the need for further research into the complexity of human language learning. The work could deepen our understanding of human cognition and improve education, illustrating the broader implications of AI research beyond corporate profit.
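
A common recipe for matching images to words is a contrastive objective: embed each frame and each word, then reward high similarity for true frame-word pairs and low similarity for all other pairings. The sketch below shows that generic loss with random stand-in embeddings; it illustrates the technique in general, not the study's specific model, and the dimensions are arbitrary.

```python
import numpy as np

def contrastive_loss(frame_emb, word_emb, temperature=0.07):
    """Row i of each matrix is a matched frame-word pair. The loss pulls
    matched pairs together and pushes mismatched pairings apart."""
    # Cosine similarity between every frame and every word embedding.
    f = frame_emb / np.linalg.norm(frame_emb, axis=1, keepdims=True)
    w = word_emb / np.linalg.norm(word_emb, axis=1, keepdims=True)
    logits = f @ w.T / temperature
    # Softmax cross-entropy with the diagonal (true pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 512))   # 8 frame embeddings (arbitrary dims)
words = rng.normal(size=(8, 512))    # 8 word embeddings, row-aligned to frames
print(contrastive_loss(frames, words))
```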

How Guns N' Roses Used AI to Create Wild New Music Video 'The General'

Jan. 31, 2024, 7:23 p.m. • Decrypt • (3 Minute Read)

In a groundbreaking move, iconic hard rock band Guns N' Roses collaborated with design studio Creative Works London, using the AI image generator Stable Diffusion to create the music video for their latest track, "The General." Originally conceived as a fully animated video in Unreal Engine, the project evolved into a unique combination of AI-generated imagery and live-action footage, resulting in an unsettling, dreamlike visual experience. Creative Works London's creative director, Dan Potter, used Stable Diffusion to augment the studio's output, emphasizing that AI serves as a creative tool rather than a replacement for artists. This pioneering use of generative AI opens new possibilities for integrating the technology into live performances, potentially incorporating augmented-reality elements in future shows. The approach demonstrates how AI can amplify artistic expression without overshadowing the role of human creativity.

AI Learns Like Humans: Emulating Sleep for Advanced Learning

Jan. 30, 2024, 10:30 p.m. • AiDebrief.com • (1 Minute Read)

Wake-Sleep Consolidated Learning (WSCL) is an innovative approach to continual learning in artificial intelligence, blending cognitive neuroscience with computational techniques to improve deep neural networks on visual classification tasks. It emulates the human brain through a "wake" phase of sensory adaptation and a "sleep" phase that replicates NREM and REM stages for memory consolidation and exploration. WSCL outperforms existing methods on benchmark datasets such as CIFAR-10, TinyImageNet, and FG-ImageNet, highlighting the value of sleep and dreaming phases in learning. The result points to a promising direction for AI research: incorporating human-like learning processes into neural networks.
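
The summary doesn't specify the authors' training schedule, so the following is only a schematic skeleton of a wake/sleep alternation: a "wake" phase fits the model to fresh data and stores examples in an episodic memory, and a "sleep" phase replays stored examples (NREM-like consolidation) plus noise-perturbed "dreamed" variants (REM-like exploration). The model choice, memory policy, and noise level are all assumptions, not WSCL's actual design.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def wake_phase(model, new_x, new_y, memory):
    """Wake: adapt to fresh data and stash examples in episodic memory."""
    model.partial_fit(new_x, new_y, classes=np.arange(10))
    memory.extend(zip(new_x, new_y))

def sleep_phase(model, memory, rem_noise=0.1):
    """Sleep: NREM-style replay of stored examples, then REM-style replay
    of noise-perturbed 'dreamed' variants for exploration."""
    if not memory:
        return
    xs, ys = map(np.array, zip(*memory))
    model.partial_fit(xs, ys)                                          # NREM
    model.partial_fit(xs + rem_noise * rng.normal(size=xs.shape), ys)  # REM

model = SGDClassifier(random_state=0)
memory = []
for task in range(3):                       # a stream of tasks, continual-learning style
    x = rng.normal(size=(64, 20)) + task    # synthetic, task-shifted data
    y = rng.integers(0, 10, size=64)
    wake_phase(model, x, y, memory)
    sleep_phase(model, memory)
print("trained across 3 tasks;", len(memory), "examples in memory")
```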

Can ChatGPT drive my car? The case for LLMs in autonomy

Jan. 30, 2024, 10 a.m. • InfoWorld • (4 Minute Read)

The story examines the potential of large language models (LLMs) in autonomous driving. Current autonomous-driving models have real limitations, and edge cases demand complex, human-like reasoning; LLMs have shown promise in filling that gap by reasoning about complex scenarios and planning safe paths for autonomous vehicles. The article also notes the genuine obstacles LLMs still face in autonomous applications, such as latency and hallucinations, but concludes with optimism that LLMs could transform autonomous driving by providing the safety and scale everyday drivers need.

AI-Powered Robot Reads Braille Twice as Fast as Humans

Jan. 29, 2024, 10:51 p.m. • Neuroscience News • (4 Minute Read)

Researchers at the University of Cambridge have developed a robotic sensor that uses artificial intelligence to read braille at 315 words per minute with 87% accuracy, more than double the average speed of most human braille readers. The sensor employs machine learning algorithms to interpret braille with high sensitivity, mimicking human-like reading behavior. While not designed as assistive technology, the breakthrough has implications for developing sensitive robotic hands and prosthetics, tackling the difficult engineering task of replicating human fingertip sensitivity in robotics. The team aims to scale the technology to the size of a humanoid hand or skin, broadening its applications beyond reading braille.

Tesla Prepares for Optimus Robot Production

Jan. 25, 2024, 10:16 p.m. • RetailWire • (2 Minute Read)

In a move that hints at a futuristic shift, Tesla is gearing up to mass-produce its Optimus humanoid robots. The company's expanding workforce in Palo Alto points to ambitious production plans, and the development of a second-generation Optimus in just two years demonstrates rapid progress. In a vision of the future that includes multi-planetary life, the humanoid robots could assist with tasks like terraforming and construction in challenging environments such as Mars. Tesla is actively recruiting for high-volume mass production, with speculation that production could begin by the end of the year. Challenges remain, however, including developing the software the robots need to function effectively; Tesla's emphasis on perfecting its Full Self-Driving software before mass-producing the robots highlights the interplay between the two programs. Overall, the impending robot revolution signals exciting times ahead.

Robotic Breakthrough Mimics Human Walking Efficiency

Jan. 23, 2024, 12:12 a.m. • Neuroscience News • (4 Minute Read)

In a recent breakthrough, researchers from Tohoku University have replicated human-like variable-speed walking in a musculoskeletal model controlled by a reflex-based method modeled on the human nervous system. The advance not only enhances our understanding of human locomotion but also sets new standards for robotic technology. The study used an innovative algorithm to optimize energy efficiency across walking speeds, paving the way for future bipedal robots, prosthetics, and powered exoskeletons. The research holds immense potential for improving mobility solutions and everyday robotics, ultimately benefiting individuals with disabilities, and will serve as a crucial foundation for future advances in biomechanics, neuroscience, and robotics.

The Future of Fusion: Unlocking Complex Physics With AI's Precision

Jan. 21, 2024, 2:08 a.m. • SciTechDaily • (5 Minute Read)

In a groundbreaking fusion research study, MIT researchers have used artificial intelligence (AI) to accurately predict plasma behavior from camera images. The technique provides crucial insight into the dynamics of plasma, a key component in achieving net fusion energy production. Because fusion experiments run under extreme conditions, conventional diagnostic tools struggle to collect data on plasma dynamics; the researchers bridged plasma modeling and experiment by using AI and camera images to infer electron density and temperature fluctuations, enabling more precise predictive modeling of plasma turbulence. The work opens a new scientific path for understanding the complexities of fusion plasmas and testing theoretical predictions against experimental observations, bringing fusion energy one step closer to reality.
In a groundbreaking fusion research study, MIT researchers have utilized artificial intelligence (AI) to develop a method for accurately predicting plasma behavior using camera images. This innovative technique provides crucial insights into the dynamics of plasma, a key component in achieving net fusion energy production. With fusion experiments conducted under extreme conditions, conventional diagnostic tools face limitations in collecting data on plasma dynamics. To address this challenge, the researchers have successfully bridged plasma modeling and experiments by leveraging AI and camera images to infer electron density and temperature fluctuations, thus enabling more precise predictive modeling of plasma turbulence. This groundbreaking work offers a new scientific path for understanding the complexities of fusion plasmas and evaluating theoretical predictions against experimental observations, bringing the potential of fusion energy one step closer to reality.