Computer Vision

Latest news headlines about artificial intelligence

China's Laws of Robotics: Shanghai publishes first humanoid robot guidelines

July 7, 2024, 10 a.m. • South China Morning Post • (1 Minute Read)
Shanghai has released China's first set of guidelines for governing humanoid robots, emphasizing the need to ensure these machines do not pose a threat to human security and effectively safeguard human dignity. The guidelines, unveiled during the World Artificial Intelligence Conference, call for measures such as risk warning procedures, emergency response systems, and ethical and lawful use training for users. The document, developed by a coalition of Shanghai-based industry organizations, also advocates for international cooperation in the humanoid robot sector, recommending the establishment of a global governance framework and an international think tank dedicated to governing these machines. Chinese companies are racing to develop cost-effective humanoid robots, as the country aims for mass production by 2025 and global leadership in the sector by 2027. Tesla's second-generation humanoid robot, Optimus, and other cutting-edge models were showcased at the conference, highlighting China's efforts to catch up with the US in AI and achieve technological self-sufficiency.

Tesla shows its humanoid robot Optimus at China AI conference, but behind glass

July 5, 2024, 2 a.m. • South China Morning Post • (1 Minute Read)
Tesla unveiled its humanoid robot, Optimus, at the 2024 World Artificial Intelligence Conference in Shanghai, sparking interest as one of the few American AI products at the event. Despite being showcased behind glass and without interactive capabilities, Optimus drew attention with its potential to handle various tasks using Tesla's neural network and computer vision technology. The display also featured 17 other robots from Chinese manufacturers, highlighting the increasing development and potential applications of humanoid robots in sectors such as education, entertainment, healthcare, elder care, and manufacturing. However, high production costs remain a barrier to widespread deployment, with prices ranging from $70,000 to $1 million for existing models, although Tesla's Optimus is anticipated to sell for up to $30,000.

How AI Agents are changing software development

July 4, 2024, noon • VentureBeat • (3 Minute Read)
The software engineering landscape is being transformed as large language models (LLMs) evolve into AI agents that can design, implement, and correct entire software modules, enhancing the productivity of software engineers. LLMs are being integrated as coding assistants through chatbot interfaces like ChatGPT and Claude, and as plugins in integrated development environments (IDEs) such as GitHub Copilot and Amazon’s coding assistant Q. Moreover, AI agents are being harnessed in agentic frameworks to complete software development projects end-to-end, although concerns linger about the safety and efficacy of AI-generated code. While AI is unlikely to replace software developers entirely, the value of LLMs in software development is clear, driving increased demand for software developers as AI tools mature.
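The "end-to-end" agentic pattern the article describes can be pictured as a generate-test-repair loop. The sketch below is a toy illustration only: `stub_llm`, `run_tests`, and `agent_loop` are invented stand-ins, not the API of any real agentic framework.

```python
# Illustrative generate-test-repair loop for an AI coding agent.
# `stub_llm` is an invented stand-in for a real model call.

def stub_llm(prompt):
    """Stand-in for an LLM call; always returns a candidate implementation."""
    return "def add(a, b):\n    return a + b\n"

def run_tests(code):
    """Execute the generated code against a simple check, reporting any failure."""
    namespace = {}
    try:
        exec(code, namespace)
        assert namespace["add"](2, 3) == 5
        return True, ""
    except Exception as exc:
        return False, repr(exc)

def agent_loop(task, max_attempts=3):
    """Ask the model for code, run the tests, and feed failures back as context."""
    prompt = task
    for _ in range(max_attempts):
        code = stub_llm(prompt)
        ok, feedback = run_tests(code)
        if ok:
            return code
        prompt = task + "\nPrevious attempt failed: " + feedback
    return None

result = agent_loop("Write a function add(a, b) that returns the sum.")
```

The feedback step, where test failures are appended to the next prompt, is the self-correction behavior that distinguishes an agent from a one-shot coding assistant.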

Eyebot raised $6M for AI-powered kiosks that provide 90-second eye exams without optometrist

June 6, 2024, noon • TechCrunch • (6 Minute Read)
Eyebot, a Boston-based startup, has raised $6 million in funding to roll out AI-powered kiosks that offer 90-second eye exams without the need for an optometrist. The company's self-serve vision-testing terminals will be placed in shopping centers, grocery stores, and pharmacies in New England, aiming to provide convenient and quick eye exams in response to a nationwide shortage of eye care practitioners. Eyebot's computer vision technology can automatically scan a person's eyes, extracting eyeglass or contact lens prescriptions. The company has already finalized partnerships with major eyeglass and contact lens merchants and plans to make money by taking a commission on each sale through its kiosks. The funding round was led by AlleyCorp and Ubiquity Ventures, with participation from Susa Ventures, Village Global, Baukunst, Ravelin, and Spacecadet, and will be used for expansion to other geographies.

Apple's MM1 AI Model Shows a Sleeping Giant Is Waking Up

March 19, 2024, 8:25 p.m. • WIRED • (5 Minute Read)

Apple's AI research has surfaced with MM1, a model with the potential to revolutionize the tech industry. The MM1 model, detailed in a recent Apple research paper, can answer questions and analyze images, hinting at Apple's venture into generative AI. Although Apple hasn't yet shipped generative AI features, the emergence of MM1 signals a significant shift. With similarities to AI models from other tech giants, MM1's multimodal large language model design allows it to respond to text prompts and answer complex questions about images, setting a new course for Apple in the AI landscape. The research paper also divulges insights into MM1's training and potential applications, fueling anticipation for Apple's future endeavors in the generative AI realm.

Getting Started with Generative AI Using Hugging Face Platform on AWS | Amazon Web Services

March 12, 2024, 5:50 p.m. • AWS Blog • (6 Minute Read)

The emergence of generative artificial intelligence (AI) has captured the attention of enterprises worldwide, leading to the rapid adoption of this technology. Many organizations, equipped with strong AI and machine learning capabilities, are embracing generative AI and integrating it into their products. To facilitate this, they often leverage foundation models (FMs) from Amazon SageMaker JumpStart or Amazon Bedrock, utilizing the range of MLOps tools available on the Amazon Web Services (AWS) ecosystem. However, organizations with limited expertise encounter challenges in evaluating and utilizing advanced FMs. The Hugging Face Platform offers no-code and low-code solutions for training, deploying, and publishing state-of-the-art generative AI models for production workloads on managed infrastructure. The platform, available on AWS Marketplace since 2023, enables AWS customers to subscribe and connect their AWS account with their Hugging Face account, simplifying payment management for usage of managed services. The platform provides several premium features and managed services, including Inference Endpoints that offer easy and cost-efficient deployment of generative AI models, prioritizing enterprise security, LLM optimization, and comprehensive task support. Hugging Face Spaces allows the hosting of machine learning demo apps, enabling users to create their ML portfolio, showcase projects, and collaborate within the ML ecosystem. Additionally, Hugging Face AutoTrain facilitates the training of state-of-the-art models for various tasks such as NLP, computer vision, and tabular tasks, using a no-code approach. With these tools, even organizations with limited resources can effectively implement generative AI into their solutions, fostering innovation and competitiveness. The integration of the Hugging Face Platform with the AWS ecosystem promises to democratize machine learning and make cutting-edge AI technologies accessible to businesses of all sizes.

Researchers enhance peripheral vision in AI models

March 8, 2024, 5 a.m. • MIT News • (3 Minute Read)

In a breakthrough study, MIT researchers have made significant progress in enhancing peripheral vision in AI models. Unlike humans, AI lacks the ability of peripheral vision, which is crucial for detecting approaching hazards efficiently. The researchers developed an image dataset that simulates peripheral vision in machine learning models. The models, when trained with this dataset, displayed improved ability to detect objects in the visual periphery, although still not on par with human performance. The study also sheds light on the fundamental differences between human and AI vision strategies, suggesting the need for further research to bridge this gap and develop AI systems that can predict human performance in the visual periphery. This advance could have far-reaching implications, from improving driver safety to developing more user-friendly displays. The research will be presented at the International Conference on Learning Representations and has received support from the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.

Using AI To Modernize Drug Development And Lessons Learned

Feb. 23, 2024, 11:35 p.m. • Forbes • (6 Minute Read)

The use of artificial intelligence (AI) to modernize drug development is a growing trend in the pharmaceutical industry, with many biopharmaceutical companies employing machine-learning models to enhance efficiency and reduce costs. These AI methods, including analyzing protein sequences and the 3D structures of previous drug candidates, could significantly expedite the research process, potentially cutting drug screening time by 40 to 50%. Moreover, AI has proven valuable in regulatory intelligence, accelerating drug development functions and improving decision-making. A notable figure in this field, Dr. Dave Latshaw, founder and CEO of BioPhy, emphasizes the importance of interdisciplinary collaboration, data quality, and addressing ethical concerns in AI development. This news reveals the impact of AI on drug development and the lessons learned from industry leaders, a promising step in the quest for more efficient and cost-effective drug development processes.

What Air Canada Lost In 'Remarkable' Lying AI Chatbot Case

Feb. 19, 2024, 11:03 a.m. • Forbes • (4 Minute Read)

In a cautionary tale for global carriers utilizing AI for customer service, Air Canada lost a small claims court case brought by a passenger who was misled by the airline's AI-powered chatbot. The passenger, grieving the death of their grandmother, sought bereavement fares through the chatbot, which provided information contradicting the airline's actual policy. Despite Air Canada's argument that the passenger had the opportunity to verify the chatbot's reply, the court ruled in favor of the passenger, awarding them $812.02 in damages and court fees. The tribunal found Air Canada liable for "negligent misrepresentation," questioning the airline's reliance on AI for customer interactions. This case raises significant questions about airlines' liability for the performance of their AI-powered systems and underscores the potential legal and financial risks of AI hallucinations in customer service interactions.

How generative AI will benefit physical industries

Feb. 19, 2024, 10 a.m. • InfoWorld • (4 Minute Read)

The integration of generative AI in physical industries is expected to bring about significant benefits, particularly in sectors such as transportation, logistics, construction, energy, and field service. Unlike discriminative AI models, which rely on existing data for predictions, generative AI has the capability to synthesize entirely new data sets, making it indispensable for training AI models in scenarios where sourcing real data is dangerous, difficult, or sparse. This approach holds tremendous potential in addressing critical challenges, such as safety in the workplace, by enabling the creation of synthetic data sets for diverse and challenging use cases. Generative AI is poised to transform the physical economy, with the potential to mitigate the impact of natural disasters, combat climate change, and enhance operational efficiencies. Key factors for leveraging generative AI in physical businesses include investing in a highly skilled team and ensuring data quality, while also focusing on translating insights and capabilities into meaningful results for businesses and their clientele. Jairam Ranganathan, a leader in product management, design, data science, and strategy for Motive, emphasized the transformative potential of generative AI in reshaping the industries that fuel everyday lives.

The Evolution of AI: Differentiating Artificial Intelligence and Generative AI

Feb. 15, 2024, 7:16 a.m. • ai2.news • (15 Minute Read)

Roman Rember discusses the emergence of Generative Artificial Intelligence (GenAI) as a subset of AI that goes beyond traditional capabilities. While conventional AI excels at specific tasks like data analysis and pattern prediction, GenAI acts as a creative artist, generating new content such as images, designs, and music. The article highlights the potential impact of GenAI on various industries and the workforce, citing a McKinsey report that anticipates up to 29.5% of work hours in the U.S. economy being automated by AI, including GenAI, by 2030. However, integrating GenAI into teams poses unique challenges, such as potential declines in productivity and resistance to collaboration with AI agents. The article emphasizes the need for collaboration between HR professionals and organizational leaders to address these challenges and establish common practices for successful integration. It also underscores the importance of robust learning programs and a culture of teaching and learning to harness GenAI's potential for growth and innovation, aiming to inform and prepare organizations and individuals for the transformative power of this technology.

London Underground Is Testing Real-Time AI Surveillance Tools to Spot Crime

Feb. 8, 2024, 5:55 p.m. • WIRED • (7 Minute Read)

The London Underground is currently testing real-time AI surveillance tools aimed at identifying crime and unsafe situations among its passengers. Over the course of a year, Transport for London (TfL) established a trial utilizing 11 algorithms to monitor behavior at a specific Tube station. The AI system, combined with live CCTV footage, sought to detect criminal activity, fare evasion, and safety hazards, garnering over 44,000 alerts during the test period. While the trial aims to improve safety and intervention, privacy experts have raised concerns about the system's potential expansion and accuracy. The trial generated debate on the use of AI to analyze behavior and its ethical, legal, and societal implications, especially in the absence of specific laws governing such technology's use in public spaces. The ongoing analyses and potential expansion of this technology have prompted discussions on the need for public consultation and the guarantees of public trust and consent when implementing these surveillance tools.

UC Berkeley Researchers Introduce SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning

Feb. 7, 2024, 1 p.m. • MarkTechPost • (5 Minute Read)

UC Berkeley researchers have developed SERL, a software suite aimed at making robotic reinforcement learning (RL) more accessible and efficient. The suite includes a sample-efficient off-policy deep RL method, tools for reward computation and environment resetting, and a high-quality controller tailored for widely adopted robots, along with challenging example tasks. The researchers' evaluation demonstrated that the learned RL policies significantly outperformed behavioral cloning (BC) policies across a range of tasks, learning efficiently and obtaining policies within 25 to 50 minutes on average. The suite's release is expected to advance robotic RL by providing a transparent view of its design and showcasing compelling experimental results.

Artificial intelligence framework for heart disease classification from audio signals | Scientific Reports

Feb. 7, 2024, 11:17 a.m. • Nature.com • (46 Minute Read)

In the article, researchers investigate the use of machine learning (ML) and deep learning (DL) techniques to detect heart disease from audio signals. The study utilizes real heart audio datasets and employs data augmentation to improve the model's performance by introducing synthetic noise to the heart sound signals. The research also develops a feature ensembler to integrate various audio feature extraction techniques. Several ML and DL classifiers are evaluated for heart disease detection, and the multilayer perceptron model performs best, with an accuracy of 95.65%. The study demonstrates the potential of this methodology for accurately detecting heart disease from sound signals, presenting promising opportunities for enhancing medical diagnosis and patient care. The article also emphasizes the importance of early detection in the fight against cardiovascular disease and highlights the potential of advanced technologies, such as machine learning and artificial intelligence, to improve healthcare outcomes. Finally, the research addresses the need for broader and more efficient ML and DL models to improve the accuracy and reliability of diagnosing cardiovascular diseases, providing insights into the research gap, the proposed methodology, and future developments in heart disease detection from sound signals. Overall, the study contributes to more accurate and reliable diagnostic methods, potentially improving patient outcomes and lessening the impact of cardiovascular disease.
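The augmentation step the study describes, adding synthetic noise to heart-sound signals, can be sketched roughly as follows. This is a minimal illustration with a stand-in waveform, not the authors' code; the target-SNR formulation and the `add_noise` helper are assumptions for the demo.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has roughly the requested SNR (dB)."""
    if rng is None:
        rng = np.random.default_rng(0)  # seeded for reproducibility in this sketch
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))  # SNR = 10*log10(Ps/Pn)
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Augment a synthetic one-second "heart sound" sampled at 1 kHz
t = np.linspace(0, 1, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)      # stand-in for a heart-sound cycle
noisy = add_noise(clean, snr_db=20.0)  # one augmented training example
```

Each noisy copy is a new training example, which is how augmentation expands a limited set of real recordings.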

French hospital trials 'socially assistive' robots to help the elderly

Feb. 5, 2024, 2:45 p.m. • Interesting Engineering • (2 Minute Read)

A Parisian hospital has hosted a batch of "socially assistive" robots as part of a new study. Developed by a Scottish artificial intelligence (AI) team from Heriot-Watt University, the robots took on routine tasks and interactions with elderly patients at the Assistance Publique Hôpitaux de Paris. These robots, part of the SPRING project, demonstrated the ability to engage in social interactions, answer questions, and provide directions, lightening the workload for human staff and reducing the risk of infection transmission. The successful trials have provided valuable insights into the potential for global applications of this emerging technology, marking a significant milestone in the development of interactive robotics.

Google's Bard AI Chatbot Will Soon Be Called Gemini

Feb. 5, 2024, 7:37 a.m. • Analytics Insight • (5 Minute Read)

Google's AI chatbot Bard will soon be rebranded as Gemini, highlighting its enhanced capabilities, according to a leaked changelog by developer Dylan Roussel. Gemini is based on a large language model (LLM) and is capable of complex activities such as coding, deductive reasoning, and creative collaboration. It will be integrated into other Google services like YouTube, Gmail, and Maps to improve their functionality. Additionally, Google plans to release a premium membership level called Gemini Advanced, offering access to Gemini Ultra, the most powerful version of the AI. Furthermore, an Android app for Gemini is in development to allow users to utilize Google's AI for various purposes on their phones. This rebranding and new features are expected to position Google's Gemini as a competitive AI chatbot in the market.

How to Build an Effective and Engaging AI Healthcare Chatbot

Feb. 3, 2024, 8:37 a.m. • Analytics Insight • (6 Minute Read)

In the dynamic realm of healthcare, Artificial Intelligence (AI) has emerged as a game-changer, bringing forth innovative solutions to enhance patient engagement and streamline medical services. Among these applications, healthcare chatbots stand out as virtual assistants poised to revolutionize the way patients interact with the healthcare ecosystem. These intelligent conversational agents offer a spectrum of services, from scheduling appointments and sending medication reminders to providing medical information and symptom analysis. This comprehensive guide illuminates the pivotal steps involved in crafting a potent AI healthcare chatbot, navigating the intersection of cutting-edge technology and the nuanced landscape of healthcare to offer developers a roadmap for effective and engaging digital healthcare companions. Building such a chatbot requires a combination of technical prowess, an understanding of healthcare nuances, and a user-friendly experience. The guide covers the key steps: defining the purpose and scope, complying with healthcare regulations, implementing data security and privacy measures, integrating natural language processing (NLP) and medical content, personalizing interactions through user profiles, handling appointment scheduling and reminders, performing symptom analysis and triage, supporting continuous learning and updates, and ensuring multi-channel accessibility.
The article also highlights the challenges and considerations in building AI healthcare chatbots, such as ethical considerations, potential biases in AI algorithms, and the need for ongoing maintenance and updates.
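As a rough illustration of the intent-routing core such a chatbot needs, the sketch below uses keyword matching as a stand-in for real NLP. All intent names, keywords, and replies here are invented for the demo, not taken from the guide.

```python
# Toy intent router for a healthcare chatbot (keyword matching stands in
# for real NLP; intents and replies are invented for illustration).

INTENTS = {
    "schedule": ["appointment", "book", "schedule", "reschedule"],
    "reminder": ["medication", "refill", "reminder", "dose"],
    "symptom": ["pain", "fever", "cough", "symptom"],
}

RESPONSES = {
    "schedule": "I can help you book an appointment. What day works for you?",
    "reminder": "I can set a medication reminder. Which medication?",
    "symptom": "I can run a basic symptom check, but this is not medical advice.",
    "fallback": "I'm not sure I understood. Could you rephrase?",
}

def classify(message):
    """Return the first intent whose keywords appear in the message."""
    words = message.lower()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "fallback"

def reply(message):
    return RESPONSES[classify(message)]
```

A production system would replace `classify` with an NLP model and wrap `reply` in the compliance, triage, and escalation logic the guide describes.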

AI Salary Data: How Much Top Startups Like OpenAI and Anthropic Pay

Feb. 1, 2024, 11 a.m. • Business Insider • (12 Minute Read)

A Business Insider report reveals that top AI startups like OpenAI, Anthropic, and Hugging Face are offering substantial salaries as investor interest in AI startups rises. The data, obtained for 16 top AI companies in Europe and the US, shows that industry leaders such as CoreWeave and Databricks are offering base salaries of over $200,000. AI startups have captured the attention of investors and talent alike, with some employees of well-established tech firms making the switch to smaller upstarts. The demand for AI talent is evident in significant funding rounds for young companies and a surge in hiring. While salary transparency remains a challenge, efforts in both Europe and the US are being made to make salary data more visible to employees and candidates. The article includes specific salary-range data from top AI companies to provide insight into the compensation offered.

Breakthrough could see robots with 'fingertips' as sensitive as humans

Jan. 30, 2024, 1:53 a.m. • New Atlas • (3 Minute Read)

A team of researchers, led by Parth Potdar and David Hardman from the University of Cambridge, has achieved a significant breakthrough in robotics by developing a sensor that, with the help of AI, can slide over braille text at twice the speed of human reading while maintaining an accuracy of 87.5%. The sensor, equipped with a camera, uses a machine-learning algorithm to remove motion blur from images, allowing it to detect and classify each letter of braille. This advancement paves the way for the development of robotic hands and prosthetics with fingertip sensitivity comparable to humans. The researchers hope to further scale up their technology to the size of a humanoid hand or skin, potentially revolutionizing tactile sensing systems in robotics.

AI-Powered Robot Reads Braille Twice as Fast as Humans

Jan. 29, 2024, 10:51 p.m. • Neuroscience News • (4 Minute Read)

Researchers at the University of Cambridge have developed a robotic sensor that uses artificial intelligence to read braille at a remarkable speed of 315 words per minute, with an accuracy of 87%. This speed is more than double the average reading speed of most human braille readers. The robotic sensor employs machine learning algorithms to interpret braille with high sensitivity, mimicking human-like reading behavior. While not designed as assistive technology, this breakthrough has implications for the development of sensitive robotic hands and prosthetics, challenging the engineering task of replicating human fingertip sensitivity in robotics. The team aims to scale the technology to the size of a humanoid hand or skin in the future, with hopes to broaden its applications beyond reading braille.