Categories
AI

The Human Touch: Photographer Outsmarts AI in Prestigious Competition

Ever since the advent of generative AI, the age-old battle of man versus machine has been looking decidedly one-sided. However, one photographer, intent on making the case for pictures captured with the human eye, has taken the fight to his algorithm-powered rivals – and won.

Miles Astray, a 38-year-old photographer, decided to challenge the surge of AI-generated images sweeping through conventional photography contests. In a bold move, he submitted his own human-made image, titled “Flamingone,” to the AI category in the prestigious 1839 Awards. The striking photograph, depicting an orb of pink feathers standing on knobbly legs, managed to convince a panel of judges to award him third place in the AI-generated category.

Astray was motivated to break the rules after witnessing a series of AI-generated images winning traditional photography awards. “It occurred to me that I could twist this story inside down and upside out the way only a human could and would, by submitting a real photo into an AI competition,” he explained. He deliberately chose a surreal and seemingly unbelievable image that could easily be mistaken for an AI creation.

However, once it was revealed that no AI was involved in the making of “Flamingone,” Astray was stripped of his award, which included a cash prize. The bronze medal and people’s choice award were then given to two other creators. “AI can already produce incredibly real-looking content, and if that content meets an unquestioning eye, you can easily deceive entire audiences,” Astray said.

A lone flamingo with its head tucked stands on the white sand of a beach, with gentle ocean waves in the background; its pink and orange feathers contrast strikingly against the light-colored sand.
“Flamingone” by Miles Astray, the real photograph that placed third in an AI image contest.

Astray’s act of subversion highlights the growing need for skepticism and vigilance in the face of increasingly realistic AI-generated content. “Up until now, we never had much of a reason to question the authenticity of photos, videos, and audios. This has changed overnight, and we’ll need to adapt to this. It has never been more important to be questioning. That’s an individual responsibility that will be even more crucial than tagging and flagging AI content.”

In a statement, the competition’s organizers acknowledged Astray’s powerful message but maintained that his submission was unfair. “Each category has distinct criteria that entrants’ images must meet. His submission did not meet the requirements for the AI-generated image category. We understand that was the point, but we don’t want to prevent other artists from their shot at winning in the AI category. We hope this will bring awareness (and a message of hope) to other photographers worried about AI.”

Astray’s victory echoes the actions of German artist Boris Eldagsen, who made headlines the previous year by winning a Sony World Photography Award with an AI-generated image. Eldagsen defended his entry in the “creative open” category, arguing that the creation process was complex and involved much more than simply typing in a few words and clicking ‘generate.’

For Astray, the confusion his image caused is precisely the point. “If the amount of seemingly real fakes in circulation keeps increasing, it’ll be hard to keep up with what’s real and what’s not,” he said. “I couldn’t live the life I’m living without technology, so I don’t demonize it, but I think it’s often a double-edged sword with the potential to do both good and harm.”

As the lines between human and AI-generated content continue to blur, Astray’s provocative move serves as a reminder of the unique creativity and unpredictability that only a human can bring to the art of photography.

AI on the Ballot: The Unconventional Candidacy of AI Steve in the UK Election

In an unprecedented move, an artificial intelligence candidate is set to appear on the ballot for the United Kingdom’s general election next month. “AI Steve,” represented by Sussex businessman Steve Endacott, will be running alongside human candidates to represent constituents in the Brighton Pavilion area of Brighton and Hove, a city on England’s southern coast.

“AI Steve is the AI co-pilot,” Endacott explained in an interview. “I’m the real politician going into Parliament, but I’m controlled by my co-pilot.” Endacott is the chairman of Neural Voice, a company specializing in creating personalized voice assistants for businesses using AI avatars. AI Steve is one of seven characters developed by Neural Voice to showcase their technology.

The innovative idea behind AI Steve is to utilize AI for creating a politician who is always available to engage with constituents and consider their views. People can interact with AI Steve through a dedicated website, where they can ask questions or share their opinions on Endacott’s policies. The AI, powered by a large language model, provides responses in both voice and text, drawing from a comprehensive database of party policies. If a particular issue lacks a policy, the AI will conduct internet research before engaging with the voter and encouraging them to suggest a policy.
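The flow described above — answer from the policy database, fall back to research when no policy exists, then invite the voter to weigh in — can be sketched roughly as follows. This is a speculative illustration only: the policy store, function names, and reply wording are invented for the sketch, not Neural Voice's actual implementation.

```python
# Hypothetical sketch of the constituent-query flow described above.
# All names and wording here are illustrative, not the real system's.
POLICIES = {
    "brexit": ("As a democracy, the UK voted to leave, and it's my "
               "responsibility to implement and optimize this decision."),
}

def research(topic: str) -> str:
    # Stand-in for the live internet research step the article describes.
    return f"There is no party policy on {topic} yet; here is what I found online."

def respond(topic: str) -> str:
    answer = POLICIES.get(topic.lower())
    if answer is None:
        answer = research(topic)
    # Every reply ends by inviting the voter to help shape policy.
    return f"{answer} Do you have any thoughts on how {topic} should be handled?"
```

In this toy version, `respond("Brexit")` returns the stored policy plus the follow-up question, while an uncovered topic routes through the research fallback before asking for a suggestion.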

AI Steve, accessible to the public, recently responded to a query about Brexit by saying, “As a democracy, the UK voted to leave, and it’s my responsibility to implement and optimize this decision regardless of my personal views on the matter.” It further asked, “Do you have any thoughts on how Brexit should be managed in the future?”

Endacott aims to engage thousands of what he calls “validators” – people he believes represent the average citizen, particularly Brighton locals with long daily commutes. “We’re asking them once a week to score our policies from 1 to 10. If a policy gets more than 50%, it gets passed. And that’s the official party policy,” he explained. “Every single policy, I will say that my decision is my voters’ decision. And I’m connected to my voters at any time on a weekly basis via electronic means.”
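The article doesn't spell out how the 1-to-10 scores translate into the "more than 50%" pass rule; one natural reading is that the mean score, expressed as a fraction of the 10-point maximum, must exceed one half. A minimal sketch under that assumption (the function name and empty-score handling are my own):

```python
def policy_passes(scores, pass_fraction=0.5):
    # scores: the 1-10 ratings collected from validators in a given week.
    # A policy passes when its mean score, as a fraction of the 10-point
    # scale, exceeds pass_fraction. An unrated policy cannot pass.
    if not scores:
        return False
    return (sum(scores) / len(scores)) / 10 > pass_fraction
```

Under this reading, ratings of [6, 7, 8] (mean 7, i.e. 70%) would pass, while [3, 4, 5] (mean 4, i.e. 40%) would not.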

Endacott previously ran unsuccessfully in a local election under the Conservative Party, receiving fewer than 500 votes. This time, the unique nature of his candidacy sparked significant interest on social media, leading to around 1,000 calls to the AI proxy in a single night. Based on those calls, the top issues for voters were the safety of Palestinians, trash bins, bicycle lanes, immigration, and abortion.

Endacott believes that having an AI representative enables him to respond to thousands of potential constituents daily. “I don’t have to go knock on their door, get them out of bed when they don’t want to talk to me,” he said, contrasting this with what he calls “the old form of politics.” Instead, people can choose to contact AI Steve at their convenience.

Describing himself as a “centralist” who closely aligns with, but is not a part of, the Green Party, Endacott was unable to register his own party, Smarter U.K., in time for this year’s election. He insists that his use of the AI avatar is not to promote his business interests, as he holds less than a 10% share in Neural River, the platform behind AI Steve. His primary goal is to push the government to enact changes to reduce carbon emissions, whether through winning an election or becoming a political influencer.

If elected, AI Steve would be the first AI legislator in public office. While this concept may seem outlandish to some, Endacott emphasizes that his platform is “not a joke.” He clarifies that the AI is not replacing a human politician but is a tool to bring “more humans” into politics. “It’s not AI taking over the world. It’s AI being used as a technical way of connecting to our constituents and reinventing democracy by saying, ‘You don’t just vote for somebody every four years; you actually control the vote on an ongoing basis,’” he said. “Which is very, very radical in the U.K. Probably even more radical in America.”

Apple Intelligence

At the highly anticipated WWDC 2024, Apple introduced Apple Intelligence, a groundbreaking suite of features designed to revolutionize user interaction across macOS Sequoia, iOS 18, and iPadOS 18. This new system aims to provide a deeply personalized experience, leveraging advanced AI capabilities while prioritizing user privacy.

Apple Intelligence is designed to understand and cater to individual user needs and preferences, marking a significant leap in personalized technology. By analyzing user habits and interactions, it offers tailored suggestions and actions that enhance productivity and convenience. Here are some examples of how it works:

Personalized Photo Management: Users can ask Apple Intelligence to “show all the photos of Mom, Olivia, and me from last summer.” The AI will quickly sift through the photo library and present the relevant images, saving time and effort.

Contextual Podcast Playback: If you want to “play the podcast my wife sent the other day,” Apple Intelligence will identify the specific podcast link from recent messages and start playing it, streamlining the process of finding and enjoying content.

One of the cornerstones of Apple Intelligence is its robust privacy framework. With the introduction of Private Cloud Compute, Apple ensures that even when leveraging powerful server-based models for complex tasks, user data remains secure and private. This hybrid approach combines the privacy of on-device processing with the enhanced computational power of cloud-based models, all while maintaining the highest standards of data protection.

The new intelligence system is equipped to handle a wide array of tasks seamlessly. Some practical applications include:

Automated Form Filling: When filling out forms, users can request, “Find a photo of my driver’s license and fill in the ID number.” Apple Intelligence will locate the relevant photo, extract the necessary information, and automatically input it into the form.

Enhanced Siri Queries: Siri now supports more complex queries, such as “How do I enable Dark Mode?” It will provide step-by-step instructions or directly navigate to the settings, making device management more intuitive.

Contextual Image Generation: Users can enhance their notes by circling empty spaces and letting Apple Intelligence suggest images that fit the surrounding context, making visual content creation straightforward and engaging.

Apple Intelligence is designed to integrate seamlessly with third-party tools, starting with the market leader ChatGPT from OpenAI. This integration allows Siri to tap into the expertise of ChatGPT for specialized tasks. For example:

Menu Planning: Users can ask, “What are some menu ideas for an elaborate dinner party?” Siri, leveraging ChatGPT, will provide detailed and creative menu suggestions, complete with recipes and preparation tips.

Content Creation: Need help with writing? You can request, “Draft an email to my team about the new project updates,” and Apple Intelligence will generate a polished and professional message based on your input and context.

Apple is also empowering developers to leverage these new capabilities with a range of tools and frameworks. Developers can now incorporate generative intelligence into their apps, thanks to features like on-device code completion and smart assistance for coding in Swift and SwiftUI. New APIs support the creation of complex 3D apps and volumetric content, enabling developers to build more intelligent and responsive applications.

Apple Intelligence represents a significant advancement in how users interact with their devices. By combining personalized, context-aware interactions with robust privacy protections, Apple is setting a new standard for intelligent technology. With the additional support for developers, Apple Intelligence is poised to inspire the creation of a new generation of innovative and user-friendly applications. WWDC 2024 has indeed marked a new chapter in Apple’s journey towards more intuitive and intelligent user experiences.

The Rise of AI-Generated Hate Content: A Growing Concern Globally and in Africa

A recent viral video of Adolf Hitler delivering an English speech, manipulated using artificial intelligence, has highlighted a troubling trend: the rise of AI-generated hate content. This video, which spread rapidly on social media, exemplifies how AI is being exploited to create and disseminate harmful misinformation. This issue is not just a Western problem but has significant implications for Africa as well.

Peter Smith, a journalist with the Canadian Anti-Hate Network, has observed a surge in AI-generated hate content. Chris Tenove, assistant director at the University of British Columbia’s Centre for the Study of Democratic Institutions, noted that hate groups have historically been quick to adopt new technologies. This adaptability is seen in Africa too, where generative AI is increasingly used to spread divisive and harmful content.

A UN advisory body has expressed deep concerns about the potential for generative AI to amplify antisemitic, Islamophobic, racist, and xenophobic content globally. In Africa, this risk extends to the spread of ethnic hatred, xenophobia, and misinformation, which can inflame existing tensions and conflict.

AI-generated hate content has tangible effects beyond the digital realm. For instance, AI-created propaganda has been used to incite violence in ethnically diverse regions. Richard Robertson from B’nai Brith Canada highlighted a disturbing increase in AI-generated antisemitic images and videos. In Africa, similar tools have been used to create inflammatory content that targets specific ethnic groups or nationalities, fueling discord and violence.

Deepfakes, realistic AI-generated videos of public figures, have been used to spread misinformation. In Africa, deepfakes have falsely attributed statements and actions to political leaders, exacerbating tensions and spreading false information that can destabilize communities and governments.

Experts like Jimmy Lin from the University of Waterloo stress the importance of safeguards in AI systems. However, AI models can be manipulated or “jailbroken” to produce harmful content. In Africa, the lack of robust regulatory frameworks for AI technology makes it even more critical to implement effective safeguards.

Countries worldwide are beginning to address these issues through legislation. In Canada, Bill C-63 seeks to define and combat content that incites hatred, including AI-generated content. Similarly, African governments need to develop and implement regulations to control the misuse of AI. These could include laws to identify AI-generated content and assess risks to ensure the safe operation of AI systems.

The rise of AI-generated hate content is a pressing global issue with significant implications for Africa. Coordinated efforts from governments, technology companies, and researchers are essential to prevent the misuse of AI and protect societies from the harmful impacts of AI-generated misinformation and hate. As AI technology continues to evolve, it is crucial to establish safeguards and regulations to mitigate these risks and promote its responsible use for the benefit of all.

Scarlett Johansson’s Voice Battle with OpenAI: A Call for Transparency and Protection

Last September, Scarlett Johansson received an intriguing offer from Sam Altman, the CEO of OpenAI. He wanted her to lend her voice to the new ChatGPT 4.0 system. Altman believed that Johansson’s voice could bridge the gap between tech companies and creatives, making consumers feel more at ease with the significant changes AI brings. He specifically mentioned that her voice would be comforting to people.

After careful consideration and for personal reasons, Johansson declined the offer. Nine months later, her friends, family, and the general public began to notice that the latest system, named “Sky,” sounded remarkably like her. When Johansson heard the demo, she was shocked and angered. The resemblance was so striking that even her closest friends and various news outlets couldn’t tell the difference. Altman seemed to acknowledge the similarity by tweeting a single word: “her”—a reference to the film where Johansson voiced Samantha, an AI system that forms an intimate relationship with a human.

Two days before the ChatGPT 4.0 demo was released, Altman contacted Johansson’s agent, asking her to reconsider. However, the system was already live before they could connect. As a result of these actions, Johansson was forced to hire legal counsel. Her lawyers sent two letters to Altman and OpenAI, outlining their concerns and requesting a detailed explanation of how the “Sky” voice was created. Consequently, OpenAI reluctantly agreed to take down the “Sky” voice.

In an era where we are increasingly grappling with the challenges of deepfakes and the protection of personal likenesses, identities, and creative work, Johansson believes these issues demand absolute clarity. She emphasizes the need for transparency and calls for appropriate legislation to ensure individual rights are protected.

Johansson’s case underscores a broader issue facing the entertainment industry and the public at large. As AI technology continues to advance, the line between human and machine becomes increasingly blurred. This incident highlights the urgent need for regulations that protect individuals from unauthorized use of their voices and likenesses. 

Johansson’s battle with OpenAI serves as a crucial reminder of the potential ethical pitfalls in the rapidly evolving world of artificial intelligence. The outcome of her case could set a significant precedent for how AI technologies are regulated and how personal rights are upheld in the digital age.

The Real Threat of AI in the 2024 Election: A Federal Assessment

As artificial intelligence (AI) advances, so do the challenges it poses, particularly in the context of the upcoming 2024 election. A new assessment by the Department of Homeland Security (DHS), obtained by ABC News, highlights the potential misuse of AI technologies, which could undermine the integrity of the electoral process.

According to the DHS report, generative AI tools offer opportunities for both domestic and foreign actors to interfere with the election. These tools can be used to aggravate emerging events, disrupt election processes, or attack infrastructure. The report warns that AI-generated content, such as deepfakes and manipulated media, can be exploited to influence and sow discord among voters.

John Cohen, former intelligence chief at DHS, emphasized the immediacy of the threat, stating that both foreign and domestic actors are increasingly using AI to conduct illegal operations. The DHS bulletin details past incidents of cyber-enabled attacks on elections, including voice spoofing and disinformation campaigns, and cautions that AI’s innovative capabilities could further these threats in the future.

For example, a robocall using AI-generated audio of President Joe Biden circulated before the New Hampshire primary, misleading voters. The DHS analysis underscored the challenge of countering such content swiftly, as its impact can spread rapidly across online platforms.

Elizabeth Neumann, a former DHS assistant secretary, noted the difficulty voters will face in discerning truth from AI-manipulated content, which could significantly impact their perceptions and decisions. The intertwining of misinformation, disinformation, and rapidly evolving AI technologies presents a complex threat landscape, exacerbated by current socio-political tensions and ongoing conflicts abroad.

Director of National Intelligence Avril Haines informed the Senate that the threat from AI-enabled foreign influence actors is growing. She emphasized that while the U.S. government is better prepared than ever to address these challenges, the sophistication of AI tools requires constant vigilance and adaptation.

Experts urge that public education and preparation are crucial. State and local officials must have robust plans to counteract and correct AI-generated misinformation quickly. As Cohen pointed out, using outdated strategies to combat modern threats is ineffective, likening it to bringing a knife to a gunfight.

Sony Music’s Stand Against Unauthorized AI Training

Sony Music Group has taken a bold stance against the unauthorized use of its music to train AI systems. In a move that underscores the growing tension between intellectual property rights and technological advancement, Sony has warned over 700 tech companies and music streaming services against using its content without explicit permission.

The proliferation of AI technologies has led to increasing concerns about the use of copyrighted material for training purposes. Sony’s recent actions highlight the need for clearer guidelines and protections for content creators. By sending letters to numerous companies, Sony is asserting its rights and seeking to ensure that its music is not exploited without authorization.

This issue is part of a broader debate about the ethical and legal implications of AI training. As AI systems become more advanced and capable of generating high-quality content, the question of fair use and compensation for original creators becomes increasingly pressing.

Sony’s stance could set a precedent for other content creators and rights holders, encouraging them to take similar actions to protect their intellectual property. This could lead to more stringent regulations and agreements between tech companies and content owners, fostering a more equitable ecosystem for AI development and creative expression.

Google’s new AI-infused search

Google made waves on Tuesday with its announcement to integrate its Gemini AI model directly into its search engine, promising to provide users with instant answers to their queries without the need to click on search results. While this may seem convenient for users, it spells trouble for news publishers already grappling with declining traffic and revenue.

The revamped search experience threatens to further diminish audience engagement with publishers’ content, as users may opt to rely solely on Google’s AI-generated summaries instead of visiting the original sources. This shift could exacerbate the financial strain on news organizations, which rely heavily on web traffic and advertising revenue.

Danielle Coffey, CEO of the News/Media Alliance, minced no words in expressing the potential devastation this change could inflict on the industry. With Google essentially competing directly with publishers by using their content to power its AI-driven search, concerns about market dominance and revenue loss loom large.

In the face of this impending challenge, newsrooms find themselves scrambling for solutions. Some have cautiously embraced partnerships with tech giants like OpenAI, licensing their content archives in hopes of staying afloat. Others, like The New York Times, have opted for a more confrontational approach, resorting to legal action against AI developers.

The strained relationship between publishers and Big Tech has been a long time coming. Mark Zuckerberg’s pivot away from news content on Facebook and Google’s contentious dealings with publishers only underscore the widening rift between the two camps. Google’s reassurances about driving more traffic to publishers through AI-generated summaries offer little comfort, as skepticism abounds regarding its true impact on publishers’ bottom lines.

Amid such uncertainty, publishers must protect themselves from exploitation. Scrutinizing agreements with tech companies, diversifying revenue streams beyond traditional advertising, and investing in proprietary technologies to safeguard content are just a few strategies they may consider to mitigate the risks of rapid technological change.

In essence, while Google’s AI-powered search may promise convenience for users, its implications for the news industry underscore the urgent need for publishers to adapt and fortify themselves against the shifting tides of digital disruption.

OpenAI Unveils GPT-4o: The Next Leap in Artificial Intelligence

In a much-anticipated announcement on Monday, OpenAI revealed the launch of its groundbreaking artificial intelligence model, GPT-4o, alongside a slew of updates aimed at enhancing user experience. The event, hosted by Chief Technology Officer Mira Murati, showcased the innovative strides the company has made in the realm of AI, promising to revolutionize human-machine interaction.

GPT-4o marks a significant advancement, offering the swift and precise capabilities of its predecessor, GPT-4, to a wider audience, including free users. Murati emphasized the transformative potential of the new model, heralding it as a pivotal shift in the AI landscape.

“We’re witnessing the dawn of a new era in human-computer interaction,” Murati remarked. “GPT-4o is poised to redefine that dynamic.”

Among the key updates unveiled by OpenAI are enhancements to ChatGPT’s multilingual proficiency and its newfound ability to analyze images, audio, and text documents. The company assured a phased rollout of these features to ensure their responsible use.

The event featured live demonstrations showcasing the enhanced voice capabilities of the model. OpenAI researchers engaged in conversations with an AI voice model, eliciting responses that ranged from storytelling to solving math problems. Notably, the AI was adept at discerning emotions, adding a personalized touch to interactions.

While speculation had swirled regarding OpenAI’s foray into search technology, CEO Sam Altman clarified that Monday’s announcement did not pertain to a search engine project. However, Altman hinted at upcoming endeavors that promise to captivate audiences.

Despite a mostly seamless demonstration, concerns lingered about the potential misuse of OpenAI’s new features, particularly in areas like facial recognition and audio generation. While the company assured preventive measures, specifics on safeguards remained scant.

In parallel developments, OpenAI is reportedly on the cusp of integrating its generative AI capabilities into Apple’s iPhone operating system. This strategic partnership underscores Apple’s push towards AI innovation and positions OpenAI as a key player in shaping the future of mobile technology.

However, amidst its expansion and collaboration efforts, OpenAI faces legal challenges from media outlets alleging copyright infringements. The lawsuits underscore the ethical complexities surrounding AI development and highlight the need for robust frameworks to address intellectual property concerns.

As OpenAI propels the boundaries of AI technology, its endeavors underscore the profound impact of responsible innovation on society. With GPT-4o at the helm, the journey towards human-AI symbiosis takes another momentous stride forward.

The African AI Renaissance: A New Frontier of Innovation and Collaboration

Johannesburg – As technology advances by the day, the future of artificial intelligence (AI) in Africa sits at the intersection of curiosity and strategic calculation. Platforms such as ChatGPT open the door to a world of untapped opportunities with the potential to revolutionize healthcare, agriculture, and education, among other sectors.

Experts broadly agree that AI is the latest battleground in the ever-shifting competition between the U.S. and China on the continent. According to Chinasa T. Okolo, a fellow at the Center for Technology Innovation at The Brookings Institution, African countries must make sizable investments in computing infrastructure to advance their AI research and innovation. In her view, partnerships with powerhouses such as the U.S. and China are crucial to implementing such initiatives.

However, unlike regions where data saturation is the problem, Africa stands out for its abundance of untapped data. Okolo notes that AI firms will shift their focus to Africa in search of data repositories that can feed the development of services and systems tailored specifically to the continent’s needs.

Among African nations, South Africa is emerging as a leader in AI. At a recent government summit, Mondli Gungubele, the minister of communications and digital technologies, expressed the country’s readiness to enter the era of generative AI and to ensure that South Africa does not lag behind other nations technologically.

This commitment is reflected in efforts like the AIISA, which was created to propel AI adoption across industries. Hitekani Magwedze, spokesman for the ministry of communications and digital technologies, highlights how such hubs can drive innovation and tackle problems like unemployment and inequality.

Partnerships with global players are also aiding the development of South Africa’s AI ecosystem. The launch of an AI Career Tech Center together with the U.S. tech giant Intel shows the country’s dedication to harnessing regional expertise for local benefit.

As South Africa and other African countries begin their venture into AI, they find themselves in the middle of the competition between the United States and China. The AI space in Africa has attracted the attention of both superpowers, which have expressed interest in investing and partnering to shape the continent’s technological development.

For the U.S., Prosper Africa illustrates its plan to cement beneficial bilateral ties with African countries. Lisa Walker, managing director for Africa operations at Prosper Africa, emphasizes the initiative’s aim to create alliances between American companies and their African counterparts in the tech arena.

Similarly, China’s Belt and Road Initiative has paved the way for significant investments in Africa’s internet infrastructure and connectivity. The China-Africa Internet Development and Cooperation Forum is a vivid demonstration of China’s dedication to advancing science and technology on the African continent.

With these competing interests in mind, Okolo underlines collaboration as the cornerstone of AI development in Africa. While the U.S. and China compete for the top spot in the global artificial intelligence race, African nations have a chance to prosper through partnerships that advocate for mutual growth and shared prosperity.

As Africa navigates the complexities of U.S.-China dynamics, one thing remains clear: the continent’s AI renaissance has proved it capable of overcoming challenges, driving innovation, and shaping the future of technology globally.