Categories
AI

AI, Copyright, and Content Licensing: An African Perspective

As artificial intelligence (AI) reshapes the global digital landscape, African content creators and publishers face both challenges and opportunities. The rise of AI-generated content has sparked debates about copyright infringement and fair compensation, issues that are particularly pertinent in Africa where many countries are still developing robust intellectual property frameworks.

Recent legal actions, such as The New York Times’ lawsuit against OpenAI, have ignited discussions across Africa about protecting intellectual property in the AI era. For African publishers, this situation presents a dual challenge: safeguarding their content from unauthorized use by AI systems while also exploring potential new revenue streams through licensing deals with AI companies.

“AI presents a double-edged sword for African creators. While it offers unprecedented opportunities for content creation and distribution, it also poses significant challenges to our intellectual property rights. We must proactively shape AI policies that protect our cultural heritage while fostering innovation.”

Olubayo Adekanmbi

The legal landscape surrounding AI and copyright is still evolving globally, with different countries taking varied approaches. This flux presents an opportunity for African nations to develop frameworks that balance innovation with the protection of creators’ rights. Some countries might follow Japan’s more permissive approach, allowing the use of copyrighted works in AI training under specific circumstances, while others might opt for stricter regulations to protect local content creators.

To address the power imbalance in negotiations with large AI companies, there’s growing interest among African publishers in forming consortiums or collective bargaining units. This could potentially give African content creators more leverage in negotiations, ensuring fair compensation for the use of their works.

AI also offers potential benefits for African publishers and content creators. AI-powered tools can streamline content licensing processes, assist in detecting copyright infringement, and help in preserving and promoting African languages and cultural heritage. However, concerns remain about the potential erosion of traditional storytelling methods and the authenticity of cultural expressions.

As Africa navigates this complex landscape, there’s potential to develop approaches that not only protect intellectual property but also leverage AI to amplify African voices, preserve cultural heritage, and drive economic growth in the creative industries. The future of AI, copyright, and content licensing in Africa will likely be shaped by ongoing global legal battles, technological advancements, and local innovations.

For content creators, publishers, and policymakers across Africa, actively participating in shaping the future of AI and copyright will be crucial. By engaging in these discussions and developing strategies that balance innovation with protection, Africa can work towards a future where AI enhances rather than threatens its rich cultural and intellectual heritage.


WhatsApp’s New Voice Chat Mode for Meta AI

WhatsApp, the popular messaging platform, continues to push the boundaries of user experience with its latest innovation: a voice chat mode for Meta AI. Currently in the testing phase, this exciting feature was uncovered in the WhatsApp beta for Android 2.24.18.18, promising to revolutionize how users interact with artificial intelligence.


The voice chat mode is designed to allow users to communicate with Meta AI using voice commands, creating a more natural and efficient interaction. Instead of typing out messages, users will be able to speak directly to Meta AI, which will respond using a voice chosen by the user. This personalization option adds a unique touch, allowing users to select a voice that resonates with their preferences.
Once officially released, users will have the ability to manually enable the voice chat mode. A convenient floating action button within the chat list will serve as a quick shortcut to activate this feature. When enabled, Meta AI will continuously listen to the user’s commands, facilitating hands-free interaction. This could prove particularly useful in situations where typing is impractical, such as while driving or cooking.


WhatsApp has taken privacy concerns into account, implementing safeguards to protect users. The voice chat mode can be stopped at any time by either leaving the chat or switching back to text mode. Furthermore, a visual indicator in the status bar will clearly show when Meta AI is actively listening, ensuring users maintain control over their interactions.


The introduction of voice chat mode represents a significant leap forward in making Meta AI more accessible and responsive. Verbal communication is often quicker than typing, and this feature aims to expedite interactions while also making them feel more natural. Whether users are seeking information, setting reminders, or engaging in casual conversation, the voice chat mode has the potential to make these interactions smoother and more intuitive.


It’s worth noting that this development follows a similar trend in AI-powered communication tools. OpenAI’s ChatGPT, for instance, introduced a voice chat feature in September 2023, allowing users to have spoken conversations with the AI. This move by WhatsApp demonstrates how major tech companies are recognizing the value of voice interaction in enhancing user experience with AI assistants.


As WhatsApp continues to refine and develop this feature, users can look forward to a new era of AI interaction that prioritizes speed, convenience, and personalization. The voice chat mode for Meta AI represents another step towards seamless integration of AI in our daily communication, promising to make our digital conversations more efficient and engaging than ever before. Keep an eye out for future updates as this exciting feature rolls out in upcoming versions of WhatsApp.


AI Breakthrough: Predicting Autism in Toddlers With 80% Accuracy

Researchers at the Karolinska Institutet in Stockholm, Sweden, have developed an AI tool that can predict Autism Spectrum Disorder (ASD) in toddlers under 24 months old with nearly 80% accuracy. Talk about a potential lifesaver!

Early detection of autism is crucial. The earlier we can intervene, the better the chances for a child to reach their full potential. Currently, the average age of diagnosis in the U.S. is between 4.7 and 5.2 years, depending on household income. That’s a lot of precious time lost!

Let’s look at some numbers:

  • Globally, about 1 in 100 children are diagnosed with ASD.
  • In the U.S., it’s even higher: 1 in 36 children and 1 in 45 adults have autism.

The researchers used a machine-learning method called eXtreme Gradient Boosting (XGBoost) to create their model, which they’ve dubbed AutMedAI. They trained it on data from over 30,000 participants and validated it using nearly 15,000 more.

What makes this AI special? AutMedAI uses only basic medical and background information to make its predictions. No invasive tests, no complicated procedures – just simple data that’s easy to collect.
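The study’s actual features and data aren’t reproduced here, but the modeling approach — a gradient-boosted classifier over a handful of simple background variables — can be sketched. Everything below is illustrative: the feature names, synthetic data, and labels are invented, and scikit-learn’s GradientBoostingClassifier stands in for XGBoost to keep the sketch simple.

```python
# Illustrative sketch only: AutMedAI's real features and data are not public here.
# A gradient-boosted classifier trained on simple background variables,
# with scikit-learn's implementation standing in for XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical "basic medical and background" features (all made up).
X = np.column_stack([
    rng.integers(12, 25, n),   # age at first screening, months
    rng.integers(0, 2, n),     # sex (0/1)
    rng.normal(40, 2, n),      # gestational age at birth, weeks
    rng.integers(6, 30, n),    # age at a developmental milestone, weeks
])
# Synthetic label tied to two of the features, purely for demonstration.
y = ((X[:, 3] > 20) & (X[:, 2] < 39)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

The appeal of this family of models for screening is exactly what the article describes: they work well on small sets of tabular, easy-to-collect variables, with no imaging or invasive tests required.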

To build this AI, researchers tapped into the mother lode of autism research data:

  1. SPARK: The largest autism research study globally, with records on over 100,000 people with autism and 175,000 family members.
  2. Simons Simplex Collection (SSC): Another robust database from the Simons Foundation Autism Research Initiative.

While this AI tool shows incredible promise, it’s important to remember that it’s not meant to replace human expertise. Instead, it could serve as an early screening tool, helping identify children who might benefit from further evaluation and early intervention.

This breakthrough could be a total game-changer in the world of autism diagnosis and treatment. By catching signs of ASD earlier, we might be able to provide support and interventions at a crucial stage of brain development, potentially improving outcomes for countless children.

So, keep your eyes peeled for more news on this front. The future of autism detection is looking brighter, thanks to the power of AI!

Remember, if you have concerns about your child’s development, always consult with a healthcare professional. This AI tool is exciting, but it’s not a substitute for expert medical advice.


OpenAI Unveils GPT-4o Mini: A Cost-Effective AI Model to Replace GPT-3.5

OpenAI has launched GPT-4o mini, a powerful and affordable AI model designed to make artificial intelligence more accessible. Priced at just 15 cents per million input tokens and 60 cents per million output tokens, GPT-4o mini offers a cost-effective alternative to models like GPT-3.5 Turbo.
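At those rates, per-request costs are easy to estimate. A minimal sketch (the token counts in the example are made up):

```python
# Back-of-envelope cost at the quoted GPT-4o mini rates:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.60

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in US dollars."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a chatbot turn with 2,000 input tokens and 500 output tokens:
print(f"${cost_usd(2_000, 500):.6f}")  # $0.000600
```

At a fraction of a cent per turn, high-volume use cases like support chatbots become economical in a way they weren’t at earlier model prices.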

According to OpenAI’s blog, GPT-4o mini excels in performance, scoring 82% on the MMLU benchmark and surpassing GPT-4 on the LMSYS leaderboard for chat preferences. It handles a variety of tasks at low cost and with fast response times, making it ideal for applications requiring multiple model calls, large volumes of context, or real-time text interactions such as customer support chatbots.

The model currently supports text and vision inputs, with image, video, and audio inputs and outputs planned for the future. GPT-4o mini has a context window of 128K tokens and supports up to 16K output tokens per request, with a knowledge cutoff of October 2023. Its improved tokenizer also makes handling non-English text more cost-effective.

GPT-4o mini also shines in academic and practical applications, outperforming other small models in reasoning, math, and coding tasks. For example, it scored 87.0% on the MGSM mathematical-reasoning benchmark and 87.2% on the HumanEval coding benchmark.

Safety is a key feature of GPT-4o mini, with robust measures such as filtering harmful content during pre-training and using reinforcement learning with human feedback (RLHF) to align the model’s behavior with safety policies. Over 70 external experts have tested the model to identify and mitigate potential risks, ensuring its reliability and safety.

GPT-4o mini is now available through the Assistants API, Chat Completions API, and Batch API. It will be accessible to Free, Plus, and Team users on ChatGPT starting today, with enterprise users gaining access next week. OpenAI also plans to introduce fine-tuning capabilities for GPT-4o mini soon.
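For developers, a minimal Chat Completions request is straightforward. The sketch below only builds the request payload (the system and user messages are invented examples); actually sending it requires the openai SDK and an API key, shown in the comment rather than executed here.

```python
# Minimal Chat Completions request shape for gpt-4o-mini.
# Sending it requires the openai SDK and an OPENAI_API_KEY;
# here we only construct the payload.
request = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Where is my order #1234?"},
    ],
    "max_tokens": 200,
}

# With the SDK installed and a key set, the call would be:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(**request)
print(request["model"])
```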

OpenAI continues to aim at reducing costs while enhancing AI capabilities. The introduction of GPT-4o mini is a step toward integrating powerful AI into everyday applications, making advanced intelligence more affordable and accessible for developers and users alike.


Rise of Chinese AI Education Apps in the US

Chinese AI education apps like Question.AI and Gauth have made significant strides in the competitive US market, signaling a shift from their origins in China’s bustling tech landscape. Originally developed to cater to China’s rigorous educational demands, these apps are now reshaping how American students approach their homework through advanced AI algorithms.

Question.AI, owned by Beijing’s Zuoyebang, and ByteDance’s Gauth have gained rapid popularity in the US, particularly within the education sector. By leveraging generative AI, these apps allow students to snap photos of homework problems and receive detailed solutions with step-by-step explanations. Launched in 2023 and 2020 respectively, they offer essential features for free, with additional paid options. Gauth has become the second most popular educational app globally, while Question.AI holds a strong position as well.

The appeal lies in their seamless integration of technology and education, resonating well in a digital learning environment increasingly favored by students and educators alike. This transition to the US market underscores a quest for new revenue streams and a strategic move to establish global technological leadership.

Despite their success, these apps face challenges such as navigating data privacy concerns and integrating with different educational philosophies. Compliance with stringent US data regulations and adapting to American educational values, which prioritize creativity and critical thinking, will be crucial for sustained success.

In summary, the achievements of Question.AI and Gauth in the US underscore the technological prowess developed amidst fierce competition in China’s AI sector. As these apps continue to evolve and adapt, their impact on the future of education appears poised to expand further.


Meta Launches “AI Studio” for Instagram Creators to Build AI Avatars

Meta has been working on AI Studio for a while, and today marks the launch of the platform’s first stage. This new feature will allow creators on Instagram to build AI versions of themselves that can interact with fans via direct messages.

Currently in beta and limited testing with selected creators, Meta’s custom AI bots will be able to answer questions in the style of the account holder. These AI bots will be identified with a stars icon on the message tab, indicating that the responses are generated by a bot. Additionally, there are disclaimer notes in the chat to clarify that users are engaging with an AI bot, not the actual person or account holder.


Meta CEO Mark Zuckerberg announced the launch during an interview with YouTuber Kane Sutter, where he also touched on Meta’s broader AI plans. Zuckerberg’s comments were generally broad-reaching, hinting at future AI updates like improved translation and hologram-like projections of real people in virtual reality. However, the main announcement was the live testing of AI Studio with selected Instagram creators in the U.S.


According to Zuckerberg, AI Studio will enable creators to build AI versions of themselves to interact with their community. The feature, recently uncovered by app researcher Alessandro Paluzzi, will be integrated into Instagram and provide various prompts and tools to generate these AI bot variations. The primary use case, as described by Zuckerberg, is to answer fact-based queries. The more challenging aspect will be generating creative responses that replicate the style of the creator. Creators will have the freedom to train their bots on different aspects of their social media presence, allowing them to create more life-like replicas of themselves.


Despite this, Meta is cautious about not misleading users into thinking they are interacting with the real person. Zuckerberg mentioned that they are still refining the AI disclosure elements, but there are several signifiers in place to indicate that the responses are from a bot.

Sutter raised a critical point during his interview with Zuckerberg, expressing concerns about eroding the genuine connection between creators and their audiences. Zuckerberg downplayed these concerns, but the issue remains: does it add value to have AI bots that simulate real humans, especially on platforms designed for authentic interaction?

This development seems to deviate from the core purpose of social media, potentially turning it into a space where bots interact with bots, sidelining real human engagement in favor of automation. Users have long complained about bots and inauthentic interactions on social apps. So why is Meta now promoting this shift, effectively replacing humans with bots?

Zuckerberg argues that the advanced technology makes these AI interactions more convincing and valuable. However, it still feels like an odd direction for Meta to take. Is there truly a demand for AI versions of celebrities and influencers? Will this enhance the user experience?


Zuckerberg also mentioned that, eventually, users will be able to create their own user-generated content (UGC) AI characters, capable of interacting with others in various styles. Yet, the question remains: is there actual demand for this? Will it add meaningful value?

Meta’s initial experiments with celebrity-influenced bots didn’t seem to gain much traction, but the company is pushing forward, aiming to increase bot engagement on the platform. Only time will tell if this new direction will resonate with users or if it will further blur the lines of genuine social interaction.


OpenAI CTO’s Comments on AI and Creativity Spark Outrage in Artistic Community

With each passing week, it’s becoming increasingly clear that executives from major AI development companies can hardly give an interview, friendly or not, without enraging the artistic community. They seem unable to resist admitting that their AI innovations will replace human jobs, while performing mental gymnastics to justify why this displacement is beneficial.

Take, for example, a recent interview with Mira Murati, Chief Technology Officer at OpenAI, conducted by Dartmouth Engineering. Previously criticized for dodging questions about the training methods of OpenAI’s text-to-video AI, Sora, Murati once again stirred controversy. During the discussion about AI’s impact on human creativity, she admitted that AI’s expansion would lead to job losses, provoking widespread anger among digital and real-life artists.

Murati added fuel to the fire by suggesting that jobs taken over by AI “shouldn’t have been here in the first place.” This statement implied that AI mainly affects beginners and low-skilled creators, an interpretation that many found insulting. The backlash was swift, with viewers disliking the original video en masse and flooding social media with angry comments.

In an attempt to mitigate the fallout, Murati later took to Twitter to clarify her remarks. She argued that there is a distinction “between temporary creative tasks and those that add lasting meaning and value to society,” seemingly suggesting that only high-quality, museum-worthy art deserves protection from AI’s encroachment. Her statement read: “Just like spreadsheets changed things for accountants and bookkeepers, AI tools can handle tasks like writing online ads or creating generic images and templates. But it’s important to recognize the difference between temporary creative tasks and those that add lasting meaning and value to society. With AI tools taking on more repetitive or mechanistic aspects of the creative process, human creators can focus on higher-level creative thinking and choices. This lets artists stay in control of their vision and focus their energy on the most important parts of their work.”

Despite her efforts, Murati’s clarification did little to quell the anger. The lengthy statement, filled with buzzwords, seemed to bury the core message, further frustrating the artistic community.

What do you think about all this? How do you feel about Murati’s and OpenAI’s stance on human artists? Do you have a job that “shouldn’t have been here in the first place”? Share your thoughts in the comments!


Apple’s Battle with the EU Heats Up Over Digital Markets Act

Apple’s conflict with the European Union is intensifying. On Friday, Apple confirmed it wouldn’t be releasing several new features to EU users due to “regulatory uncertainties brought about by the Digital Markets Act (DMA).” In a statement, Apple said:

“We do not believe that we will be able to roll out three of these features – iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence – to our EU users this year.

“Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.”

This statement can be interpreted in different ways. If you believe the EU’s regulation is overly restrictive, protectionist, and unclear, then Apple’s cautious approach makes sense. By limiting product launches to uncontroversial features, Apple aims to avoid potential multibillion-euro fines.

Conversely, if you see Apple’s reaction as one of defiant compliance, outraged by an authority it perceives as less legitimate than its own, then this move could be viewed as an attempt to deter other governments from adopting similar regulations.

The EU, however, is not backing down. On Monday, it announced plans to sue Apple for noncompliance:

“In preliminary findings, against which Apple can appeal, the European Commission said it believed its rules of engagement did not comply with the Digital Markets Act (DMA) ‘as they prevent app developers from freely steering consumers to alternative channels for offers and content’.

Additionally, the commission has opened a new non-compliance procedure against Apple over concerns that its new contract terms for third-party app developers also fall short of the DMA’s requirements.”

For the EU, the principle is straightforward: if a European customer wants to do business with a European company, no external entity should hinder that market’s operation. This aligns closely with the founding ideals of the bloc.

However, this isn’t exactly what the DMA states, which is where the conflict arises. Apple aims to adhere strictly to the law while maintaining control over its platforms, whereas the EU seeks to interpret the law to facilitate smooth commerce. The outcome of this legal battle remains uncertain, but it’s clear that the appeals process is just beginning.


Safe Superintelligence Inc. (SSI)

Ilya Sutskever, a prominent figure in the AI community and co-founder of OpenAI, has embarked on a new venture by founding Safe Superintelligence Inc. (SSI). Sutskever, known for his contributions to the development of AI and deep learning, particularly through his co-authorship of the groundbreaking AlexNet paper in 2012, is now directing his efforts towards creating superintelligent AI that prioritizes safety and reliability.

SSI’s core mission is to tackle what it identifies as the “most important technical problem of our time”: the development of superintelligent AI that is both safe and reliable. This goal underscores the increasing concern within the AI research community about the potential risks associated with highly advanced AI systems. Sutskever and his team are focused on ensuring that as AI systems become more capable, they remain aligned with human values and safety protocols to prevent unintended and potentially catastrophic outcomes.

To achieve its ambitious objectives, SSI plans to assemble a “lean, cracked team of the world’s best engineers and researchers.” This approach emphasizes a streamlined and highly skilled team capable of making significant strides in the complex field of AI safety. By attracting top talent, SSI aims to innovate and develop advanced methodologies for ensuring that superintelligent AI systems are not only powerful but also controllable and aligned with ethical guidelines.

Sutskever’s decision to form SSI comes after a period of significant activity and controversy at OpenAI. As a member of the OpenAI board, he was involved in the temporary removal of Sam Altman as CEO in November 2023. Reports suggest that Sutskever had previously voiced concerns about the pace of commercialization under Altman’s leadership and the associated safety risks. These concerns highlight a broader debate within the AI community about the balance between rapid technological advancement and the necessity of rigorous safety measures.

The establishment of SSI appears to be Sutskever’s proactive response to these concerns. By founding a company dedicated to the safe development of superintelligent AI, Sutskever is positioning himself and his team at the forefront of the effort to mitigate risks associated with advanced AI. This move reflects his commitment to addressing the potential dangers of AI head-on, ensuring that the powerful tools created by AI research are beneficial and not harmful to society.

SSI’s formation marks a significant development in the AI landscape. With a figure of Sutskever’s stature focusing on AI safety, it brings increased attention and credibility to the field. His background, including his seminal work on AlexNet, provides a strong foundation for tackling the complex challenges associated with superintelligent AI. Furthermore, SSI’s emphasis on assembling a top-tier team suggests a focused and high-impact approach to research and development in AI safety.


Revolutionizing Customer Experience: The Next Generation of Contact Centers

Contact Center as a Service (CCaaS) platforms were introduced with high expectations of rapid AI innovation and new engagement channels. Yet, a decade after their debut, the customer service experience remains largely unchanged. Customers still endure repetitive questions, long wait times, and the frustrations of traditional Interactive Voice Response (IVR) systems.

Thankfully, some forward-thinking contact centers are challenging the outdated “press one for this, press two for that” model. As a McKinsey & Company report highlights, “IVR systems are evolving from dumb menu systems into smart ‘voicebots’.”

Smart voicebots are transforming the way contact centers handle customer inquiries. These AI-driven bots can understand customer intent through natural language processing, routing them to the most appropriate agent—whether live or virtual—via their preferred communication channel.
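As a toy illustration of that routing step, the sketch below maps an utterance to an intent and then to a queue. Keyword matching stands in for real natural language processing, and the intents, keywords, and queue names are all invented:

```python
# Toy sketch of intent-based routing: classify a caller's utterance,
# then pick a queue. Keyword matching stands in for a real NLU model.
INTENT_KEYWORDS = {
    "billing": ("invoice", "refund", "charge", "bill"),
    "tech_support": ("error", "broken", "not working", "crash"),
    "sales": ("upgrade", "buy", "pricing", "plan"),
}
ROUTES = {
    "billing": "billing_agents",
    "tech_support": "tier1_support",
    "sales": "sales_team",
}

def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "general"

def route(utterance: str) -> str:
    # Unknown intents fall through to a general queue.
    return ROUTES.get(classify_intent(utterance), "general_queue")

print(route("My invoice shows a double charge"))   # billing_agents
print(route("The app keeps crashing on startup"))  # tier1_support
```

A production voicebot would replace the keyword table with an intent-classification model and would also consider the customer’s preferred channel, as the article describes.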

Innovative contact centers are taking this technology a step further. By engaging with customers while they wait, voicebots can gather important information about their queries. This pre-emptive data collection allows agents to begin conversations with a clear understanding of the customer’s needs, reducing the interaction time by up to 45 seconds and improving overall satisfaction for both customers and agents.

Modern contact centers are not just enhancing the customer experience within existing channels but are rethinking the entire interaction process. Gurpreet Singh Kohli, SVP and Global Head of Telecom & Networks at HCLTech, advocates for proactive escalation based on customer needs. For instance, if a phone call would better resolve a customer’s issue than a live chat, the system should facilitate a seamless transition to a call.

This approach may seem counterintuitive since channels like live chat are typically cheaper. However, by analyzing key metrics such as average handling time (AHT), first contact resolution (FCR), and quality scores, contact centers can determine when a prompt escalation will actually reduce costs and improve service.
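The trade-off can be made concrete with a back-of-envelope model. The sketch below treats FCR as the probability an issue is resolved in one contact, so the expected number of contacts is 1/FCR; every figure in it is invented for illustration:

```python
# Illustrative only: compares expected cost per *resolved* issue across
# channels. Cost model: expected contacts = 1 / FCR (failed contacts repeat),
# cost per contact = AHT (minutes) * per-minute agent cost. Figures made up.
def cost_per_resolution(aht_min: float, fcr: float, rate_per_min: float) -> float:
    expected_contacts = 1 / fcr
    return expected_contacts * aht_min * rate_per_min

chat = cost_per_resolution(aht_min=12.0, fcr=0.55, rate_per_min=0.50)
call = cost_per_resolution(aht_min=7.0, fcr=0.85, rate_per_min=0.80)
print(f"chat: ${chat:.2f}  call: ${call:.2f}")
```

With these (hypothetical) numbers, the call wins despite its higher per-minute rate: its better first-contact resolution avoids the repeat contacts that make the “cheap” channel expensive.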

Looking ahead, Kohli envisions a future where customer contact handling focuses on intent-level journey orchestration. This strategy involves identifying the most common customer queries and designing the optimal resolution process, blending human expertise, AI, and various communication modalities.

Consider a scenario where a customer reports a broken order. The future contact center might verify the customer’s identity through fingerprint recognition, request a photo of the damaged item, and use image recognition AI to confirm the issue. An automated workflow could then initiate a recall and send a replacement. This streamlined process not only enhances efficiency but also reduces the need for multiple, separate workflows across different channels.
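That journey can be sketched as a single orchestrated workflow. Every step below is a stub — a real system would call biometric, vision, and order-management services — and all the function names and messages are invented:

```python
# Sketch of the broken-order journey above as one orchestrated flow.
# All steps are stubs standing in for real service calls.
def verify_identity(customer_id: str) -> bool:
    return True  # stand-in for fingerprint verification

def request_photo(customer_id: str) -> bytes:
    return b"<jpeg bytes>"  # stand-in for the customer's photo upload

def photo_shows_damage(photo: bytes) -> bool:
    return True  # stand-in for an image-recognition check

def handle_broken_order(customer_id: str, order_id: str) -> str:
    if not verify_identity(customer_id):
        return "escalate: identity check failed"
    if not photo_shows_damage(request_photo(customer_id)):
        return "escalate: damage not confirmed"
    # One workflow covers every channel: recall + replacement in one step.
    return f"recall issued, replacement shipped for {order_id}"

result = handle_broken_order("cust-42", "order-7")
print(result)
```

The point of intent-level orchestration is that this flow is defined once, at the level of the customer’s goal, rather than rebuilt separately for voice, chat, and email.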

Transitioning to intent-level journey orchestration can be daunting. However, HCLTech offers invaluable support. With a global network of contact center consultants, extensive expertise, and strategic partnerships, HCLTech helps organizations innovate and create the next generation of contact centers.

For more information on how HCLTech is enabling the evolution of contact centers, visit their website and discover the future of customer experience.