✅ Your Weekly AI Update #17
AI Guardrails, Election Deepfakes, and Trump’s AI Policy Shakeup
Welcome to this new edition!
In Today’s Menu:
AI-related Quote
AI-ducation: What’s Principal Component Analysis?
Top 3 News of the Week
AI story: Timnit Gebru
Image of The Day
Learn AI in 5 minutes a day.
The Rundown is the world’s most trusted AI newsletter, with over 700,000 readers staying up-to-date with the latest AI news, understanding why it matters, and learning how to apply it in their work.
Their expert research team spends all day learning what’s new in AI, then distills the most important developments into one free email every morning.
The AI-related Quote
"Artificial intelligence is the new electricity." - Andrew Ng
The AI-ducation
Demystifying Principal Component Analysis (PCA)
Principal Component Analysis, or PCA, is a method in AI and statistics that helps make complex data easier to understand. Imagine you have a big data set with tons of variables, but many of them overlap or don’t add new information. PCA’s job is to simplify this data by finding the most important parts, or “principal components,” which capture the essence of the data with fewer dimensions.
How does it work? PCA identifies patterns, highlighting the directions where the data varies the most. It creates new variables called principal components, which are combinations of the original data but with less noise and redundancy. These components are ranked so that the first component has the most significant impact on explaining the data, the second a bit less, and so on. By focusing on just the top few components, PCA lets us see the most critical data trends without getting lost in all the details.
PCA allows us to visualize complex datasets, detect patterns, and make machine-learning models run faster and more effectively. It’s an essential tool when working with large-scale data and is widely used in fields like genetics, image compression, and finance.
Credit: NumXL
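To make the idea concrete, here is a minimal sketch of PCA in Python using only NumPy. The `pca` helper and the toy dataset are illustrative, not from any particular library: we build data where one feature is a near-copy of another, then check that the first principal component soaks up most of the variance.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top n_components principal components."""
    # 1. Center the data so each feature has zero mean.
    X_centered = X - X.mean(axis=0)
    # 2. SVD of the centered data; rows of Vt are the principal directions,
    #    ordered from most to least variance captured.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # 3. Fraction of total variance explained by each component.
    explained = (S ** 2) / np.sum(S ** 2)
    # 4. Project the data onto the top components.
    X_reduced = X_centered @ Vt[:n_components].T
    return X_reduced, explained[:n_components]

# Toy data: 3 features, but the second is just a noisy copy of the first,
# so the data really varies along fewer directions than it has columns.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([
    base,
    base + 0.05 * rng.normal(size=(200, 1)),  # redundant feature
    rng.normal(size=(200, 1)),                # independent feature
])

X_reduced, ratios = pca(X, n_components=2)
print(X_reduced.shape)  # (200, 2): same samples, fewer dimensions
print(ratios)           # first ratio is the largest, as PCA guarantees
```

Dropping the third component here loses little information, which is exactly the redundancy-removal the explanation above describes.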
The Top 3 News of the Week
#1 - Trump’s New AI Policies
Donald Trump’s recent election victory could mean major shifts in U.S. AI policy. Trump has signaled a desire to repeal Biden’s 2023 AI Executive Order, which focused on civil rights and national security protections. While some fear this could weaken bias protections, Trump’s focus on national security provisions, particularly in relation to AI competition with China, may remain a priority.
Trump's allies, including J.D. Vance and Elon Musk, hold divided stances on AI regulation, with some advocating for innovation without stringent controls. Trump is expected to push policies that favor American dominance in AI, particularly around issues like chip production and open-source AI.
#2 - ChatGPT Blocks Over 250,000 Election Deepfake Requests
OpenAI recently announced that ChatGPT blocked over 250,000 requests to create images of U.S. election candidates using its DALL-E platform. These requests included images of Donald Trump, Joe Biden, and Kamala Harris, among others. OpenAI’s "safety measures" were designed to prevent deceptive or harmful AI content creation during the election season.
The company stated that it has seen no evidence of election-related influence operations on its platform going viral, despite efforts by outside groups to use its technology for political manipulation. In a prior instance, OpenAI also blocked an Iranian influence campaign, “Storm-2035,” and has since banned related accounts from its services.
#3 - AI Startup 11x Secures $50M in Series B Funding from Andreessen Horowitz
11x, the UK-founded AI startup developing "digital workers" to automate roles in sales and customer support, has raised $50 million in Series B funding, led by Andreessen Horowitz. This follows a $24 million Series A in September, a sign of strong investor confidence in 11x’s potential.
The company plans to use the funding to expand its AI-driven workforce and to hire key staff in San Francisco, where it recently relocated. CEO Hasan Sukkar envisions each new “digital worker” replacing the tasks of multiple employees, and the company has seen revenue grow significantly with over 200 clients, including Otter.ai and Airwallex.
The AI Story
Timnit Gebru: A Voice for Ethics in AI
Timnit Gebru is a prominent researcher and advocate in the field of AI ethics, particularly known for her groundbreaking work on algorithmic bias. As a co-founder of the Black in AI community, Gebru has been a powerful force in highlighting how AI systems can unintentionally harm marginalized groups, especially when they’re trained on biased data.
Gebru's most influential work includes research on facial recognition, where she showed that these systems often perform poorly on women and people of color. This work sparked essential discussions on the dangers of bias and the need for fairness and transparency in AI. Her papers have pushed tech giants to rethink how they build and deploy AI, encouraging more responsibility and accountability in the industry.
Beyond her research, Gebru is known for her advocacy in making the AI field more inclusive and ethical. Her contributions have reshaped how we think about AI’s impact on society, ensuring that ethical considerations become a core part of AI development.
Image of The Day
Made with Midjourney
Thanks for your time!
See you on Friday for the special edition!
Hey readers, Doug here!
I'd like to sincerely thank you for taking the time to read The AI Newsroom every week.
I put real effort into making each email valuable to you.
Please let me know how you found this edition by replying to this email or by answering the questionnaire below. 👇
♻️ Please also feel free to share as much of the newsletter as you can with your friends, colleagues, or AI girlfriend. It helps a lot!
Keep Learning! 💚