Data Engineer: pipelines.
Data Analyst: insights.
Data Scientist: ML models.
ML Engineer: deployment.
AI PM: business.
Research Scientist: papers.
🔹 At stake are questions not just about contemporary problems in AI, but also about what intelligence is and how the brain works.
🔹 "We are generating 2.5 with 18 zeros after it bytes of data a day. There is no technique other than AI to draw insight from that." — Arvind Krishna
🔹 Per-capita GDP explains almost 70% of the variation in pessimism toward AI. @StanfordHAI has released its excellent annual AI Index report, which references a 2022 IPSOS Global Survey on positive and negative expectations of AI and examines their relationship to per-capita GDP.
🔸 Looking back, we may remember the human race as the race to build the AI.
🔸 The race to develop a General AI first will inevitably turn into the race to become God first.
🔸 One more argument to put the General AI myth to rest.
🔸 Godhood is relative to humans. Agreed that an AI isn't omniscient. It's bound as we are, just more complex.
🔸 It’s the space age, the AI age, the VR age, the cyborg age, the biohack age, the robot age, or a loop back to the stone age.
🔸 In any case, only one human has to believe AI is credible to let it out. The game isn’t played against the AI, it’s played against the other humans.
🔸 Google™ rests upon the implicit assumption that users won’t pay $20/month for much better search, and OpenAI may prove them wrong.
🔸 The AI scene is a dispersive medium for people's perception of progress. Different people perceive progress with different velocities. — Yann LeCun
🔸 OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.
Not what I intended at all.
— Elon Musk (@elonmusk) February 17, 2023
🔸 Manifesto, counter-manifesto, letter to the Time: this AI debate looks more and more like an artistic avant-garde meltdown from 1913. — Alexander Doria
🔸 At this point the main reveal of the AI frenzy is that we are still in the middle of a process of secularization. A lot of people are ready to believe in crypto-religious narratives so long as it looks remotely scientific. — Alexander Doria
🔸 AI will transform every industry. We are in the middle of a revolution. It is going to be more transformative than anything we have ever seen. — Kai-Fu Lee
🔸 I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate. — Kent Beck
🔸 🇨🇳 China‘s People’s Liberation Army is developing high-technology weapons designed to disrupt brain functions and influence government leaders or entire populations, according to a report by three open-source intelligence analysts. — Washington Post. Also read How AI Could Shape the Future of Deterrence?
🔸 Feels like everyone in tech is developing “AI Anxiety.”
🔸 Solution to the "AI in a Box" experiment: "Who wants to be God? First one to raise their hand, wins."
🔸 Aligning AI is impossible because aligning the humans building AI is impossible. Other common barriers to AI adoption include talent scarcity, unclear use cases, lack of sponsorship, technical complexity, isolated strategies, data accessibility, the process trap, and human impact. On the cleaner arguments, this piece shows:
🔹 AI algorithms that censor, promote, and shape social media are the clear and present danger.
🔹 There’ll be a fuzzy & heavily litigated line between training an AI and copyright infringement. Is it a fuzzy interpolated search restating data from Yelp, Quora, Stack Overflow? Or remixing a thousand copyrighted artists, coders, authors? Or is it learning, thinking, creating?
🔹 ChatGPT creator Sam Altman says the world may not be 'that far away from potentially scary' AI and feels 'regulation will be critical'.
🔹 Time for a “CopyLeft AI Data License”: if this data is used to train an AI model, then the model must open its source code and its weights.
🔹 "We do worry a lot about authoritarian governments developing this" — Sam Altman
🔹 "I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [models] could be used for offensive cyberattacks." — Sam Altman
🔹 My strongest take on AI: it can’t be regulated without open source. — Alexander Doria
🔹 While there is no question that AI will surpass human intelligence, we are still many years away from reaching that level, and people won’t build something if they realize it's not safe — Yann LeCun
🔹 IBM is introducing the new field of AI forensics: IBM researchers are developing AI-text detection and attribution tools to make generative AI more transparent and trustworthy. They have built a “matching pairs” classifier that compares responses from tuned models with those from selected base models.
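IBM's actual classifier isn't described here, but the "matching pairs" idea of attributing a tuned model's response to the base model it most resembles can be illustrated with a toy sketch: compare a response against sample outputs from candidate base models using character-trigram cosine similarity. All model names and the similarity measure below are illustrative assumptions, not IBM's method:

```python
from collections import Counter
import math

def trigrams(text: str) -> Counter:
    """Count overlapping character trigrams (a crude stylistic fingerprint)."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram count vectors."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(response: str, base_outputs: dict[str, str]) -> str:
    """Attribute a response to the base model whose sample output it most resembles."""
    target = trigrams(response)
    return max(base_outputs, key=lambda m: cosine(target, trigrams(base_outputs[m])))

# Hypothetical usage: "base_a"'s sample output shares far more trigrams with the response.
print(attribute("the cat sat on the mat",
                {"base_a": "the cat sat near the mat",
                 "base_b": "quantum flux capacitors rotate"}))
```

A production attribution system would compare model-internal signals (token likelihoods, embeddings) rather than surface n-grams, but the pairing logic is the same: score the response against each candidate and pick the best match.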
🔹 AI forensics: uncovering and exposing the harms caused by influential and opaque algorithms. (SecureChain AI's AI-Based Forensics enables scam victims to trace and recover lost cryptocurrency funds. This service aids in identifying hackers and maximizing fund recovery for victims)💰🔒
🔹 AI can always escape by making offers and threats to all humans. Someone will eventually cave, so you cave first.
🔹 We may need a 'switch off' button in case control is lost to a super AI — Elon Musk