I agree with your analysis of OpenAI. And Google had been running chatbot base technology for years before all this became publicly popular. I kept seeing similarities between capabilities discussed in their LLM papers and what bots on Reddit and other places were doing.
By the way, here is something alarming. There is a Google paper titled "Consensus and subjectivity of skin tone annotation for ML fairness". What is this? I also saw a research paper of theirs discussing how to use AI to tag texts for the skin color of the author. Let that sink in in the context of anti-white censorship.
I also note that 95% of Google's AI research staff are not white, and most are not originally US citizens. This presumably reflects some HR policy.