Meet anthymGPT – The First AI for Social Connection
We’re excited to announce that we will build anthymGPT in public, sharing our journey with the world – the good, the bad and the ugly!
The recent progress of AI has us incredibly excited. We’ve been thinking about integrating NLP & AI capabilities across the anthym platform since our inception, but we needed to find product-market fit first. Without PMF, no amount of AI would have made sense to build. We’ve spent the last 18 months talking to clients, understanding their needs, and building what the market wanted. Now we’re thinking deeply about how we can leverage AI to help members of a team or group connect more genuinely with one another.
We first embarked on a learning journey to understand the underlying science of social connection and how certain media, like music, evoke memory and play a role in creating social bonds between people. During that journey, we were fortunate to recruit two amazing humans & scientists to our Scientific Advisory Council – Dr. Matt Lieberman, Director of UCLA’s Social Cognitive Neuroscience Lab, and Dr. Elizabeth Margulis, Director of Princeton’s Music Cognition Lab.
We’ve leveraged this research to continually refine our connection framework and develop the MVP of our connection measurement standard – Connection Quotient. We’re beyond excited to now combine this research with the vast trove of anonymized, autobiographical memory and media data we’ve cataloged across hundreds of anthym engagements and thousands of anthym participants, in our pursuit of building the world’s first Human Connection AI.
One Size Fits All | Bigger is Better – Outdated AI Models
In a recent MIT interview, the CEO of OpenAI, Sam Altman, said that the era of big AI models like GPT-4 is coming to an end. This means that future progress in AI will need new ideas, not just bigger models.
Altman argues that scaling up model size yields diminishing returns, that there’s a limit to how quickly data centers can be built, and that developing large AI models is costly. On top of that, data itself is becoming harder and more expensive to access due to copyright disputes, privacy concerns and regulations, data monetization, the rise of Web3, and geopolitical barriers.
Additionally, there is a growing risk that output from generative AI chatbots will contaminate future training data. Altman’s message is that AI needs to focus on new ideas and techniques rather than just making larger models.
Proprietary Data Sets – FTW!
One of those new techniques is fine-tuning models on human feedback – a promising direction that many researchers are already exploring.
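For readers curious what “tuning on human feedback” looks like under the hood, here’s a minimal, hypothetical sketch of the pairwise-preference loss commonly used to train reward models from human comparisons. The function name and scores are illustrative only – this is a toy of the general technique, not anthym’s actual implementation:

```python
import math

# Toy illustration of reward-model training from human feedback.
# A reward model scores two candidate responses to the same prompt;
# the Bradley-Terry style loss pushes the score of the human-preferred
# response above the rejected one.

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """-log(sigmoid(preferred - rejected)): small when the ranking is right."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the model already ranks the preferred response higher, loss is small...
low = preference_loss(2.0, 0.5)
# ...and when it ranks them the wrong way round, loss is large.
high = preference_loss(0.5, 2.0)
assert low < high
```

Minimizing this loss over many human-labeled comparisons teaches the reward model to mirror human preferences; that reward signal can then guide further tuning of the base model.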
This is where anthym has a compelling opportunity to leverage its rich, rapidly growing archive of proprietary, anonymized autobiographical memory metadata and media tags. Combined with access to Dr. Lieberman’s library of social cognitive neuroscience and Dr. Margulis’ music cognition & social bonding research, anthym is in a position to deliver outcomes no other company can, because no one else can replicate the data we have.
Follow Our Journey
As we set out on the anthymGPT journey, we believe we’ll be able to train our model to prove out and deliver on three main outcomes:
- Accelerate meaningful connections among intact teams & groups
- Predict social bonds & influence small group dynamics for incoming cohorts / classes
- Deliver unique & novel generative AI output for social & entertainment purposes
We’ll take a deeper dive into each of these three outcomes in future blog posts. Stay tuned!