Prepare your data for AI applications and see the latest in AI

Read time: 5 minutes


Welcome back to the Talking AI Newsletter, where we help you cut through the AI hype.

 

👋🏻 I'm Omar Shanti, CTO at HatchWorks AI.

With so much buzz around AI, it's easy to get lost in the noise.

 

So let's break it down. In every issue, we'll dive into:

  • the latest AI developments
  • how they can be practically applied
  • how they can shape your business both today and in the future

This Week's Insights

  • Multi-agent systems are becoming critical for scaling AI applications
  • Swarm simplifies workflows through functional decomposition and modularity
  • Whether to bring AI to your workflow or integrate workflows into AI systems
  • Centralized services offer ease, but decentralized solutions offer flexibility
  • AI tools are advancing biotech but require balancing innovation with safety

🐝 Agents All The Way Down


Following OpenAI's latest cookbook, Swarm has been all the rage, and not without merit. The underlying ideas aren't groundbreaking; the beauty lies in their intuitive, simple application.

 

Many of us have built our own Swarm implementations. Whether borrowing from distributed computing, workflow orchestration, or domain-driven design, the blueprint has been there all along: scale with complexity by decomposing your application along functional lines and tuning each component individually. Separate your orchestrator from your executors for an added boost. Pretty fundamental stuff!
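That blueprint can be sketched in a few lines of dependency-free Python. This is an illustration of the pattern, not OpenAI's Swarm API: the `Agent`, `handle_*`, and `orchestrate` names are all assumptions, and a keyword check stands in for what would be an LLM routing call in a real system.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # each executor is tuned independently

# Executors decomposed along functional lines.
def handle_billing(request: str) -> str:
    return f"billing agent resolved: {request}"

def handle_support(request: str) -> str:
    return f"support agent resolved: {request}"

AGENTS: Dict[str, Agent] = {
    "billing": Agent("billing", handle_billing),
    "support": Agent("support", handle_support),
}

def orchestrate(request: str) -> str:
    # The orchestrator only routes; it never does the work itself.
    # In practice this routing step would itself be an LLM call.
    key = "billing" if "invoice" in request else "support"
    return AGENTS[key].handle(request)
```

Because the orchestrator and each executor sit behind their own boundaries, you can test, swap, or tune any one of them without touching the rest.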


But in a space so dominated by hype and an endless conveyor belt of VC-backed tools looking for use-cases, the fundamentals are so often lost. So let's do two things.

 

First, let's appreciate that Swarm is a pattern – not a tool. Sans dependencies, sans breaking changes.

 

Second, let's reorient ourselves on where we're heading: towards multi-agent architectures with multi-modal dialogue and multi-model inference. Prompt engineering is the bottleneck right now, and architectures like Swarm help us get past it by facilitating better testing.


As a staunch advocate of bridging the gaps between GenAI and both classical Software Engineering and ML, I am pleased that the space is having its own Unix moment. 

 

The question is, how long until the monorepo movement finds its twin in a monoagent movement?


🎨 Claude, Canvas, and (De)Centralization

The world of LLM SaaS is heading in two competing directions. 

 

One direction is towards decentralization. You can access your LLM in more places than ever before: Claude Dev and Amazon Q bring it to VSCode; NotebookLM and other GCP services bring GenAI to all of Google Workspace; and numerous others follow suit.

 

The other, centralization. Increasing investment in the B2C LLMaaS offerings of OpenAI, Anthropic, Gemini, and others exerts a gravitational pull on users.

 

The launch of OpenAI’s Canvas, just like Claude’s Artifacts, is a pull toward centralization.


The question is: which is right for you? Do you bring an LLM to your workflow, or bring your workflow to an LLM?

 

It depends (of course). And based on the major LLM companies’ dual B2B and B2C approach, they know it too. 

 

While Canvas makes it easy to iteratively generate an artifact, whether a blog post or a code snippet, it leaves a lot to be desired: Cursor's multi-file support; Artifacts' execution preview capabilities; Projects' global context. The new interface streamlines existing functionality rather than creating anything new, and the underlying model doesn't best its competitors.

 

Canvas is a pull towards centralization, just not a big enough one.

📰 Your AI News Brief

Stay updated with the latest developments in artificial intelligence.

  • Hopfield and Hinton’s 2024 Nobel Prize in Physics honors their machine learning breakthroughs, highlighting AI's healthcare potential and the need to address ethical challenges.
  • Anthropic CEO’s must-read essay envisions an AI-driven future, balancing bold predictions with strategies for managing risks and equitable implementation.
  • Anthropic's Contextual Retrieval boosts RAG by preserving context, promising scalability and integration in diverse enterprises.
  • HBR asks: Is Your Company’s Data Ready for Generative AI? CTOs must improve data quality and management to unlock AI's business value.
  • Zyphra's Zamba2-7B outperforms top models, offering efficiency and scalability, but companies must optimize infrastructure to harness its full potential.

⚕️ Can Gen AI Revolutionize Healthcare—Safely?

In our latest Talking AI episode, Matt Paige talks with Matt Lewis, Global Chief AI Officer at Inizio Medical.

 

They explore the real impact of AI on pharmaceutical research, biotech advancements, and medical device development.

 

They delve into the challenges of integrating AI in a highly regulated environment, acknowledging there's no one-size-fits-all solution.

 

It's all about balancing innovation with patient safety—tune in to discover how personalized and custom AI tools are shaping the future of healthcare.

 

🎧 AI in Life Sciences: Balancing Risk and Innovation


📐 The CTO's Blueprint to RAG


How can you empower your AI models to deliver accurate, real-time insights without the hefty costs of fine-tuning?

 

In the CTO's Blueprint to Retrieval Augmented Generation (RAG), we show how RAG bridges the gap between large language models and your proprietary data.

 

Think of RAG as giving your LLM its own dynamic library, transforming outdated outputs into real-time, auditable insights.

 

We explore the RAG architecture step by step, from indexing to retrieval to generation, and discuss advanced techniques to optimize performance.
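Those three steps can be sketched in miniature. In this illustrative sketch, a toy keyword-overlap score stands in for vector similarity search, and the generation step returns the grounded prompt rather than calling an LLM; the function names are assumptions, not the blueprint's actual API.

```python
from typing import List, Tuple

def index(documents: List[str]) -> List[Tuple[set, str]]:
    # Indexing: precompute a searchable representation of each document
    # (in production, an embedding stored in a vector database).
    return [(set(doc.lower().split()), doc) for doc in documents]

def retrieve(query: str, idx: List[Tuple[set, str]], k: int = 1) -> List[str]:
    # Retrieval: rank documents by word overlap with the query,
    # a stand-in for vector similarity search.
    q = set(query.lower().split())
    ranked = sorted(idx, key=lambda entry: len(q & entry[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

def generate(query: str, context: List[str]) -> str:
    # Generation: in production this prompt goes to an LLM; returning it
    # here makes the grounding step visible and auditable.
    return f"Answer '{query}' using only:\n" + "\n".join(context)

docs = ["Q3 revenue grew 12 percent", "The onboarding guide covers SSO setup"]
prompt = generate("How did Q3 revenue change?",
                  retrieve("Q3 revenue", index(docs)))
```

The point of the shape: the model only ever sees the retrieved context, which is what makes the output auditable against your proprietary data.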

See how RAG can make your data your biggest differentiator.

 

Need something more specific?

 

We break it down by industry for you: 🏦 Financial Services | 🏥 Healthcare

I appreciate you taking the time to read.

 

We’ll be back in two weeks with more actionable AI insights and discussions.

 

Thoughts or feedback?

 

Reach out or connect on LinkedIn.

P.S. Did you miss our first Talking AI Newsletter?

 

It's available for you to read and share here.

 

In that edition, we explored Google's NotebookLM and its applications in enhancing data management and enterprise AI solutions.


We're your AI development partner. 


We build AI-native solutions and use AI to build software better, faster, smarter. 
If you're interested in working with us, contact us here.

HatchWorks, 3280 Peachtree Rd NE, 7th floor, Atlanta, Georgia 30305, United States

Unsubscribe | Manage preferences