Commit

Nil2000 committed Apr 25, 2024
2 parents 7676cc5 + 9560987 commit ac27c5e
Showing 8 changed files with 110 additions and 39 deletions.
32 changes: 29 additions & 3 deletions README.md
@@ -1,16 +1,30 @@
# Morphic

An AI-powered answer engine with a generative UI.
An AI-powered search engine with a generative UI.

![capture](/public/capture-240404_blk.png)

### Note

Please note that this repository differs from the official website [morphic.sh](https://morphic.sh). The official website is a fork of this repository with additional features, such as authentication, that are necessary for providing the service online. The core source code of Morphic resides in this repository, and it's designed to be easy to build and deploy. When using Morphic, please keep in mind the different roles of the repository and the website.

## 🔍 Overview

- 🧱 [Stack](#-stack)
- 🚀 [Quickstart](#-quickstart)
- 🌐 [Deploy](#-deploy)
- ✅ [Verified models](#-verified-models)

### 🚗 Roadmap [WIP]

- [x] Enable specifying the model to use (writer agent only)
- [ ] Implement chat history functionality
- [ ] Develop features for sharing results
- [ ] Add video support for search functionality
- [ ] Implement Retrieval-Augmented Generation (RAG) support
- [ ] Introduce tool support for enhanced productivity
- [ ] Expand Generative UI capabilities

## 🧱 Stack

- App framework: [Next.js](https://nextjs.org/)
@@ -70,7 +84,7 @@ TAVILY_API_KEY=[YOUR_TAVILY_API_KEY]
# SPECIFIC_API_MODEL=
```

**Note: This project focuses on Generative UI and requires complex output from LLMs. It currently assumes the official OpenAI models are used. You can configure an OpenAI-compatible model instead, but we don't guarantee that it will work.**
_Note: This project focuses on Generative UI and requires complex output from LLMs. It currently assumes the official OpenAI models are used. You can configure an OpenAI-compatible model instead, but we don't guarantee that it will work._
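The model fallback described above can be sketched as a small helper. This is a minimal sketch, not code from the repo: the function name `resolveModelId` is hypothetical, while the `OPENAI_API_MODEL || 'gpt-4-turbo'` fallback mirrors the call in `lib/agents/researcher.tsx`.

```typescript
// Hypothetical helper (not in the repo) showing how the writer model id
// is resolved: the OPENAI_API_MODEL env override if set, else a default.
function resolveModelId(env: Record<string, string | undefined>): string {
  return env.OPENAI_API_MODEL || 'gpt-4-turbo'
}

// With no override configured, the default model id is used.
const modelId = resolveModelId({}) // modelId === 'gpt-4-turbo'
```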

### 4. Run app locally

@@ -82,10 +96,22 @@ You can now visit http://localhost:3000.

## 🌐 Deploy

Host your own live version of Morphic with Vercel.
Host your own live version of Morphic with Vercel or Cloudflare Pages.

### Vercel

[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fmiurla%2Fmorphic&env=OPENAI_API_KEY,TAVILY_API_KEY)

### Cloudflare Pages

1. Fork the repo to your GitHub.
2. Create a Cloudflare Pages project.
3. Select `Morphic` repo and `Next.js` preset.
4. Set `OPENAI_API_KEY` and `TAVILY_API_KEY` env vars.
5. Save and deploy.
6. Cancel deployment, go to `Settings` -> `Functions` -> `Compatibility flags`, add `nodejs_compat` to preview and production.
7. Redeploy.

## ✅ Verified models

List of verified models that can be specified for the writer agent.
3 changes: 1 addition & 2 deletions app/action.tsx
@@ -42,8 +42,6 @@ async function submit(formData?: FormData, skip?: boolean) {
  }

  async function processEvents() {
    uiStream.update(<Spinner />)

    let action: any = { object: { next: 'proceed' } }
    // If the user skips the task, we proceed to the search
    if (!skip) action = (await taskManager(messages)) ?? action
@@ -70,6 +68,7 @@ async function submit(formData?: FormData, skip?: boolean) {
    let toolOutputs = []
    let errorOccurred = false
    const streamText = createStreamableValue<string>()
    uiStream.update(<Spinner />)

    // If useSpecificAPI is enabled, only function calls will be made
    // If not using a tool, this model generates the answer
Binary file modified bun.lockb
Binary file not shown.
8 changes: 7 additions & 1 deletion components/message.tsx
@@ -2,6 +2,8 @@

import { StreamableValue, useStreamableValue } from 'ai/rsc'
import { MemoizedReactMarkdown } from './ui/markdown'
import rehypeExternalLinks from 'rehype-external-links'
import remarkGfm from 'remark-gfm'

export function BotMessage({
  content
@@ -14,7 +16,11 @@ export function BotMessage({
  if (error) return <div>Error</div>

  return (
    <MemoizedReactMarkdown className="prose-sm prose-neutral prose-a:text-accent-foreground/50">
    <MemoizedReactMarkdown
      rehypePlugins={[[rehypeExternalLinks, { target: '_blank' }]]}
      remarkPlugins={[remarkGfm]}
      className="prose-sm prose-neutral prose-a:text-accent-foreground/50"
    >
      {data}
    </MemoizedReactMarkdown>
  )
44 changes: 44 additions & 0 deletions components/search-section.tsx
@@ -0,0 +1,44 @@
'use client'

import { SearchResults } from './search-results'
import { SearchSkeleton } from './search-skeleton'
import { SearchResultsImageSection } from './search-results-image'
import { Section } from './section'
import { ToolBadge } from './tool-badge'
import type { SearchResults as TypeSearchResults } from '@/lib/types'
import { StreamableValue, useStreamableValue } from 'ai/rsc'

export type SearchSectionProps = {
  result?: StreamableValue<string>
}

export function SearchSection({ result }: SearchSectionProps) {
  const [data, error, pending] = useStreamableValue(result)
  const results: TypeSearchResults = data ? JSON.parse(data) : undefined
  return (
    <div>
      {!pending && data ? (
        <>
          <Section size="sm" className="pt-2 pb-0">
            <ToolBadge tool="search">{`${results.query}`}</ToolBadge>
          </Section>
          {results.images && results.images.length > 0 && (
            <Section title="Images">
              <SearchResultsImageSection
                images={results.images}
                query={results.query}
              />
            </Section>
          )}
          <Section title="Results">
            <SearchResults results={results.results} />
          </Section>
        </>
      ) : (
        <Section className="pt-2 pb-0">
          <SearchSkeleton />
        </Section>
      )}
    </div>
  )
}
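The streamed value handled in `SearchSection` is a JSON string produced by the researcher (`streamResults.done(JSON.stringify(searchResult))`). A minimal sketch of that parse step, with the types inlined so it stands alone; `parseSearchResults` is a hypothetical name for what the component does inline:

```typescript
type SearchResultItem = { title: string; url: string; content: string }
type SearchResults = {
  images: string[]
  results: SearchResultItem[]
  query: string
}

// Mirrors the component's `data ? JSON.parse(data) : undefined` guard:
// while the stream is still pending there is no data, and the guard
// yields undefined instead of crashing on JSON.parse(undefined).
function parseSearchResults(
  data: string | undefined
): SearchResults | undefined {
  return data ? (JSON.parse(data) as SearchResults) : undefined
}
```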
49 changes: 16 additions & 33 deletions lib/agents/researcher.tsx
@@ -8,13 +8,11 @@ import {
import { searchSchema } from '@/lib/schema/search'
import { Section } from '@/components/section'
import { OpenAI } from '@ai-sdk/openai'
import { ToolBadge } from '@/components/tool-badge'
import { SearchSkeleton } from '@/components/search-skeleton'
import { SearchResults } from '@/components/search-results'
import { BotMessage } from '@/components/message'
import Exa from 'exa-js'
import { SearchResultsImageSection } from '@/components/search-results-image'
import { Card } from '@/components/ui/card'
import { SearchResults } from '../types'
import { SearchSection } from '@/components/search-section'

export async function researcher(
  uiStream: ReturnType<typeof createStreamableUI>,
@@ -38,6 +36,7 @@ export async function researcher(
    </Section>
  )

  let isFirstToolResponse = true
  const result = await experimental_streamText({
    model: openai.chat(process.env.OPENAI_API_MODEL || 'gpt-4-turbo'),
    maxTokens: 2500,
@@ -61,17 +60,14 @@
          max_results: number
          search_depth: 'basic' | 'advanced'
        }) => {
          uiStream.update(
            <Section>
              <ToolBadge tool="search">{`${query}`}</ToolBadge>
            </Section>
          )

          uiStream.append(
            <Section>
              <SearchSkeleton />
            </Section>
          )
          // If this is the first tool response, remove the spinner
          if (isFirstToolResponse) {
            isFirstToolResponse = false
            uiStream.update(null)
          }
          // Append the search section
          const streamResults = createStreamableValue<string>()
          uiStream.append(<SearchSection result={streamResults.value} />)

          // Tavily API requires a minimum of 5 characters in the query
          const filledQuery =
@@ -97,24 +93,7 @@
            return searchResult
          }

          uiStream.update(
            <Section title="Images">
              <SearchResultsImageSection
                images={searchResult.images}
                query={searchResult.query}
              />
            </Section>
          )
          uiStream.append(
            <Section title="Sources">
              <SearchResults results={searchResult.results} />
            </Section>
          )

          // Append the answer section if the specific model is not used
          if (!useSpecificModel) {
            uiStream.append(answerSection)
          }
          streamResults.done(JSON.stringify(searchResult))

          return searchResult
        }
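The `isFirstToolResponse` bookkeeping this commit introduces can be sketched in isolation. This is an illustrative sketch, not repo code: `FakeStream` and `runToolCalls` are hypothetical stand-ins for `createStreamableUI` and the tool loop, used only to show that the spinner is cleared exactly once, before the first search section is appended.

```typescript
// Stand-in for the streamable UI: records update/append calls so the
// spinner-clearing order is visible.
class FakeStream {
  calls: string[] = []
  update(v: unknown) { this.calls.push(`update:${String(v)}`) }
  append(v: unknown) { this.calls.push(`append:${String(v)}`) }
}

function runToolCalls(uiStream: FakeStream, queries: string[]) {
  let isFirstToolResponse = true
  for (const q of queries) {
    // Only the first tool response clears the spinner (update(null));
    // later tool calls just append their sections.
    if (isFirstToolResponse) {
      isFirstToolResponse = false
      uiStream.update(null)
    }
    uiStream.append(`search:${q}`)
  }
}
```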
@@ -142,6 +121,10 @@ export async function researcher(
        toolCalls.push(delta)
        break
      case 'tool-result':
        // Append the answer section if the specific model is not used
        if (!useSpecificModel && toolResponses.length === 0) {
          uiStream.append(answerSection)
        }
        toolResponses.push(delta)
        break
      case 'error':
11 changes: 11 additions & 0 deletions lib/types/index.ts
@@ -0,0 +1,11 @@
export type SearchResults = {
  images: string[]
  results: SearchResultItem[]
  query: string
}

export type SearchResultItem = {
  title: string
  url: string
  content: string
}
2 changes: 2 additions & 0 deletions package.json
@@ -35,6 +35,8 @@
    "react-dom": "^18",
    "react-icons": "^5.0.1",
    "react-markdown": "^9.0.1",
    "rehype-external-links": "^3.0.0",
    "remark-gfm": "^4.0.0",
    "tailwind-merge": "^2.2.2",
    "tailwindcss-animate": "^1.0.7"
  },
