Web Scraping with Next.js 13 Using a Generated TypeScript Fetch Client with Python, Playwright and a FastAPI Backend
Dive into a comprehensive guide on integrating Playwright, FastAPI, OpenAPI, TypeScript, and Next.js. Discover challenges, solutions, and insights.
Felix Vemmer
August 28, 2023
For my SaaS backlinkgpt.com (AI-powered backlink building), I need to scrape website content. Sounds easy, right? It took me quite some time to figure out how to piece together Playwright, FastAPI, OpenAPI, TypeScript, and Next.js. Here's the story.
While it's tempting to think scraping is as easy as sending a request with JavaScript's fetch or Python's requests, the reality is different. Modern websites often use JavaScript to render content dynamically, meaning a straightforward fetch often won't capture everything. The real challenge is scraping sites that depend heavily on dynamic content loading.
A headless browser, as the name suggests, operates without a graphical interface. It can interact with web pages, execute JavaScript, and perform tasks like a standard browser, just without anything visible to the user. Popular choices include Puppeteer (JavaScript), Selenium, and Playwright, which supports multiple languages, including Python.
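To give a sense of what that looks like, here's a minimal sketch using Playwright's Python sync API (the URL is just a placeholder):

```python
from playwright.sync_api import sync_playwright

# Minimal sketch: render a page in headless Chromium and grab the final HTML,
# including content that was injected by JavaScript after the initial load.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com", wait_until="networkidle")
    html = page.content()
    browser.close()
```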
My hope was that running a headless browser on Vercel would be straightforward. It turned out not to be: I quickly ran into issues with bundle size and memory limits.
To use a headless browser there, you have to connect to a remote browser instance via WebSockets. Browserless.io is one such option and has a good tutorial on it.
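In practice, that means attaching your scraping code to a remote browser endpoint instead of launching Chrome locally. Roughly, with puppeteer-core (the endpoint format and the environment variable name are assumptions based on Browserless's conventions):

```ts
import puppeteer from "puppeteer-core";

// Attach to a remote headless Chrome over WebSockets instead of bundling
// a browser into the serverless function.
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.BROWSERLESS_API_KEY}`,
});
```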
So I went ahead and created a route that uses the Browserless API to scrape the content:
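A condensed sketch of what such a Next.js 13 route handler can look like (the file path, request shape, and error handling here are simplifications, not the exact code):

```ts
// app/api/scrape/route.ts (hypothetical path) — Next.js 13 route handler
import { NextResponse } from "next/server";
import puppeteer from "puppeteer-core";

export async function POST(request: Request) {
  const { url } = await request.json();

  // Connect to Browserless over WebSockets rather than launching Chrome locally.
  const browser = await puppeteer.connect({
    browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.BROWSERLESS_API_KEY}`,
  });

  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle0" });
    const html = await page.content();
    return NextResponse.json({ html });
  } finally {
    await browser.disconnect();
  }
}
```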
Overall, it worked out pretty well, and the scraping was fairly quick, as I shared in this tweet:

> In just 16 breathtaking seconds, @browserless scrapes a website and @langchain delivers summarization, classification, and keyword extraction. All achieved smoothly on @vercel's edge runtime and AI SDK where data streams seamlessly. The future is here, and it's incredibly fast! ⚡
The pricing for Browserless.io was reasonable, even though I quickly surpassed the free tier's limit of 1,000 scrapes, but I remained unsatisfied because of some additional challenges.
I encountered two primary challenges with Browserless.io:

1. Cookie consent banners: when a website presented a cookie consent banner, only the banner was scraped and the actual site content was missed.
2. Structured content: I wanted to extract more than just Open Graph tags and capture the website's text in a more structured format. By converting the HTML into Markdown, I could reduce token usage for OpenAI's GPT models while keeping the structure that plain text would sacrifice.
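As a quick illustration of that Markdown conversion, here's roughly how html2text handles a small HTML snippet (a sketch; the option choices are illustrative):

```python
import html2text

converter = html2text.HTML2Text()
converter.ignore_images = True  # drop image noise to keep the output compact
converter.body_width = 0        # don't hard-wrap lines

html = "<h1>Pricing</h1><ul><li><a href='/pro'>Pro plan</a></li></ul>"
markdown = converter.handle(html)
# markdown is roughly:
# # Pricing
#
#   * [Pro plan](/pro)
```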
While addressing these issues with Browserless.io seemed feasible, my Python background made it enticing to use Playwright for Python instead. This approach made it much easier to debug and write custom logic, leaving room for future enhancements.
I've long admired Modal for its unparalleled developer experience in deploying Python apps. While I intend to share a detailed review on its merits soon, feel free to check out my tech stack in the meantime:
First, I created a simple FastAPI app on Modal with a POST route, scrape-website. This route calls the get_website_content function, which takes care of parsing the HTML with BeautifulSoup and converting the HTML content to Markdown with html2text:
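Condensed down, the setup looked roughly like this. Treat it as a sketch rather than the exact code: the request/response fields, the image setup, and the helper's signature are simplified, and Modal's API at the time used modal.Stub together with @modal.asgi_app():

```python
import html2text
import modal
from bs4 import BeautifulSoup
from fastapi import FastAPI
from playwright.async_api import async_playwright
from pydantic import BaseModel

# Container image with the scraping dependencies plus a Chromium build for Playwright.
image = (
    modal.Image.debian_slim()
    .pip_install("beautifulsoup4", "html2text", "playwright")
    .run_commands("playwright install --with-deps chromium")
)

stub = modal.Stub("website-scraper")
web_app = FastAPI()


class ScrapeRequest(BaseModel):
    url: str


def get_website_content(html: str) -> dict:
    """Parse the rendered HTML and convert the body to Markdown."""
    soup = BeautifulSoup(html, "html.parser")
    converter = html2text.HTML2Text()
    converter.body_width = 0
    return {
        "title": soup.title.string if soup.title else None,
        "markdown": converter.handle(str(soup.body or soup)),
    }


@web_app.post("/scrape-website")
async def scrape_website(payload: ScrapeRequest):
    # Render the page with headless Chromium so JS-injected content is included.
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(payload.url, wait_until="networkidle")
        html = await page.content()
        await browser.close()
    return get_website_content(html)


@stub.function(image=image)
@modal.asgi_app()
def fastapi_app():
    return web_app
```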
One well-known feature of FastAPI is its ability to generate OpenAPI (formerly known as Swagger) documentation for your API out of the box. This documentation not only serves as a great tool for understanding and testing your API endpoints, it also provides a JSON schema that can be used to generate client libraries in various languages, including TypeScript.
I thought doing so would be quite easy, especially since Sebastián Ramírez even wrote some amazing docs on how to do it:
> There are many tools to generate clients from OpenAPI. A common tool is OpenAPI Generator. If you are building a frontend, a very interesting alternative is openapi-typescript-codegen.
It turns out I tried quite a few tools and code generators, and while I was amazed at how many new ones are being built, not a single one worked really well for me. Here's what I found.
openapi-zod-client: Sadly, it uses axios, and I did not want an additional dependency. Also, all the generated functions are in snake_case, and customizing them was a bit confusing to me since I had never used Handlebars.
Fern: Looks like a cool startup, but it felt like overkill. Writing more YAML and custom configuration was too much work, since I wanted to keep things simple.
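For reference, generating a fetch-based TypeScript client from the schema FastAPI exposes at /openapi.json typically looks something like this (a sketch using openapi-typescript-codegen, the tool mentioned in the FastAPI docs above; the URL and output directory are placeholders):

```bash
# Point the generator at the schema FastAPI serves at /openapi.json
npx openapi-typescript-codegen \
  --input https://your-modal-app.modal.run/openapi.json \
  --output ./src/lib/fastapi-client \
  --client fetch
```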
With everything set up, I wanted to ensure that our frontend could seamlessly interface with our backend without having to juggle different URLs or face CORS issues. To do this, I turned to the rewrites feature in Next.js, which provides a mechanism to map an incoming request path to a different destination path.
Here's how I configured the rewrites in next.config.js:
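The configuration boils down to something like this (a sketch; the destination URL is a placeholder for the Modal deployment URL):

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        // Proxy anything under /fast-api/* to the FastAPI backend on Modal.
        source: "/fast-api/:path*",
        destination: "https://your-modal-app.modal.run/:path*",
      },
    ];
  },
};

module.exports = nextConfig;
```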
The above configuration tells Next.js to forward any request starting with /fast-api to our backend server. This way, on our frontend, we can simply call /fast-api/scrape-website and it will be proxied to our backend on Modal.com.
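On the frontend, a scrape call then looks like any same-origin request (a sketch; the request and response shapes follow the backend sketch above):

```ts
const res = await fetch("/fast-api/scrape-website", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ url: "https://example.com" }),
});

const data = await res.json(); // e.g. { title, markdown } as in the backend sketch
```

If you use a generated client instead of raw fetch, pointing its base URL at /fast-api achieves the same thing.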
With these rewrites in place, the integration of frontend and backend was smooth, and my development experience was greatly enhanced. I no longer had to remember or handle different URLs for different environments, and everything just worked.
And that's how I bridged Pydantic V2, OpenAPI, TypeScript, and Next.js. I hope this helps anyone looking to do something similar!