Releases: apify/crawlee
v2.0.2
v2.0.1
v2.0.0
We're releasing SDK 2 ahead of schedule because we need state-of-the-art HTTP2 support for scraping, and with Node.js versions below 15.10, HTTP2 is not very reliable. We bundled in two more potentially breaking changes that we had been waiting for, but we expect those to have very little impact on users. Migration should therefore be super simple: just bump your Node.js version.
If you're waiting for full TypeScript support and new features, those are still in the works and will be released in SDK 3 at the end of this year.
- BREAKING: Require Node.js >=15.10.0 because HTTP2 support on lower Node.js versions is very buggy.
- BREAKING: Bump `cheerio` to `1.0.0-rc.10` from `rc.3`. There were breaking changes in `cheerio` between those versions, so this bump might be breaking for you as well.
- Remove `LiveViewServer`, which was deprecated before the release of SDK v1.
- We no longer tag beta releases.
v1.3.4
v1.3.3
v1.3.2
v1.3.1
v1.3.0
Navigation hooks in `CheerioCrawler`
`CheerioCrawler` downloads web pages using the `requestAsBrowser` utility function. Unlike the browser-based crawlers, which encode URLs automatically, the `requestAsBrowser` function does not. We either need to encode the URLs manually with the `encodeURI()` function, or set `forceUrlEncoding: true` in the `requestAsBrowserOptions`, which will automatically encode all the URLs before accessing them. We can either use `forceUrlEncoding` or encode manually, but not both, because doing both would result in double encoding and therefore lead to invalid URLs.

We can use the `preNavigationHooks` to adjust the `requestAsBrowserOptions`:
```js
preNavigationHooks: [
    (crawlingContext, requestAsBrowserOptions) => {
        requestAsBrowserOptions.forceUrlEncoding = true;
    },
],
```
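For context, here is a minimal sketch of how the hook could be wired into a full crawler. The start URL and the page-handling logic are illustrative, not part of this release:

```js
const Apify = require('apify');

Apify.main(async () => {
    // A start URL with characters that need percent-encoding (illustrative).
    const requestList = await Apify.openRequestList('start-urls', [
        'https://example.com/search?q=čeština',
    ]);

    const crawler = new Apify.CheerioCrawler({
        requestList,
        // The new hooks run before each navigation, so the options
        // can be adjusted per request.
        preNavigationHooks: [
            (crawlingContext, requestAsBrowserOptions) => {
                requestAsBrowserOptions.forceUrlEncoding = true;
            },
        ],
        handlePageFunction: async ({ request, $ }) => {
            console.log(`Title of ${request.url}: ${$('title').text()}`);
        },
    });

    await crawler.run();
});
```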
`Apify` class and `Configuration`
Adds two new named exports:

- `Configuration` class that serves as the main configuration holder, replacing explicit usage of environment variables.
- `Apify` class that allows configuring the SDK. Env vars still have precedence over the SDK configuration. When using the `Apify` class, there should be no side effects.

Also adds new configuration for WAL mode in `ApifyStorageLocal`.
As opposed to using the global helper functions like `main`, there is an alternative approach using the `Apify` class. It has mostly the same API, but the methods on an `Apify` instance will use the configuration provided in the constructor. Environment variables will have precedence over this configuration.
```js
const { Apify } = require('apify'); // use named export to get the class

const sdk = new Apify({ token: '123' });
console.log(sdk.config.get('token')); // '123'

// the token will be passed to the `call` method automatically
const run = await sdk.call('apify/hello-world', { myInput: 123 });
console.log(`Received message: ${run.output.body.message}`);
```
Another example shows how the default dataset name can be changed:
```js
const { Apify } = require('apify'); // use named export to get the class

const sdk = new Apify({ defaultDatasetId: 'custom-name' });
await sdk.pushData({ myValue: 123 });
```
is equivalent to:
```js
const Apify = require('apify'); // use default export to get the helper functions

const dataset = await Apify.openDataset('custom-name');
await dataset.pushData({ myValue: 123 });
```
Full list of changes:
- Add `Configuration` class and `Apify` named export, see above.
- Fix `proxyUrl` without a port throwing an error when launching browsers.
- Fix `maxUsageCount` of a `Session` not being persisted.
- Update `puppeteer` and `playwright` to match stable Chrome (90).
- Fix support for building TypeScript projects that depend on the SDK.
- Add `taskTimeoutSecs` to allow control over the timeout of `AutoscaledPool` tasks.
- Add `forceUrlEncoding` to `requestAsBrowser` options.
- Add `preNavigationHooks` and `postNavigationHooks` to `CheerioCrawler`.
- Deprecate the `prepareRequestFunction` and `postResponseFunction` methods of `CheerioCrawler`.
- Add new event `aborting` for handling gracefully aborted runs from the Apify platform (see the sketch below).
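As a quick illustration of the new event, a handler can be registered through `Apify.events`; the cleanup logic shown here is a hypothetical placeholder:

```js
const Apify = require('apify');

Apify.main(async () => {
    // Fires when the run is gracefully aborted on the Apify platform,
    // giving the actor a chance to finish its work cleanly.
    Apify.events.on('aborting', () => {
        // Hypothetical cleanup: persist partial results, close connections, etc.
        console.log('Run is being aborted, cleaning up...');
    });

    // ... crawler setup and run ...
});
```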
v1.2.1
v1.2.0
This release brings the long-awaited HTTP2 capabilities to `requestAsBrowser`. It could make HTTP2 requests even before, but it was not very good at making browser-like ones. This is very important for disguising as a browser and reducing the number of blocked requests. `requestAsBrowser` now uses `got-scraping`.
The most important new feature is that the full set of headers `requestAsBrowser` uses will now be generated using live data about browser headers that we collect. This means that the "header fingerprint" will always match existing browsers and should be indistinguishable from a real browser request. The header sets will be automatically rotated for you to further reduce the chances of blocking.
We also switched the default HTTP version from 1 to 2 in `requestAsBrowser`. We don't expect this change to be breaking, and we took precautions, but we're aware that there are always some edge cases, so please let us know if it causes trouble for you.
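If the new default does cause trouble with a particular site, HTTP2 can be disabled per request via the `useHttp2` option listed in the changes below. A minimal sketch, with an illustrative URL:

```js
const Apify = require('apify');

Apify.main(async () => {
    // HTTP2 is now on by default; opt out for a site that misbehaves.
    const response = await Apify.utils.requestAsBrowser({
        url: 'https://example.com', // illustrative URL
        useHttp2: false,
    });
    console.log(response.statusCode);
});
```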
Full list of changes:
- Replace the underlying HTTP client of `utils.requestAsBrowser()` with `got-scraping`.
- Make `useHttp2` `true` by default with `utils.requestAsBrowser()`.
- Fix `Apify.call()` failing with empty `OUTPUT`.
- Update `puppeteer` to `8.0.0` and `playwright` to `1.10.0` with Chromium 90 in Docker images.
- Update `@apify/ps-tree` to support Windows better.
- Update `@apify/storage-local` to support Node.js 16 prebuilds.