An HTTP client for R, taking inspiration from Ruby's faraday and Python's requests.
Package API:
- `HttpClient` - Main interface to making HTTP requests. Synchronous requests only.
- `HttpResponse` - HTTP response object, used for all responses across the different clients.
- `Paginator` - Auto-paginate through requests - supports a subset of all possible pagination scenarios - will fill out more scenarios soon
- `Async` - Asynchronous HTTP requests - a simple interface for many URLs, similar to `HttpClient` - all URLs are treated the same.
- `AsyncVaried` - Asynchronous HTTP requests - accepts any number of `HttpRequest` objects - with a different interface than `HttpClient`/`Async` due to the nature of handling requests with different HTTP methods, options, etc.
- `HttpRequest` - HTTP request object, used for `AsyncVaried`
- `mock()` - Turn on/off mocking, via webmockr
- `auth()` - Simple authentication helper
- `proxy()` - Proxy helper
- `upload()` - File upload helper
- `set_auth()`, `set_headers()`, `set_opts()`, `set_proxy()`, and `crul_settings()` - Set authentication, headers, curl options, and proxies to be used across all subsequent requests (a sketch follows this list)
- hooks - available in the `HttpClient` method only; they allow you to trigger functions to run on requests or responses, or both. See `?hooks` for the details and examples
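A minimal sketch of the global setters (the option, header, and credential values here are placeholders, not defaults):

library("crul")
set_opts(timeout_ms = 5000)                     # curl options for all subsequent requests
set_headers(`X-Hello` = "world")                # headers for all subsequent requests
set_auth(auth(user = "user", pwd = "passwd"))   # basic auth for all subsequent requests
crul_settings()                                 # print everything set so far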
Mocking: `crul` now integrates with webmockr to mock HTTP requests. Check out the http testing book.
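A minimal sketch of the mocking flow (the stubbed route is just an example):

library("crul")
library("webmockr")
mock()                                          # route subsequent crul requests through webmockr
stub_request("get", "https://httpbin.org/get")  # register a stub for this URL
cli <- HttpClient$new(url = "https://httpbin.org")
cli$get("get")                                  # matched by the stub; no real request is made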
Caching: `crul` also integrates with vcr to cache HTTP requests/responses. Check out the http testing book.
Install the latest version from CRAN, or the dev version from GitHub:
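For the dev version, this assumes the ropensci/crul repository on GitHub:

install.packages("crul")

# or the dev version
# install.packages("remotes")
remotes::install_github("ropensci/crul")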
`HttpClient` is where to start:
library("crul")

(x <- HttpClient$new(
url = "https://httpbin.org",
opts = list(
timeout = 1
),
headers = list(
a = "hello world"
)
))
#> <crul connection>
#> url: https://httpbin.org
#> curl options:
#> timeout: 1
#> proxies:
#> auth:
#> headers:
#> a: hello world
#> progress: FALSE
#> hooks:
This makes an R6 class that has all the bits and bobs you'd expect for doing HTTP requests. When it prints, it shows any defaults you've set; as you update the object you can see what's been set.
You can also pass in curl options when you make HTTP requests, see below for examples.
The client object created above has HTTP methods that you can call, passing a path, as well as query parameters, body values, and any other curl options.
Here, we'll do a GET request on the route /get on our base url https://httpbin.org (the full url is then https://httpbin.org/get):
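A minimal call, assigning the response so we can inspect it below:

res <- x$get("get")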
The response from an HTTP request is another R6 class, `HttpResponse`, which has slots for the outputs of the request, and some functions to deal with the response:
Status code
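The bare integer code is also available directly:

res$status_code
#> [1] 200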
Status information
res$status_http()
#> <Status code: 200>
#> Message: OK
#> Explanation: Request fulfilled, document follows
The content
res$content
#> [1] 7b 0a 20 20 22 61 72 67 73 22 3a 20 7b 7d 2c 20 0a 20 20 22 68 65 61
#> [24] 64 65 72 73 22 3a 20 7b 0a 20 20 20 20 22 41 22 3a 20 22 68 65 6c 6c
#> [47] 6f 20 77 6f 72 6c 64 22 2c 20 0a 20 20 20 20 22 41 63 63 65 70 74 22
#> [70] 3a 20 22 61 70 70 6c 69 63 61 74 69 6f 6e 2f 6a 73 6f 6e 2c 20 74 65
#> [93] 78 74 2f 78 6d 6c 2c 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f 78 6d 6c
#> [116] 2c 20 2a 2f 2a 22 2c 20 0a 20 20 20 20 22 41 63 63 65 70 74 2d 45 6e
#> [139] 63 6f 64 69 6e 67 22 3a 20 22 67 7a 69 70 2c 20 64 65 66 6c 61 74 65
#> [162] 22 2c 20 0a 20 20 20 20 22 48 6f 73 74 22 3a 20 22 68 74 74 70 62 69
#> [185] 6e 2e 6f 72 67 22 2c 20 0a 20 20 20 20 22 55 73 65 72 2d 41 67 65 6e
#> [208] 74 22 3a 20 22 6c 69 62 63 75 72 6c 2f 37 2e 35 34 2e 30 20 72 2d 63
#> [231] 75 72 6c 2f 34 2e 32 20 63 72 75 6c 2f 30 2e 39 2e 30 22 0a 20 20 7d
#> [254] 2c 20 0a 20 20 22 6f 72 69 67 69 6e 22 3a 20 22 31 39 32 2e 31 33 32
#> [277] 2e 36 31 2e 33 35 2c 20 31 39 32 2e 31 33 32 2e 36 31 2e 33 35 22 2c
#> [300] 20 0a 20 20 22 75 72 6c 22 3a 20 22 68 74 74 70 73 3a 2f 2f 68 74 74
#> [323] 70 62 69 6e 2e 6f 72 67 2f 67 65 74 22 0a 7d 0a
HTTP method
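The HTTP verb used is stored on the response; for the request above it is:

res$method
#> [1] "get"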
Request headers
res$request_headers
#> $`User-Agent`
#> [1] "libcurl/7.54.0 r-curl/4.2 crul/0.9.0"
#>
#> $`Accept-Encoding`
#> [1] "gzip, deflate"
#>
#> $Accept
#> [1] "application/json, text/xml, application/xml, */*"
#>
#> $a
#> [1] "hello world"
Response headers
res$response_headers
#> $status
#> [1] "HTTP/1.1 200 OK"
#>
#> $`access-control-allow-credentials`
#> [1] "true"
#>
#> $`access-control-allow-origin`
#> [1] "*"
#>
#> $`content-encoding`
#> [1] "gzip"
#>
#> $`content-type`
#> [1] "application/json"
#>
#> $date
#> [1] "Wed, 06 Nov 2019 20:52:22 GMT"
#>
#> $`referrer-policy`
#> [1] "no-referrer-when-downgrade"
#>
#> $server
#> [1] "nginx"
#>
#> $`x-content-type-options`
#> [1] "nosniff"
#>
#> $`x-frame-options`
#> [1] "DENY"
#>
#> $`x-xss-protection`
#> [1] "1; mode=block"
#>
#> $`content-length`
#> [1] "228"
#>
#> $connection
#> [1] "keep-alive"
All response headers - e.g., intermediate headers
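These are kept in `response_headers_all`, a list with one set of headers per response, including any intermediate responses (e.g., redirects):

res$response_headers_all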
And you can parse the content with parse()
res$parse()
#> No encoding supplied: defaulting to UTF-8.
#> [1] "{\n \"args\": {}, \n \"headers\": {\n \"A\": \"hello world\", \n \"Accept\": \"application/json, text/xml, application/xml, */*\", \n \"Accept-Encoding\": \"gzip, deflate\", \n \"Host\": \"httpbin.org\", \n \"User-Agent\": \"libcurl/7.54.0 r-curl/4.2 crul/0.9.0\"\n }, \n \"origin\": \"192.132.61.35, 192.132.61.35\", \n \"url\": \"https://httpbin.org/get\"\n}\n"
jsonlite::fromJSON(res$parse())
#> No encoding supplied: defaulting to UTF-8.
#> $args
#> named list()
#>
#> $headers
#> $headers$A
#> [1] "hello world"
#>
#> $headers$Accept
#> [1] "application/json, text/xml, application/xml, */*"
#>
#> $headers$`Accept-Encoding`
#> [1] "gzip, deflate"
#>
#> $headers$Host
#> [1] "httpbin.org"
#>
#> $headers$`User-Agent`
#> [1] "libcurl/7.54.0 r-curl/4.2 crul/0.9.0"
#>
#>
#> $origin
#> [1] "192.132.61.35, 192.132.61.35"
#>
#> $url
#> [1] "https://httpbin.org/get"
Curl options can also be passed to individual requests; here, an intentionally short timeout triggers an error:

res <- HttpClient$new(url = "http://api.gbif.org/v1/occurrence/search")
res$get(query = list(limit = 100), timeout_ms = 100)
#> Error in curl::curl_fetch_memory(x$url$url, handle = x$url$handle) :
#> Timeout was reached
The simpler interface, `Async`, allows many requests (many URLs), but they all get the same options/headers, etc., and you have to use the same HTTP method on all of them:
(cc <- Async$new(
urls = c(
'https://httpbin.org/',
'https://httpbin.org/get?a=5',
'https://httpbin.org/get?foo=bar'
)
))
res <- cc$get()
lapply(res, function(z) z$parse("UTF-8"))
The `AsyncVaried` interface accepts any number of `HttpRequest` objects, which can define any type of HTTP request with any HTTP method:
req1 <- HttpRequest$new(
url = "https://httpbin.org/get",
opts = list(verbose = TRUE),
headers = list(foo = "bar")
)$get()
req2 <- HttpRequest$new(url = "https://httpbin.org/post")$post()
out <- AsyncVaried$new(req1, req2)
Execute the requests
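Nothing is fetched until you call `request()`:

out$request()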
Then functions get applied to all responses:
out$status()
#> [[1]]
#> <Status code: 200>
#> Message: OK
#> Explanation: Request fulfilled, document follows
#>
#> [[2]]
#> <Status code: 200>
#> Message: OK
#> Explanation: Request fulfilled, document follows
out$parse()
#> [1] "{\n \"args\": {}, \n \"headers\": {\n \"Accept\": \"application/json, text/xml, application/xml, */*\", \n \"Accept-Encoding\": \"gzip, deflate\", \n \"Foo\": \"bar\", \n \"Host\": \"httpbin.org\", \n \"User-Agent\": \"R (3.6.1 x86_64-apple-darwin15.6.0 x86_64 darwin15.6.0)\"\n }, \n \"origin\": \"192.132.61.35, 192.132.61.35\", \n \"url\": \"https://httpbin.org/get\"\n}\n"
#> [2] "{\n \"args\": {}, \n \"data\": \"\", \n \"files\": {}, \n \"form\": {}, \n \"headers\": {\n \"Accept\": \"application/json, text/xml, application/xml, */*\", \n \"Accept-Encoding\": \"gzip, deflate\", \n \"Content-Length\": \"0\", \n \"Content-Type\": \"application/x-www-form-urlencoded\", \n \"Host\": \"httpbin.org\", \n \"User-Agent\": \"libcurl/7.54.0 r-curl/4.2 crul/0.9.0\"\n }, \n \"json\": null, \n \"origin\": \"192.132.61.35, 192.132.61.35\", \n \"url\": \"https://httpbin.org/post\"\n}\n"
Progress bars are supported: pass `httr::progress()` to the `progress` parameter:

library(httr)
x <- HttpClient$new(
url = "https://httpbin.org/bytes/102400",
progress = progress()
)
z <- x$get()
|==============================================| 100%
Get citation information for `crul` in R by doing `citation(package = 'crul')`