
How to make an HTTP request using Ruby on Rails?

I would like to fetch information from another website, so (presumably) I need to make a request to that website (in my case an HTTP GET request) and receive the response.

How can I do this in Ruby on Rails?

If it is possible, is it a correct approach to use this in my controllers?


Martin Tournoij

You can use Ruby's Net::HTTP class:

require 'net/http'

url = URI.parse('http://www.example.com/index.html')
req = Net::HTTP::Get.new(url.request_uri)  # request the path (plus query), not the full URL
res = Net::HTTP.start(url.host, url.port) do |http|
  http.request(req)  # returns a Net::HTTPResponse
end
puts res.body

What does the 'req' mean here?
Looks like this might be a blocking request, would it not?
Where do I put the API key?
@João Silva How can I set a timeout for my request?
Just adding that the www. shouldn't be necessary; it typically isn't.
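
To answer the timeout question above: a minimal sketch, reusing the url object from the answer; Net::HTTP exposes open_timeout and read_timeout accessors (the values here are arbitrary examples):

require 'net/http'

url = URI.parse('http://www.example.com/index.html')
http = Net::HTTP.new(url.host, url.port)
http.open_timeout = 5    # seconds to wait for the connection to open
http.read_timeout = 10   # seconds to wait for data once connected
res = http.request(Net::HTTP::Get.new(url.request_uri))  # opens the connection if needed
puts res.body
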
sandstrom

Net::HTTP is built into Ruby, but let's face it: it's often easier to skip its cumbersome 1980s style and reach for a higher-level alternative (a minimal sketch with one of them follows the list):

HTTP Gem

HTTParty

RestClient

Excon

Feedjira (RSS only)
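
For instance, a minimal sketch using HTTParty (assuming the gem is installed; the URL is only an example):

require 'httparty'

response = HTTParty.get('http://www.example.com/index.html')
puts response.code  # HTTP status code, e.g. 200
puts response.body  # raw response body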


Or ActiveResource, which comes with Rails!
I would like to caution against doing so, as you will add more dependencies to your Rails app. More dependencies mean more memory consumption and a potentially larger attack surface. Using Net::HTTP is cumbersome, but the trade-off isn't worth it.
This should be the accepted answer. Why program when you can just install lots of Gems?
@JasonYeo Strongly disagree. Introducing dependencies means you don't reinvent the wheel, and you benefit from the hard work others have already done. If a gem exists that makes your life easier, there's generally no good reason not to use it.
@JasonYeo The leftpad saga only happened because NPM ran its repository poorly and let the author delete all his packages. Properly managed package repos don’t do that (and anyway, it’s OSS, so you can easily mirror if you want). That is, the leftpad saga is not an argument against introducing dependencies in general, but rather against managing the repo poorly. I do agree with your other point, that a big dependency that does way more than you need can be overkill for the value it provides.
Andrey Mikhaylov - lolmaus

OpenURI is the best; it's as simple as

require 'open-uri'
response = URI.open('http://example.com').read  # Kernel#open no longer accepts URLs in Ruby 3+

It's important to warn that open-uri won't follow redirects across schemes (e.g. from http to https).
@yagooar which is great, prevents malicious redirects like file:///etc/passwd
Please note that it will not close the connection. Use stackoverflow.com/a/4217269/820501
kkurian
require 'net/http'
result = Net::HTTP.get(URI.parse('http://www.example.com/about.html'))  # returns the response body as a String
# or, passing host and path separately:
result = Net::HTTP.get('www.example.com', '/about.html')

I don't think URI.parse is necessary. URI('http://www.example.com/') gives the same result.
Mark Thomas

I prefer httpclient over Net::HTTP.

require 'httpclient'

client = HTTPClient.new
puts client.get_content('http://www.example.com/index.html')

HTTParty is a good choice if you're making a class that's a client for a service. It's a convenient mixin that gives you 90% of what you need. See how short the Google and Twitter clients are in the examples.

And to answer your second question: no, I wouldn't put this functionality in a controller. I'd use a model instead, if possible, to encapsulate the particulars (perhaps using HTTParty) and simply call it from the controller, along the lines of the sketch below.
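
As a minimal sketch of that model-wrapper idea (assuming the HTTParty gem; the class name and endpoint are hypothetical):

require 'httparty'

class Weather
  include HTTParty
  base_uri 'api.example.com'

  def self.current(city)
    get('/weather', query: { city: city })  # GET http://api.example.com/weather?city=...
  end
end

# In a controller action:
# @forecast = Weather.current('Oslo')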


And how is it possible to pass parameters safely in the URL? E.g.: http://www.example.com/index.html?param1=test1&param2=test2. Then I need to read those parameters on the other website and prepare the response. But how can I read the parameters?
What do you mean, you need to read the other website's parameters? How would that even be possible? What are you trying to achieve?
John Haugeland

Here is code that works if you are making a REST API call from behind a proxy:

require "uri"
require 'net/http'

proxy_host = '<proxy addr>'
proxy_port = '<proxy_port>'
proxy_user = '<username>'
proxy_pass = '<password>'

uri = URI.parse("https://saucelabs.com:80/rest/v1/users/<username>")
proxy = Net::HTTP::Proxy(proxy_host, proxy_port, proxy_user, proxy_pass)

req = Net::HTTP::Get.new(uri.path)
req.basic_auth(<sauce_username>,<sauce_password>)

result = proxy.start(uri.host,uri.port) do |http|
http.request(req)
end

puts result.body

the Tin Man

My two favorite ways to grab the contents of URLs are OpenURI and Typhoeus.

OpenURI because it's everywhere, and Typhoeus because it's very flexible and powerful.
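
A minimal Typhoeus sketch (assuming the typhoeus gem is installed; the URL is only an example):

require 'typhoeus'

response = Typhoeus.get('http://www.example.com/index.html', followlocation: true)
puts response.code  # HTTP status code
puts response.body  # response body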