
rework API to use Solr when available #1366

Closed
wants to merge 26 commits

Conversation

jywarren
Member

@jywarren jywarren commented Apr 4, 2017

This now needs to be done on top of (after) #1386 -- though it could actually be redone manually.

@icarito - check this change out -- it would make API node searches draw upon Solr when available, and ActiveRecord when not. If you like it, and #1176 is ready, you can merge this into the fix_solr branch.
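To illustrate the idea, here's a hypothetical sketch of the "Solr when available, ActiveRecord when not" pattern -- not the actual plots2 code; `SolrUnavailable`, `solr_search`, and `db_search` are illustrative stand-ins:

```ruby
# Illustrative only: simulate a Solr-backed search that falls back to a
# plain database query when Solr is unreachable.
class SolrUnavailable < StandardError; end

def solr_search(query)
  # In the real app this would be a Sunspot/RSolr full-text query;
  # here we simulate Solr being down.
  raise SolrUnavailable, "503 Service Unavailable"
end

def db_search(query)
  # Plain ActiveRecord-style fallback, e.g. a LIKE query on node titles.
  ["node titled '#{query}'"]
end

def search_nodes(query)
  solr_search(query)
rescue SolrUnavailable
  db_search(query)
end

puts search_nodes("balloon mapping").inspect
```

The key design point is that callers only ever see `search_nodes`; whether results came from Solr or the database is invisible to the API consumer.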


Jeff's remaining to-do list:

@PublicLabBot

PublicLabBot commented Apr 4, 2017

2 Messages
📖 @jywarren Thank you for your pull request! I’m here to help with some tips and recommendations. Please take a look at the list provided and help us review and accept your contribution!
📖 This pull request doesn't link to an issue number. Please refer to the issue it fixes (if any) in the format: Fixes #123.

Generated by 🚫 Danger

@icarito
Member

icarito commented Apr 5, 2017

That looks great! Is this a pull-request into a pull-request? Wow. I don't know how it works, should I merge this into fix_solr? Great work!

@jywarren
Member Author

jywarren commented Apr 5, 2017

It is, yes -- to keep this follow-up work separate and sequential, let's wait until #1176 is complete before looking at merging this in. I want to know that the Solr toggling fallback system works on branch1 first!

@icarito
Member

icarito commented Apr 5, 2017

Well, currently the last push is already running at http://branch1.laboratoriopublico.org -- shall I stop Solr on pad.publiclab.org and see what happens?

@jywarren
Member Author

jywarren commented Apr 5, 2017

yes, please do! so eager!

@icarito
Member

icarito commented Apr 5, 2017

Earlier this morning, after loading a fresh SQL dump, I started a reindex, and it's still going -- really slowly. The good news is that I ran it before Solr was ready, and it successfully retried until Solr was online.

icarito@tycho:/srv/plots_branch1/plots2$ RAILS_ENV=production docker-compose run web rake sunspot:reindex
Error - RSolr::Error::Http - 503 Service Unavailable - retrying...
Error - RSolr::Error::Http - 503 Service Unavailable - ignoring...
[#                                                                  ] [ 13650/871891] [  1.57%] [25:28] [26:42:11] [     8.93/s]

This is not the speed at which it reindexed last time.

I'm watching the Solr logs over at pad.publiclab.org and seeing this:

solr_1  | 1486575 INFO  (qtp1112280004-16) [   x:default] o.a.s.c.S.Request [default] webapp=/solr path=/select params={q=test&defType=edismax&qf=title_text+body_text+comments_text&fl=*+score&start=0&fq=type:Node&rows=30&wt=ruby} hits=0 status=0 QTime=0
solr_1  | 1486679 INFO  (qtp1112280004-13) [   x:default] o.a.s.c.S.Request [default] ... (same /select query) ... hits=0 status=0 QTime=0
solr_1  | 1486778 INFO  (qtp1112280004-15) [   x:default] o.a.s.c.S.Request [default] ... (same /select query) ... hits=0 status=0 QTime=0
[eight more identical /select requests, roughly 100 ms apart: 1486866, 1486962, 1487061, 1487158, 1487258, 1487366, 1487469, 1487574]
solr_1  | 1487700 INFO  (qtp1112280004-18) [   x:default] o.a.s.u.p.LogUpdateProcessor [default] webapp=/solr path=/update params={wt=ruby} {add=[User 182647 (1563849302710681600), User 182648 (1563849302713827328), User 182649 (1563849302713827329), User 182650 (1563849302713827330), User 182651 (1563849302713827331), User 182652 (1563849302713827332), User 182653 (1563849302713827333), User 182654 (1563849302713827334), User 182655 (1563849302713827335), User 182656 (1563849302713827336), ... (50 adds)]} 0 14

So that's many test queries per second (possibly one per record)? Perhaps it would be good to throttle these checks so there's no more than one every few seconds.
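One possible throttle along those lines: cache the result of the availability test for a few seconds instead of testing once per record. This is a sketch, not code from plots2; `CachedSolrCheck` and `ping` are illustrative names:

```ruby
# Illustrative only: memoize a service availability check with a TTL so
# thousands of records reuse one test query.
class CachedSolrCheck
  attr_reader :ping_count

  def initialize(ttl: 5)
    @ttl = ttl          # seconds to trust the last check
    @checked_at = nil
    @available = false
    @ping_count = 0
  end

  def available?(now = Time.now)
    if @checked_at.nil? || (now - @checked_at) >= @ttl
      @available = ping
      @checked_at = now
    end
    @available
  end

  private

  def ping
    @ping_count += 1
    true  # the real check would issue one cheap Solr query
  end
end

check = CachedSolrCheck.new(ttl: 5)
t = Time.now
1000.times { check.available?(t) }  # simulates 1000 records in one instant
puts check.ping_count  # => 1
```

With a 5-second TTL, a reindex of 871,891 records would issue on the order of one test query per 5 seconds rather than one per record.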

@jywarren
Member Author

jywarren commented Apr 5, 2017

OK, rebased here too.
