Cannot re-create Solr index [JIRA: CLIENTS-529] #408
@eagleoneraptor I'm wondering if this is a timing issue, in that it takes a while for changes in Riak to propagate over to Solr. @zeeshanlakhani pointed out to me that this has to happen on all nodes in the cluster, not just one. FYI, for our testing we use this function.
@javajolt if I do a 30-second sleep after deleting the index, everything works fine.
How exactly should I implement the busy-waiting in my case (deleting the whole index)?
I guess I would keep searching until I got the not-found error; then you would know you have successfully deleted it. Sounds like a useful feature to add to the client library, too, so you would not have to do this yourself.
So, you mean something like this?

```python
c.delete_search_index('my_index')
success = True
while success:
    try:
        print bucket.search('_yz_rk:' + key, index='my_index')['docs']
    except Exception as e:
        print e
        success = False
# Create index
```

That is not working; the very first call to `search` already raises the exception, so the loop exits immediately.
So, I think I figured out how to hack it without the fixed sleep:

```python
import time
import urllib2

def wait_for_delete_index():
    while True:
        try:
            urllib2.urlopen('http://localhost:8093/internal_solr/my_index/select')
            # Schema found in Solr, try again
            time.sleep(1)
        except urllib2.HTTPError:
            # Schema not found in Solr
            return

c.delete_search_index('my_index')
wait_for_delete_index()
c.create_search_schema('my_schema', schema_data)
c.create_search_index('my_index', 'my_schema')
```

What do you think?
Interesting. I would bet it works, but it's unfortunate you have to go directly to Solr. That said, it's the safest solution, because Solr is the component we are waiting on. The only downside is that if you have multiple nodes, I don't know that querying a single Solr node would tell you that the index has been deleted on ALL of the nodes in your cluster. There is one Solr instance per Riak node.
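One (untested) way to handle the multi-node case could be to repeat the same check against every node's internal Solr endpoint before re-creating the index; a rough sketch, where the node list is a placeholder for the actual cluster members:

```python
import time
import urllib2

# Placeholder list of (host, port) pairs for the internal Solr endpoint
# of every Riak node in the cluster.
SOLR_NODES = [('riak1.example.com', 8093),
              ('riak2.example.com', 8093),
              ('riak3.example.com', 8093)]

def wait_for_delete_index_on_all_nodes(index):
    for host, port in SOLR_NODES:
        url = 'http://%s:%d/internal_solr/%s/select' % (host, port, index)
        while True:
            try:
                urllib2.urlopen(url)
                # This node's Solr still knows the index, try again
                time.sleep(1)
            except urllib2.HTTPError:
                # Index is gone on this node, move on to the next one
                break
```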
I'm having problems with Riak and the Python client while trying to delete and re-create a search index after a schema update. Our system checks an XML file containing the Solr schema for changes; when the file changes, we refresh the index (update the schema, delete the index, create the index, and store all the bucket data in the index again).
Roughly, what we are doing is the following:
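In terms of the client API, the sequence is essentially this (a sketch; `c` is the `riak.RiakClient` instance, `schema_data` holds the updated schema XML, and the re-store of the bucket data would happen as a later step):

```python
# Rough sketch of the refresh sequence described above (names are illustrative).
c.create_search_schema('my_schema', schema_data)   # upload the updated schema
c.delete_search_index('my_index')                  # drop the old index
c.create_search_index('my_index', 'my_schema')     # re-create it against the new schema
```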
With 5 records in `my_bucket` (and in `my_index`), after executing all of these calls the `search` method still returns the same data as before. I was expecting `my_index` in Solr to be empty (since I'm not re-indexing the bucket data into `my_index` with this code). Also, I cannot see my changes in the Solr schema browser.

But then, after some trial and error, I noticed the following: if I delete the index in one script and then create it again from a separate Python instance, everything works fine. `my_index` has no data anymore and the Solr schema browser shows me the updated schema. I've also noticed that `create_search_index` takes a lot more time to execute that way (in the first script, everything executed instantly).

Am I doing something wrong or is this a bug?