Hi vamsiu, thank you for your response. I'm currently facing an issue with extracting 1.6 million records efficiently: fetching 1,000 records per request with my script takes too much time. I've discovered that the API supports bulk data retrieval, and I've attempted to implement it using the Python example provided in the documentation:
Document 1
Document 2
However, I'm having difficulty retrieving the specific fields I need, such as id, sso_id, and last_visit_time, from the response, and I'm also getting a huge volume of data for a single day.
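For context, this is roughly what I've been trying in order to pull those fields out of the saved export (I'm guessing the payload is a flat list of records; the actual structure may be nested differently):

import json

# Load the export that the script below writes out, then keep only the
# fields I need. The keys here are my assumptions about the payload.
with open('ahq_user_data.json', encoding='utf-8') as f:
    records = json.load(f)

wanted = [
    {'id': r.get('id'),
     'sso_id': r.get('sso_id'),
     'last_visit_time': r.get('last_visit_time')}
    for r in records
]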
Regarding the second document, I'm unable to use it, as I lack the privileges needed to obtain the username and password required to instantiate the Khoros object.
Also, how can I use the 'https://[COMMUNITY DOMAIN]/api/2.0/search?' API to get bulk data? Is it different from 'https://api.lithium.com/lsi-data/v2/data/export/community/ea.prod'?
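From what I can tell from the docs, that search endpoint takes a LiQL query, something like the sketch below (the field names on the users collection and the session-key auth header are my assumptions):

import requests

# Hypothetical LiQL query against the v2 search endpoint; the selected
# fields and the session-key header are guesses on my part.
search_response = requests.get(
    'https://[COMMUNITY DOMAIN]/api/2.0/search',
    params={'q': 'SELECT id, sso_id, last_visit_time FROM users LIMIT 1000'},
    headers={'li-api-session-key': 'session_key'},
)
print(search_response.json())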
vamsiu, could you please assist me with this? Below is the script I'm currently using for bulk data retrieval.
import requests
import json

access_token = 'access_token'  # Place the "Bulk API access token" from Community Analytics here
client_id = 'client_id'        # Place the "Client ID" from Community Analytics here

response = requests.get(
    'https://api.lithium.com/lsi-data/v2/data/export/community/ea.prod',
    params={'fromDate': '20240201',  # Start date in YYYYMMDD format
            'toDate': '20240202'},   # End date in YYYYMMDD format
    auth=(access_token, ''),         # Token is passed as the basic-auth username
    headers={'client-id': client_id,
             'Accept': 'application/json'},  # OPTIONAL: leave 'Accept' unset to receive CSV instead
)

data = response.json()  # Parse the JSON body (requires the 'Accept' header above)
with open('ahq_user_data.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=4)
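Since even a single day's export is huge, I've also been considering splitting the pull into one request per day and streaming each response straight to disk instead of buffering it in memory. A rough sketch of what I have in mind (the date range and file naming are just placeholders):

import requests
from datetime import date, timedelta

access_token = 'access_token'
client_id = 'client_id'

day = date(2024, 2, 1)
end = date(2024, 2, 10)
while day < end:
    next_day = day + timedelta(days=1)
    resp = requests.get(
        'https://api.lithium.com/lsi-data/v2/data/export/community/ea.prod',
        params={'fromDate': day.strftime('%Y%m%d'),
                'toDate': next_day.strftime('%Y%m%d')},
        auth=(access_token, ''),
        headers={'client-id': client_id},  # 'Accept' left unset to receive CSV
        stream=True,  # stream the body instead of loading it all into memory
    )
    resp.raise_for_status()
    with open(f'ahq_user_data_{day:%Y%m%d}.csv', 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
    day = next_day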