Forum Discussion
Unfortunately it's not quite that easy to implement, since which messages are read/unread differs for each user.
DougS is exactly right: the implementation is actually quite ugly and prone to errors and strange behaviour, especially when combined with some sort of caching (component cache, user cache). It can be done, but it's definitely not pretty. If I find time I'll try to resurrect the idea; thanks for the link!
If I already have DougS on the "line" =): I was recently digging through the docs and community because I thought I had read (once upon a time, but couldn't find it...) that the Lithium REST APIs internally cache requests (i.e. outside of appcache, usercache, component cache). Is that a) the case, b) for how long, and c) if it is the case, can that value be adjusted by support or is it global?
- allensmith81 · 7 years ago · Boss
It shouldn't be that difficult to sort this:
we want to call the API for the currently authenticated user, so we aren't asking for a global 'unread' setting. We literally want to query that value in the message collection for the CURRENT user.
- luk · 7 years ago · Boss
allensmith81 There is no such thing as a global "unread". You WILL query the "read" field for the current user (which is the root of the problem: it has to be done for each user individually => expensive)! Like DougS said:
Unfortunately it's not quite that easy to implement
Let me illustrate: you're going to query something like this:
SELECT subject, conversation.solved, view_href, user_context.read FROM messages WHERE board.id = '<board.id>'
This will return 25 messages that have either true or false as the value for user_context.read, but you don't know how many of those are false (i.e. unread), which is (as far as I understand) why you started this topic, as the read value cannot be used as a constraint to filter out unread (or read) messages for the current user. So what we have to do is loop through the result (let's say we want the last 10 unread messages) and keep track of how many unread messages we have found (as Doug already stated above):
<#assign query = "SELECT subject, conversation.solved, view_href, user_context.read FROM messages WHERE board.id = '<board.id>'"?url />
<#assign response = rest('2.0', '/search?q=${query}').data.items />
<#assign unread_limit = 10 />
<#assign unread_messages = [] />
<#-- The simplistic approach would be to simply loop over the response, check whether the message is unread and we haven't yet reached the limit, and if so add the currently processed message to the "unread_messages" sequence that will be further processed as a result. This maybe works if you just need the last few unread messages, maybe something below 10... -->
<#-- We try to be clever and sort the response so we have all unread messages first; we then only have to loop until the first read = true to know that there will be no more unread messages down the line! -->
<#list response?sort_by(['user_context', 'read']) as obj>
    <#if ( !obj.user_context.read && unread_limit > 0 )>
        <#assign unread_limit = (unread_limit - 1) />
        <#assign unread_messages = unread_messages + [obj] />
        Is message "${obj.subject}" read? => ${obj.user_context.read?c}
    <#elseif ( obj.user_context.read || unread_limit == 0 )>
        Breaking loop, current message is read: ${obj.user_context.read?c}
        ${(unread_limit == 0)?then('Limit has been reached!', 'Limit not reached yet...response did not contain enough unread messages, fetching more...')}
        <#-- stop looping when we have enough unread messages or a read = true value is encountered -->
        <#break />
    </#if>
</#list>

PROBLEM: what if your initial response of message objects does not contain enough unread messages to reach the limit?

APPROACH(ES) TO THE PROBLEM:
- You can simply add/increase the LIMIT in the LiQL query and hope you'll get enough unread messages if you set it to a high enough value, although that could be 50, 100, 500 or even more... Does that scale, and is it reliable? What is the impact on performance? Try this with LIMIT 1000...
<#assign query = "SELECT subject, conversation.solved, view_href, user_context.read FROM messages WHERE board.id = '<board.id>' LIMIT 1000"?url />
- You can start dealing with OFFSET / pagination until you have enough messages, which makes the code more complex but, more importantly, requires more API calls and is therefore slower than a single API call, although the performance penalty is most likely smaller than with the first approach, as you can process the messages in smaller batches.
<#-- only check 10 messages at a time; here we would check "page" 3 -->
<#assign query = "SELECT subject, conversation.solved, view_href, user_context.read FROM messages WHERE board.id = '<board.id>' LIMIT 10 OFFSET 20"?url />
When doing this we have to take care not to run into errors because we go out of range, i.e. we have to know how many messages there are in total, which adds one more API request:
<#assign query = "SELECT count(*) FROM messages WHERE board.id = '<board.id>'"?url />
<#assign count = rest('2.0', '/search?q=${query}').data.count />

We then need to deal with the API pagination and calculate the number of pages based on the count fetched above:
<#-- page_size is the fixed LIMIT per request; unread_limit tracks how many unread messages we still need (the original draft reused one variable for both, which breaks the page maths once it is decremented) -->
<#assign page_size = 10 />
<#assign unread_limit = 10 />
<#assign unread_messages = [] />
<#assign query = "SELECT count(*) FROM messages WHERE board.id = '<board.id>'"?url />
<#assign count = rest('2.0', '/search?q=${query}').data.count />
<#-- <#assign count = 106 /> fake data, use on try.freemarker.org -->
<#assign pages = ((count/page_size)?ceiling > 0)?then((count/page_size)?ceiling - 1, 0) />
<#list 0..pages as page>
    <#-- OFFSET is a row offset, not a page number, so multiply the page index by the page size -->
    <#assign query = "SELECT subject, conversation.solved, view_href, user_context.read FROM messages WHERE board.id = '<board.id>' LIMIT ${page_size} OFFSET ${page * page_size}"?url />
    <#assign response = rest('2.0', '/search?q=${query}').data.items />
    <#-- <#assign response = [{"user_context": {"read": false}}, {"user_context": {"read": false}}, {"user_context": {"read": false}}, {"user_context": {"read": true}}, {"user_context": {"read": false}}, {"user_context": {"read": true}}, {"user_context": {"read": true}}, {"user_context": {"read": true}}, {"user_context": {"read": false}}, {"user_context": {"read": true}}, {"user_context": {"read": false}}] /> fake data, use on try.freemarker.org -->
    <#-- We try to be clever and sort the response so we have all unread messages first; we then only have to loop until the first read = true to know that there will be no more unread messages down the line! -->
    <#list response?sort_by(['user_context', 'read']) as obj>
        <#if ( !obj.user_context.read && unread_limit > 0 )>
            <#assign unread_limit = (unread_limit - 1) />
            <#assign unread_messages = unread_messages + [obj] />
            Message read => ${obj.user_context.read?c}
        <#elseif ( obj.user_context.read || unread_limit == 0 )>
            Breaking loop, current message is read => ${obj.user_context.read?c}
            ${(unread_limit == 0)?then('Limit has been reached!', 'Limit not reached yet...response did not contain enough unread messages, fetching more...')}
            <#-- break loop when we have enough unread messages -->
            <#break />
        </#if>
    </#list>
    <#-- break outer loop as well -->
    <#if ( unread_limit == 0 )>
        Breaking outer loop and stop making API requests, limit reached...
        <#break />
    </#if>
</#list>
I agree that from a code standpoint it is manageable so far, but it definitely is not as elegant as it could be...
DougS an improvement of the API could be to allow a sort based on user_context.read; would that be easier to implement than a WHERE constraint? If we could use something like
SELECT <fields> FROM messages WHERE board.id = '<board.id>' ORDER BY user_context.read ASC
we might be able to fetch exactly the right amount of messages (if ALL unread messages are needed, of course...) by combining the call above with the count we can fetch from API v1 with
<#assign unread_count = rest('/boards/id/<board.id>/messages/read/count').count.value />
to set a LIMIT like:
<#assign unread_count = rest('/boards/id/<board.id>/messages/read/count').count.value />
<#assign query = "SELECT <fields> FROM messages WHERE board.id = '<board.id>' ORDER BY user_context.read ASC LIMIT ${unread_count}"?url />
Any thoughts?
- allensmith81 · 7 years ago · Boss
So,
I think we are misunderstanding each other... and I still do not see why it's not that simple.
I DO NOT want a global read or unread setting. I want to make the API call once per user, which we could then cache etc.
I completely understand your API example; we looked at doing something similar, but it's too performance-heavy.
Essentially, all we want to do is be able to call:
SELECT * FROM messages WHERE user_context.read = false AND board.id = 'boardid'
Now if this were a straight SQL query on a database, it would be an even easier task. So for me the complexity is simply exposing, via the API, the ability to use a field that the collection already has as a LiQL constraint.
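To make the ask concrete, here is roughly what the template code would shrink to if user_context.read were accepted as a WHERE constraint. This is hypothetical: the current LiQL parser rejects this query, and the field list and LIMIT value are just examples.

```freemarker
<#-- HYPOTHETICAL: user_context.read is not an allowed WHERE constraint in current LiQL -->
<#assign query = "SELECT subject, view_href FROM messages WHERE user_context.read = false AND board.id = '<board.id>' LIMIT 10"?url />
<#assign unread_messages = rest('2.0', '/search?q=${query}').data.items />
```

One API call for the current user, with no sorting, looping, or pagination bookkeeping.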
Hopefully that has clarified why a) this shouldn't be that complicated, and b) DougS, could you help me identify the person at Lithium who owns the API roadmap so I can push this up the development flagpole?
Somehow, looking at what's happening with ideas of late doesn't fill me with confidence that filing another idea will make this happen.