Replies: 2 comments 6 replies
-
Got it @xoldyckk. I see the problems that you have pointed out in my approach, and the need for all the validations that you have outlined. I will start working on correcting the same.
-
Also, do you think we should move to using base64 cursors built from both `createdAt` and the mongo id, or is the current implementation of just using primary ids fine?
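For reference, a composite cursor of that kind is usually just the base64 encoding of the two fields joined together. A minimal sketch in TypeScript, assuming Node's `Buffer`; the `Cursor` shape and the `:` separator here are illustrative assumptions, not the project's actual format:

```typescript
// Hypothetical composite cursor: base64("<createdAt ISO string>:<mongo _id>").
interface Cursor {
  createdAt: Date;
  id: string; // MongoDB _id as a hex string
}

function encodeCursor(c: Cursor): string {
  return Buffer.from(`${c.createdAt.toISOString()}:${c.id}`).toString("base64");
}

function decodeCursor(encoded: string): Cursor {
  const decoded = Buffer.from(encoded, "base64").toString("utf8");
  // ISO timestamps contain ':' themselves, so split on the LAST separator.
  const last = decoded.lastIndexOf(":");
  if (last === -1) throw new Error("malformed cursor");
  return {
    createdAt: new Date(decoded.slice(0, last)),
    id: decoded.slice(last + 1),
  };
}
```

An opaque cursor like this lets the server change the sort key later without breaking clients, which is the usual argument for it over raw primary ids.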
-
@EshaanAgg I'm going to outline the biggest flaw in your approach. You are fetching every single user that is assigned to a tag in a single db request. This completely goes against the concept of pagination. Pagination isn't only for providing data in little chunks to the client; it's also for getting data in little chunks from the database.
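To make that concrete, here is one way the db request itself could be limited to a single page. This is only a sketch of building the filter/sort/limit for a forward page; the field names (`createdAt`, `_id`) and the tie-break shape are my assumptions, not the current implementation:

```typescript
// Decoded cursor of the item the client last saw.
interface After {
  createdAt: Date;
  id: string;
}

interface PageArgs {
  first: number;
  after?: After;
}

// Builds the MongoDB query pieces for ONE page of results, instead of
// fetching every assigned user and slicing in memory.
function buildForwardQuery(base: Record<string, unknown>, args: PageArgs) {
  const filter: Record<string, unknown> = { ...base };
  if (args.after) {
    // Items strictly after the cursor; _id breaks ties between equal timestamps.
    filter.$or = [
      { createdAt: { $gt: args.after.createdAt } },
      { createdAt: args.after.createdAt, _id: { $gt: args.after.id } },
    ];
  }
  return {
    filter,
    sort: { createdAt: 1, _id: 1 },
    // Fetch one extra item so hasNextPage can be derived without a second query.
    limit: args.first + 1,
  };
}
```

The returned pieces would then be passed to the driver, e.g. `collection.find(filter).sort(sort).limit(limit)`, so the database only ever materializes `first + 1` documents.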
Now, the flaws regarding resolver arguments:

- Either `first` or `last` must be provided. If neither is, the query should fail.
- If both `first` and `last` are provided, the query should fail; there's no use case currently where both arguments should be accepted in a single query.
- `first`/`after` are a pair and `last`/`before` are a pair. The argument `after` is optional, but if it is provided, `first` must also be provided; clients cannot provide `before` with `first`. The argument `before` is optional, but if it is provided, `last` must also be provided; clients cannot provide `after` with `last`.
- `after` and `before` are cursors. Cursors are base64-encoded strings that contain the information needed to uniquely identify an edge in a connection (here, the user edge in the GraphQL schema). The primary id (MongoDB `_id`) can be used as a cursor, though usually it's a combination of the primary id and a `createdAt` field.
- To determine `hasNextPage` or `hasPreviousPage` (depending on whether you're forward or backward paginating), you can increase the number of items you're fetching from the database by 1. If that extra item is fetched too, it means you have a next page of data (`hasNextPage` is true). Whether you're forward or backward paginating, you're always going to have the information for `hasPreviousPage`, because you're literally passing in a cursor which represents an object that exists.

For some time in the future, when a good validation system is decided on for resolver arguments:

- You're not checking arguments
`first` and `last` for minimum/maximum values. The client cannot just query with `0` as the value for either `first` or `last`. Similarly, the client cannot query with a big integer like `999999` as the value for either `first` or `last`. I don't know whether the custom GraphQL scalar `PositiveInt` includes `0` as a positive integer, but if it does, such a query should fail.
- `first` and `last` dictate how many items are returned in a single query, so their values should be constrained; the client cannot just ask for 0 items or 99999 items in a single request.

This validation system would also account for many of the steps I mentioned above. I just thought the min/max constraints are not something I should include in those steps, because they're not related to the correct implementation of the connection itself.
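All of the argument rules above (exactly one of `first`/`last`, cursor pairing, min/max bounds) could live in one validator. A sketch under my own naming; `MIN_LIMIT`/`MAX_LIMIT` of 1 and 100 are arbitrary illustrative bounds, not the project's actual constants:

```typescript
interface ConnectionArgs {
  first?: number | null;
  after?: string | null;
  last?: number | null;
  before?: string | null;
}

// Illustrative bounds; the real values would come from project config.
const MIN_LIMIT = 1;
const MAX_LIMIT = 100;

// Throws on any argument combination the rules above forbid.
function validateConnectionArgs(args: ConnectionArgs): void {
  const { first, last, after, before } = args;
  if (first == null && last == null)
    throw new Error("Either first or last must be provided.");
  if (first != null && last != null)
    throw new Error("first and last cannot be provided together.");
  if (after != null && first == null)
    throw new Error("after must be paired with first.");
  if (before != null && last == null)
    throw new Error("before must be paired with last.");
  const limit = first ?? last!;
  if (limit < MIN_LIMIT || limit > MAX_LIMIT)
    throw new Error(`first/last must be between ${MIN_LIMIT} and ${MAX_LIMIT}.`);
}
```

Because the pairing checks require `first` for `after` and `last` for `before`, the forbidden combinations (`before` with `first`, `after` with `last`) fall out of the same four conditions.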