Past experience with the Ruby unit tests indicates that test coverage is not sufficient to give us great confidence in the reliability of the v2 implementation. Thus, we propose to add an extra layer of confidence by testing the v2 API GET endpoints in production. This will be done by sending each GET request to both the v1 and the v2 backends and comparing the responses:
response1 = client.get("http://forum/api/" + endpoint, params=params)
response2 = client.get("http://forumv2/api/" + endpoint, params=params)  # "forumv2" is actually "localhost:8000"
# Compare the response bodies rather than the response objects themselves
if response2.content != response1.content:
    logging.error(f"Forum v2 diff for endpoint {endpoint} with params={params}. Expected: {response1.content}. Got: {response2.content}.")
return response1
Our goal will be to remove all such errors. Because GET requests are idempotent, and most of the complexity of the v1 API comes from GET endpoints, this approach will considerably improve our confidence in the new v2 implementation.
Running duplicate queries will add latency. If this slowdown proves unacceptable, we will duplicate queries only for a randomized subsample of all requests.
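The sampled comparison could be gated on a single random draw per request. Here is a minimal sketch; the function name `get_with_shadow_compare` and the `FORUM_V2_SAMPLE_RATE` constant are hypothetical, and the comparison on `.content` assumes a requests-style client:

```python
import logging
import random

# Hypothetical knob: fraction of GET requests that are also replayed
# against the v2 backend (1.0 = compare every request).
FORUM_V2_SAMPLE_RATE = 0.1

def get_with_shadow_compare(client, endpoint, params):
    """Serve the response from v1; occasionally replay the same GET
    against v2 and log any difference between the two bodies."""
    response1 = client.get("http://forum/api/" + endpoint, params=params)
    if random.random() < FORUM_V2_SAMPLE_RATE:
        response2 = client.get("http://forumv2/api/" + endpoint, params=params)
        if response2.content != response1.content:
            logging.error(
                f"Forum v2 diff for endpoint {endpoint} with params={params}. "
                f"Expected: {response1.content}. Got: {response2.content}."
            )
    return response1
```

Because the v2 call happens after v1 has already answered, the extra latency is only paid on the sampled fraction of requests, and the caller always receives the v1 response.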