Monday, December 12, 2022

The world of Versionless API in WunderGraph.

https://zenn.dev/kinzal/articles/74a761de1cc52d



I've been reading the WunderGraph Manifesto for a while now, and it's a great read.


It says some very good things, and it struck me as the next generation of API management practice for an era in which developing a service means running multiple clients, multiple microservices, and multiple APIs.


If you come away with the same impression, I also recommend reading the Use Cases, which take a more how-oriented view.


Parts of it left me going "hmm...?", but as a whole it has some very good things to say.


The Versionless API in particular makes me want to shout that this is exactly what I wanted.


Versionless API does not mean that the API is not versioned. It means that the API can continue to evolve without breaking the client.


Most importantly, this feature completely eliminates the mental burden of having to think about version management.


Configure the clients you want to support, including those that have communicated with the API Gateway in the past two months, then run wunderctl generate migration to implement the feature.


Once the migration test suite is green, you can deploy the updated API without interrupting your clients.


When making backward-incompatible changes, the bottleneck in API development is the client.


Everyone reading this article has presumably run into the same problem. Have you ever wanted to make a change, but been unable to commit to it because the impact on the client makes the cost of the change too great? It happens to me often.


Also, if you support mobile clients, changes take even longer, because old versions of the application keep running for a long time.


In today's development climate, improving ease of change to maximize agility is becoming increasingly important.


Yet as services are split up, agility drops, because changing an API requires agreement from many clients.


It would be wonderful if all of that disappeared in a Versionless API world! But there's no such thing as a free lunch.


As of December 2022, the Cloud version of WunderGraph is in Early Access, so I can't verify this, and the Versionless API described above doesn't appear to be implemented in what is on GitHub. (My apologies if I simply haven't done my due diligence; if you find it, please let me know so I can verify it.)


So if there is no existing implementation, I have no choice but to build one myself. I don't want to botch it by building blindly, so I'll start by considering what is needed and what trade-offs are involved.


Overall structure


Thinking of the flow as centered on the user's request, it looks like this:


1. The user makes a request
2. The Router determines which Schema the query corresponds to and routes it
   - If the request corresponds to the old Schema A, Schema A's Resolver is called
   - If the request corresponds to the new Schema B, Schema B's Resolver is called
3. In the Schema A case, Migration is called to convert the request into a Schema B request
4. Schema B's Resolver performs the resolution, returns the result to Migration, and Migration converts it into a Schema A response


Rough as this is, it seems possible to build a Versionless API that maintains backward compatibility in this way.
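
As a minimal sketch of this flow in TypeScript (every name below is my own hypothetical one, not WunderGraph's; the Schema A branch plays the role of Schema A's Resolver):

// Minimal sketch of the flow above; all names are hypothetical.
type SchemaVersion = "A" | "B";

interface GatewayRequest {
  schema: SchemaVersion; // determined by the Router
  body: unknown;
}

// Only the latest schema (B) has a real Resolver; the old Schema A is
// served through migration instead.
async function resolveSchemaB(body: unknown): Promise<unknown> {
  return { ok: true, echo: body };
}

// Hypothetical migration pair: A -> B for requests, B -> A for responses.
function migrateRequestAtoB(body: unknown): unknown {
  return body; // reshape old-format fields into the new format here
}

function migrateResponseBtoA(body: unknown): unknown {
  return body; // map new-format fields back for old clients here
}

async function handle(req: GatewayRequest): Promise<unknown> {
  if (req.schema === "B") {
    return resolveSchemaB(req.body); // new clients hit the Resolver directly
  }
  // Old clients: migrate the request forward, resolve on B, migrate back.
  const response = await resolveSchemaB(migrateRequestAtoB(req.body));
  return migrateResponseBtoA(response);
}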


Let's take a look at the details of each element.


Router


The role of the router is to determine the schema from the request and decide where to route it.


So, how should this schema identification be done?


1. Link the request to the schema in advance


WunderGraph probably uses this method. It can be done by defining the Schema and Operations ahead of time.


2. Add information to the request that identifies the schema


For example, embed a hash of the schema the request was built against in a header.


3. Parse the request against each schema, from newest to oldest, until one succeeds


There is a trade-off among these. Approach 1 makes the runtime decision fast, but requires building a flow for it in advance. Approach 3 removes the advance preparation, but may degrade latency at runtime.


In that sense, approach 2 is well balanced: since the schema version is known during client development, a function that embeds it at build or deploy time would be a good solution.
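
A sketch of approach 2 (the header name, hash placeholders, and handler names are my own assumptions):

// The client embeds a schema identifier at build/deploy time; the router
// picks the handler by that identifier. Header name is hypothetical.
const SCHEMA_HEADER = "x-schema-hash";

type Handler = (body: unknown) => Promise<unknown>;

async function handleSchemaB(body: unknown): Promise<unknown> {
  return { ok: true }; // latest schema's Resolver
}

async function handleSchemaA(body: unknown): Promise<unknown> {
  return handleSchemaB(body); // old schema; would go through Migration
}

// Schema hash -> handler for that schema version.
const handlers = new Map<string, Handler>([
  ["sha256:<hash of Schema A>", handleSchemaA],
  ["sha256:<hash of Schema B>", handleSchemaB],
]);

function route(headers: Record<string, string | undefined>): Handler {
  const hash = headers[SCHEMA_HEADER];
  // A missing or unknown hash falls back to the latest schema.
  return (hash && handlers.get(hash)) || handleSchemaB;
}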


Migration


There are two types of migration: request migration and response migration.


function requestMigration(request: Request<SchemaA>): Request<SchemaB> {
  // ...
}

function responseMigration(response: Response<SchemaB>): Response<SchemaA> {
  // ...
}


This is rough code, but I imagine it takes the form of this pair of functions.
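
To make it concrete, here is a hypothetical filled-in version, assuming Schema B renamed Schema A's userName response field to displayName (all types and fields are invented for illustration):

// Hypothetical schema types for illustration.
interface RequestA { userId: string }
interface RequestB { userId: string }
interface ResponseA { userName: string }
interface ResponseB { displayName: string }

// A -> B: the request shape is unchanged in this example, so it passes through.
function requestMigration(request: RequestA): RequestB {
  return { userId: request.userId };
}

// B -> A: map the renamed field back so old clients keep working.
function responseMigration(response: ResponseB): ResponseA {
  return { userName: response.displayName };
}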


That said, it doesn't seem possible to migrate everything this way. Let's consider the cases where backward compatibility is lost and migration is impossible.


For both requests and responses the rule is the same: if the necessary information does not exist in the migration source, the conversion cannot be done.


Such cases are probably rare, but they do exist: a change might add a new required input value to a request, or delete a response field because it is no longer needed or because something is wrong with it. (Neither is a frequent case.)


As long as a schema renewal avoids those impossible operations, even a large-scale one seems feasible by composing migrations of this kind, as sketched below.
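
A sketch of that composition: each version pair carries a request migration and a response migration, and chaining them walks an old request forward and the response back:

// A migration converts requests forward and responses backward.
type Migration<FromReq, ToReq, FromRes, ToRes> = {
  request: (req: FromReq) => ToReq;
  response: (res: ToRes) => FromRes;
};

// Chain A->B and B->C into a single A->C migration.
function compose<ReqA, ReqB, ReqC, ResA, ResB, ResC>(
  ab: Migration<ReqA, ReqB, ResA, ResB>,
  bc: Migration<ReqB, ReqC, ResB, ResC>,
): Migration<ReqA, ReqC, ResA, ResC> {
  return {
    request: (req) => bc.request(ab.request(req)),
    response: (res) => ab.response(bc.response(res)),
  };
}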


Keeping the old Resolver


Since there are cases where simple migration is impossible, let's also consider a way to maintain backward compatibility under any conditions.


The simplest approach that comes to mind is to keep the old Resolver implementation. Requests for the old schema are then handled by it directly.
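
A minimal sketch with hypothetical names: both Resolver versions stay registered, and both must track any backend change:

// One Resolver per schema version, both hitting the backend directly.
async function resolveUserV1(userId: string): Promise<{ userName: string }> {
  // Old implementation, kept alive for old clients; it must follow any
  // backend change (e.g. a DB migration) just like the current one.
  return { userName: `user-${userId}` };
}

async function resolveUserV2(userId: string): Promise<{ displayName: string }> {
  return { displayName: `user-${userId}` };
}

// Each entry stays until tracing shows no more traffic on that schema.
const resolvers = { A: resolveUserV1, B: resolveUserV2 };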


pros


Full compatibility can be achieved.

Eliminates the need for a migration mechanism


cons


Resolver implementation remains, increasing the amount of code to maintain


If the backend DB is changed, the old Resolver implementation must be changed too, and so on


This is a pure trade-off, and the Migration method sits on its flip side. The deciding factor is how much old implementation ends up being maintained.


My sense is that where clients are slow to change, ease of change will suffer because old Resolver implementations pile up. Where clients switch over quickly, it may not be a concern, since old Resolvers can be removed aggressively.


If I had to choose, I would probably take the Migration method and accept the downside that some changes become impossible, since the speed of client change is not under the API provider's control. If the clients were controllable, as with something like a BFF, I might keep the old Resolvers instead.


Other elements that may be needed


First, tracing will always be needed. In particular, it is required in order to decide when an old Schema and its Migration (or Resolver) can be deleted.


Next, it seems safer to have CI perform backward-compatibility checks on the schema, connected to tracing: if a client has made a request within the last N months, changes that would break that request should be rejected.


One thing to note here: the point is not to prohibit backward-incompatible changes as such, but to prohibit changes that would stop working for actual clients. You can make as many incompatible changes as you like, as long as they do not affect any client.
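
As a sketch of such a CI check for a GraphQL schema (buildSchema, parse, and validate are real graphql-js APIs; the traced-operations file and its format are my assumption):

import { readFileSync } from "fs";
import { buildSchema, parse, validate } from "graphql";

// Hypothetical input: operations seen in traces over the last N months,
// exported by the tracing pipeline as a JSON array of query strings.
const recentOperations: string[] = JSON.parse(
  readFileSync("traced-operations.json", "utf8"),
);

// The schema as it would look after the proposed change.
const nextSchema = buildSchema(readFileSync("schema.next.graphql", "utf8"));

// Fail CI if any recently observed client operation stops validating.
let broken = 0;
for (const op of recentOperations) {
  const errors = validate(nextSchema, parse(op));
  if (errors.length > 0) {
    console.error(`Would break a live client:\n${op}\n${errors.join("\n")}`);
    broken += 1;
  }
}
if (broken > 0) process.exit(1);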


In that sense, consumer-driven contract testing might be enough, with no connection to tracing. But supporting only the latest clients is not sufficient, so connecting to traces still seems better.


Concluding Remarks


In this way, the Versionless API looks like it could be a good idea.


WunderGraph is built on GraphQL, and I have leaned on GraphQL terminology here as well, but the idea seems applicable to RESTful APIs too.


Perhaps the focus should be on how to implement migration and how to ensure its safety. I really wanted to test it with WunderGraph, but I couldn't find an implementation...


I'd like to find the time to implement it, as it seems to be the way to go.


Postscript


As a quick impression from playing with WunderGraph: I can understand the idea and the experience they are aiming for, but it still feels too early to adopt.


I had guessed as much from the documentation, where features are marked "Supported!" but turn out, when you look inside, to still be stuck at the planning stage.


But...


"I don't care if it's a trap!"


...and so I couldn't help touching WunderGraph anyway. If you are going to use WunderGraph at this point, you might as well commit to it and grow the API together, so that everyone can be happy.



Discussion


As a side note, this is an important concept not only for APIs but also for streaming in Kinesis/Kafka and the like.




In the case of streaming, the Producer sends events and the Consumer receives and processes them, so there is no obvious place to put a versionless layer. But if the Consumer could tell the Producer what it needs, as with GraphQL Subscriptions, I imagine something like a Versionless Streaming worldview might be possible.


I think this is an important concept for streaming in Kinesis/Kafka, etc.


On these event streaming platforms, the common approach to this problem seems to be what is called schema evolution, using a schema registry.


Validate, evolve, and control schemas in Amazon MSK and Amazon Kinesis Data Streams with AWS Glue Schema Registry


Schema Evolution and Compatibility


The serialization format of the data itself may also provide that functionality, and many setups likely take advantage of it.


AVRO : Schema Evolution Simplified


The approach of sandwiching a migration layer, as discussed in the Migration section, seems similar to this schema registry approach.


Thanks for the info!


One thing I'm not well informed about: my understanding is that Schema Registry / Schema Evolution is not meant to absorb backward-incompatible changes the way a Versionless API does, but to guard against them. Or does the Schema Registry have a mechanism to absorb such backward-incompatible changes?


No. Basically, it cannot accommodate incompatible changes.


For example, in Confluent Platform's schema registry, what you can do (such as changing Schema fields) depends on the compatibility level, and the level determines which side has to change first (e.g., "changes must be made on the Consumer side first").


For an API, I think the same applies when the information needed to build Schema A is not present in Schema B, but I'm not sure whether a schema registry can do something like "fill in a default value in such cases to avoid breaking compatibility."
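
(For reference, Avro's schema resolution can fill in defaults at read time: if the reader's schema declares a field with a default that the writer's data lacks, the default is used. A minimal reader schema along those lines, with invented field names:)

{
  "type": "record",
  "name": "User",
  "fields": [
    { "name": "userId", "type": "string" },
    { "name": "displayName", "type": "string", "default": "" }
  ]
}

Old records written without displayName can still be read under this schema, which is why adding a field with a default counts as a compatible change; recovering information that was dropped, as in the Schema A/Schema B case above, is exactly what this mechanism cannot do.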


Thank you very much. My understanding has caught up now.


I'd like to look into whether, for changes that don't affect the client, there is any real difference between the Versionless API approach and the Schema Evolution approach. I'm starting to wonder whether that extra breadth is really necessary.

