
RedisClient

A Redis client library with RESP3 protocol support

Install / Use

/learn @TheUniversalCity/RedisClient
About this skill

Quality Score: 0/100

Supported Platforms: Universal

README

Cache Technology and a High-Performance .NET Redis Client with RESP3 Protocol Support

  • What Is a Cache?

A cache is an extremely common concept in software, with many areas of use and a vital role in making systems work properly. Examples are easy to find outside the software field as well. For instance, laying out the ingredients for a meal, or gathering the materials needed for a repair, serves the same purpose as building a cache for a specific job. The essential point is that whatever you keep in order to do your work must be available as close as possible to its point of use. If you store the ingredients in a cellar or refrigerator instead of on the bench where you prepare the food, they provide neither convenience nor sufficient speed.

We use these mechanisms countless times in our software careers, and there are probably many other places where we use them without noticing. When a variable defined in an ordinary method is read more than once at run time, its value is kept in the CPU's own memory (registers and the L1-L3 caches) to avoid going to RAM repeatedly. Although RAM access is fast in absolute terms, relative to the CPU it is as costly as an HTTP request is to an application, so CPUs are designed to keep the things they need for the current task in their nearest storage.

However, in some cases the actual, most recent data must be processed, for example when mutual data shared by multiple transactions is involved. In that situation the caching mechanism is deliberately disabled despite the performance loss. In the C# programming language it is enough to mark the variable with the "volatile" keyword to prevent such cached reads; every access to the variable is then served from main memory.
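A minimal sketch of the keyword mentioned above (the `Worker` class is an illustrative example, not part of this library):

```csharp
using System;
using System.Threading;

class Worker
{
    // Without 'volatile', the JIT is free to cache this flag in a register,
    // so the loop below might never observe a write made by another thread.
    private volatile bool _stop;

    public void Run()
    {
        while (!_stop)
        {
            Thread.SpinWait(1); // do work...
        }
    }

    public void Stop() => _stop = true;
}
```

Every read of `_stop` now goes to memory instead of a cached copy, trading the optimization for correctness.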

  • Using Data from Remote Services over the Internet, and Caching It

As developers, most of the systems we build are connected to each other over the internet, and we carry out our operations with data collected from many resources on the network. If some of that data is rarely updated at its source, or there is no obligation to use the very latest value at a given moment, then writing it to RAM and serving it from there the next time it is needed brings the data closer to our bench and yields a large performance gain.

The caching operations mentioned up to this point are basic, well-established techniques that require no great effort. The need becomes more complex when we try to access common data from different applications over the network: a group of applications in our system all need to work on the same data. Each must be sure it is working with the same data as the others, so none of them can simply store the data in its own local variables; if one did, it could no longer guarantee consistency with the rest.

There are two ways to proceed:

  • Accessing the most current data directly from its source

  • Placing the data, once obtained from its source, at a common point that is closer to the applications than the source, and accessing it there

Direct access may make sense if the data passes through no intermediate processing and the source is as close as any common spot would be. However, if the source is not close enough, or writing to it is too slow, a nearby spot is required to hold the data, and that spot must use fast technologies for access. Systems of this type are called "distributed cache systems". Redis and Memcached are example technologies; both are designed and developed for high performance in storing and retrieving data and in their transfer protocols.
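The transfer protocol side is deliberately simple. As an illustration (not part of this library's public API), a RESP2 command is serialized as an array of bulk strings:

```csharp
using System;
using System.Text;

static class Resp
{
    // Encodes a command such as SET key value into the RESP2 wire format:
    // *<argc>\r\n followed by $<byte length>\r\n<arg>\r\n for each argument.
    public static string Encode(params string[] args)
    {
        var sb = new StringBuilder();
        sb.Append('*').Append(args.Length).Append("\r\n");
        foreach (var a in args)
            sb.Append('$').Append(Encoding.UTF8.GetByteCount(a)).Append("\r\n")
              .Append(a).Append("\r\n");
        return sb.ToString();
    }
}
```

For example, `Resp.Encode("SET", "greeting", "hello")` produces `*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n`, which is the exact byte sequence a client writes to the server's socket.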

  • Caching in Scalable Systems

To cope with the high traffic generated by the growing importance of the internet, it must be possible either to increase the number of instances of an application when required, or to enlarge the resources of the server the application runs on.

  • Enlarging the server's resources is called "vertical scaling"

  • Increasing the number of application instances, typically across multiple servers, is called "horizontal scaling"

Horizontal scaling increases the number of application instances, so caching has to be managed across multiple applications. In this case caching should be carried out on a distributed cache system, because data consistency is quite important.

A distributed cache is never the closest store to the place where the work is actually being executed, so its performance cost must be managed carefully. For example, if a piece of data stored on a Redis server is read or updated thousands of times inside a loop, every access travels over the network and consumes substantial bandwidth. On the other hand, if each application simply wrote the data to its own RAM, one of the nearest places to the execution area, data freshness would then have to be guaranteed some other way.

"Distributed in-memory cache" solutions were produced for horizontally scaled systems with these problems in mind. In the most common design, requested data is kept both in each application's close memory (RAM) and in the distributed cache at the shared spot. In addition, every application listens to a message channel to learn when data held in its close memory has changed: an application that updates a piece of data in the distributed cache publishes an update message to the channel, and applications that receive the message remove the affected data from their close memories. The next time that data is needed, they fetch it again from the distributed cache and store it locally once more. In this way applications serve data from their close memories, quickly and without continuously consuming network bandwidth, until an update message for it arrives.
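The read path of that design can be sketched in a few lines. The types below are hypothetical, not this library's API; `fetchFromSharedCache` stands in for a network call to the distributed cache:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of a "near cache": values read from the shared distributed cache
// are kept in a local dictionary until an invalidation message for their
// key arrives on the message channel.
class NearCache
{
    private readonly ConcurrentDictionary<string, string> _local = new();
    private readonly Func<string, string> _fetchFromSharedCache;

    public NearCache(Func<string, string> fetchFromSharedCache) =>
        _fetchFromSharedCache = fetchFromSharedCache;

    // Serves from close memory; falls back to the shared cache on a miss.
    public string Get(string key) =>
        _local.GetOrAdd(key, _fetchFromSharedCache);

    // Called when an update message for 'key' arrives on the channel.
    public void Invalidate(string key) => _local.TryRemove(key, out _);
}
```

After the first `Get`, repeated reads of the same key never touch the network until `Invalidate` is called for it.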

While resolving one problem, this mechanism introduces some of its own. Among them:

  • Data in the distributed cache is usually stored with a validity period (time to live, or TTL), and this period is ignored when a copy is placed in local RAM. The entry may expire and be deleted from the distributed cache without any update message being sent, in which case the application cannot be sure the copy in its close memory is still fresh.

  • The system relies on a message channel to track the freshness of data stored in close memory. If a change is made in the distributed cache by an application that does not participate in the system, or by a person manually, no update message is published on the channel, and again the application cannot be sure its local copy is fresh.
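One mitigation for the first problem above is to carry the remaining TTL along with the value into close memory, so the local copy never outlives the entry in the distributed cache. A sketch with assumed types, not this library's API:

```csharp
using System;
using System.Collections.Concurrent;

// A local cache entry that remembers when the distributed copy expires,
// so stale values are never served from close memory after the shared
// cache would have dropped them.
class TtlLocalCache
{
    private readonly ConcurrentDictionary<string, (string Value, DateTime ExpiresAt)> _local = new();

    public void Set(string key, string value, TimeSpan remainingTtl) =>
        _local[key] = (value, DateTime.UtcNow + remainingTtl);

    public bool TryGet(string key, out string value)
    {
        value = null;
        if (_local.TryGetValue(key, out var entry) && entry.ExpiresAt > DateTime.UtcNow)
        {
            value = entry.Value;
            return true;
        }
        _local.TryRemove(key, out _); // drop the stale entry, if any
        return false;
    }
}
```

This does not solve the second problem (out-of-band changes), which is exactly what the RESP3 invalidation mechanism described below addresses at the server side.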

  • Caching in LCWaikiki Modernization Projects

In LCWaikiki projects we prefer Redis for caching at the busy points where we apply horizontal scaling. Until a few months ago, we were also using the distributed in-memory cache technique at some of those points.

In June 2021 we brought the LocalizationService application, used by most of the LCWaikiki e-commerce projects, into the modernization process. For the Redis connection we chose the StackExchange.Redis library, which we had also used in previous projects.

While the modernization projects continued, I had personally been researching and studying some transfer technologies and testing them on a Redis client project I had developed from scratch. In addition, I was trying to build a web-based Redis manager application, similar to the Windows-based Redis Desktop Manager.

While designing the modernization structure, inspired by that research, I planned to use the invalidation mechanism that comes with the RESP3 protocol. This mechanism, which a distributed in-memory cache system needs, is supported by Redis server versions 6.0.0 and later.

I chose this system to avoid the problems mentioned above and to gain the following benefits:

  • The RESP3 protocol carries the pub/sub traffic that RESP2 needs a separate channel for over a single connection. This cuts the number of connections in half.

  • With client tracking, the server records which client last queried which key. Whenever a key changes, no matter how, the server sends an "invalidate" message, as a PUSH object in the protocol, only to the clients that have read the changed or TTL-expired key.

    • Message traffic decreases dramatically.

    • With this method the data source server itself guarantees delivery of the change to the affected clients, whereas other methods handle pub/sub messages at the application level.

    • Invalidation messages handled at the application level, outside RESP3, cannot guarantee that TTL expirations are propagated. Likewise, keys changed by applications that do not participate in such an application-level scheme produce no invalidation messages for the other applications, which breaks data consistency.
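On the wire, opting in to this mechanism takes only two commands. A hedged sketch, independent of any client library (the commands are shown in inline form for readability; real clients send them as RESP arrays):

```csharp
using System;

static class Resp3Tracking
{
    // The handshake a client performs to enable server-assisted invalidation.
    public static readonly string[] Handshake =
    {
        "HELLO 3",            // switch the connection to the RESP3 protocol
        "CLIENT TRACKING ON", // ask the server to remember which keys we read
    };

    // RESP3 delivers invalidation messages as out-of-band push frames,
    // which start with '>', so they can be told apart from command replies.
    public static bool IsPushFrame(string frame) => frame.StartsWith(">");
}
```

After this handshake, a client that reads a key and caches it locally will receive a push frame naming that key when it changes or expires, on the same connection it uses for commands.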

The StackExchange.Redis library supports the RESP1 and RESP2 protocol versions.

View on GitHub

GitHub Stars: 23
Forks: 1
Category: Customer
Updated: 1 year ago

Languages

C#

Security Score

65/100

Audited on Sep 18, 2024 (no findings)