
Time sensitive: GitHub Actions cache service integration #5620

Open
Link- opened this issue Feb 11, 2025 · 6 comments

Link- commented Feb 11, 2025

The actions/cache backend service has been rewritten from the ground up for improved performance and reliability. The new service has been gradually rolling out since February 1st, 2025. The legacy service will be sunset by March 1st.

We have identified that this project integrates with the legacy cache service without using the official, supported package. Unfortunately, that means you will have to make code changes to be compatible with the new service we're rolling out.

The new service uses an entirely new set of internal API endpoints. To speed up your migration, we have provided the proto definitions below so you can generate compatible clients.

These internal APIs were never intended to be consumed the way your project consumes them at the moment. Since this is not a paved path we endorse, there may be breaking changes in the future. We are reaching out as a courtesy because we do not wish to break the workflows that depend on this project.

Please introduce the necessary changes ASAP before the end of February. Otherwise, storing and retrieving cache entries will start to fail. There will be no need to offer backward compatibility as the new service will be rolled out to all repositories by February 13th 2025.

The cutover point will be the moment the new service is rolled out to a given repository. From then on, users will no longer have access to cache entries from the legacy service and are expected to store and retrieve cache entries via the new service.

Proto definitions

cache.proto

syntax = "proto3";

import "cachemetadata.proto";

package v1;

service CacheService {
    // Generates a SAS URL with write permissions to upload a cache archive
    rpc CreateCacheEntry(CreateCacheEntryRequest) returns (CreateCacheEntryResponse);
    // Indicate the completion of a cache archive upload. Triggers post-upload processing
    rpc FinalizeCacheEntryUpload(FinalizeCacheEntryUploadRequest) returns (FinalizeCacheEntryUploadResponse);
    // Generates a SAS URL with read permissions to download a cache archive
    rpc GetCacheEntryDownloadURL(GetCacheEntryDownloadURLRequest) returns (GetCacheEntryDownloadURLResponse);
}

message CreateCacheEntryRequest {
    // Scope and other metadata for the cache entry
    results.entities.v1.CacheMetadata metadata = 1;
    // An explicit key for a cache entry 
    string key = 2;
    // Hash of the compression tool, runner OS and paths cached
    string version = 3;
}

message CreateCacheEntryResponse {
    bool ok = 1;
    // SAS URL to upload the cache archive
    string signed_upload_url = 2;
}

message FinalizeCacheEntryUploadRequest {
    // Scope and other metadata for the cache entry 
    results.entities.v1.CacheMetadata metadata = 1;
    // An explicit key for a cache entry
    string key = 2;
    // Size of the cache archive in Bytes
    int64 size_bytes = 3;
    // Hash of the compression tool, runner OS and paths cached
    string version = 4;
}

message FinalizeCacheEntryUploadResponse {
    bool ok = 1;
    // Cache entry database ID
    int64 entry_id = 2;
}

message GetCacheEntryDownloadURLRequest {
    // Scope and other metadata for the cache entry 
    results.entities.v1.CacheMetadata metadata = 1;
    // An explicit key for a cache entry
    string key = 2;
    // Restore keys used for prefix searching
    repeated string restore_keys = 3;
    // Hash of the compression tool, runner OS and paths cached
    string version = 4;
}

message GetCacheEntryDownloadURLResponse {
    bool ok = 1;
    // SAS URL to download the cache archive
    string signed_download_url = 2;
    // Key or restore key that matches the lookup
    string matched_key = 3;
}

cachemetadata.proto

syntax = "proto3";

import "cachescope.proto";

// Declared so the qualified reference results.entities.v1.CacheMetadata in cache.proto resolves
package results.entities.v1;

message CacheMetadata {
    // Backend repository id
    int64 repository_id = 1;
    // Scopes for the cache entry 
    repeated CacheScope scope = 2;
}

cachescope.proto

syntax = "proto3";

// Same package as cachemetadata.proto, so CacheScope resolves there unqualified
package results.entities.v1;

message CacheScope {
    // Determines the scope of the cache entry
    string scope = 1;
    // None: 0 | Read: 1 | Write: 2 | All: (1|2)
    int64 permission = 2;
}
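As a rough illustration of how a client might drive these RPCs, the sketch below assembles a Twirp protobuf-JSON body for `CreateCacheEntryRequest` from the messages above. The version-hash recipe, function names, and default compression value are assumptions for illustration only, not the official actions/cache scheme.

```python
import hashlib
import json


def cache_version(paths, compression="zstd"):
    """Hypothetical version hash. The proto comment says `version` is a hash
    of the compression tool, runner OS, and cached paths; the exact recipe
    below is an assumption, not the official one."""
    h = hashlib.sha256()
    for p in paths:
        h.update(p.encode("utf-8"))
    h.update(compression.encode("utf-8"))
    return h.hexdigest()


def create_cache_entry_body(repository_id, scope, key, paths):
    """Build a JSON body matching CreateCacheEntryRequest (Twirp services
    accept protobuf-JSON). permission 2 = Write, per the CacheScope comment."""
    return json.dumps({
        "metadata": {
            "repository_id": repository_id,
            "scope": [{"scope": scope, "permission": 2}],
        },
        "key": key,
        "version": cache_version(paths),
    })
```

A successful `CreateCacheEntryResponse` would then hand back a `signed_upload_url` (a SAS URL) for uploading the archive, after which `FinalizeCacheEntryUpload` is called with the archive size.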

Further information

Link to changelog posts


Xuanwo commented Feb 12, 2025

Thank you @Link- for informing us. We will evaluate and implement the necessary changes.


Xuanwo commented Feb 12, 2025

By the way @Link-, we have received a report like this: #5583, which may be related to the Actions cache service on GHES with AWS S3. Will the new implementation address this issue?


Link- commented Feb 12, 2025

@Xuanwo At the moment GHES is not affected by this change.

There will be no need to offer backward compatibility as the new service will be rolled out to all repositories by February 13th 2025.

I'm afraid this statement is wrong; it's an oversight on my end. Keep the old client if you want to maintain compatibility with GHES. Only use the new client if the following conditions are fulfilled:

https://github.com/actions/toolkit/blob/340a6b15b5879eefe1412ee6c8606978b091d3e8/packages/cache/src/internal/config.ts#L14


By the way @Link-, we have received a report like this: #5583, which may be related to the Actions cache service on GHES with AWS S3. Will the new implementation address this issue?

We need to investigate this on our end for the legacy service. It'd be great to receive a support ticket for this issue, as you've already suggested in the discussion, so that we can track the work.

@DLukeNelson

@Link-
I have previously opened up this discussion to attempt to understand my issue a little better.


Link- commented Feb 12, 2025

Thanks @DLukeNelson. As I noted in the body of the main issue, direct integration with the cache service without using @actions/cache is currently not a supported path. Even if the technical fix is not complex, I'm afraid there's very little support we can offer for this at the moment. We're discussing internally whether to make a public API available for the cache service, but that is still in the early stages and I don't have much to say about it at this point.


Xuanwo commented Feb 17, 2025

Some notes on this service migration:

  • GHAC v2 is a Twirp-based service; clients need to talk to {base_url}/twirp/{method}.
  • GHAC v2 requires the Azure SDK for uploading (luckily, OpenDAL has native Azure support).
  • GetCacheEntryDownloadURL may return an empty signed_download_url to indicate that the cache entry doesn't exist. The cache also no longer appears to be read-after-write consistent, so a read immediately after a write can fail.
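The notes above can be sketched in client code roughly as follows. The helper names are illustrative, and the service name passed to `twirp_url` follows the `package v1;` declaration in the protos above; the fully qualified name on the deployed service may differ.

```python
from typing import Optional


def twirp_url(base_url: str, service: str, method: str) -> str:
    """Build the Twirp route {base_url}/twirp/{service}/{method}, the path
    shape noted above for GHAC v2."""
    return f"{base_url.rstrip('/')}/twirp/{service}/{method}"


def download_url_or_miss(response: dict) -> Optional[str]:
    """Interpret a GetCacheEntryDownloadURLResponse (as a decoded JSON dict).
    Per the note above, an empty signed_download_url means the entry does not
    exist and must be treated as a cache miss; callers should also tolerate a
    miss shortly after a successful write, since the service does not appear
    to be read-after-write consistent."""
    url = response.get("signed_download_url", "")
    return url or None
```

Treating the empty URL as a soft miss (rather than an error) keeps workflows running when an entry is absent or not yet visible.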

huonw added a commit to pantsbuild/pants that referenced this issue Feb 25, 2025
GitHub is planning to sunset the legacy service before March 1st. This
PR updates OpenDAL to version 0.52.0, which automatically supports GHAC
v2.

GHAC v2 requires using azblob to upload data internally, so we need
to add some extra dependencies in this PR. Perhaps we can consider
enabling both s3 and azblob support in the future, since all the dependencies
are already included.

Refer to apache/opendal#5620 for more details.

---------

Signed-off-by: Xuanwo <[email protected]>
Co-authored-by: Huon Wilson <[email protected]>
WorkerPants pushed a commit to pantsbuild/pants that referenced this issue Feb 25, 2025
huonw added a commit to pantsbuild/pants that referenced this issue Feb 25, 2025
huonw added a commit to pantsbuild/pants that referenced this issue Feb 25, 2025