chore: vendor

2024-08-04 11:06:58 +02:00
parent 2a5985e44e
commit 04aec8232f
3557 changed files with 981078 additions and 1 deletion

vendor/github.com/go-git/go-git/v5/.gitignore generated vendored Normal file

@@ -0,0 +1,7 @@
coverage.out
*~
coverage.txt
profile.out
.tmp/
.git-dist/
.vscode

vendor/github.com/go-git/go-git/v5/CODE_OF_CONDUCT.md generated vendored Normal file

@@ -0,0 +1,74 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at conduct@sourced.tech. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org

vendor/github.com/go-git/go-git/v5/COMPATIBILITY.md generated vendored Normal file

@@ -0,0 +1,233 @@
# Supported Features
Here is a non-comprehensive table of git commands and features and their
compatibility status with go-git.
## Getting and creating repositories
| Feature | Sub-feature | Status | Notes | Examples |
| ------- | ------------------------------------------------------------------------------------------------------------------ | ------ | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `init` | | ✅ | | |
| `init` | `--bare` | ✅ | | |
| `init` | `--template` <br/> `--separate-git-dir` <br/> `--shared` | ❌ | | |
| `clone` | | ✅ | | - [PlainClone](_examples/clone/main.go) |
| `clone` | Authentication: <br/> - none <br/> - access token <br/> - username + password <br/> - ssh | ✅ | | - [clone ssh](_examples/clone/auth/ssh/main.go) <br/> - [clone access token](_examples/clone/auth/basic/access_token/main.go) <br/> - [clone user + password](_examples/clone/auth/basic/username_password/main.go) |
| `clone` | `--progress` <br/> `--single-branch` <br/> `--depth` <br/> `--origin` <br/> `--recurse-submodules` <br/>`--shared` | ✅ | | - [recurse submodules](_examples/clone/main.go) <br/> - [progress](_examples/progress/main.go) |
## Basic snapshotting
| Feature | Sub-feature | Status | Notes | Examples |
| -------- | ----------- | ------ | -------------------------------------------------------- | ------------------------------------ |
| `add` | | ✅ | Plain add is supported. Any other flags aren't supported | |
| `status` | | ✅ | | |
| `commit` | | ✅ | | - [commit](_examples/commit/main.go) |
| `reset` | | ✅ | | |
| `rm` | | ✅ | | |
| `mv` | | ✅ | | |
## Branching and merging
| Feature | Sub-feature | Status | Notes | Examples |
| ----------- | ----------- | ------------ | --------------------------------------- | ----------------------------------------------------------------------------------------------- |
| `branch` | | ✅ | | - [branch](_examples/branch/main.go) |
| `checkout` | | ✅ | Basic usages of checkout are supported. | - [checkout](_examples/checkout/main.go) |
| `merge` | | ⚠️ (partial) | Fast-forward only | |
| `mergetool` | | ❌ | | |
| `stash` | | ❌ | | |
| `tag` | | ✅ | | - [tag](_examples/tag/main.go) <br/> - [tag create and push](_examples/tag-create-push/main.go) |
## Sharing and updating projects
| Feature | Sub-feature | Status | Notes | Examples |
| ----------- | ----------- | ------ | ----------------------------------------------------------------------- | ------------------------------------------ |
| `fetch` | | ✅ | | |
| `pull` | | ✅ | Only supports merges where the merge can be resolved as a fast-forward. | - [pull](_examples/pull/main.go) |
| `push` | | ✅ | | - [push](_examples/push/main.go) |
| `remote` | | ✅ | | - [remotes](_examples/remotes/main.go) |
| `submodule` | | ✅ | | - [submodule](_examples/submodule/main.go) |
| `submodule` | deinit | ❌ | | |
## Inspection and comparison
| Feature | Sub-feature | Status | Notes | Examples |
| ---------- | ----------- | --------- | ----- | ------------------------------ |
| `show` | | ✅ | | |
| `log` | | ✅ | | - [log](_examples/log/main.go) |
| `shortlog` | | (see log) | | |
| `describe` | | ❌ | | |
## Patching
| Feature | Sub-feature | Status | Notes | Examples |
| ------------- | ----------- | ------ | ---------------------------------------------------- | -------- |
| `apply` | | ❌ | | |
| `cherry-pick` | | ❌ | | |
| `diff` | | ✅ | Patch object with UnifiedDiff output representation. | |
| `rebase` | | ❌ | | |
| `revert` | | ❌ | | |
## Debugging
| Feature | Sub-feature | Status | Notes | Examples |
| -------- | ----------- | ------ | ----- | ---------------------------------- |
| `bisect` | | ❌ | | |
| `blame` | | ✅ | | - [blame](_examples/blame/main.go) |
| `grep` | | ✅ | | |
## Email
| Feature | Sub-feature | Status | Notes | Examples |
| -------------- | ----------- | ------ | ----- | -------- |
| `am` | | ❌ | | |
| `apply` | | ❌ | | |
| `format-patch` | | ❌ | | |
| `send-email` | | ❌ | | |
| `request-pull` | | ❌ | | |
## External systems
| Feature | Sub-feature | Status | Notes | Examples |
| ------------- | ----------- | ------ | ----- | -------- |
| `svn` | | ❌ | | |
| `fast-import` | | ❌ | | |
| `lfs` | | ❌ | | |
## Administration
| Feature | Sub-feature | Status | Notes | Examples |
| --------------- | ----------- | ------ | ----- | -------- |
| `clean` | | ✅ | | |
| `gc` | | ❌ | | |
| `fsck` | | ❌ | | |
| `reflog` | | ❌ | | |
| `filter-branch` | | ❌ | | |
| `instaweb` | | ❌ | | |
| `archive` | | ❌ | | |
| `bundle` | | ❌ | | |
| `prune` | | ❌ | | |
| `repack` | | ❌ | | |
## Server admin
| Feature | Sub-feature | Status | Notes | Examples |
| -------------------- | ----------- | ------ | ----- | ----------------------------------------- |
| `daemon` | | ❌ | | |
| `update-server-info` | | ✅ | | [cli](./cli/go-git/update_server_info.go) |
## Advanced
| Feature | Sub-feature | Status | Notes | Examples |
| ---------- | ----------- | ----------- | ----- | -------- |
| `notes` | | ❌ | | |
| `replace` | | ❌ | | |
| `worktree` | | ❌ | | |
| `annotate` | | (see blame) | | |
## GPG
| Feature | Sub-feature | Status | Notes | Examples |
| ------------------- | ----------- | ------ | ----- | -------- |
| `git-verify-commit` | | ✅ | | |
| `git-verify-tag` | | ✅ | | |
## Plumbing commands
| Feature | Sub-feature | Status | Notes | Examples |
| --------------- | ------------------------------------- | ------------ | --------------------------------------------------- | -------------------------------------------- |
| `cat-file` | | ✅ | | |
| `check-ignore` | | ❌ | | |
| `commit-tree` | | ❌ | | |
| `count-objects` | | ❌ | | |
| `diff-index` | | ❌ | | |
| `for-each-ref` | | ✅ | | |
| `hash-object` | | ✅ | | |
| `ls-files` | | ✅ | | |
| `ls-remote` | | ✅ | | - [ls-remote](_examples/ls-remote/main.go) |
| `merge-base` | `--independent` <br/> `--is-ancestor` | ⚠️ (partial) | Calculates the merge-base only between two commits. | - [merge-base](_examples/merge_base/main.go) |
| `merge-base` | `--fork-point` <br/> `--octopus` | ❌ | | |
| `read-tree` | | ❌ | | |
| `rev-list` | | ✅ | | |
| `rev-parse` | | ❌ | | |
| `show-ref` | | ✅ | | |
| `symbolic-ref` | | ✅ | | |
| `update-index` | | ❌ | | |
| `update-ref` | | ❌ | | |
| `verify-pack` | | ❌ | | |
| `write-tree` | | ❌ | | |
## Indexes and Git Protocols
| Feature | Version | Status | Notes |
| -------------------- | ------------------------------------------------------------------------------- | ------ | ----- |
| index | [v1](https://github.com/git/git/blob/master/Documentation/gitformat-index.txt) | ❌ | |
| index | [v2](https://github.com/git/git/blob/master/Documentation/gitformat-index.txt) | ✅ | |
| index | [v3](https://github.com/git/git/blob/master/Documentation/gitformat-index.txt) | ❌ | |
| pack-protocol | [v1](https://github.com/git/git/blob/master/Documentation/gitprotocol-pack.txt) | ✅ | |
| pack-protocol | [v2](https://github.com/git/git/blob/master/Documentation/gitprotocol-v2.txt) | ❌ | |
| multi-pack-index | [v1](https://github.com/git/git/blob/master/Documentation/gitformat-pack.txt) | ❌ | |
| pack-\*.rev files | [v1](https://github.com/git/git/blob/master/Documentation/gitformat-pack.txt) | ❌ | |
| pack-\*.mtimes files | [v1](https://github.com/git/git/blob/master/Documentation/gitformat-pack.txt) | ❌ | |
| cruft packs | | ❌ | |
## Capabilities
| Feature | Status | Notes |
| ------------------------------ | ------------ | ----- |
| `multi_ack` | ❌ | |
| `multi_ack_detailed` | ❌ | |
| `no-done` | ❌ | |
| `thin-pack` | ❌ | |
| `side-band` | ⚠️ (partial) | |
| `side-band-64k` | ⚠️ (partial) | |
| `ofs-delta` | ✅ | |
| `agent` | ✅ | |
| `object-format` | ❌ | |
| `symref` | ✅ | |
| `shallow` | ✅ | |
| `deepen-since` | ✅ | |
| `deepen-not` | ❌ | |
| `deepen-relative` | ❌ | |
| `no-progress` | ✅ | |
| `include-tag` | ✅ | |
| `report-status` | ✅ | |
| `report-status-v2` | ❌ | |
| `delete-refs` | ✅ | |
| `quiet` | ❌ | |
| `atomic` | ✅ | |
| `push-options` | ✅ | |
| `allow-tip-sha1-in-want` | ✅ | |
| `allow-reachable-sha1-in-want` | ❌ | |
| `push-cert=<nonce>` | ❌ | |
| `filter` | ❌ | |
| `session-id=<session id>` | ❌ | |
## Transport Schemes
| Scheme | Status | Notes | Examples |
| -------------------- | ------------ | ---------------------------------------------------------------------- | ---------------------------------------------- |
| `http(s)://` (dumb) | ❌ | | |
| `http(s)://` (smart) | ✅ | | |
| `git://` | ✅ | | |
| `ssh://` | ✅ | | |
| `file://` | ⚠️ (partial) | Warning: this is not pure Golang. This shells out to the `git` binary. | |
| Custom | ✅ | All existing schemes can be replaced by custom implementations. | - [custom_http](_examples/custom_http/main.go) |
## SHA256
| Feature | Sub-feature | Status | Notes | Examples |
| -------- | ----------- | ------ | ---------------------------------- | ------------------------------------ |
| `init` | | ✅ | Requires building with tag sha256. | - [init](_examples/sha256/main.go) |
| `commit` | | ✅ | Requires building with tag sha256. | - [commit](_examples/sha256/main.go) |
| `pull` | | ❌ | | |
| `fetch` | | ❌ | | |
| `push` | | ❌ | | |
## Other features
| Feature | Sub-feature | Status | Notes | Examples |
| --------------- | --------------------------- | ------ | ---------------------------------------------- | -------- |
| `config` | `--local` | ✅ | Read and write per-repository (`.git/config`). | |
| `config` | `--global` <br/> `--system` | ✅ | Read-only. | |
| `gitignore` | | ✅ | | |
| `gitattributes` | | ✅ | | |
| `git-worktree` | | ❌ | Multiple worktrees are not supported. | |
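For illustration, a minimal sketch of reading the read-only `--global` configuration noted above. It relies only on the `config.LoadConfig` entry point and the `GlobalScope` constant defined in go-git's `config` package; error handling is kept to a bare minimum:
```go
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/config"
)

func main() {
	// Load the user-level (global) configuration. Per the table above,
	// the --global and --system scopes are read-only in go-git.
	cfg, err := config.LoadConfig(config.GlobalScope)
	if err != nil {
		panic(err)
	}
	fmt.Printf("user.name=%s user.email=%s\n", cfg.User.Name, cfg.User.Email)
}
```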

vendor/github.com/go-git/go-git/v5/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,46 @@
# Contributing Guidelines
The source{d} go-git project is [Apache 2.0 licensed](LICENSE) and accepts
contributions via GitHub pull requests. This document outlines some of the
conventions on development workflow, commit message formatting, contact points,
and other resources to make it easier to get your contribution accepted.
## Support Channels
The official support channels, for both users and contributors, are:
- [StackOverflow go-git tag](https://stackoverflow.com/questions/tagged/go-git) for user questions.
- GitHub [Issues](https://github.com/src-d/go-git/issues)* for bug reports and feature requests.
*Before opening a new issue or submitting a new pull request, it's helpful to
search the project - it's likely that another user has already reported the
issue you're facing, or it's a known issue that we're already aware of.
## How to Contribute
Pull Requests (PRs) are the main and exclusive way to contribute to the official go-git project.
In order for a PR to be accepted it needs to pass a list of requirements:
- You should be able to run the same query using `git`. We don't accept features that are not implemented in the official git implementation.
- The expected behavior must match the [official git implementation](https://github.com/git/git).
- The actual behavior must be correctly explained in natural language, providing a minimal working example in Go that reproduces it.
- All PRs must be written in idiomatic Go, formatted according to [gofmt](https://golang.org/cmd/gofmt/), and without any warnings from [go lint](https://github.com/golang/lint) nor [go vet](https://golang.org/cmd/vet/).
- They should in general include tests, and those shall pass.
- If the PR is a bug fix, it has to include a suite of unit tests for the new functionality.
- If the PR is a new feature, it has to come with a suite of unit tests that test the new functionality.
- In any case, all the PRs have to pass the personal evaluation of at least one of the maintainers of go-git.
### Format of the commit message
Every commit message should describe what was changed, under which context and, if applicable, the GitHub issue it relates to:
```
plumbing: packp, Skip argument validations for unknown capabilities. Fixes #623
```
The format can be described more formally as follows:
```
<package>: <subpackage>, <what changed>. [Fixes #<issue-number>]
```

vendor/github.com/go-git/go-git/v5/EXTENDING.md generated vendored Normal file

@@ -0,0 +1,78 @@
# Extending go-git
`go-git` was built in a highly extensible manner, which enables some of its functionality to be changed or extended without the need to change its codebase. Here are the key extensibility features:
## Dot Git Storers
Dot git storers are the components responsible for storing the Git internal files, including objects and references.
The built-in storer implementations include [memory](storage/memory) and [filesystem](storage/filesystem). The `memory` storer stores all the data in memory, and its use looks like this:
```go
r, err := git.Init(memory.NewStorage(), nil)
```
The `filesystem` storer stores the data in the OS filesystem, and can be used as follows:
```go
r, err := git.Init(filesystem.NewStorage(osfs.New("/tmp/foo")), nil)
```
New implementations can be created by implementing the [storage.Storer interface](storage/storer.go#L16).
## Filesystem
Git repository worktrees are managed using a filesystem abstraction based on [go-billy](https://github.com/go-git/go-billy). The Git operations will take place against the specific filesystem implementation. Initialising a repository in memory can be done as follows:
```go
fs := memfs.New()
r, err := git.Init(memory.NewStorage(), fs)
```
The same operation can be done against the OS filesystem:
```go
fs := osfs.New("/tmp/foo")
r, err := git.Init(memory.NewStorage(), fs)
```
New filesystems (e.g. cloud based storage) could be created by implementing `go-billy`'s [Filesystem interface](https://github.com/go-git/go-billy/blob/326c59f064021b821a55371d57794fbfb86d4cb3/fs.go#L52).
## Transport Schemes
Git supports various transport schemes, including `http`, `https`, `ssh`, `git`, `file`. `go-git` defines the [transport.Transport interface](plumbing/transport/common.go#L48) to represent them.
The built-in implementations can be replaced by calling `client.InstallProtocol`.
An example of changing the built-in `https` implementation to skip TLS could look like this:
```go
customClient := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
},
}
client.InstallProtocol("https", githttp.NewClient(customClient))
```
Some internal implementations enable code reuse amongst the different transport implementations. Some of these may be made public in the future (e.g. `plumbing/transport/internal/common`).
## Cache
Several different operations across `go-git` lean on caching of objects in order to achieve optimal performance. The caching functionality is defined by the [cache.Object interface](plumbing/cache/common.go#L17).
Two built-in implementations are `cache.ObjectLRU` and `cache.BufferLRU`. However, the caching functionality can be customized by implementing the `cache.Object` interface.
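As a hedged sketch of this customization point (assuming the `cache.NewObjectLRU` constructor, the `cache.MiByte` constant, and the two-argument `filesystem.NewStorage(fs, cache)` signature), a custom cache size can be chosen when building a filesystem storer:
```go
package main

import (
	"github.com/go-git/go-billy/v5/osfs"
	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/cache"
	"github.com/go-git/go-git/v5/storage/filesystem"
)

func main() {
	// Keep .git data on disk, but cache up to 64 MiB of decoded objects
	// in memory instead of using the default cache size.
	dotgit := osfs.New("/tmp/foo/.git")
	worktree := osfs.New("/tmp/foo")
	storer := filesystem.NewStorage(dotgit, cache.NewObjectLRU(64*cache.MiByte))

	if _, err := git.Init(storer, worktree); err != nil {
		panic(err)
	}
}
```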
## Hash
`go-git` uses the `crypto.Hash` type to represent hash functions. The built-in implementations are `github.com/pjbgf/sha1cd` for SHA1 and Go's `crypto/sha256` for SHA256.
The default hash functions can be changed by calling `hash.RegisterHash`.
```go
func init() {
hash.RegisterHash(crypto.SHA1, sha1.New)
}
```
New `SHA1` or `SHA256` hash functions that implement the standard `hash.Hash` interface can be registered by calling `hash.RegisterHash`.

vendor/github.com/go-git/go-git/v5/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Sourced Technologies, S.L.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/go-git/go-git/v5/Makefile generated vendored Normal file

@@ -0,0 +1,54 @@
# General
WORKDIR = $(PWD)
# Go parameters
GOCMD = go
GOTEST = $(GOCMD) test
# Git config
GIT_VERSION ?=
GIT_DIST_PATH ?= $(PWD)/.git-dist
GIT_REPOSITORY = http://github.com/git/git.git
# Coverage
COVERAGE_REPORT = coverage.out
COVERAGE_MODE = count
build-git:
@if [ -f $(GIT_DIST_PATH)/git ]; then \
echo "nothing to do, using cache $(GIT_DIST_PATH)"; \
else \
git clone $(GIT_REPOSITORY) -b $(GIT_VERSION) --depth 1 --single-branch $(GIT_DIST_PATH); \
cd $(GIT_DIST_PATH); \
make configure; \
./configure; \
make all; \
fi
test:
@echo "running against `git version`"; \
$(GOTEST) -race ./...
$(GOTEST) -v _examples/common_test.go _examples/common.go --examples
TEMP_REPO := $(shell mktemp)
test-sha256:
$(GOCMD) run -tags sha256 _examples/sha256/main.go $(TEMP_REPO)
cd $(TEMP_REPO) && git fsck
rm -rf $(TEMP_REPO)
test-coverage:
@echo "running against `git version`"; \
echo "" > $(COVERAGE_REPORT); \
$(GOTEST) -coverprofile=$(COVERAGE_REPORT) -coverpkg=./... -covermode=$(COVERAGE_MODE) ./...
clean:
rm -rf $(GIT_DIST_PATH)
fuzz:
@go test -fuzz=FuzzParser $(PWD)/internal/revision
@go test -fuzz=FuzzDecoder $(PWD)/plumbing/format/config
@go test -fuzz=FuzzPatchDelta $(PWD)/plumbing/format/packfile
@go test -fuzz=FuzzParseSignedBytes $(PWD)/plumbing/object
@go test -fuzz=FuzzDecode $(PWD)/plumbing/object
@go test -fuzz=FuzzDecoder $(PWD)/plumbing/protocol/packp
@go test -fuzz=FuzzNewEndpoint $(PWD)/plumbing/transport

vendor/github.com/go-git/go-git/v5/README.md generated vendored Normal file

@@ -0,0 +1,131 @@
![go-git logo](https://cdn.rawgit.com/src-d/artwork/02036484/go-git/files/go-git-github-readme-header.png)
[![GoDoc](https://godoc.org/github.com/go-git/go-git/v5?status.svg)](https://pkg.go.dev/github.com/go-git/go-git/v5) [![Build Status](https://github.com/go-git/go-git/workflows/Test/badge.svg)](https://github.com/go-git/go-git/actions) [![Go Report Card](https://goreportcard.com/badge/github.com/go-git/go-git)](https://goreportcard.com/report/github.com/go-git/go-git)
*go-git* is a highly extensible git implementation library written in **pure Go**.
It can be used to manipulate git repositories at low level *(plumbing)* or high level *(porcelain)*, through an idiomatic Go API. It also supports several types of storage, such as in-memory filesystems, or custom implementations, thanks to the [`Storer`](https://pkg.go.dev/github.com/go-git/go-git/v5/plumbing/storer) interface.
It has been actively developed since 2015 and is used extensively by [Keybase](https://keybase.io/blog/encrypted-git-for-everyone), [Gitea](https://gitea.io/en-us/) and [Pulumi](https://github.com/search?q=org%3Apulumi+go-git&type=Code), as well as by many other libraries and tools.
Project Status
--------------
After the legal issues with the [`src-d`](https://github.com/src-d) organization, the lack of updates for four months, and the requirement to make a hard fork, the project is **now back to normality**.
The project is currently actively maintained by individual contributors, including several of the original authors, but also backed by a new company, [gitsight](https://github.com/gitsight), where `go-git` is a critical component used at scale.
Comparison with git
-------------------
*go-git* aims to be fully compatible with [git](https://github.com/git/git); all the *porcelain* operations are implemented to work exactly as *git* does.
*git* is a humongous project with years of development by thousands of contributors, making it challenging for *go-git* to implement all the features. You can find a comparison of *go-git* vs *git* in the [compatibility documentation](COMPATIBILITY.md).
Installation
------------
The recommended way to install *go-git* is:
```go
import "github.com/go-git/go-git/v5" // with go modules enabled (GO111MODULE=on or outside GOPATH)
import "github.com/go-git/go-git" // with go modules disabled
```
Examples
--------
> Please note that the `CheckIfError` and `Info` functions used in the examples are defined in the [examples package](https://github.com/go-git/go-git/blob/master/_examples/common.go#L19) and exist only to keep the examples short.
### Basic example
A basic example that mimics the standard `git clone` command
```go
// Clone the given repository to the given directory
Info("git clone https://github.com/go-git/go-git")
_, err := git.PlainClone("/tmp/foo", false, &git.CloneOptions{
URL: "https://github.com/go-git/go-git",
Progress: os.Stdout,
})
CheckIfError(err)
```
Outputs:
```
Counting objects: 4924, done.
Compressing objects: 100% (1333/1333), done.
Total 4924 (delta 530), reused 6 (delta 6), pack-reused 3533
```
### In-memory example
Cloning a repository into memory and printing the history of HEAD, just like `git log` does
```go
// Clones the given repository in memory, creating the remote, the local
// branches and fetching the objects, exactly as:
Info("git clone https://github.com/go-git/go-billy")
r, err := git.Clone(memory.NewStorage(), nil, &git.CloneOptions{
URL: "https://github.com/go-git/go-billy",
})
CheckIfError(err)
// Gets the commit history from HEAD, just like this command:
Info("git log")
// ... retrieves the branch pointed to by HEAD
ref, err := r.Head()
CheckIfError(err)
// ... retrieves the commit history
cIter, err := r.Log(&git.LogOptions{From: ref.Hash()})
CheckIfError(err)
// ... just iterates over the commits, printing each one
err = cIter.ForEach(func(c *object.Commit) error {
fmt.Println(c)
return nil
})
CheckIfError(err)
```
Outputs:
```
commit ded8054fd0c3994453e9c8aacaf48d118d42991e
Author: Santiago M. Mola <santi@mola.io>
Date: Sat Nov 12 21:18:41 2016 +0100
index: ReadFrom/WriteTo returns IndexReadError/IndexWriteError. (#9)
commit df707095626f384ce2dc1a83b30f9a21d69b9dfc
Author: Santiago M. Mola <santi@mola.io>
Date: Fri Nov 11 13:23:22 2016 +0100
readwriter: fix bug when writing index. (#10)
When using ReadWriter on an existing siva file, absolute offset for
index entries was not being calculated correctly.
...
```
You can find this [example](_examples/log/main.go) and many others in the [examples](_examples) folder.
Contribute
----------
[Contributions](https://github.com/go-git/go-git/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) are more than welcome. If you are interested, please take a look at
our [Contributing Guidelines](CONTRIBUTING.md).
License
-------
Apache License Version 2.0, see [LICENSE](LICENSE)

vendor/github.com/go-git/go-git/v5/SECURITY.md generated vendored Normal file

@@ -0,0 +1,38 @@
# go-git Security Policy
The purpose of this security policy is to outline `go-git`'s process
for reporting, handling and disclosing security sensitive information.
## Supported Versions
The project follows a version support policy where only the latest minor
release is actively supported. Therefore, only issues that impact the latest
minor release will be fixed. Users are encouraged to upgrade to the latest
minor/patch release to benefit from the most up-to-date features, bug fixes,
and security enhancements.
The supported versions policy applies to both the `go-git` library and its
associated repositories within the `go-git` org.
## Reporting Security Issues
Please report any security vulnerabilities or potential weaknesses in `go-git`
privately via go-git-security@googlegroups.com. Do not publicly disclose the
details of the vulnerability until a fix has been implemented and released.
During the process the project maintainers will investigate the report, so please
provide detailed information, including steps to reproduce, affected versions, and any mitigations if known.
The project maintainers will acknowledge the receipt of the report and work with
the reporter to validate and address the issue.
Please note that `go-git` does not have any bounty programs, and therefore does
not provide financial compensation for disclosures.
## Security Disclosure Process
The project maintainers will make every effort to promptly address security issues.
Once a security vulnerability is fixed, a security advisory will be published to notify users and provide appropriate mitigation measures.
All `go-git` advisories can be found at https://github.com/go-git/go-git/security/advisories.

vendor/github.com/go-git/go-git/v5/blame.go generated vendored Normal file

@@ -0,0 +1,590 @@
package git
import (
"bytes"
"container/heap"
"errors"
"fmt"
"io"
"strconv"
"time"
"unicode/utf8"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/utils/diff"
"github.com/sergi/go-diff/diffmatchpatch"
)
// BlameResult represents the result of a Blame operation.
type BlameResult struct {
// Path is the path of the File that we're blaming.
Path string
// Rev (Revision) is the hash of the specified Commit used to generate this result.
Rev plumbing.Hash
// Lines contains every line with its authorship.
Lines []*Line
}
// Blame returns a BlameResult with the information about the last author of
// each line from file `path` at commit `c`.
func Blame(c *object.Commit, path string) (*BlameResult, error) {
// The file to blame is identified by the input arguments:
// commit and path. commit is a Commit object obtained from a Repository. Path
// represents a path to a specific file contained in the repository.
//
// Blaming a file is done by walking the tree in reverse order trying to find where each line was last modified.
//
// When a diff is found, it cannot immediately be assumed to have come from that commit, as it may have come from one
// of its parents. The algorithm first tries to resolve those diffs against the parents; only if the change cannot be
// found in any parent is it assigned to the commit itself.
//
// When encountering 2 parents that have made the same change to a file it will choose the parent that was merged
// into the current branch first (this is determined by the order of the parents inside the commit).
//
// This currently works on a line by line basis, if performance becomes an issue it could be changed to work with
// hunks rather than lines. Then when encountering diff hunks it would need to split them where necessary.
b := new(blame)
b.fRev = c
b.path = path
b.q = new(priorityQueue)
file, err := b.fRev.File(path)
if err != nil {
return nil, err
}
finalLines, err := file.Lines()
if err != nil {
return nil, err
}
finalLength := len(finalLines)
needsMap := make([]lineMap, finalLength)
for i := range needsMap {
needsMap[i] = lineMap{i, i, nil, -1}
}
contents, err := file.Contents()
if err != nil {
return nil, err
}
b.q.Push(&queueItem{
nil,
nil,
c,
path,
contents,
needsMap,
0,
false,
0,
})
items := make([]*queueItem, 0)
for {
items = items[:0]
for {
if b.q.Len() == 0 {
return nil, errors.New("invalid state: no items left on the blame queue")
}
item := b.q.Pop()
items = append(items, item)
next := b.q.Peek()
if next == nil || next.Hash != item.Commit.Hash {
break
}
}
finished, err := b.addBlames(items)
if err != nil {
return nil, err
}
if finished {
break
}
}
if err != nil {
return nil, err
}
b.lineToCommit = make([]*object.Commit, finalLength)
for i := range needsMap {
b.lineToCommit[i] = needsMap[i].Commit
}
lines, err := newLines(finalLines, b.lineToCommit)
if err != nil {
return nil, err
}
return &BlameResult{
Path: path,
Rev: c.Hash,
Lines: lines,
}, nil
}
// Line values represent the contents and author of a line in BlameResult values.
type Line struct {
// Author is the email address of the last author that modified the line.
Author string
// AuthorName is the name of the last author that modified the line.
AuthorName string
// Text is the original text of the line.
Text string
// Date is when the original text of the line was introduced
Date time.Time
// Hash is the commit hash that introduced the original line
Hash plumbing.Hash
}
func newLine(author, authorName, text string, date time.Time, hash plumbing.Hash) *Line {
return &Line{
Author: author,
AuthorName: authorName,
Text: text,
Hash: hash,
Date: date,
}
}
func newLines(contents []string, commits []*object.Commit) ([]*Line, error) {
result := make([]*Line, 0, len(contents))
for i := range contents {
result = append(result, newLine(
commits[i].Author.Email, commits[i].Author.Name, contents[i],
commits[i].Author.When, commits[i].Hash,
))
}
return result, nil
}
// this struct is internally used by the blame function to hold its
// inputs, outputs and state.
type blame struct {
// the path of the file to blame
path string
// the commit of the final revision of the file to blame
fRev *object.Commit
// resolved lines
lineToCommit []*object.Commit
// queue of commits that need resolving
q *priorityQueue
}
type lineMap struct {
Orig, Cur int
Commit *object.Commit
FromParentNo int
}
func (b *blame) addBlames(curItems []*queueItem) (bool, error) {
curItem := curItems[0]
// Simple optimisation to merge paths, there is potential to go a bit further here and check for any duplicates
// not only if they are all the same.
if len(curItems) == 1 {
curItems = nil
} else if curItem.IdenticalToChild {
allSame := true
lenCurItems := len(curItems)
lowestParentNo := curItem.ParentNo
for i := 1; i < lenCurItems; i++ {
if !curItems[i].IdenticalToChild || curItem.Child != curItems[i].Child {
allSame = false
break
}
lowestParentNo = min(lowestParentNo, curItems[i].ParentNo)
}
if allSame {
curItem.Child.numParentsNeedResolving = curItem.Child.numParentsNeedResolving - lenCurItems + 1
curItems = nil // free the memory
curItem.ParentNo = lowestParentNo
// Now check if we can remove the parent completely
for curItem.Child.IdenticalToChild && curItem.Child.MergedChildren == nil && curItem.Child.numParentsNeedResolving == 1 {
oldChild := curItem.Child
curItem.Child = oldChild.Child
curItem.ParentNo = oldChild.ParentNo
}
}
}
// if we have more than 1 item for this commit, create a single needsMap
if len(curItems) > 1 {
curItem.MergedChildren = make([]childToNeedsMap, len(curItems))
for i, c := range curItems {
curItem.MergedChildren[i] = childToNeedsMap{c.Child, c.NeedsMap, c.IdenticalToChild, c.ParentNo}
}
newNeedsMap := make([]lineMap, 0, len(curItem.NeedsMap))
newNeedsMap = append(newNeedsMap, curItems[0].NeedsMap...)
for i := 1; i < len(curItems); i++ {
cur := curItems[i].NeedsMap
n := 0 // position in newNeedsMap
c := 0 // position in current list
for c < len(cur) {
if n == len(newNeedsMap) {
newNeedsMap = append(newNeedsMap, cur[c:]...)
break
} else if newNeedsMap[n].Cur == cur[c].Cur {
n++
c++
} else if newNeedsMap[n].Cur < cur[c].Cur {
n++
} else {
newNeedsMap = append(newNeedsMap, cur[c])
newPos := len(newNeedsMap) - 1
for newPos > n {
newNeedsMap[newPos-1], newNeedsMap[newPos] = newNeedsMap[newPos], newNeedsMap[newPos-1]
newPos--
}
}
}
}
curItem.NeedsMap = newNeedsMap
curItem.IdenticalToChild = false
curItem.Child = nil
curItems = nil // free the memory
}
parents, err := parentsContainingPath(curItem.path, curItem.Commit)
if err != nil {
return false, err
}
anyPushed := false
for parentNo, prev := range parents {
currentHash, err := blobHash(curItem.path, curItem.Commit)
if err != nil {
return false, err
}
prevHash, err := blobHash(prev.Path, prev.Commit)
if err != nil {
return false, err
}
if currentHash == prevHash {
if len(parents) == 1 && curItem.MergedChildren == nil && curItem.IdenticalToChild {
// commit that has 1 parent and 1 child and is the same as both, bypass it completely
b.q.Push(&queueItem{
Child: curItem.Child,
Commit: prev.Commit,
path: prev.Path,
Contents: curItem.Contents,
NeedsMap: curItem.NeedsMap, // reuse the NeedsMap as we are throwing away this item
IdenticalToChild: true,
ParentNo: curItem.ParentNo,
})
} else {
b.q.Push(&queueItem{
Child: curItem,
Commit: prev.Commit,
path: prev.Path,
Contents: curItem.Contents,
NeedsMap: append([]lineMap(nil), curItem.NeedsMap...), // create new slice and copy
IdenticalToChild: true,
ParentNo: parentNo,
})
curItem.numParentsNeedResolving++
}
anyPushed = true
continue
}
// get the contents of the file
file, err := prev.Commit.File(prev.Path)
if err != nil {
return false, err
}
prevContents, err := file.Contents()
if err != nil {
return false, err
}
hunks := diff.Do(prevContents, curItem.Contents)
prevl := -1
curl := -1
need := 0
getFromParent := make([]lineMap, 0)
out:
for h := range hunks {
hLines := countLines(hunks[h].Text)
for hl := 0; hl < hLines; hl++ {
switch {
case hunks[h].Type == diffmatchpatch.DiffEqual:
prevl++
curl++
if curl == curItem.NeedsMap[need].Cur {
// add to needs
getFromParent = append(getFromParent, lineMap{curl, prevl, nil, -1})
// move to next need
need++
if need >= len(curItem.NeedsMap) {
break out
}
}
case hunks[h].Type == diffmatchpatch.DiffInsert:
curl++
if curl == curItem.NeedsMap[need].Cur {
// the line we want is added, it may have been added here (or by another parent), skip it for now
need++
if need >= len(curItem.NeedsMap) {
break out
}
}
case hunks[h].Type == diffmatchpatch.DiffDelete:
prevl += hLines
continue out
default:
return false, errors.New("invalid state: invalid hunk Type")
}
}
}
if len(getFromParent) > 0 {
b.q.Push(&queueItem{
curItem,
nil,
prev.Commit,
prev.Path,
prevContents,
getFromParent,
0,
false,
parentNo,
})
curItem.numParentsNeedResolving++
anyPushed = true
}
}
curItem.Contents = "" // no longer need, free the memory
if !anyPushed {
return finishNeeds(curItem)
}
return false, nil
}
func finishNeeds(curItem *queueItem) (bool, error) {
// any needs left in the needsMap must have come from this revision
for i := range curItem.NeedsMap {
if curItem.NeedsMap[i].Commit == nil {
curItem.NeedsMap[i].Commit = curItem.Commit
curItem.NeedsMap[i].FromParentNo = -1
}
}
if curItem.Child == nil && curItem.MergedChildren == nil {
return true, nil
}
if curItem.MergedChildren == nil {
return applyNeeds(curItem.Child, curItem.NeedsMap, curItem.IdenticalToChild, curItem.ParentNo)
}
for _, ctn := range curItem.MergedChildren {
m := 0 // position in merged needs map
p := 0 // position in parent needs map
for p < len(ctn.NeedsMap) {
if ctn.NeedsMap[p].Cur == curItem.NeedsMap[m].Cur {
ctn.NeedsMap[p].Commit = curItem.NeedsMap[m].Commit
m++
p++
} else if ctn.NeedsMap[p].Cur < curItem.NeedsMap[m].Cur {
p++
} else {
m++
}
}
finished, err := applyNeeds(ctn.Child, ctn.NeedsMap, ctn.IdenticalToChild, ctn.ParentNo)
if finished || err != nil {
return finished, err
}
}
return false, nil
}
func applyNeeds(child *queueItem, needsMap []lineMap, identicalToChild bool, parentNo int) (bool, error) {
if identicalToChild {
for i := range child.NeedsMap {
l := &child.NeedsMap[i]
if l.Cur != needsMap[i].Cur || l.Orig != needsMap[i].Orig {
return false, errors.New("needsMap isn't the same? Why not??")
}
if l.Commit == nil || parentNo < l.FromParentNo {
l.Commit = needsMap[i].Commit
l.FromParentNo = parentNo
}
}
} else {
i := 0
out:
for j := range child.NeedsMap {
l := &child.NeedsMap[j]
for needsMap[i].Orig < l.Cur {
i++
if i == len(needsMap) {
break out
}
}
if l.Cur == needsMap[i].Orig {
if l.Commit == nil || parentNo < l.FromParentNo {
l.Commit = needsMap[i].Commit
l.FromParentNo = parentNo
}
}
}
}
child.numParentsNeedResolving--
if child.numParentsNeedResolving == 0 {
finished, err := finishNeeds(child)
if finished || err != nil {
return finished, err
}
}
return false, nil
}
// String prints the results of a Blame using git-blame's style.
func (b BlameResult) String() string {
var buf bytes.Buffer
// max line number length
mlnl := len(strconv.Itoa(len(b.Lines)))
// max author length
mal := b.maxAuthorLength()
format := fmt.Sprintf("%%s (%%-%ds %%s %%%dd) %%s\n", mal, mlnl)
for ln := range b.Lines {
_, _ = fmt.Fprintf(&buf, format, b.Lines[ln].Hash.String()[:8],
b.Lines[ln].AuthorName, b.Lines[ln].Date.Format("2006-01-02 15:04:05 -0700"), ln+1, b.Lines[ln].Text)
}
return buf.String()
}
// utility function to calculate the number of runes needed
// to print the longest author name in the blame of a file.
func (b BlameResult) maxAuthorLength() int {
m := 0
for ln := range b.Lines {
m = max(m, utf8.RuneCountInString(b.Lines[ln].AuthorName))
}
return m
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
type childToNeedsMap struct {
Child *queueItem
NeedsMap []lineMap
IdenticalToChild bool
ParentNo int
}
type queueItem struct {
Child *queueItem
MergedChildren []childToNeedsMap
Commit *object.Commit
path string
Contents string
NeedsMap []lineMap
numParentsNeedResolving int
IdenticalToChild bool
ParentNo int
}
type priorityQueueImp []*queueItem
func (pq *priorityQueueImp) Len() int { return len(*pq) }
func (pq *priorityQueueImp) Less(i, j int) bool {
return !(*pq)[i].Commit.Less((*pq)[j].Commit)
}
func (pq *priorityQueueImp) Swap(i, j int) { (*pq)[i], (*pq)[j] = (*pq)[j], (*pq)[i] }
func (pq *priorityQueueImp) Push(x any) { *pq = append(*pq, x.(*queueItem)) }
func (pq *priorityQueueImp) Pop() any {
n := len(*pq)
ret := (*pq)[n-1]
(*pq)[n-1] = nil // avoid memory leak
*pq = (*pq)[0 : n-1]
return ret
}
func (pq *priorityQueueImp) Peek() *object.Commit {
if len(*pq) == 0 {
return nil
}
return (*pq)[0].Commit
}
type priorityQueue priorityQueueImp
func (pq *priorityQueue) Init() { heap.Init((*priorityQueueImp)(pq)) }
func (pq *priorityQueue) Len() int { return (*priorityQueueImp)(pq).Len() }
func (pq *priorityQueue) Push(c *queueItem) {
heap.Push((*priorityQueueImp)(pq), c)
}
func (pq *priorityQueue) Pop() *queueItem {
return heap.Pop((*priorityQueueImp)(pq)).(*queueItem)
}
func (pq *priorityQueue) Peek() *object.Commit { return (*priorityQueueImp)(pq).Peek() }
type parentCommit struct {
Commit *object.Commit
Path string
}
func parentsContainingPath(path string, c *object.Commit) ([]parentCommit, error) {
// TODO: benchmark this method making git.object.Commit.parent public instead of using
// an iterator
var result []parentCommit
iter := c.Parents()
for {
parent, err := iter.Next()
if err == io.EOF {
return result, nil
}
if err != nil {
return nil, err
}
if _, err := parent.File(path); err == nil {
result = append(result, parentCommit{parent, path})
} else {
// look for renames
patch, err := parent.Patch(c)
if err != nil {
return nil, err
} else if patch != nil {
for _, fp := range patch.FilePatches() {
from, to := fp.Files()
if from != nil && to != nil && to.Path() == path {
result = append(result, parentCommit{parent, from.Path()})
break
}
}
}
}
}
}
func blobHash(path string, commit *object.Commit) (plumbing.Hash, error) {
file, err := commit.File(path)
if err != nil {
return plumbing.ZeroHash, err
}
return file.Hash, nil
}

vendor/github.com/go-git/go-git/v5/common.go generated vendored Normal file

@@ -0,0 +1,20 @@
package git
import "strings"
// countLines returns the number of lines in a string à la git. The newline
// character is assumed to be '\n'. The empty string contains 0 lines. If the
// last line of the string doesn't end with a newline, it will still be
// considered a line.
func countLines(s string) int {
if s == "" {
return 0
}
nEOL := strings.Count(s, "\n")
if strings.HasSuffix(s, "\n") {
return nEOL
}
return nEOL + 1
}

vendor/github.com/go-git/go-git/v5/config/branch.go generated vendored Normal file

@@ -0,0 +1,123 @@
package config
import (
"errors"
"strings"
"github.com/go-git/go-git/v5/plumbing"
format "github.com/go-git/go-git/v5/plumbing/format/config"
)
var (
errBranchEmptyName = errors.New("branch config: empty name")
errBranchInvalidMerge = errors.New("branch config: invalid merge")
errBranchInvalidRebase = errors.New("branch config: rebase must be one of 'true' or 'interactive'")
)
// Branch contains information on the
// local branches and which remote to track
type Branch struct {
// Name of branch
Name string
// Remote name of remote to track
Remote string
// Merge is the local refspec for the branch
Merge plumbing.ReferenceName
// Rebase instead of merge when pulling. Valid values are
// "true" and "interactive". "false" is undocumented and
// typically represented by the non-existence of this field
Rebase string
// Description explains what the branch is for.
// Multi-line explanations may be used.
//
// Original git command to edit:
// git branch --edit-description
Description string
raw *format.Subsection
}
// Validate validates fields of branch
func (b *Branch) Validate() error {
if b.Name == "" {
return errBranchEmptyName
}
if b.Merge != "" && !b.Merge.IsBranch() {
return errBranchInvalidMerge
}
if b.Rebase != "" &&
b.Rebase != "true" &&
b.Rebase != "interactive" &&
b.Rebase != "false" {
return errBranchInvalidRebase
}
return plumbing.NewBranchReferenceName(b.Name).Validate()
}
func (b *Branch) marshal() *format.Subsection {
if b.raw == nil {
b.raw = &format.Subsection{}
}
b.raw.Name = b.Name
if b.Remote == "" {
b.raw.RemoveOption(remoteSection)
} else {
b.raw.SetOption(remoteSection, b.Remote)
}
if b.Merge == "" {
b.raw.RemoveOption(mergeKey)
} else {
b.raw.SetOption(mergeKey, string(b.Merge))
}
if b.Rebase == "" {
b.raw.RemoveOption(rebaseKey)
} else {
b.raw.SetOption(rebaseKey, b.Rebase)
}
if b.Description == "" {
b.raw.RemoveOption(descriptionKey)
} else {
desc := quoteDescription(b.Description)
b.raw.SetOption(descriptionKey, desc)
}
return b.raw
}
// hack to trigger conditional quoting in the
// plumbing/format/config/Encoder.encodeOptions
//
// The current Encoder implementation uses Go's %q format if the value contains a backslash character,
// which is not consistent with the reference git implementation:
// git just replaces newline characters with \n, while the Encoder prints them directly.
// Until the value quoting is fixed, we escape the description value by replacing newline characters with \n.
func quoteDescription(desc string) string {
return strings.ReplaceAll(desc, "\n", `\n`)
}
func (b *Branch) unmarshal(s *format.Subsection) error {
b.raw = s
b.Name = b.raw.Name
b.Remote = b.raw.Options.Get(remoteSection)
b.Merge = plumbing.ReferenceName(b.raw.Options.Get(mergeKey))
b.Rebase = b.raw.Options.Get(rebaseKey)
b.Description = unquoteDescription(b.raw.Options.Get(descriptionKey))
return b.Validate()
}
// hack to enable conditional quoting in the
// plumbing/format/config/Encoder.encodeOptions
// see quoteDescription for details.
func unquoteDescription(desc string) string {
return strings.ReplaceAll(desc, `\n`, "\n")
}

vendor/github.com/go-git/go-git/v5/config/config.go generated vendored Normal file

@@ -0,0 +1,696 @@
// Package config contains the abstraction of multiple config files
package config
import (
"bytes"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"sort"
"strconv"
"github.com/go-git/go-billy/v5/osfs"
"github.com/go-git/go-git/v5/internal/url"
"github.com/go-git/go-git/v5/plumbing"
format "github.com/go-git/go-git/v5/plumbing/format/config"
)
const (
// DefaultFetchRefSpec is the default refspec used for fetch.
DefaultFetchRefSpec = "+refs/heads/*:refs/remotes/%s/*"
// DefaultPushRefSpec is the default refspec used for push.
DefaultPushRefSpec = "refs/heads/*:refs/heads/*"
)
// ConfigStorer generic storage of Config object
type ConfigStorer interface {
Config() (*Config, error)
SetConfig(*Config) error
}
var (
ErrInvalid = errors.New("config invalid key in remote or branch")
ErrRemoteConfigNotFound = errors.New("remote config not found")
ErrRemoteConfigEmptyURL = errors.New("remote config: empty URL")
ErrRemoteConfigEmptyName = errors.New("remote config: empty name")
)
// Scope defines the scope of a config file, such as local, global or system.
type Scope int
// Available Scope values
const (
LocalScope Scope = iota
GlobalScope
SystemScope
)
// Config contains the repository configuration
// https://www.kernel.org/pub/software/scm/git/docs/git-config.html#FILES
type Config struct {
Core struct {
// IsBare if true this repository is assumed to be bare and has no
// working directory associated with it.
IsBare bool
// Worktree is the path to the root of the working tree.
Worktree string
// CommentChar is the character indicating the start of a
// comment for commands like commit and tag
CommentChar string
// RepositoryFormatVersion identifies the repository format and layout version.
RepositoryFormatVersion format.RepositoryFormatVersion
}
User struct {
// Name is the personal name of the author and the committer of a commit.
Name string
// Email is the email of the author and the committer of a commit.
Email string
}
Author struct {
// Name is the personal name of the author of a commit.
Name string
// Email is the email of the author of a commit.
Email string
}
Committer struct {
// Name is the personal name of the committer of a commit.
Name string
// Email is the email of the committer of a commit.
Email string
}
Pack struct {
// Window controls the size of the sliding window for delta
// compression. The default is 10. A value of 0 turns off
// delta compression entirely.
Window uint
}
Init struct {
// DefaultBranch allows overriding the default branch name
// e.g. when initializing a new repository or when cloning
// an empty repository.
DefaultBranch string
}
Extensions struct {
// ObjectFormat specifies the hash algorithm to use. The
// acceptable values are sha1 and sha256. If not specified,
// sha1 is assumed. It is an error to specify this key unless
// core.repositoryFormatVersion is 1.
//
// This setting must not be changed after repository initialization
// (e.g. clone or init).
ObjectFormat format.ObjectFormat
}
// Remotes is the list of repository remotes; the key of the map is the
// name of the remote and should equal RemoteConfig.Name.
Remotes map[string]*RemoteConfig
// Submodules is the list of repository submodules; the key of the map is
// the name of the submodule and should equal Submodule.Name.
Submodules map[string]*Submodule
// Branches is the list of branches; the key is the branch name and should
// equal Branch.Name.
Branches map[string]*Branch
// URLs is the list of URL rewrite rules; if a repository URL starts with a
// URL.InsteadOf value, that prefix is replaced with the map key.
URLs map[string]*URL
// Raw contains the raw information of the config file. The main goal is to
// preserve the parsed information from the original format, to avoid
// dropping unsupported fields.
Raw *format.Config
}
// NewConfig returns a new empty Config.
func NewConfig() *Config {
config := &Config{
Remotes: make(map[string]*RemoteConfig),
Submodules: make(map[string]*Submodule),
Branches: make(map[string]*Branch),
URLs: make(map[string]*URL),
Raw: format.New(),
}
config.Pack.Window = DefaultPackWindow
return config
}
// ReadConfig reads a config file from an io.Reader.
func ReadConfig(r io.Reader) (*Config, error) {
b, err := io.ReadAll(r)
if err != nil {
return nil, err
}
cfg := NewConfig()
if err = cfg.Unmarshal(b); err != nil {
return nil, err
}
return cfg, nil
}
// LoadConfig loads a config file from a given scope. The returned Config
// contains only information from that scope. If no config file is found for
// the given scope, an empty one is returned.
func LoadConfig(scope Scope) (*Config, error) {
if scope == LocalScope {
return nil, fmt.Errorf("LocalScope should be read from the a ConfigStorer")
}
files, err := Paths(scope)
if err != nil {
return nil, err
}
for _, file := range files {
f, err := osfs.Default.Open(file)
if err != nil {
if os.IsNotExist(err) {
continue
}
return nil, err
}
defer f.Close()
return ReadConfig(f)
}
return NewConfig(), nil
}
// Paths returns the config file location for a given scope.
func Paths(scope Scope) ([]string, error) {
var files []string
switch scope {
case GlobalScope:
xdg := os.Getenv("XDG_CONFIG_HOME")
if xdg != "" {
files = append(files, filepath.Join(xdg, "git/config"))
}
home, err := os.UserHomeDir()
if err != nil {
return nil, err
}
files = append(files,
filepath.Join(home, ".gitconfig"),
filepath.Join(home, ".config/git/config"),
)
case SystemScope:
files = append(files, "/etc/gitconfig")
}
return files, nil
}
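// Editor's note: illustrative sketch, not part of the vendored file; the
// function name is invented and the code is written as if inside this package.
// It loads the user-level (global) configuration via the search paths above.
func exampleGlobalUserName() (string, error) {
    cfg, err := LoadConfig(GlobalScope)
    if err != nil {
        return "", err
    }
    // cfg only carries values found in the files returned by Paths(GlobalScope);
    // when none of them exists an empty Config is returned instead of an error.
    return cfg.User.Name, nil
}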
// Validate validates the fields and sets the default values.
func (c *Config) Validate() error {
for name, r := range c.Remotes {
if r.Name != name {
return ErrInvalid
}
if err := r.Validate(); err != nil {
return err
}
}
for name, b := range c.Branches {
if b.Name != name {
return ErrInvalid
}
if err := b.Validate(); err != nil {
return err
}
}
return nil
}
const (
remoteSection = "remote"
submoduleSection = "submodule"
branchSection = "branch"
coreSection = "core"
packSection = "pack"
userSection = "user"
authorSection = "author"
committerSection = "committer"
initSection = "init"
urlSection = "url"
extensionsSection = "extensions"
fetchKey = "fetch"
urlKey = "url"
bareKey = "bare"
worktreeKey = "worktree"
commentCharKey = "commentChar"
windowKey = "window"
mergeKey = "merge"
rebaseKey = "rebase"
nameKey = "name"
emailKey = "email"
descriptionKey = "description"
defaultBranchKey = "defaultBranch"
repositoryFormatVersionKey = "repositoryformatversion"
objectFormat = "objectformat"
mirrorKey = "mirror"
// DefaultPackWindow holds the number of previous objects used to
// generate deltas. The value 10 is the same used by git command.
DefaultPackWindow = uint(10)
)
// Unmarshal parses a git-config file and stores it.
func (c *Config) Unmarshal(b []byte) error {
r := bytes.NewBuffer(b)
d := format.NewDecoder(r)
c.Raw = format.New()
if err := d.Decode(c.Raw); err != nil {
return err
}
c.unmarshalCore()
c.unmarshalUser()
c.unmarshalInit()
if err := c.unmarshalPack(); err != nil {
return err
}
unmarshalSubmodules(c.Raw, c.Submodules)
if err := c.unmarshalBranches(); err != nil {
return err
}
if err := c.unmarshalURLs(); err != nil {
return err
}
return c.unmarshalRemotes()
}
func (c *Config) unmarshalCore() {
s := c.Raw.Section(coreSection)
if s.Options.Get(bareKey) == "true" {
c.Core.IsBare = true
}
c.Core.Worktree = s.Options.Get(worktreeKey)
c.Core.CommentChar = s.Options.Get(commentCharKey)
}
func (c *Config) unmarshalUser() {
s := c.Raw.Section(userSection)
c.User.Name = s.Options.Get(nameKey)
c.User.Email = s.Options.Get(emailKey)
s = c.Raw.Section(authorSection)
c.Author.Name = s.Options.Get(nameKey)
c.Author.Email = s.Options.Get(emailKey)
s = c.Raw.Section(committerSection)
c.Committer.Name = s.Options.Get(nameKey)
c.Committer.Email = s.Options.Get(emailKey)
}
func (c *Config) unmarshalPack() error {
s := c.Raw.Section(packSection)
window := s.Options.Get(windowKey)
if window == "" {
c.Pack.Window = DefaultPackWindow
} else {
winUint, err := strconv.ParseUint(window, 10, 32)
if err != nil {
return err
}
c.Pack.Window = uint(winUint)
}
return nil
}
func (c *Config) unmarshalRemotes() error {
s := c.Raw.Section(remoteSection)
for _, sub := range s.Subsections {
r := &RemoteConfig{}
if err := r.unmarshal(sub); err != nil {
return err
}
c.Remotes[r.Name] = r
}
// Apply insteadOf url rules
for _, r := range c.Remotes {
r.applyURLRules(c.URLs)
}
return nil
}
func (c *Config) unmarshalURLs() error {
s := c.Raw.Section(urlSection)
for _, sub := range s.Subsections {
r := &URL{}
if err := r.unmarshal(sub); err != nil {
return err
}
c.URLs[r.Name] = r
}
return nil
}
func unmarshalSubmodules(fc *format.Config, submodules map[string]*Submodule) {
s := fc.Section(submoduleSection)
for _, sub := range s.Subsections {
m := &Submodule{}
m.unmarshal(sub)
if m.Validate() == ErrModuleBadPath {
continue
}
submodules[m.Name] = m
}
}
func (c *Config) unmarshalBranches() error {
bs := c.Raw.Section(branchSection)
for _, sub := range bs.Subsections {
b := &Branch{}
if err := b.unmarshal(sub); err != nil {
return err
}
c.Branches[b.Name] = b
}
return nil
}
func (c *Config) unmarshalInit() {
s := c.Raw.Section(initSection)
c.Init.DefaultBranch = s.Options.Get(defaultBranchKey)
}
// Marshal returns Config encoded as a git-config file.
func (c *Config) Marshal() ([]byte, error) {
c.marshalCore()
c.marshalExtensions()
c.marshalUser()
c.marshalPack()
c.marshalRemotes()
c.marshalSubmodules()
c.marshalBranches()
c.marshalURLs()
c.marshalInit()
buf := bytes.NewBuffer(nil)
if err := format.NewEncoder(buf).Encode(c.Raw); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
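// Editor's note: illustrative sketch, not part of the vendored file; the raw
// input and function name are invented. It shows a decode/modify/encode round
// trip: options that the typed fields do not model survive in Raw and are
// re-emitted by Marshal.
func exampleConfigRoundTrip() ([]byte, error) {
    raw := "[core]\n\tbare = false\n[user]\n\tname = Alice\n"
    cfg, err := ReadConfig(bytes.NewBufferString(raw))
    if err != nil {
        return nil, err
    }
    cfg.User.Email = "alice@example.com"
    return cfg.Marshal()
}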
func (c *Config) marshalCore() {
s := c.Raw.Section(coreSection)
s.SetOption(bareKey, fmt.Sprintf("%t", c.Core.IsBare))
if string(c.Core.RepositoryFormatVersion) != "" {
s.SetOption(repositoryFormatVersionKey, string(c.Core.RepositoryFormatVersion))
}
if c.Core.Worktree != "" {
s.SetOption(worktreeKey, c.Core.Worktree)
}
}
func (c *Config) marshalExtensions() {
// Extensions are only supported on Version 1, therefore
// ignore them otherwise.
if c.Core.RepositoryFormatVersion == format.Version_1 {
s := c.Raw.Section(extensionsSection)
s.SetOption(objectFormat, string(c.Extensions.ObjectFormat))
}
}
func (c *Config) marshalUser() {
s := c.Raw.Section(userSection)
if c.User.Name != "" {
s.SetOption(nameKey, c.User.Name)
}
if c.User.Email != "" {
s.SetOption(emailKey, c.User.Email)
}
s = c.Raw.Section(authorSection)
if c.Author.Name != "" {
s.SetOption(nameKey, c.Author.Name)
}
if c.Author.Email != "" {
s.SetOption(emailKey, c.Author.Email)
}
s = c.Raw.Section(committerSection)
if c.Committer.Name != "" {
s.SetOption(nameKey, c.Committer.Name)
}
if c.Committer.Email != "" {
s.SetOption(emailKey, c.Committer.Email)
}
}
func (c *Config) marshalPack() {
s := c.Raw.Section(packSection)
if c.Pack.Window != DefaultPackWindow {
s.SetOption(windowKey, fmt.Sprintf("%d", c.Pack.Window))
}
}
func (c *Config) marshalRemotes() {
s := c.Raw.Section(remoteSection)
newSubsections := make(format.Subsections, 0, len(c.Remotes))
added := make(map[string]bool)
for _, subsection := range s.Subsections {
if remote, ok := c.Remotes[subsection.Name]; ok {
newSubsections = append(newSubsections, remote.marshal())
added[subsection.Name] = true
}
}
remoteNames := make([]string, 0, len(c.Remotes))
for name := range c.Remotes {
remoteNames = append(remoteNames, name)
}
sort.Strings(remoteNames)
for _, name := range remoteNames {
if !added[name] {
newSubsections = append(newSubsections, c.Remotes[name].marshal())
}
}
s.Subsections = newSubsections
}
func (c *Config) marshalSubmodules() {
s := c.Raw.Section(submoduleSection)
s.Subsections = make(format.Subsections, len(c.Submodules))
var i int
for _, r := range c.Submodules {
section := r.marshal()
// the submodule section at config is a subset of the .gitmodule file
// we should remove the non-valid options for the config file.
section.RemoveOption(pathKey)
s.Subsections[i] = section
i++
}
}
func (c *Config) marshalBranches() {
s := c.Raw.Section(branchSection)
newSubsections := make(format.Subsections, 0, len(c.Branches))
added := make(map[string]bool)
for _, subsection := range s.Subsections {
if branch, ok := c.Branches[subsection.Name]; ok {
newSubsections = append(newSubsections, branch.marshal())
added[subsection.Name] = true
}
}
branchNames := make([]string, 0, len(c.Branches))
for name := range c.Branches {
branchNames = append(branchNames, name)
}
sort.Strings(branchNames)
for _, name := range branchNames {
if !added[name] {
newSubsections = append(newSubsections, c.Branches[name].marshal())
}
}
s.Subsections = newSubsections
}
func (c *Config) marshalURLs() {
s := c.Raw.Section(urlSection)
s.Subsections = make(format.Subsections, len(c.URLs))
var i int
for _, r := range c.URLs {
section := r.marshal()
// the submodule section at config is a subset of the .gitmodule file
// we should remove the non-valid options for the config file.
s.Subsections[i] = section
i++
}
}
func (c *Config) marshalInit() {
s := c.Raw.Section(initSection)
if c.Init.DefaultBranch != "" {
s.SetOption(defaultBranchKey, c.Init.DefaultBranch)
}
}
// RemoteConfig contains the configuration for a given remote repository.
type RemoteConfig struct {
// Name of the remote
Name string
// URLs the URLs of a remote repository. It must be non-empty. Fetch will
// always use the first URL, while push will use all of them.
URLs []string
// Mirror indicates that the repository is a mirror of remote.
Mirror bool
// insteadOfRulesApplied have urls been modified
insteadOfRulesApplied bool
// originalURLs are the urls before applying insteadOf rules
originalURLs []string
// Fetch the default set of "refspec" for fetch operation
Fetch []RefSpec
// raw representation of the subsection, filled when marshal or unmarshal
// are called.
raw *format.Subsection
}
// Validate validates the fields and sets the default values.
func (c *RemoteConfig) Validate() error {
if c.Name == "" {
return ErrRemoteConfigEmptyName
}
if len(c.URLs) == 0 {
return ErrRemoteConfigEmptyURL
}
for _, r := range c.Fetch {
if err := r.Validate(); err != nil {
return err
}
}
if len(c.Fetch) == 0 {
c.Fetch = []RefSpec{RefSpec(fmt.Sprintf(DefaultFetchRefSpec, c.Name))}
}
return plumbing.NewRemoteHEADReferenceName(c.Name).Validate()
}
func (c *RemoteConfig) unmarshal(s *format.Subsection) error {
c.raw = s
fetch := []RefSpec{}
for _, f := range c.raw.Options.GetAll(fetchKey) {
rs := RefSpec(f)
if err := rs.Validate(); err != nil {
return err
}
fetch = append(fetch, rs)
}
c.Name = c.raw.Name
c.URLs = append([]string(nil), c.raw.Options.GetAll(urlKey)...)
c.Fetch = fetch
c.Mirror = c.raw.Options.Get(mirrorKey) == "true"
return nil
}
func (c *RemoteConfig) marshal() *format.Subsection {
if c.raw == nil {
c.raw = &format.Subsection{}
}
c.raw.Name = c.Name
if len(c.URLs) == 0 {
c.raw.RemoveOption(urlKey)
} else {
urls := c.URLs
if c.insteadOfRulesApplied {
urls = c.originalURLs
}
c.raw.SetOption(urlKey, urls...)
}
if len(c.Fetch) == 0 {
c.raw.RemoveOption(fetchKey)
} else {
var values []string
for _, rs := range c.Fetch {
values = append(values, rs.String())
}
c.raw.SetOption(fetchKey, values...)
}
if c.Mirror {
c.raw.SetOption(mirrorKey, strconv.FormatBool(c.Mirror))
}
return c.raw
}
func (c *RemoteConfig) IsFirstURLLocal() bool {
return url.IsLocalEndpoint(c.URLs[0])
}
func (c *RemoteConfig) applyURLRules(urlRules map[string]*URL) {
// save original urls
originalURLs := make([]string, len(c.URLs))
copy(originalURLs, c.URLs)
for i, url := range c.URLs {
if matchingURLRule := findLongestInsteadOfMatch(url, urlRules); matchingURLRule != nil {
c.URLs[i] = matchingURLRule.ApplyInsteadOf(c.URLs[i])
c.insteadOfRulesApplied = true
}
}
if c.insteadOfRulesApplied {
c.originalURLs = originalURLs
}
}
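// Editor's note: illustrative sketch, not part of the vendored file; the raw
// input and function name are invented. Because unmarshalURLs runs before
// unmarshalRemotes, the insteadOf rule below is already known when the remote
// is decoded, so its URL comes out rewritten.
func exampleInsteadOfRewrite() (string, error) {
    raw := "[remote \"origin\"]\n" +
        "\turl = https://example.com/org/repo.git\n" +
        "\tfetch = +refs/heads/*:refs/remotes/origin/*\n" +
        "[url \"ssh://git@example.com/\"]\n" +
        "\tinsteadOf = https://example.com/\n"
    cfg := NewConfig()
    if err := cfg.Unmarshal([]byte(raw)); err != nil {
        return "", err
    }
    // Returns "ssh://git@example.com/org/repo.git".
    return cfg.Remotes["origin"].URLs[0], nil
}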

139
vendor/github.com/go-git/go-git/v5/config/modules.go generated vendored Normal file

@ -0,0 +1,139 @@
package config
import (
"bytes"
"errors"
"regexp"
format "github.com/go-git/go-git/v5/plumbing/format/config"
)
var (
ErrModuleEmptyURL = errors.New("module config: empty URL")
ErrModuleEmptyPath = errors.New("module config: empty path")
ErrModuleBadPath = errors.New("submodule has an invalid path")
)
var (
// Matches module paths with dotdot ".." components.
dotdotPath = regexp.MustCompile(`(^|[/\\])\.\.([/\\]|$)`)
)
// Modules defines the submodules properties, represents a .gitmodules file
// https://www.kernel.org/pub/software/scm/git/docs/gitmodules.html
type Modules struct {
// Submodules is a map of submodules, keyed by the submodule name.
Submodules map[string]*Submodule
raw *format.Config
}
// NewModules returns a new empty Modules
func NewModules() *Modules {
return &Modules{
Submodules: make(map[string]*Submodule),
raw: format.New(),
}
}
const (
pathKey = "path"
branchKey = "branch"
)
// Unmarshal parses a git-config file and stores it.
func (m *Modules) Unmarshal(b []byte) error {
r := bytes.NewBuffer(b)
d := format.NewDecoder(r)
m.raw = format.New()
if err := d.Decode(m.raw); err != nil {
return err
}
unmarshalSubmodules(m.raw, m.Submodules)
return nil
}
// Marshal returns Modules encoded as a git-config file.
func (m *Modules) Marshal() ([]byte, error) {
s := m.raw.Section(submoduleSection)
s.Subsections = make(format.Subsections, len(m.Submodules))
var i int
for _, r := range m.Submodules {
s.Subsections[i] = r.marshal()
i++
}
buf := bytes.NewBuffer(nil)
if err := format.NewEncoder(buf).Encode(m.raw); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// Submodule defines a submodule.
type Submodule struct {
// Name module name
Name string
// Path defines the path, relative to the top-level directory of the Git
// working tree.
Path string
// URL defines a URL from which the submodule repository can be cloned.
URL string
// Branch is a remote branch name for tracking updates in the upstream
// submodule. Optional value.
Branch string
// raw representation of the subsection, filled when marshal or unmarshal
// are called.
raw *format.Subsection
}
// Validate validates the fields and sets the default values.
func (m *Submodule) Validate() error {
if m.Path == "" {
return ErrModuleEmptyPath
}
if m.URL == "" {
return ErrModuleEmptyURL
}
if dotdotPath.MatchString(m.Path) {
return ErrModuleBadPath
}
return nil
}
func (m *Submodule) unmarshal(s *format.Subsection) {
m.raw = s
m.Name = m.raw.Name
m.Path = m.raw.Option(pathKey)
m.URL = m.raw.Option(urlKey)
m.Branch = m.raw.Option(branchKey)
}
func (m *Submodule) marshal() *format.Subsection {
if m.raw == nil {
m.raw = &format.Subsection{}
}
m.raw.Name = m.Name
if m.raw.Name == "" {
m.raw.Name = m.Path
}
m.raw.SetOption(pathKey, m.Path)
m.raw.SetOption(urlKey, m.URL)
if m.Branch != "" {
m.raw.SetOption(branchKey, m.Branch)
}
return m.raw
}
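// Editor's note: illustrative sketch, not part of the vendored file; the
// .gitmodules content and function name are invented. Submodules are keyed by
// name, and entries whose path contains ".." are silently skipped by
// unmarshalSubmodules.
func exampleModules() (*Submodule, error) {
    raw := "[submodule \"libfoo\"]\n" +
        "\tpath = vendor/libfoo\n" +
        "\turl = https://example.com/libfoo.git\n" +
        "\tbranch = main\n"
    m := NewModules()
    if err := m.Unmarshal([]byte(raw)); err != nil {
        return nil, err
    }
    return m.Submodules["libfoo"], nil
}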

155
vendor/github.com/go-git/go-git/v5/config/refspec.go generated vendored Normal file

@ -0,0 +1,155 @@
package config
import (
"errors"
"strings"
"github.com/go-git/go-git/v5/plumbing"
)
const (
refSpecWildcard = "*"
refSpecForce = "+"
refSpecSeparator = ":"
)
var (
ErrRefSpecMalformedSeparator = errors.New("malformed refspec, separators are wrong")
ErrRefSpecMalformedWildcard = errors.New("malformed refspec, mismatched number of wildcards")
)
// RefSpec is a mapping from local branches to remote references.
// The format of the refspec is an optional +, followed by <src>:<dst>, where
// <src> is the pattern for references on the remote side and <dst> is where
// those references will be written locally. The + tells Git to update the
// reference even if it isn't a fast-forward.
// e.g.: "+refs/heads/*:refs/remotes/origin/*"
//
// https://git-scm.com/book/en/v2/Git-Internals-The-Refspec
type RefSpec string
// Validate validates the RefSpec
func (s RefSpec) Validate() error {
spec := string(s)
if strings.Count(spec, refSpecSeparator) != 1 {
return ErrRefSpecMalformedSeparator
}
sep := strings.Index(spec, refSpecSeparator)
if sep == len(spec)-1 {
return ErrRefSpecMalformedSeparator
}
ws := strings.Count(spec[0:sep], refSpecWildcard)
wd := strings.Count(spec[sep+1:], refSpecWildcard)
if ws == wd && ws < 2 && wd < 2 {
return nil
}
return ErrRefSpecMalformedWildcard
}
// IsForceUpdate returns true if updates are allowed in non-fast-forward merges.
func (s RefSpec) IsForceUpdate() bool {
return s[0] == refSpecForce[0]
}
// IsDelete returns true if the refspec indicates a delete (empty src).
func (s RefSpec) IsDelete() bool {
return s[0] == refSpecSeparator[0]
}
// IsExactSHA1 returns true if the source is a SHA1 hash.
func (s RefSpec) IsExactSHA1() bool {
return plumbing.IsHash(s.Src())
}
// Src returns the src side.
func (s RefSpec) Src() string {
spec := string(s)
var start int
if s.IsForceUpdate() {
start = 1
} else {
start = 0
}
end := strings.Index(spec, refSpecSeparator)
return spec[start:end]
}
// Match matches the given plumbing.ReferenceName against the source.
func (s RefSpec) Match(n plumbing.ReferenceName) bool {
if !s.IsWildcard() {
return s.matchExact(n)
}
return s.matchGlob(n)
}
// IsWildcard returns true if the RefSpec contains a wildcard.
func (s RefSpec) IsWildcard() bool {
return strings.Contains(string(s), refSpecWildcard)
}
func (s RefSpec) matchExact(n plumbing.ReferenceName) bool {
return s.Src() == n.String()
}
func (s RefSpec) matchGlob(n plumbing.ReferenceName) bool {
src := s.Src()
name := n.String()
wildcard := strings.Index(src, refSpecWildcard)
var prefix, suffix string
prefix = src[0:wildcard]
if len(src) > wildcard+1 {
suffix = src[wildcard+1:]
}
return len(name) >= len(prefix)+len(suffix) &&
strings.HasPrefix(name, prefix) &&
strings.HasSuffix(name, suffix)
}
// Dst returns the destination for the given remote reference.
func (s RefSpec) Dst(n plumbing.ReferenceName) plumbing.ReferenceName {
spec := string(s)
start := strings.Index(spec, refSpecSeparator) + 1
dst := spec[start:]
src := s.Src()
if !s.IsWildcard() {
return plumbing.ReferenceName(dst)
}
name := n.String()
ws := strings.Index(src, refSpecWildcard)
wd := strings.Index(dst, refSpecWildcard)
match := name[ws : len(name)-(len(src)-(ws+1))]
return plumbing.ReferenceName(dst[0:wd] + match + dst[wd+1:])
}
func (s RefSpec) Reverse() RefSpec {
spec := string(s)
separator := strings.Index(spec, refSpecSeparator)
return RefSpec(spec[separator+1:] + refSpecSeparator + spec[:separator])
}
func (s RefSpec) String() string {
return string(s)
}
// MatchAny returns true if any of the RefSpec match with the given ReferenceName.
func MatchAny(l []RefSpec, n plumbing.ReferenceName) bool {
for _, r := range l {
if r.Match(n) {
return true
}
}
return false
}
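// Editor's note: illustrative sketch, not part of the vendored file; the
// refspec and reference names are invented.
func exampleRefSpec() {
    spec := RefSpec("+refs/heads/*:refs/remotes/origin/*")
    _ = spec.Validate()      // nil: one separator and matching wildcards
    _ = spec.IsForceUpdate() // true: leading "+"
    _ = spec.Match(plumbing.ReferenceName("refs/heads/main")) // true
    _ = spec.Dst(plumbing.ReferenceName("refs/heads/main"))   // "refs/remotes/origin/main"
}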

81
vendor/github.com/go-git/go-git/v5/config/url.go generated vendored Normal file
View File

@ -0,0 +1,81 @@
package config
import (
"errors"
"strings"
format "github.com/go-git/go-git/v5/plumbing/format/config"
)
var (
errURLEmptyInsteadOf = errors.New("url config: empty insteadOf")
)
// URL defines URL rewrite rules.
type URL struct {
// Name is the new base URL.
Name string
// InsteadOf: any URL that starts with this value will be rewritten to start with Name instead.
// When more than one insteadOf string matches a given URL, the longest match is used.
InsteadOf string
// raw representation of the subsection, filled when marshal or unmarshal
// are called.
raw *format.Subsection
}
// Validate validates the fields of URL.
func (b *URL) Validate() error {
if b.InsteadOf == "" {
return errURLEmptyInsteadOf
}
return nil
}
const (
insteadOfKey = "insteadOf"
)
func (u *URL) unmarshal(s *format.Subsection) error {
u.raw = s
u.Name = s.Name
u.InsteadOf = u.raw.Option(insteadOfKey)
return nil
}
func (u *URL) marshal() *format.Subsection {
if u.raw == nil {
u.raw = &format.Subsection{}
}
u.raw.Name = u.Name
u.raw.SetOption(insteadOfKey, u.InsteadOf)
return u.raw
}
func findLongestInsteadOfMatch(remoteURL string, urls map[string]*URL) *URL {
var longestMatch *URL
for _, u := range urls {
if !strings.HasPrefix(remoteURL, u.InsteadOf) {
continue
}
// according to the spec, if there is more than one match, take the longest
if longestMatch == nil || len(longestMatch.InsteadOf) < len(u.InsteadOf) {
longestMatch = u
}
}
return longestMatch
}
func (u *URL) ApplyInsteadOf(url string) string {
if !strings.HasPrefix(url, u.InsteadOf) {
return url
}
return u.Name + url[len(u.InsteadOf):]
}
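// Editor's note: illustrative sketch, not part of the vendored file; the
// field values and function name are invented.
func exampleApplyInsteadOf() string {
    u := &URL{
        Name:      "ssh://git@example.com/", // the new base
        InsteadOf: "https://example.com/",   // the prefix it replaces
    }
    // Returns "ssh://git@example.com/org/repo.git".
    return u.ApplyInsteadOf("https://example.com/org/repo.git")
}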

10
vendor/github.com/go-git/go-git/v5/doc.go generated vendored Normal file

@ -0,0 +1,10 @@
// A highly extensible git implementation in pure Go.
//
// go-git aims to reach the completeness of libgit2 or jgit; it currently covers
// the majority of the plumbing read operations and some of the main write
// operations, but lacks the main porcelain operations such as merges.
//
// It is highly extensible: we have been following the open/closed principle in
// its design to facilitate extensions, mainly focusing the efforts on the
// persistence of the objects.
package git


@ -0,0 +1,29 @@
package path_util
import (
"os"
"os/user"
"strings"
)
func ReplaceTildeWithHome(path string) (string, error) {
if strings.HasPrefix(path, "~") {
firstSlash := strings.Index(path, "/")
if firstSlash == 1 {
home, err := os.UserHomeDir()
if err != nil {
return path, err
}
return strings.Replace(path, "~", home, 1), nil
} else if firstSlash > 1 {
username := path[1:firstSlash]
userAccount, err := user.Lookup(username)
if err != nil {
return path, err
}
return strings.Replace(path, path[:firstSlash], userAccount.HomeDir, 1), nil
}
}
return path, nil
}


@ -0,0 +1,626 @@
// Package revision extracts a git revision from a string.
// More information about revisions: https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html
package revision
import (
"bytes"
"fmt"
"io"
"regexp"
"strconv"
"time"
)
// ErrInvalidRevision is emitted when a string doesn't match a valid revision.
type ErrInvalidRevision struct {
s string
}
func (e *ErrInvalidRevision) Error() string {
return "Revision invalid : " + e.s
}
// Revisioner represents a revision component.
// A revision is made of multiple revision components
// obtained after parsing a revision string,
// for instance the revision "master~" will be converted into
// two revision components: Ref and TildePath.
type Revisioner interface {
}
// Ref represents a reference name : HEAD, master, <hash>
type Ref string
// TildePath represents ~, ~{n}
type TildePath struct {
Depth int
}
// CaretPath represents ^, ^{n}
type CaretPath struct {
Depth int
}
// CaretReg represents ^{/foo bar}
type CaretReg struct {
Regexp *regexp.Regexp
Negate bool
}
// CaretType represents ^{commit}
type CaretType struct {
ObjectType string
}
// AtReflog represents @{n}
type AtReflog struct {
Depth int
}
// AtCheckout represents @{-n}
type AtCheckout struct {
Depth int
}
// AtUpstream represents @{upstream}, @{u}
type AtUpstream struct {
BranchName string
}
// AtPush represents @{push}
type AtPush struct {
BranchName string
}
// AtDate represents @{"2006-01-02T15:04:05Z"}
type AtDate struct {
Date time.Time
}
// ColonReg represents :/foo bar
type ColonReg struct {
Regexp *regexp.Regexp
Negate bool
}
// ColonPath represents :./<path> :<path>
type ColonPath struct {
Path string
}
// ColonStagePath represents :<n>:/<path>
type ColonStagePath struct {
Path string
Stage int
}
// Parser represents a parser used to tokenize a given string
// and transform it into revisioner chunks.
type Parser struct {
s *scanner
currentParsedChar struct {
tok token
lit string
}
unreadLastChar bool
}
// NewParserFromString returns a new instance of parser from a string.
func NewParserFromString(s string) *Parser {
return NewParser(bytes.NewBufferString(s))
}
// NewParser returns a new instance of parser.
func NewParser(r io.Reader) *Parser {
return &Parser{s: newScanner(r)}
}
// scan returns the next token from the underlying scanner
// or the last scanned token if an unscan was requested
func (p *Parser) scan() (token, string, error) {
if p.unreadLastChar {
p.unreadLastChar = false
return p.currentParsedChar.tok, p.currentParsedChar.lit, nil
}
tok, lit, err := p.s.scan()
p.currentParsedChar.tok, p.currentParsedChar.lit = tok, lit
return tok, lit, err
}
// unscan pushes the previously read token back onto the buffer.
func (p *Parser) unscan() { p.unreadLastChar = true }
// Parse explodes a revision string into revisioner chunks
func (p *Parser) Parse() ([]Revisioner, error) {
var rev Revisioner
var revs []Revisioner
var tok token
var err error
for {
tok, _, err = p.scan()
if err != nil {
return nil, err
}
switch tok {
case at:
rev, err = p.parseAt()
case tilde:
rev, err = p.parseTilde()
case caret:
rev, err = p.parseCaret()
case colon:
rev, err = p.parseColon()
case eof:
err = p.validateFullRevision(&revs)
if err != nil {
return []Revisioner{}, err
}
return revs, nil
default:
p.unscan()
rev, err = p.parseRef()
}
if err != nil {
return []Revisioner{}, err
}
revs = append(revs, rev)
}
}
// validateFullRevision ensures all revisioner chunks make a valid revision
func (p *Parser) validateFullRevision(chunks *[]Revisioner) error {
var hasReference bool
for i, chunk := range *chunks {
switch chunk.(type) {
case Ref:
if i == 0 {
hasReference = true
} else {
return &ErrInvalidRevision{`reference must be defined once at the beginning`}
}
case AtDate:
if len(*chunks) == 1 || hasReference && len(*chunks) == 2 {
return nil
}
return &ErrInvalidRevision{`"@" statement is not valid, could be : <refname>@{<ISO-8601 date>}, @{<ISO-8601 date>}`}
case AtReflog:
if len(*chunks) == 1 || hasReference && len(*chunks) == 2 {
return nil
}
return &ErrInvalidRevision{`"@" statement is not valid, could be : <refname>@{<n>}, @{<n>}`}
case AtCheckout:
if len(*chunks) == 1 {
return nil
}
return &ErrInvalidRevision{`"@" statement is not valid, could be : @{-<n>}`}
case AtUpstream:
if len(*chunks) == 1 || hasReference && len(*chunks) == 2 {
return nil
}
return &ErrInvalidRevision{`"@" statement is not valid, could be : <refname>@{upstream}, @{upstream}, <refname>@{u}, @{u}`}
case AtPush:
if len(*chunks) == 1 || hasReference && len(*chunks) == 2 {
return nil
}
return &ErrInvalidRevision{`"@" statement is not valid, could be : <refname>@{push}, @{push}`}
case TildePath, CaretPath, CaretReg:
if !hasReference {
return &ErrInvalidRevision{`"~" or "^" statement must have a reference defined at the beginning`}
}
case ColonReg:
if len(*chunks) == 1 {
return nil
}
return &ErrInvalidRevision{`":" statement is not valid, could be : :/<regexp>`}
case ColonPath:
if i == len(*chunks)-1 && hasReference || len(*chunks) == 1 {
return nil
}
return &ErrInvalidRevision{`":" statement is not valid, could be : <revision>:<path>`}
case ColonStagePath:
if len(*chunks) == 1 {
return nil
}
return &ErrInvalidRevision{`":" statement is not valid, could be : :<n>:<path>`}
}
}
return nil
}
// parseAt extracts @ statements
func (p *Parser) parseAt() (Revisioner, error) {
var tok, nextTok token
var lit, nextLit string
var err error
tok, _, err = p.scan()
if err != nil {
return nil, err
}
if tok != obrace {
p.unscan()
return Ref("HEAD"), nil
}
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
nextTok, nextLit, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == word && (lit == "u" || lit == "upstream") && nextTok == cbrace:
return AtUpstream{}, nil
case tok == word && lit == "push" && nextTok == cbrace:
return AtPush{}, nil
case tok == number && nextTok == cbrace:
n, _ := strconv.Atoi(lit)
return AtReflog{n}, nil
case tok == minus && nextTok == number:
n, _ := strconv.Atoi(nextLit)
t, _, err := p.scan()
if err != nil {
return nil, err
}
if t != cbrace {
return nil, &ErrInvalidRevision{s: `missing "}" in @{-n} structure`}
}
return AtCheckout{n}, nil
default:
p.unscan()
date := lit
for {
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == cbrace:
t, err := time.Parse("2006-01-02T15:04:05Z", date)
if err != nil {
return nil, &ErrInvalidRevision{fmt.Sprintf(`wrong date "%s" must fit ISO-8601 format : 2006-01-02T15:04:05Z`, date)}
}
return AtDate{t}, nil
case tok == eof:
return nil, &ErrInvalidRevision{s: `missing "}" in @{<data>} structure`}
default:
date += lit
}
}
}
}
// parseTilde extracts ~ statements
func (p *Parser) parseTilde() (Revisioner, error) {
var tok token
var lit string
var err error
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == number:
n, _ := strconv.Atoi(lit)
return TildePath{n}, nil
default:
p.unscan()
return TildePath{1}, nil
}
}
// parseCaret extracts ^ statements
func (p *Parser) parseCaret() (Revisioner, error) {
var tok token
var lit string
var err error
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == obrace:
r, err := p.parseCaretBraces()
if err != nil {
return nil, err
}
return r, nil
case tok == number:
n, _ := strconv.Atoi(lit)
if n > 2 {
return nil, &ErrInvalidRevision{fmt.Sprintf(`"%s" found must be 0, 1 or 2 after "^"`, lit)}
}
return CaretPath{n}, nil
default:
p.unscan()
return CaretPath{1}, nil
}
}
// parseCaretBraces extracts ^{<data>} statements
func (p *Parser) parseCaretBraces() (Revisioner, error) {
var tok, nextTok token
var lit, _ string
start := true
var re string
var negate bool
var err error
for {
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
nextTok, _, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == word && nextTok == cbrace && (lit == "commit" || lit == "tree" || lit == "blob" || lit == "tag" || lit == "object"):
return CaretType{lit}, nil
case re == "" && tok == cbrace:
return CaretType{"tag"}, nil
case re == "" && tok == emark && nextTok == emark:
re += lit
case re == "" && tok == emark && nextTok == minus:
negate = true
case re == "" && tok == emark:
return nil, &ErrInvalidRevision{s: `revision suffix brace component sequences starting with "/!" others than those defined are reserved`}
case re == "" && tok == slash:
p.unscan()
case tok != slash && start:
return nil, &ErrInvalidRevision{fmt.Sprintf(`"%s" is not a valid revision suffix brace component`, lit)}
case tok == eof:
return nil, &ErrInvalidRevision{s: `missing "}" in ^{<data>} structure`}
case tok != cbrace:
p.unscan()
re += lit
case tok == cbrace:
p.unscan()
reg, err := regexp.Compile(re)
if err != nil {
return CaretReg{}, &ErrInvalidRevision{fmt.Sprintf(`revision suffix brace component, %s`, err.Error())}
}
return CaretReg{reg, negate}, nil
}
start = false
}
}
// parseColon extracts : statements
func (p *Parser) parseColon() (Revisioner, error) {
var tok token
var err error
tok, _, err = p.scan()
if err != nil {
return nil, err
}
switch tok {
case slash:
return p.parseColonSlash()
default:
p.unscan()
return p.parseColonDefault()
}
}
// parseColonSlash extracts :/<data> statements
func (p *Parser) parseColonSlash() (Revisioner, error) {
var tok, nextTok token
var lit string
var re string
var negate bool
var err error
for {
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
nextTok, _, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == emark && nextTok == emark:
re += lit
case re == "" && tok == emark && nextTok == minus:
negate = true
case re == "" && tok == emark:
return nil, &ErrInvalidRevision{s: `revision suffix brace component sequences starting with "/!" others than those defined are reserved`}
case tok == eof:
p.unscan()
reg, err := regexp.Compile(re)
if err != nil {
return ColonReg{}, &ErrInvalidRevision{fmt.Sprintf(`revision suffix brace component, %s`, err.Error())}
}
return ColonReg{reg, negate}, nil
default:
p.unscan()
re += lit
}
}
}
// parseColonDefault extracts :<data> statements
func (p *Parser) parseColonDefault() (Revisioner, error) {
var tok token
var lit string
var path string
var stage int
var err error
var n = -1
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
nextTok, _, err := p.scan()
if err != nil {
return nil, err
}
if tok == number && nextTok == colon {
n, _ = strconv.Atoi(lit)
}
switch n {
case 0, 1, 2, 3:
stage = n
default:
path += lit
p.unscan()
}
for {
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
switch {
case tok == eof && n == -1:
return ColonPath{path}, nil
case tok == eof:
return ColonStagePath{path, stage}, nil
default:
path += lit
}
}
}
// parseRef extracts the reference name
func (p *Parser) parseRef() (Revisioner, error) {
var tok, prevTok token
var lit, buf string
var endOfRef bool
var err error
for {
tok, lit, err = p.scan()
if err != nil {
return nil, err
}
switch tok {
case eof, at, colon, tilde, caret:
endOfRef = true
}
err := p.checkRefFormat(tok, lit, prevTok, buf, endOfRef)
if err != nil {
return "", err
}
if endOfRef {
p.unscan()
return Ref(buf), nil
}
buf += lit
prevTok = tok
}
}
// checkRefFormat ensures the reference name follows the rules defined here:
// https://git-scm.com/docs/git-check-ref-format
func (p *Parser) checkRefFormat(token token, literal string, previousToken token, buffer string, endOfRef bool) error {
switch token {
case aslash, space, control, qmark, asterisk, obracket:
return &ErrInvalidRevision{fmt.Sprintf(`must not contains "%s"`, literal)}
}
switch {
case (token == dot || token == slash) && buffer == "":
return &ErrInvalidRevision{fmt.Sprintf(`must not start with "%s"`, literal)}
case previousToken == slash && endOfRef:
return &ErrInvalidRevision{`must not end with "/"`}
case previousToken == dot && endOfRef:
return &ErrInvalidRevision{`must not end with "."`}
case token == dot && previousToken == slash:
return &ErrInvalidRevision{`must not contains "/."`}
case previousToken == dot && token == dot:
return &ErrInvalidRevision{`must not contains ".."`}
case previousToken == slash && token == slash:
return &ErrInvalidRevision{`must not contains consecutively "/"`}
case (token == slash || endOfRef) && len(buffer) > 4 && buffer[len(buffer)-5:] == ".lock":
return &ErrInvalidRevision{"cannot end with .lock"}
}
return nil
}
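// Editor's note: illustrative sketch, not part of the vendored file. Since
// this is an internal package, the call below is only possible from inside
// go-git itself; the revision string and function name are invented.
func exampleParseRevision() ([]Revisioner, error) {
    p := NewParserFromString("HEAD~2")
    // For this input Parse yields []Revisioner{Ref("HEAD"), TildePath{Depth: 2}}.
    return p.Parse()
}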


@ -0,0 +1,117 @@
package revision
import (
"bufio"
"io"
"unicode"
)
// runeCategoryValidator takes a rune as input and
// validates that it belongs to a rune category.
type runeCategoryValidator func(r rune) bool
// tokenizeExpression aggregates a series of runes matching the check predicate into a
// single string and returns the given tokenType as the token type.
func tokenizeExpression(ch rune, tokenType token, check runeCategoryValidator, r *bufio.Reader) (token, string, error) {
var data []rune
data = append(data, ch)
for {
c, _, err := r.ReadRune()
if c == zeroRune {
break
}
if err != nil {
return tokenError, "", err
}
if check(c) {
data = append(data, c)
} else {
err := r.UnreadRune()
if err != nil {
return tokenError, "", err
}
return tokenType, string(data), nil
}
}
return tokenType, string(data), nil
}
var zeroRune = rune(0)
// scanner represents a lexical scanner.
type scanner struct {
r *bufio.Reader
}
// newScanner returns a new instance of scanner.
func newScanner(r io.Reader) *scanner {
return &scanner{r: bufio.NewReader(r)}
}
// scan extracts tokens and their string counterparts
// from the reader.
func (s *scanner) scan() (token, string, error) {
ch, _, err := s.r.ReadRune()
if err != nil && err != io.EOF {
return tokenError, "", err
}
switch ch {
case zeroRune:
return eof, "", nil
case ':':
return colon, string(ch), nil
case '~':
return tilde, string(ch), nil
case '^':
return caret, string(ch), nil
case '.':
return dot, string(ch), nil
case '/':
return slash, string(ch), nil
case '{':
return obrace, string(ch), nil
case '}':
return cbrace, string(ch), nil
case '-':
return minus, string(ch), nil
case '@':
return at, string(ch), nil
case '\\':
return aslash, string(ch), nil
case '?':
return qmark, string(ch), nil
case '*':
return asterisk, string(ch), nil
case '[':
return obracket, string(ch), nil
case '!':
return emark, string(ch), nil
}
if unicode.IsSpace(ch) {
return space, string(ch), nil
}
if unicode.IsControl(ch) {
return control, string(ch), nil
}
if unicode.IsLetter(ch) {
return tokenizeExpression(ch, word, unicode.IsLetter, s.r)
}
if unicode.IsNumber(ch) {
return tokenizeExpression(ch, number, unicode.IsNumber, s.r)
}
return tokenError, string(ch), nil
}


@ -0,0 +1,28 @@
package revision
// token represents an entity extracted during string parsing
type token int
const (
eof token = iota
aslash
asterisk
at
caret
cbrace
colon
control
dot
emark
minus
number
obrace
obracket
qmark
slash
space
tilde
tokenError
word
)

39
vendor/github.com/go-git/go-git/v5/internal/url/url.go generated vendored Normal file

@ -0,0 +1,39 @@
package url
import (
"regexp"
)
var (
isSchemeRegExp = regexp.MustCompile(`^[^:]+://`)
// Ref: https://github.com/git/git/blob/master/Documentation/urls.txt#L37
scpLikeUrlRegExp = regexp.MustCompile(`^(?:(?P<user>[^@]+)@)?(?P<host>[^:\s]+):(?:(?P<port>[0-9]{1,5}):)?(?P<path>[^\\].*)$`)
)
// MatchesScheme returns true if the given string matches a URL-like
// format scheme.
func MatchesScheme(url string) bool {
return isSchemeRegExp.MatchString(url)
}
// MatchesScpLike returns true if the given string matches an SCP-like
// format scheme.
func MatchesScpLike(url string) bool {
return scpLikeUrlRegExp.MatchString(url)
}
// FindScpLikeComponents returns the user, host, port and path of the
// given SCP-like URL.
func FindScpLikeComponents(url string) (user, host, port, path string) {
m := scpLikeUrlRegExp.FindStringSubmatch(url)
return m[1], m[2], m[3], m[4]
}
// IsLocalEndpoint returns true if the given URL string specifies a
// local file endpoint. For example, on a Linux machine,
// `/home/user/src/go-git` would match as a local endpoint, but
// `https://github.com/src-d/go-git` would not.
func IsLocalEndpoint(url string) bool {
return !MatchesScheme(url) && !MatchesScpLike(url)
}
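// Editor's note: illustrative sketch, not part of the vendored file. This is
// an internal package, so these helpers are only reachable from within
// go-git; the endpoint strings below are invented.
func exampleEndpointKinds() {
    _ = MatchesScheme("https://github.com/go-git/go-git")  // true: explicit scheme
    _ = MatchesScpLike("git@github.com:go-git/go-git.git") // true: SCP-like form
    user, host, port, path := FindScpLikeComponents("git@github.com:go-git/go-git.git")
    _, _, _, _ = user, host, port, path // "git", "github.com", "", "go-git/go-git.git"
    _ = IsLocalEndpoint("/home/user/src/go-git") // true: neither scheme nor SCP-like
}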

104
vendor/github.com/go-git/go-git/v5/object_walker.go generated vendored Normal file

@ -0,0 +1,104 @@
package git
import (
"fmt"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/storage"
)
type objectWalker struct {
Storer storage.Storer
// seen is the set of objects seen in the repo.
// seen map can become huge if walking over large
// repos. Thus using struct{} as the value type.
seen map[plumbing.Hash]struct{}
}
func newObjectWalker(s storage.Storer) *objectWalker {
return &objectWalker{s, map[plumbing.Hash]struct{}{}}
}
// walkAllRefs walks all (hash) references from the repo.
func (p *objectWalker) walkAllRefs() error {
// Walk over all the references in the repo.
it, err := p.Storer.IterReferences()
if err != nil {
return err
}
defer it.Close()
err = it.ForEach(func(ref *plumbing.Reference) error {
// Exit this iteration early for non-hash references.
if ref.Type() != plumbing.HashReference {
return nil
}
return p.walkObjectTree(ref.Hash())
})
return err
}
func (p *objectWalker) isSeen(hash plumbing.Hash) bool {
_, seen := p.seen[hash]
return seen
}
func (p *objectWalker) add(hash plumbing.Hash) {
p.seen[hash] = struct{}{}
}
// walkObjectTree walks over all objects and remembers references
// to them in the objectWalker. This is used instead of the revlist
// walks because memory usage is tight with huge repos.
func (p *objectWalker) walkObjectTree(hash plumbing.Hash) error {
// Check if this object has already been seen, and mark it as seen.
if p.isSeen(hash) {
return nil
}
p.add(hash)
// Fetch the object.
obj, err := object.GetObject(p.Storer, hash)
if err != nil {
return fmt.Errorf("getting object %s failed: %v", hash, err)
}
// Walk all children depending on object type.
switch obj := obj.(type) {
case *object.Commit:
err = p.walkObjectTree(obj.TreeHash)
if err != nil {
return err
}
for _, h := range obj.ParentHashes {
err = p.walkObjectTree(h)
if err != nil {
return err
}
}
case *object.Tree:
for i := range obj.Entries {
// Shortcut for blob objects:
// 'or' the lower bits of a mode and check that it
// matches a filemode.Executable. The type information
// is in the higher bits, but this is the cleanest way
// to handle plain files with different modes.
// Other non-tree objects are somewhat rare, so they
// are not special-cased.
if obj.Entries[i].Mode|0755 == filemode.Executable {
p.add(obj.Entries[i].Hash)
continue
}
// Normal walk for sub-trees (and symlinks etc).
err = p.walkObjectTree(obj.Entries[i].Hash)
if err != nil {
return err
}
}
case *object.Tag:
return p.walkObjectTree(obj.Target)
default:
// Error out on unhandled object types.
return fmt.Errorf("unknown object %X %s %T", obj.ID(), obj.Type(), obj)
}
return nil
}
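// Editor's note: illustrative sketch, not part of the vendored file. It shows
// how the walker is used internally (for instance before pruning): collect
// every object reachable from the repository's hash references. The function
// name is invented; r is assumed to be an already-open *Repository.
func exampleReachableObjects(r *Repository) (map[plumbing.Hash]struct{}, error) {
    w := newObjectWalker(r.Storer)
    if err := w.walkAllRefs(); err != nil {
        return nil, err
    }
    // w.seen now contains the hash of every commit, tree, blob and tag
    // reachable from a hash reference.
    return w.seen, nil
}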

792
vendor/github.com/go-git/go-git/v5/options.go generated vendored Normal file

@ -0,0 +1,792 @@
package git
import (
"errors"
"fmt"
"regexp"
"strings"
"time"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/go-git/go-git/v5/config"
"github.com/go-git/go-git/v5/plumbing"
formatcfg "github.com/go-git/go-git/v5/plumbing/format/config"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/plumbing/protocol/packp/sideband"
"github.com/go-git/go-git/v5/plumbing/transport"
)
// SubmoduleRescursivity defines how depth will affect any submodule recursive
// operation.
type SubmoduleRescursivity uint
const (
// DefaultRemoteName is the name of the default Remote, just like the git command.
DefaultRemoteName = "origin"
// NoRecurseSubmodules disables the recursion for a submodule operation.
NoRecurseSubmodules SubmoduleRescursivity = 0
// DefaultSubmoduleRecursionDepth allows recursion in a submodule operation.
DefaultSubmoduleRecursionDepth SubmoduleRescursivity = 10
)
var (
ErrMissingURL = errors.New("URL field is required")
)
// CloneOptions describes how a clone should be performed.
type CloneOptions struct {
// The (possibly remote) repository URL to clone from.
URL string
// Auth credentials, if required, to use with the remote repository.
Auth transport.AuthMethod
// Name of the remote to be added, by default `origin`.
RemoteName string
// Remote branch to clone.
ReferenceName plumbing.ReferenceName
// Fetch only ReferenceName if true.
SingleBranch bool
// Mirror clones the repository as a mirror.
//
// Compared to a bare clone, mirror not only maps local branches of the
// source to local branches of the target, it maps all refs (including
// remote-tracking branches, notes etc.) and sets up a refspec configuration
// such that all these refs are overwritten by a git remote update in the
// target repository.
Mirror bool
// No checkout of HEAD after clone if true.
NoCheckout bool
// Limit fetching to the specified number of commits.
Depth int
// RecurseSubmodules after the clone is created, initialize all submodules
// within, using their default settings. This option is ignored if the
// cloned repository does not have a worktree.
RecurseSubmodules SubmoduleRescursivity
// ShallowSubmodules limits cloning of submodules to a depth of 1.
// It matches the git command --shallow-submodules.
ShallowSubmodules bool
// Progress is where the human-readable information sent by the server is
// stored. If nil, nothing is stored and the no-progress capability
// (if supported) is sent to the server to avoid sending this information.
Progress sideband.Progress
// Tags describes how the tags will be fetched from the remote repository;
// by default it is AllTags.
Tags TagMode
// InsecureSkipTLS skips SSL verification when the protocol is HTTPS.
InsecureSkipTLS bool
// CABundle specifies an additional CA bundle to use with the system cert pool.
CABundle []byte
// ProxyOptions provides info required for connecting to a proxy.
ProxyOptions transport.ProxyOptions
// When the repository to clone is on the local machine, instead of
// using hard links, automatically setup .git/objects/info/alternates
// to share the objects with the source repository.
// The resulting repository starts out without any object of its own.
// NOTE: this is a possibly dangerous operation; do not use it unless
// you understand what it does.
//
// [Reference]: https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---shared
Shared bool
}
// MergeOptions describes how a merge should be performed.
type MergeOptions struct {
// Strategy defines the merge strategy to be used.
Strategy MergeStrategy
}
// MergeStrategy represents the different types of merge strategies.
type MergeStrategy int8
const (
// FastForwardMerge represents a Git merge strategy where the current
// branch can be simply updated to point to the HEAD of the branch being
// merged. This is only possible if the history of the branch being merged
// is a linear descendant of the current branch, with no conflicting commits.
//
// This is the default option.
FastForwardMerge MergeStrategy = iota
)
// Validate validates the fields and sets the default values.
func (o *CloneOptions) Validate() error {
if o.URL == "" {
return ErrMissingURL
}
if o.RemoteName == "" {
o.RemoteName = DefaultRemoteName
}
if o.ReferenceName == "" {
o.ReferenceName = plumbing.HEAD
}
if o.Tags == InvalidTagMode {
o.Tags = AllTags
}
return nil
}
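// Editor's note: illustrative sketch, not part of the vendored file; the
// destination path, URL and function name are invented. Validate fills in
// RemoteName, ReferenceName and Tags when they are left empty.
func exampleShallowClone() (*Repository, error) {
    return PlainClone("/tmp/go-git-clone", false, &CloneOptions{
        URL:          "https://github.com/go-git/go-git",
        Depth:        1,    // limit history to the most recent commits
        SingleBranch: true, // fetch only ReferenceName (HEAD by default)
    })
}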
// PullOptions describes how a pull should be performed.
type PullOptions struct {
// Name of the remote to be pulled. If empty, uses the default.
RemoteName string
// RemoteURL overrides the remote repo address with a custom URL
RemoteURL string
// Remote branch to clone. If empty, uses HEAD.
ReferenceName plumbing.ReferenceName
// Fetch only ReferenceName if true.
SingleBranch bool
// Limit fetching to the specified number of commits.
Depth int
// Auth credentials, if required, to use with the remote repository.
Auth transport.AuthMethod
// RecurseSubmodules controls if new commits of all populated submodules
// should be fetched too.
RecurseSubmodules SubmoduleRescursivity
// Progress is where the human-readable information sent by the server is
// stored. If nil, nothing is stored and the no-progress capability
// (if supported) is sent to the server to avoid sending this information.
Progress sideband.Progress
// Force allows the pull to update a local branch even when the remote
// branch does not descend from it.
Force bool
// InsecureSkipTLS skips SSL verification when the protocol is HTTPS.
InsecureSkipTLS bool
// CABundle specifies an additional CA bundle to use with the system cert pool.
CABundle []byte
// ProxyOptions provides info required for connecting to a proxy.
ProxyOptions transport.ProxyOptions
}
// Validate validates the fields and sets the default values.
func (o *PullOptions) Validate() error {
if o.RemoteName == "" {
o.RemoteName = DefaultRemoteName
}
if o.ReferenceName == "" {
o.ReferenceName = plumbing.HEAD
}
return nil
}
type TagMode int
const (
InvalidTagMode TagMode = iota
// TagFollowing any tag that points into the histories being fetched is also
// fetched. TagFollowing requires a server with `include-tag` capability
// in order to fetch the annotated tags objects.
TagFollowing
// AllTags fetch all tags from the remote (i.e., fetch remote tags
// refs/tags/* into local tags with the same name)
AllTags
// NoTags fetch no tags from the remote at all
NoTags
)
// FetchOptions describes how a fetch should be performed
type FetchOptions struct {
// Name of the remote to fetch from. Defaults to origin.
RemoteName string
// RemoteURL overrides the remote repo address with a custom URL
RemoteURL string
RefSpecs []config.RefSpec
// Depth limit fetching to the specified number of commits from the tip of
// each remote branch history.
Depth int
// Auth credentials, if required, to use with the remote repository.
Auth transport.AuthMethod
// Progress is where the human-readable information sent by the server is
// stored. If nil, nothing is stored and the no-progress capability
// (if supported) is sent to the server to avoid sending this information.
Progress sideband.Progress
// Tags describes how the tags will be fetched from the remote repository;
// by default it is TagFollowing.
Tags TagMode
// Force allows the fetch to update a local branch even when the remote
// branch does not descend from it.
Force bool
// InsecureSkipTLS skips SSL verification when the protocol is HTTPS.
InsecureSkipTLS bool
// CABundle specifies an additional CA bundle to use with the system cert pool.
CABundle []byte
// ProxyOptions provides info required for connecting to a proxy.
ProxyOptions transport.ProxyOptions
// Prune specify that local refs that match given RefSpecs and that do
// not exist remotely will be removed.
Prune bool
}
// Validate validates the fields and sets the default values.
func (o *FetchOptions) Validate() error {
if o.RemoteName == "" {
o.RemoteName = DefaultRemoteName
}
if o.Tags == InvalidTagMode {
o.Tags = TagFollowing
}
for _, r := range o.RefSpecs {
if err := r.Validate(); err != nil {
return err
}
}
return nil
}
// PushOptions describes how a push should be performed.
type PushOptions struct {
// RemoteName is the name of the remote to be pushed to.
RemoteName string
// RemoteURL overrides the remote repo address with a custom URL
RemoteURL string
// RefSpecs specify what destination ref to update with what source object.
//
// The format of a <refspec> parameter is an optional plus +, followed by
// the source object <src>, followed by a colon :, followed by the destination ref <dst>.
// The <src> is often the name of the branch you would want to push, but it can be a SHA-1.
// The <dst> tells which ref on the remote side is updated with this push.
//
// A refspec with empty src can be used to delete a reference.
RefSpecs []config.RefSpec
// Auth credentials, if required, to use with the remote repository.
Auth transport.AuthMethod
// Progress is where the human-readable information sent by the server is
// stored; if nil, nothing is stored.
Progress sideband.Progress
// Prune specify that remote refs that match given RefSpecs and that do
// not exist locally will be removed.
Prune bool
// Force allows the push to update a remote branch even when the local
// branch does not descend from it.
Force bool
// InsecureSkipTLS skips SSL verification when the protocol is HTTPS.
InsecureSkipTLS bool
// CABundle specifies an additional CA bundle to use with the system cert pool.
CABundle []byte
// RequireRemoteRefs only allows a remote ref to be updated if its current
// value is the one specified here.
RequireRemoteRefs []config.RefSpec
// FollowTags will send any annotated tags with a commit target reachable from
// the refs already being pushed
FollowTags bool
// ForceWithLease allows a force push as long as the remote ref adheres to a "lease"
ForceWithLease *ForceWithLease
// PushOptions sets options to be transferred to the server during push.
Options map[string]string
// Atomic sets option to be an atomic push
Atomic bool
// ProxyOptions provides info required for connecting to a proxy.
ProxyOptions transport.ProxyOptions
}
// ForceWithLease sets fields on the lease.
// If neither RefName nor Hash are set, ForceWithLease protects
// all refs in the refspec by ensuring the ref of the remote in the local repository
// matches the one in the ref advertisement.
type ForceWithLease struct {
// RefName, when set will protect the ref by ensuring it matches the
// hash in the ref advertisement.
RefName plumbing.ReferenceName
// Hash is the expected object id of RefName. The push will be rejected unless this
// matches the corresponding object id of RefName in the refs advertisement.
Hash plumbing.Hash
}
// Validate validates the fields and sets the default values.
func (o *PushOptions) Validate() error {
if o.RemoteName == "" {
o.RemoteName = DefaultRemoteName
}
if len(o.RefSpecs) == 0 {
o.RefSpecs = []config.RefSpec{
config.RefSpec(config.DefaultPushRefSpec),
}
}
for _, r := range o.RefSpecs {
if err := r.Validate(); err != nil {
return err
}
}
return nil
}
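// Editor's note: illustrative sketch, not part of the vendored file; r and the
// refspec are invented. With no RefSpecs, Validate falls back to
// DefaultPushRefSpec.
func examplePush(r *Repository) error {
    return r.Push(&PushOptions{
        RemoteName: DefaultRemoteName,
        RefSpecs:   []config.RefSpec{"refs/heads/main:refs/heads/main"},
    })
}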
// SubmoduleUpdateOptions describes how a submodule update should be performed.
type SubmoduleUpdateOptions struct {
// Init, if true initializes the submodules recorded in the index.
Init bool
// NoFetch tells the update command not to fetch new objects from the
// remote site.
NoFetch bool
// RecurseSubmodules: the update is performed not only in the submodules of
// the current repository but also in any nested submodules inside those
// submodules (and so on), until the SubmoduleRescursivity is reached.
RecurseSubmodules SubmoduleRescursivity
// Auth credentials, if required, to use with the remote repository.
Auth transport.AuthMethod
// Depth limit fetching to the specified number of commits from the tip of
// each remote branch history.
Depth int
}
var (
ErrBranchHashExclusive = errors.New("Branch and Hash are mutually exclusive")
ErrCreateRequiresBranch = errors.New("Branch is mandatory when Create is used")
)
// CheckoutOptions describes how a checkout operation should be performed.
type CheckoutOptions struct {
// Hash is the hash of a commit or tag to be checked out. If used, HEAD
// will be in detached mode. If Create is not used, Branch and Hash are
// mutually exclusive.
Hash plumbing.Hash
// Branch to be checked out; if Branch and Hash are empty, it is set to `master`.
Branch plumbing.ReferenceName
// Create a new branch named Branch and start it at Hash.
Create bool
// Force, if true when switching branches, proceed even if the index or the
// working tree differs from HEAD. This is used to throw away local changes.
Force bool
// Keep, if true when switching branches, local changes (the index or the
// working tree changes) will be kept so that they can be committed to the
// target branch. Force and Keep are mutually exclusive and should not both
// be set to true.
Keep bool
// SparseCheckoutDirectories
SparseCheckoutDirectories []string
}
// Validate validates the fields and sets the default values.
func (o *CheckoutOptions) Validate() error {
if !o.Create && !o.Hash.IsZero() && o.Branch != "" {
return ErrBranchHashExclusive
}
if o.Create && o.Branch == "" {
return ErrCreateRequiresBranch
}
if o.Branch == "" {
o.Branch = plumbing.Master
}
return nil
}
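// Editor's note: illustrative sketch, not part of the vendored file; r and the
// branch name are invented. Create with a zero Hash starts the new branch at
// the current HEAD.
func exampleCheckoutNewBranch(r *Repository) error {
    w, err := r.Worktree()
    if err != nil {
        return err
    }
    return w.Checkout(&CheckoutOptions{
        Branch: plumbing.NewBranchReferenceName("feature-x"),
        Create: true,
    })
}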
// ResetMode defines the mode of a reset operation.
type ResetMode int8
const (
// MixedReset resets the index but not the working tree (i.e., the changed
// files are preserved but not marked for commit) and reports what has not
// been updated. This is the default action.
MixedReset ResetMode = iota
// HardReset resets the index and working tree. Any changes to tracked files
// in the working tree are discarded.
HardReset
// MergeReset resets the index and updates the files in the working tree
// that are different between Commit and HEAD, but keeps those which are
// different between the index and working tree (i.e. which have changes
// which have not been added).
//
// If a file that is different between Commit and the index has unstaged
// changes, reset is aborted.
MergeReset
// SoftReset does not touch the index file or the working tree at all (but
// resets the head to <commit>, just like all modes do). This leaves all
// your changed files "Changes to be committed", as git status would put it.
SoftReset
)
// ResetOptions describes how a reset operation should be performed.
type ResetOptions struct {
// Commit, if present, sets the current branch head (HEAD) to it.
Commit plumbing.Hash
// Mode controls how the reset is performed: the current branch head is moved
// to Commit, and the index (resetting it to the tree of Commit) and the
// working tree are possibly updated, depending on Mode. If empty, MixedReset is used.
Mode ResetMode
}
// Validate validates the fields and sets the default values.
func (o *ResetOptions) Validate(r *Repository) error {
if o.Commit == plumbing.ZeroHash {
ref, err := r.Head()
if err != nil {
return err
}
o.Commit = ref.Hash()
} else {
_, err := r.CommitObject(o.Commit)
if err != nil {
return fmt.Errorf("invalid reset option: %w", err)
}
}
return nil
}
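A short sketch of a hard reset, assuming a *Worktree obtained elsewhere; leaving Commit unset lets Validate fill in the current HEAD as shown above.
package example

import "github.com/go-git/go-git/v5"

// hardReset discards local modifications by resetting both index and working tree.
func hardReset(w *git.Worktree) error {
	return w.Reset(&git.ResetOptions{Mode: git.HardReset})
}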
type LogOrder int8
const (
LogOrderDefault LogOrder = iota
LogOrderDFS
LogOrderDFSPost
LogOrderBSF
LogOrderCommitterTime
)
// LogOptions describes how a log action should be performed.
type LogOptions struct {
// When the From option is set the log will only contain commits
// reachable from it. If this option is not set, HEAD will be used as
// the default From.
From plumbing.Hash
// The default traversal algorithm is depth-first search.
// Set Order=LogOrderCommitterTime for ordering by committer time (more compatible with `git log`).
// Set Order=LogOrderBSF for breadth-first search.
Order LogOrder
// Show only those commits in which the specified file was inserted/updated.
// It is equivalent to running `git log -- <file-name>`.
// This field is kept for compatibility; it can be replaced with PathFilter.
FileName *string
// PathFilter filters commits based on the paths of the files they update.
// It takes a file path as argument and should return true if the file is desired.
// It can be used to implement `git log -- <path>`, where <path> is a file path,
// a directory path, or a regexp of a file/directory path.
PathFilter func(string) bool
// Pretend as if all the refs in refs/, along with HEAD, are listed on the command line as <commit>.
// It is equivalent to running `git log --all`.
// If set to true, the From option will be ignored.
All bool
// Show commits more recent than a specific date.
// It is equivalent to running `git log --since <date>` or `git log --after <date>`.
Since *time.Time
// Show commits older than a specific date.
// It is equivalent to running `git log --until <date>` or `git log --before <date>`.
Until *time.Time
}
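A sketch of walking the log ordered by committer time with a path filter, assuming an opened *Repository; the option names follow the LogOptions above.
package example

import (
	"fmt"
	"strings"

	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

// printGoFileCommits prints every commit that touches a .go file.
func printGoFileCommits(repo *git.Repository) error {
	iter, err := repo.Log(&git.LogOptions{
		Order:      git.LogOrderCommitterTime,
		PathFilter: func(p string) bool { return strings.HasSuffix(p, ".go") },
	})
	if err != nil {
		return err
	}
	return iter.ForEach(func(c *object.Commit) error {
		fmt.Println(c.Hash, c.Committer.When)
		return nil
	})
}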
var (
ErrMissingAuthor = errors.New("author field is required")
)
// AddOptions describes how an `add` operation should be performed
type AddOptions struct {
// All, equivalent to `git add -A`, updates the index not only where the
// working tree has a file matching `Path` but also where the index already
// has an entry. This adds, modifies, and removes index entries to match the
// working tree. If neither `Path` nor `Glob` is given when the `All` option
// is used, all files in the entire working tree are updated.
All bool
// Path is the exact filepath to the file or directory to be added.
Path string
// Glob adds all paths, matching pattern, to the index. If pattern matches a
// directory path, all directory contents are added to the index recursively.
Glob string
// SkipStatus adds the path with no status check. This option is relevant only
// when the `Path` option is specified and does not apply when the `All` option is used.
// Notice that when passing an ignored path it will be added anyway.
// When true it can speed up adding files to the worktree in very large repositories.
SkipStatus bool
}
// Validate validates the fields and sets the default values.
func (o *AddOptions) Validate(r *Repository) error {
if o.Path != "" && o.Glob != "" {
return fmt.Errorf("fields Path and Glob are mutual exclusive")
}
return nil
}
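A sketch of staging files by glob, assuming a *Worktree obtained elsewhere; Path and Glob are mutually exclusive, as Validate enforces.
package example

import "github.com/go-git/go-git/v5"

// stageDocs stages every markdown file under docs/ (a hypothetical path).
func stageDocs(w *git.Worktree) error {
	return w.AddWithOptions(&git.AddOptions{Glob: "docs/*.md"})
}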
// CommitOptions describes how a commit operation should be performed.
type CommitOptions struct {
// All automatically stages files that have been modified and deleted, but
// new files you have not told Git about are not affected.
All bool
// AllowEmptyCommits enables empty commits to be created. An empty commit
// is when no changes to the tree were made, but a new commit message is
// provided. The default behavior is false, which results in ErrEmptyCommit.
AllowEmptyCommits bool
// Author is the author's signature of the commit. If Author is nil, the
// Name and Email are read from the config, and time.Now() is used as When.
Author *object.Signature
// Committer is the committer's signature of the commit. If Committer is
// nil the Author signature is used.
Committer *object.Signature
// Parents are the parent commits for the new commit; by default, when
// len(Parents) is zero, the hash of the HEAD reference is used.
Parents []plumbing.Hash
// SignKey denotes a key to sign the commit with. A nil value here means the
// commit will not be signed. The private key must be present and already
// decrypted.
SignKey *openpgp.Entity
// Signer denotes a cryptographic signer to sign the commit with.
// A nil value here means the commit will not be signed.
// Takes precedence over SignKey.
Signer Signer
// Amend will create a new commit object and replace the commit that HEAD currently
// points to. It cannot be used with All or Parents.
Amend bool
}
// Validate validates the fields and sets the default values.
func (o *CommitOptions) Validate(r *Repository) error {
if o.All && o.Amend {
return errors.New("all and amend cannot be used together")
}
if o.Amend && len(o.Parents) > 0 {
return errors.New("parents cannot be used with amend")
}
if o.Author == nil {
if err := o.loadConfigAuthorAndCommitter(r); err != nil {
return err
}
}
if o.Committer == nil {
o.Committer = o.Author
}
if len(o.Parents) == 0 {
head, err := r.Head()
if err != nil && err != plumbing.ErrReferenceNotFound {
return err
}
if head != nil {
o.Parents = []plumbing.Hash{head.Hash()}
}
}
return nil
}
func (o *CommitOptions) loadConfigAuthorAndCommitter(r *Repository) error {
cfg, err := r.ConfigScoped(config.SystemScope)
if err != nil {
return err
}
if o.Author == nil && cfg.Author.Email != "" && cfg.Author.Name != "" {
o.Author = &object.Signature{
Name: cfg.Author.Name,
Email: cfg.Author.Email,
When: time.Now(),
}
}
if o.Committer == nil && cfg.Committer.Email != "" && cfg.Committer.Name != "" {
o.Committer = &object.Signature{
Name: cfg.Committer.Name,
Email: cfg.Committer.Email,
When: time.Now(),
}
}
if o.Author == nil && cfg.User.Email != "" && cfg.User.Name != "" {
o.Author = &object.Signature{
Name: cfg.User.Name,
Email: cfg.User.Email,
When: time.Now(),
}
}
if o.Author == nil {
return ErrMissingAuthor
}
return nil
}
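A sketch of committing staged changes with an explicit author, assuming a *Worktree obtained elsewhere; were Author nil, the config lookup above would populate it.
package example

import (
	"time"

	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
	"github.com/go-git/go-git/v5/plumbing/object"
)

// commitStaged creates a commit from whatever is currently staged.
func commitStaged(w *git.Worktree) (plumbing.Hash, error) {
	// Name and email are placeholder values.
	return w.Commit("update docs", &git.CommitOptions{
		Author: &object.Signature{
			Name:  "Jane Doe",
			Email: "jane@example.com",
			When:  time.Now(),
		},
	})
}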
var (
ErrMissingName = errors.New("name field is required")
ErrMissingTagger = errors.New("tagger field is required")
ErrMissingMessage = errors.New("message field is required")
)
// CreateTagOptions describes how a tag object should be created.
type CreateTagOptions struct {
// Tagger defines the signature of the tag creator. If Tagger is nil, the
// Name and Email are read from the config, and time.Now() is used as When.
Tagger *object.Signature
// Message defines the annotation of the tag. It is canonicalized during
// validation into the format expected by git: no leading or trailing
// whitespace, ending in a newline.
Message string
// SignKey denotes a key to sign the tag with. A nil value here means the tag
// will not be signed. The private key must be present and already decrypted.
SignKey *openpgp.Entity
}
// Validate validates the fields and sets the default values.
func (o *CreateTagOptions) Validate(r *Repository, hash plumbing.Hash) error {
if o.Tagger == nil {
if err := o.loadConfigTagger(r); err != nil {
return err
}
}
if o.Message == "" {
return ErrMissingMessage
}
// Canonicalize the message into the expected message format.
o.Message = strings.TrimSpace(o.Message) + "\n"
return nil
}
func (o *CreateTagOptions) loadConfigTagger(r *Repository) error {
cfg, err := r.ConfigScoped(config.SystemScope)
if err != nil {
return err
}
if o.Tagger == nil && cfg.Author.Email != "" && cfg.Author.Name != "" {
o.Tagger = &object.Signature{
Name: cfg.Author.Name,
Email: cfg.Author.Email,
When: time.Now(),
}
}
if o.Tagger == nil && cfg.User.Email != "" && cfg.User.Name != "" {
o.Tagger = &object.Signature{
Name: cfg.User.Name,
Email: cfg.User.Email,
When: time.Now(),
}
}
if o.Tagger == nil {
return ErrMissingTagger
}
return nil
}
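A sketch of creating an annotated tag at HEAD, assuming an opened *Repository; the tag name and identity are placeholders, and Validate canonicalizes the message as described above.
package example

import (
	"time"

	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

// tagHead creates an annotated tag pointing at the current HEAD commit.
func tagHead(repo *git.Repository) error {
	head, err := repo.Head()
	if err != nil {
		return err
	}
	_, err = repo.CreateTag("v1.0.0", head.Hash(), &git.CreateTagOptions{
		Message: "release v1.0.0",
		Tagger: &object.Signature{
			Name:  "Jane Doe", // placeholder identity
			Email: "jane@example.com",
			When:  time.Now(),
		},
	})
	return err
}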
// ListOptions describes how a remote list should be performed.
type ListOptions struct {
// Auth credentials, if required, to use with the remote repository.
Auth transport.AuthMethod
// InsecureSkipTLS skips SSL certificate verification when the protocol is HTTPS.
InsecureSkipTLS bool
// CABundle specifies an additional CA bundle to use with the system cert pool.
CABundle []byte
// PeelingOption defines how peeled objects are handled during a
// remote list.
PeelingOption PeelingOption
// ProxyOptions provides info required for connecting to a proxy.
ProxyOptions transport.ProxyOptions
// Timeout specifies the timeout in seconds for list operations
Timeout int
}
// PeelingOption represents the different ways to handle peeled references.
//
// Peeled references represent the underlying object of an annotated
// (or signed) tag. Refer to upstream documentation for more info:
// https://github.com/git/git/blob/master/Documentation/technical/reftable.txt
type PeelingOption uint8
const (
// IgnorePeeled ignores all peeled reference names. This is the default behavior.
IgnorePeeled PeelingOption = 0
// OnlyPeeled returns only peeled reference names.
OnlyPeeled PeelingOption = 1
// AppendPeeled appends peeled reference names to the reference list.
AppendPeeled PeelingOption = 2
)
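A sketch of listing remote references with peeled tag entries appended, assuming an opened *Repository with an `origin` remote configured.
package example

import (
	"fmt"

	"github.com/go-git/go-git/v5"
)

// listRefs prints every advertised reference, including peeled tag entries.
func listRefs(repo *git.Repository) error {
	remote, err := repo.Remote(git.DefaultRemoteName)
	if err != nil {
		return err
	}
	refs, err := remote.List(&git.ListOptions{PeelingOption: git.AppendPeeled})
	if err != nil {
		return err
	}
	for _, ref := range refs {
		fmt.Println(ref.Name(), ref.Hash())
	}
	return nil
}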
// CleanOptions describes how a clean should be performed.
type CleanOptions struct {
Dir bool
}
// GrepOptions describes how a grep should be performed.
type GrepOptions struct {
// Patterns are compiled Regexp objects to be matched.
Patterns []*regexp.Regexp
// InvertMatch selects non-matching lines.
InvertMatch bool
// CommitHash is the hash of the commit from which worktree should be derived.
CommitHash plumbing.Hash
// ReferenceName is the branch or tag name from which worktree should be derived.
ReferenceName plumbing.ReferenceName
// PathSpecs are compiled Regexp objects of pathspec to use in the matching.
PathSpecs []*regexp.Regexp
}
var (
ErrHashOrReference = errors.New("ambiguous options, only one of CommitHash or ReferenceName can be passed")
)
// Validate validates the fields and sets the default values.
//
// TODO: deprecate in favor of Validate(r *Repository) in v6.
func (o *GrepOptions) Validate(w *Worktree) error {
return o.validate(w.r)
}
func (o *GrepOptions) validate(r *Repository) error {
if !o.CommitHash.IsZero() && o.ReferenceName != "" {
return ErrHashOrReference
}
// If none of CommitHash and ReferenceName are provided, set commit hash of
// the repository's head.
if o.CommitHash.IsZero() && o.ReferenceName == "" {
ref, err := r.Head()
if err != nil {
return err
}
o.CommitHash = ref.Hash()
}
return nil
}
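A sketch of grepping the worktree, assuming a *Worktree obtained elsewhere; the GrepResult field names used below follow upstream go-git and are an assumption, not defined in this file.
package example

import (
	"fmt"
	"regexp"

	"github.com/go-git/go-git/v5"
)

// findTODOs searches the worktree for lines containing "TODO".
func findTODOs(w *git.Worktree) error {
	results, err := w.Grep(&git.GrepOptions{
		Patterns: []*regexp.Regexp{regexp.MustCompile(`TODO`)},
	})
	if err != nil {
		return err
	}
	for _, r := range results {
		// FileName, LineNumber and Content are assumed GrepResult fields.
		fmt.Printf("%s:%d: %s\n", r.FileName, r.LineNumber, r.Content)
	}
	return nil
}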
// PlainOpenOptions describes how opening a plain repository should be
// performed.
type PlainOpenOptions struct {
// DetectDotGit defines whether parent directories should be
// walked until a .git directory or file is found.
DetectDotGit bool
// Enable .git/commondir support (see https://git-scm.com/docs/gitrepository-layout#Documentation/gitrepository-layout.txt).
// NOTE: This option will only work with the filesystem storage.
EnableDotGitCommonDir bool
}
// Validate validates the fields and sets the default values.
func (o *PlainOpenOptions) Validate() error { return nil }
type PlainInitOptions struct {
InitOptions
// Determines if the repository will have a worktree (non-bare) or not (bare).
Bare bool
ObjectFormat formatcfg.ObjectFormat
}
// Validate validates the fields and sets the default values.
func (o *PlainInitOptions) Validate() error { return nil }

35
vendor/github.com/go-git/go-git/v5/oss-fuzz.sh generated vendored Normal file
View File

@ -0,0 +1,35 @@
#!/bin/bash -eu
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################
go mod download
go get github.com/AdamKorcz/go-118-fuzz-build/testing
if [ "$SANITIZER" != "coverage" ]; then
sed -i '/func (s \*DecoderSuite) TestDecode(/,/^}/ s/^/\/\//' plumbing/format/config/decoder_test.go
sed -n '35,$p' plumbing/format/packfile/common_test.go >> plumbing/format/packfile/delta_test.go
sed -n '20,53p' plumbing/object/object_test.go >> plumbing/object/tree_test.go
sed -i 's|func Test|// func Test|' plumbing/transport/common_test.go
fi
compile_native_go_fuzzer $(pwd)/internal/revision FuzzParser fuzz_parser
compile_native_go_fuzzer $(pwd)/plumbing/format/config FuzzDecoder fuzz_decoder_config
compile_native_go_fuzzer $(pwd)/plumbing/format/packfile FuzzPatchDelta fuzz_patch_delta
compile_native_go_fuzzer $(pwd)/plumbing/object FuzzParseSignedBytes fuzz_parse_signed_bytes
compile_native_go_fuzzer $(pwd)/plumbing/object FuzzDecode fuzz_decode
compile_native_go_fuzzer $(pwd)/plumbing/protocol/packp FuzzDecoder fuzz_decoder_packp
compile_native_go_fuzzer $(pwd)/plumbing/transport FuzzNewEndpoint fuzz_new_endpoint

View File

@ -0,0 +1,98 @@
package cache
import (
"container/list"
"sync"
)
// BufferLRU implements an object cache with an LRU eviction policy and a
// maximum size (measured in object size).
type BufferLRU struct {
MaxSize FileSize
actualSize FileSize
ll *list.List
cache map[int64]*list.Element
mut sync.Mutex
}
// NewBufferLRU creates a new BufferLRU with the given maximum size. The maximum
// size will never be exceeded.
func NewBufferLRU(maxSize FileSize) *BufferLRU {
return &BufferLRU{MaxSize: maxSize}
}
// NewBufferLRUDefault creates a new BufferLRU with the default cache size.
func NewBufferLRUDefault() *BufferLRU {
return &BufferLRU{MaxSize: DefaultMaxSize}
}
type buffer struct {
Key int64
Slice []byte
}
// Put puts a buffer into the cache. If the buffer is already in the cache, it
// will be marked as used. Otherwise, it will be inserted. Buffers might
// be evicted to make room for the new one.
func (c *BufferLRU) Put(key int64, slice []byte) {
c.mut.Lock()
defer c.mut.Unlock()
if c.cache == nil {
c.actualSize = 0
c.cache = make(map[int64]*list.Element, 1000)
c.ll = list.New()
}
bufSize := FileSize(len(slice))
if ee, ok := c.cache[key]; ok {
oldBuf := ee.Value.(buffer)
// in this case bufSize is a delta: new size - old size
bufSize -= FileSize(len(oldBuf.Slice))
c.ll.MoveToFront(ee)
ee.Value = buffer{key, slice}
} else {
if bufSize > c.MaxSize {
return
}
ee := c.ll.PushFront(buffer{key, slice})
c.cache[key] = ee
}
c.actualSize += bufSize
for c.actualSize > c.MaxSize {
last := c.ll.Back()
lastObj := last.Value.(buffer)
lastSize := FileSize(len(lastObj.Slice))
c.ll.Remove(last)
delete(c.cache, lastObj.Key)
c.actualSize -= lastSize
}
}
// Get returns a buffer by its key. It marks the buffer as used. If the buffer
// is not in the cache, (nil, false) will be returned.
func (c *BufferLRU) Get(key int64) ([]byte, bool) {
c.mut.Lock()
defer c.mut.Unlock()
ee, ok := c.cache[key]
if !ok {
return nil, false
}
c.ll.MoveToFront(ee)
return ee.Value.(buffer).Slice, true
}
// Clear the content of this buffer cache.
func (c *BufferLRU) Clear() {
c.mut.Lock()
defer c.mut.Unlock()
c.ll = nil
c.cache = nil
c.actualSize = 0
}
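A small sketch of the BufferLRU API above; the key is an arbitrary int64 (here a made-up pack offset) and KiByte comes from this cache package.
package example

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/cache"
)

// demoBufferLRU stores a buffer and reads it back from the cache.
func demoBufferLRU() {
	c := cache.NewBufferLRU(64 * cache.KiByte)
	c.Put(42, []byte("inflated object bytes")) // 42 is a made-up key
	if buf, ok := c.Get(42); ok {
		fmt.Println(len(buf))
	}
}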

View File

@ -0,0 +1,39 @@
package cache
import "github.com/go-git/go-git/v5/plumbing"
const (
Byte FileSize = 1 << (iota * 10)
KiByte
MiByte
GiByte
)
type FileSize int64
const DefaultMaxSize FileSize = 96 * MiByte
// Object is an interface to an object cache.
type Object interface {
// Put puts the given object into the cache. Whether this object will
// actually be put into the cache or not is implementation specific.
Put(o plumbing.EncodedObject)
// Get gets an object from the cache given its hash. The second return value
// is true if the object was returned, and false otherwise.
Get(k plumbing.Hash) (plumbing.EncodedObject, bool)
// Clear clears every object from the cache.
Clear()
}
// Buffer is an interface to a buffer cache.
type Buffer interface {
// Put puts a buffer into the cache. If the buffer is already in the cache,
// it will be marked as used. Otherwise, it will be inserted. Buffers might
// be evicted to make room for the new one.
Put(key int64, slice []byte)
// Get returns a buffer by its key. It marks the buffer as used. If the
// buffer is not in the cache, (nil, false) will be returned.
Get(key int64) ([]byte, bool)
// Clear clears every object from the cache.
Clear()
}

View File

@ -0,0 +1,101 @@
package cache
import (
"container/list"
"sync"
"github.com/go-git/go-git/v5/plumbing"
)
// ObjectLRU implements an object cache with an LRU eviction policy and a
// maximum size (measured in object size).
type ObjectLRU struct {
MaxSize FileSize
actualSize FileSize
ll *list.List
cache map[interface{}]*list.Element
mut sync.Mutex
}
// NewObjectLRU creates a new ObjectLRU with the given maximum size. The maximum
// size will never be exceeded.
func NewObjectLRU(maxSize FileSize) *ObjectLRU {
return &ObjectLRU{MaxSize: maxSize}
}
// NewObjectLRUDefault creates a new ObjectLRU with the default cache size.
func NewObjectLRUDefault() *ObjectLRU {
return &ObjectLRU{MaxSize: DefaultMaxSize}
}
// Put puts an object into the cache. If the object is already in the cache, it
// will be marked as used. Otherwise, it will be inserted. A single object might
// be evicted to make room for the new object.
func (c *ObjectLRU) Put(obj plumbing.EncodedObject) {
c.mut.Lock()
defer c.mut.Unlock()
if c.cache == nil {
c.actualSize = 0
c.cache = make(map[interface{}]*list.Element, 1000)
c.ll = list.New()
}
objSize := FileSize(obj.Size())
key := obj.Hash()
if ee, ok := c.cache[key]; ok {
oldObj := ee.Value.(plumbing.EncodedObject)
// in this case objSize is a delta: new size - old size
objSize -= FileSize(oldObj.Size())
c.ll.MoveToFront(ee)
ee.Value = obj
} else {
if objSize > c.MaxSize {
return
}
ee := c.ll.PushFront(obj)
c.cache[key] = ee
}
c.actualSize += objSize
for c.actualSize > c.MaxSize {
last := c.ll.Back()
if last == nil {
c.actualSize = 0
break
}
lastObj := last.Value.(plumbing.EncodedObject)
lastSize := FileSize(lastObj.Size())
c.ll.Remove(last)
delete(c.cache, lastObj.Hash())
c.actualSize -= lastSize
}
}
// Get returns an object by its hash. It marks the object as used. If the object
// is not in the cache, (nil, false) will be returned.
func (c *ObjectLRU) Get(k plumbing.Hash) (plumbing.EncodedObject, bool) {
c.mut.Lock()
defer c.mut.Unlock()
ee, ok := c.cache[k]
if !ok {
return nil, false
}
c.ll.MoveToFront(ee)
return ee.Value.(plumbing.EncodedObject), true
}
// Clear the content of this object cache.
func (c *ObjectLRU) Clear() {
c.mut.Lock()
defer c.mut.Unlock()
c.ll = nil
c.cache = nil
c.actualSize = 0
}

View File

@ -0,0 +1,38 @@
package color
// TODO read colors from a github.com/go-git/go-git/plumbing/format/config.Config struct
// TODO implement color parsing, see https://github.com/git/git/blob/v2.26.2/color.c
// Colors. See https://github.com/git/git/blob/v2.26.2/color.h#L24-L53.
const (
Normal = ""
Reset = "\033[m"
Bold = "\033[1m"
Red = "\033[31m"
Green = "\033[32m"
Yellow = "\033[33m"
Blue = "\033[34m"
Magenta = "\033[35m"
Cyan = "\033[36m"
BoldRed = "\033[1;31m"
BoldGreen = "\033[1;32m"
BoldYellow = "\033[1;33m"
BoldBlue = "\033[1;34m"
BoldMagenta = "\033[1;35m"
BoldCyan = "\033[1;36m"
FaintRed = "\033[2;31m"
FaintGreen = "\033[2;32m"
FaintYellow = "\033[2;33m"
FaintBlue = "\033[2;34m"
FaintMagenta = "\033[2;35m"
FaintCyan = "\033[2;36m"
BgRed = "\033[41m"
BgGreen = "\033[42m"
BgYellow = "\033[43m"
BgBlue = "\033[44m"
BgMagenta = "\033[45m"
BgCyan = "\033[46m"
Faint = "\033[2m"
FaintItalic = "\033[2;3m"
Reverse = "\033[7m"
)

35
vendor/github.com/go-git/go-git/v5/plumbing/error.go generated vendored Normal file
View File

@ -0,0 +1,35 @@
package plumbing
import "fmt"
type PermanentError struct {
Err error
}
func NewPermanentError(err error) *PermanentError {
if err == nil {
return nil
}
return &PermanentError{Err: err}
}
func (e *PermanentError) Error() string {
return fmt.Sprintf("permanent client error: %s", e.Err.Error())
}
type UnexpectedError struct {
Err error
}
func NewUnexpectedError(err error) *UnexpectedError {
if err == nil {
return nil
}
return &UnexpectedError{Err: err}
}
func (e *UnexpectedError) Error() string {
return fmt.Sprintf("unexpected client error: %s", e.Err.Error())
}

View File

@ -0,0 +1,188 @@
package filemode
import (
"encoding/binary"
"fmt"
"os"
"strconv"
)
// A FileMode represents the kind of tree entries used by git. It
// resembles regular file system modes, although FileModes are
// considerably simpler (there are not so many), and some,
// like Submodule, have no file system equivalent.
type FileMode uint32
const (
// Empty is used as the FileMode of tree elements when comparing
// trees in the following situations:
//
// - the mode of tree elements before their creation. - the mode of
// tree elements after their deletion. - the mode of unmerged
// elements when checking the index.
//
// Empty has no file system equivalent. As Empty is the zero value
// of FileMode, it is also returned by New and
// NewFromOSFileMode along with an error, when they fail.
Empty FileMode = 0
// Dir represent a Directory.
Dir FileMode = 0040000
// Regular represents non-executable files. Please note this is not
// the same as golang regular files, which include executable files.
Regular FileMode = 0100644
// Deprecated represents non-executable files with the group-writable
// bit set. This mode was supported by the first versions of git,
// but it is now deprecated. This library uses it
// internally, so you can read old packfiles, but such entries will be
// treated as Regular when interfacing with the outside world. This is the
// standard git behaviour.
Deprecated FileMode = 0100664
// Executable represents executable files.
Executable FileMode = 0100755
// Symlink represents symbolic links to files.
Symlink FileMode = 0120000
// Submodule represents git submodules. This mode has no file system
// equivalent.
Submodule FileMode = 0160000
)
// New takes the octal string representation of a FileMode and returns
// the FileMode and a nil error. If the string cannot be parsed to a
// 32 bit unsigned octal number, it returns Empty and the parsing error.
//
// Example: "40000" means Dir, "100644" means Regular.
//
// Please note this function does not check if the returned FileMode
// is valid in git or if it is malformed. For instance, "1" will
// return the malformed FileMode(1) and a nil error.
func New(s string) (FileMode, error) {
n, err := strconv.ParseUint(s, 8, 32)
if err != nil {
return Empty, err
}
return FileMode(n), nil
}
// NewFromOSFileMode returns the FileMode used by git to represent
// the provided file system modes and a nil error on success. If the
// file system mode cannot be mapped to any valid git mode (as with
// sockets or named pipes), it will return Empty and an error.
//
// Note that some git modes cannot be generated from os.FileModes, like
// Deprecated and Submodule; while Empty will be returned, along with an
// error, only when the method fails.
func NewFromOSFileMode(m os.FileMode) (FileMode, error) {
if m.IsRegular() {
if isSetTemporary(m) {
return Empty, fmt.Errorf("no equivalent git mode for %s", m)
}
if isSetCharDevice(m) {
return Empty, fmt.Errorf("no equivalent git mode for %s", m)
}
if isSetUserExecutable(m) {
return Executable, nil
}
return Regular, nil
}
if m.IsDir() {
return Dir, nil
}
if isSetSymLink(m) {
return Symlink, nil
}
return Empty, fmt.Errorf("no equivalent git mode for %s", m)
}
func isSetCharDevice(m os.FileMode) bool {
return m&os.ModeCharDevice != 0
}
func isSetTemporary(m os.FileMode) bool {
return m&os.ModeTemporary != 0
}
func isSetUserExecutable(m os.FileMode) bool {
return m&0100 != 0
}
func isSetSymLink(m os.FileMode) bool {
return m&os.ModeSymlink != 0
}
// Bytes returns a slice of 4 bytes with the mode in little endian
// encoding.
func (m FileMode) Bytes() []byte {
ret := make([]byte, 4)
binary.LittleEndian.PutUint32(ret, uint32(m))
return ret
}
// IsMalformed reports whether the FileMode should not appear in a git packfile,
// that is: Empty and any other mode not mentioned as a constant in this
// package.
func (m FileMode) IsMalformed() bool {
return m != Dir &&
m != Regular &&
m != Deprecated &&
m != Executable &&
m != Symlink &&
m != Submodule
}
// String returns the FileMode as a string in the standard git format,
// that is, an octal number padded with zeros to 7 digits. Malformed
// modes are printed in that same format, for easier debugging.
//
// Example: Regular is "0100644", Empty is "0000000".
func (m FileMode) String() string {
return fmt.Sprintf("%07o", uint32(m))
}
// IsRegular reports whether the FileMode represents a regular file,
// that is, either Regular or Deprecated. Please note that Executable files
// are not regular even though, in the UNIX tradition, they usually are;
// see the IsFile method.
func (m FileMode) IsRegular() bool {
return m == Regular ||
m == Deprecated
}
// IsFile reports whether the FileMode represents a file, that is,
// Regular, Deprecated, Executable or Symlink.
func (m FileMode) IsFile() bool {
return m == Regular ||
m == Deprecated ||
m == Executable ||
m == Symlink
}
// ToOSFileMode returns the os.FileMode to be used when creating file
// system elements with the given git mode and a nil error on success.
//
// When the provided mode cannot be mapped to a valid file system mode
// (e.g. Submodule) it returns os.FileMode(0) and an error.
//
// The returned file mode does not take into account the umask.
func (m FileMode) ToOSFileMode() (os.FileMode, error) {
switch m {
case Dir:
return os.ModePerm | os.ModeDir, nil
case Submodule:
return os.ModePerm | os.ModeDir, nil
case Regular:
return os.FileMode(0644), nil
// Deprecated is no longer allowed: treated as a Regular instead
case Deprecated:
return os.FileMode(0644), nil
case Executable:
return os.FileMode(0755), nil
case Symlink:
return os.ModePerm | os.ModeSymlink, nil
}
return os.FileMode(0), fmt.Errorf("malformed mode (%s)", m)
}
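A short sketch of parsing and converting modes with this package.
package example

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/filemode"
)

// demoFileMode parses an octal tree-entry mode and converts it to an os.FileMode.
func demoFileMode() {
	m, err := filemode.New("100755") // octal string, as stored in tree entries
	if err != nil {
		return
	}
	fmt.Println(m, m == filemode.Executable) // "0100755 true"
	if osMode, err := m.ToOSFileMode(); err == nil {
		fmt.Println(osMode) // "-rwxr-xr-x"
	}
}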

View File

@ -0,0 +1,109 @@
package config
// New creates a new config instance.
func New() *Config {
return &Config{}
}
// Config contains all the sections, comments and includes from a config file.
type Config struct {
Comment *Comment
Sections Sections
Includes Includes
}
// Includes is a list of Includes in a config file.
type Includes []*Include
// Include is a reference to an included config file.
type Include struct {
Path string
Config *Config
}
// Comment string without the prefix '#' or ';'.
type Comment string
const (
// NoSubsection token is passed to Config.Section and Config.SetSection to
// represent the absence of a section.
NoSubsection = ""
)
// Section returns an existing section with the given name or creates a new one.
func (c *Config) Section(name string) *Section {
for i := len(c.Sections) - 1; i >= 0; i-- {
s := c.Sections[i]
if s.IsName(name) {
return s
}
}
s := &Section{Name: name}
c.Sections = append(c.Sections, s)
return s
}
// HasSection checks if the Config has a section with the specified name.
func (c *Config) HasSection(name string) bool {
for _, s := range c.Sections {
if s.IsName(name) {
return true
}
}
return false
}
// RemoveSection removes a section from a config file.
func (c *Config) RemoveSection(name string) *Config {
result := Sections{}
for _, s := range c.Sections {
if !s.IsName(name) {
result = append(result, s)
}
}
c.Sections = result
return c
}
// RemoveSubsection removes a subsection from a config file.
func (c *Config) RemoveSubsection(section string, subsection string) *Config {
for _, s := range c.Sections {
if s.IsName(section) {
result := Subsections{}
for _, ss := range s.Subsections {
if !ss.IsName(subsection) {
result = append(result, ss)
}
}
s.Subsections = result
}
}
return c
}
// AddOption adds an option to a given section and subsection. Use the
// NoSubsection constant for the subsection argument if no subsection is wanted.
func (c *Config) AddOption(section string, subsection string, key string, value string) *Config {
if subsection == "" {
c.Section(section).AddOption(key, value)
} else {
c.Section(section).Subsection(subsection).AddOption(key, value)
}
return c
}
// SetOption sets an option to a given section and subsection. Use the
// NoSubsection constant for the subsection argument if no subsection is wanted.
func (c *Config) SetOption(section string, subsection string, key string, value string) *Config {
if subsection == "" {
c.Section(section).SetOption(key, value)
} else {
c.Section(section).Subsection(subsection).SetOption(key, value)
}
return c
}

View File

@ -0,0 +1,37 @@
package config
import (
"io"
"github.com/go-git/gcfg"
)
// A Decoder reads and decodes config files from an input stream.
type Decoder struct {
io.Reader
}
// NewDecoder returns a new decoder that reads from r.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r}
}
// Decode reads the whole config from its input and stores it in the
// value pointed to by config.
func (d *Decoder) Decode(config *Config) error {
cb := func(s string, ss string, k string, v string, bv bool) error {
if ss == "" && k == "" {
config.Section(s)
return nil
}
if ss != "" && k == "" {
config.Section(s).Subsection(ss)
return nil
}
config.AddOption(s, ss, k, v)
return nil
}
return gcfg.ReadWithCallback(d, cb)
}

View File

@ -0,0 +1,121 @@
// Package config implements encoding and decoding of git config files.
//
// Configuration File
// ------------------
//
// The Git configuration file contains a number of variables that affect
// the Git commands' behavior. The `.git/config` file in each repository
// is used to store the configuration for that repository, and
// `$HOME/.gitconfig` is used to store a per-user configuration as
// fallback values for the `.git/config` file. The file `/etc/gitconfig`
// can be used to store a system-wide default configuration.
//
// The configuration variables are used by both the Git plumbing
// and the porcelains. The variables are divided into sections, wherein
// the fully qualified variable name of the variable itself is the last
// dot-separated segment and the section name is everything before the last
// dot. The variable names are case-insensitive, allow only alphanumeric
// characters and `-`, and must start with an alphabetic character. Some
// variables may appear multiple times; we say then that the variable is
// multivalued.
//
// Syntax
// ~~~~~~
//
// The syntax is fairly flexible and permissive; whitespaces are mostly
// ignored. The '#' and ';' characters begin comments to the end of line,
// blank lines are ignored.
//
// The file consists of sections and variables. A section begins with
// the name of the section in square brackets and continues until the next
// section begins. Section names are case-insensitive. Only alphanumeric
// characters, `-` and `.` are allowed in section names. Each variable
// must belong to some section, which means that there must be a section
// header before the first setting of a variable.
//
// Sections can be further divided into subsections. To begin a subsection
// put its name in double quotes, separated by space from the section name,
// in the section header, like in the example below:
//
// --------
// [section "subsection"]
//
// --------
//
// Subsection names are case sensitive and can contain any characters except
// newline (doublequote `"` and backslash can be included by escaping them
// as `\"` and `\\`, respectively). Section headers cannot span multiple
// lines. Variables may belong directly to a section or to a given subsection.
// You can have `[section]` if you have `[section "subsection"]`, but you
// don't need to.
//
// There is also a deprecated `[section.subsection]` syntax. With this
// syntax, the subsection name is converted to lower-case and is also
// compared case sensitively. These subsection names follow the same
// restrictions as section names.
//
// All the other lines (and the remainder of the line after the section
// header) are recognized as setting variables, in the form
// 'name = value' (or just 'name', which is a short-hand to say that
// the variable is the boolean "true").
// The variable names are case-insensitive, allow only alphanumeric characters
// and `-`, and must start with an alphabetic character.
//
// A line that defines a value can be continued to the next line by
// ending it with a `\`; the backslash and the end-of-line are
// stripped. Leading whitespaces after 'name =', the remainder of the
// line after the first comment character '#' or ';', and trailing
// whitespaces of the line are discarded unless they are enclosed in
// double quotes. Internal whitespaces within the value are retained
// verbatim.
//
// Inside double quotes, double quote `"` and backslash `\` characters
// must be escaped: use `\"` for `"` and `\\` for `\`.
//
// The following escape sequences (beside `\"` and `\\`) are recognized:
// `\n` for newline character (NL), `\t` for horizontal tabulation (HT, TAB)
// and `\b` for backspace (BS). Other char escape sequences (including octal
// escape sequences) are invalid.
//
// Includes
// ~~~~~~~~
//
// You can include one config file from another by setting the special
// `include.path` variable to the name of the file to be included. The
// variable takes a pathname as its value, and is subject to tilde
// expansion.
//
// The included file is expanded immediately, as if its contents had been
// found at the location of the include directive. If the value of the
// `include.path` variable is a relative path, the path is considered to be
// relative to the configuration file in which the include directive was
// found. See below for examples.
//
//
// Example
// ~~~~~~~
//
// # Core variables
// [core]
// ; Don't trust file modes
// filemode = false
//
// # Our diff algorithm
// [diff]
// external = /usr/local/bin/diff-wrapper
// renames = true
//
// [branch "devel"]
// remote = origin
// merge = refs/heads/devel
//
// # Proxy settings
// [core]
// gitProxy="ssh" for "kernel.org"
// gitProxy=default-proxy ; for the rest
//
// [include]
// path = /path/to/foo.inc ; include by absolute path
// path = foo ; expand "foo" relative to the current file
// path = ~/foo ; expand "foo" in your `$HOME` directory
package config

View File

@ -0,0 +1,83 @@
package config
import (
"fmt"
"io"
"strings"
)
// An Encoder writes config files to an output stream.
type Encoder struct {
w io.Writer
}
var (
subsectionReplacer = strings.NewReplacer(`"`, `\"`, `\`, `\\`)
valueReplacer = strings.NewReplacer(`"`, `\"`, `\`, `\\`, "\n", `\n`, "\t", `\t`, "\b", `\b`)
)
// NewEncoder returns a new encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{w}
}
// Encode writes the config in git config format to the stream of the encoder.
func (e *Encoder) Encode(cfg *Config) error {
for _, s := range cfg.Sections {
if err := e.encodeSection(s); err != nil {
return err
}
}
return nil
}
func (e *Encoder) encodeSection(s *Section) error {
if len(s.Options) > 0 {
if err := e.printf("[%s]\n", s.Name); err != nil {
return err
}
if err := e.encodeOptions(s.Options); err != nil {
return err
}
}
for _, ss := range s.Subsections {
if err := e.encodeSubsection(s.Name, ss); err != nil {
return err
}
}
return nil
}
func (e *Encoder) encodeSubsection(sectionName string, s *Subsection) error {
if err := e.printf("[%s \"%s\"]\n", sectionName, subsectionReplacer.Replace(s.Name)); err != nil {
return err
}
return e.encodeOptions(s.Options)
}
func (e *Encoder) encodeOptions(opts Options) error {
for _, o := range opts {
var value string
if strings.ContainsAny(o.Value, "#;\"\t\n\\") || strings.HasPrefix(o.Value, " ") || strings.HasSuffix(o.Value, " ") {
value = `"` + valueReplacer.Replace(o.Value) + `"`
} else {
value = o.Value
}
if err := e.printf("\t%s = %s\n", o.Key, value); err != nil {
return err
}
}
return nil
}
func (e *Encoder) printf(msg string, args ...interface{}) error {
_, err := fmt.Fprintf(e.w, msg, args...)
return err
}
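A round-trip sketch using the Config, Encoder and Decoder types in this package; the section and option names are illustrative.
package example

import (
	"bytes"
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/format/config"
)

// roundTrip builds a config, encodes it to git config syntax and decodes it back.
func roundTrip() error {
	cfg := config.New()
	cfg.SetOption("core", config.NoSubsection, "filemode", "false")
	cfg.SetOption("remote", "origin", "url", "https://example.com/repo.git")

	var buf bytes.Buffer
	if err := config.NewEncoder(&buf).Encode(cfg); err != nil {
		return err
	}

	decoded := config.New()
	if err := config.NewDecoder(&buf).Decode(decoded); err != nil {
		return err
	}
	fmt.Println(decoded.Section("remote").Subsection("origin").Option("url"))
	return nil
}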

View File

@ -0,0 +1,53 @@
package config
// RepositoryFormatVersion represents the repository format version,
// as defined at:
//
// https://git-scm.com/docs/repository-version
type RepositoryFormatVersion string
const (
// Version_0 is the format defined by the initial version of git,
// including but not limited to the format of the repository
// directory, the repository configuration file, and the object
// and ref storage.
//
// Specifying the complete behavior of git is beyond the scope
// of this document.
Version_0 = "0"
// Version_1 is identical to version 0, with the following exceptions:
//
// 1. When reading the core.repositoryformatversion variable, a git
// implementation which supports version 1 MUST also read any
// configuration keys found in the extensions section of the
// configuration file.
//
// 2. If a version-1 repository specifies any extensions.* keys that
// the running git has not implemented, the operation MUST NOT proceed.
// Similarly, if the value of any known key is not understood by the
// implementation, the operation MUST NOT proceed.
//
// Note that if no extensions are specified in the config file, then
// core.repositoryformatversion SHOULD be set to 0 (setting it to 1 provides
// no benefit, and makes the repository incompatible with older
// implementations of git).
Version_1 = "1"
// DefaultRepositoryFormatVersion holds the default repository format version.
DefaultRepositoryFormatVersion = Version_0
)
// ObjectFormat defines the object format.
type ObjectFormat string
const (
// SHA1 represents the object format used for SHA1.
SHA1 ObjectFormat = "sha1"
// SHA256 represents the object format used for SHA256.
SHA256 ObjectFormat = "sha256"
// DefaultObjectFormat holds the default object format.
DefaultObjectFormat = SHA1
)

View File

@ -0,0 +1,127 @@
package config
import (
"fmt"
"strings"
)
// Option defines a key/value entity in a config file.
type Option struct {
// Key preserving its original case.
// Use IsKey instead to compare keys regardless of case.
Key string
// Original value as string; it may not be normalized.
Value string
}
type Options []*Option
// IsKey returns true if the given key matches
// this option's key in a case-insensitive comparison.
func (o *Option) IsKey(key string) bool {
return strings.EqualFold(o.Key, key)
}
func (opts Options) GoString() string {
var strs []string
for _, opt := range opts {
strs = append(strs, fmt.Sprintf("%#v", opt))
}
return strings.Join(strs, ", ")
}
// Get gets the value for the given key if set,
// otherwise it returns the empty string.
//
// Note that there is no way to distinguish an unset key from a key set to the empty string.
//
// This matches git behaviour since git v1.8.1-rc1,
// if there are multiple definitions of a key, the
// last one wins.
//
// See: http://article.gmane.org/gmane.linux.kernel/1407184
//
// In order to get all possible values for the same key,
// use GetAll.
func (opts Options) Get(key string) string {
for i := len(opts) - 1; i >= 0; i-- {
o := opts[i]
if o.IsKey(key) {
return o.Value
}
}
return ""
}
// Has checks if an Option exists with the given key.
func (opts Options) Has(key string) bool {
for _, o := range opts {
if o.IsKey(key) {
return true
}
}
return false
}
// GetAll returns all possible values for the same key.
func (opts Options) GetAll(key string) []string {
result := []string{}
for _, o := range opts {
if o.IsKey(key) {
result = append(result, o.Value)
}
}
return result
}
func (opts Options) withoutOption(key string) Options {
result := Options{}
for _, o := range opts {
if !o.IsKey(key) {
result = append(result, o)
}
}
return result
}
func (opts Options) withAddedOption(key string, value string) Options {
return append(opts, &Option{key, value})
}
func (opts Options) withSettedOption(key string, values ...string) Options {
var result Options
var added []string
for _, o := range opts {
if !o.IsKey(key) {
result = append(result, o)
continue
}
if contains(values, o.Value) {
added = append(added, o.Value)
result = append(result, o)
continue
}
}
for _, value := range values {
if contains(added, value) {
continue
}
result = result.withAddedOption(key, value)
}
return result
}
func contains(haystack []string, needle string) bool {
for _, s := range haystack {
if s == needle {
return true
}
}
return false
}

View File

@ -0,0 +1,180 @@
package config
import (
"fmt"
"strings"
)
// Section is the representation of a section inside git configuration files.
// Each Section contains Options that are used by both the Git plumbing
// and the porcelains.
// Sections can be further divided into subsections. To begin a subsection
// put its name in double quotes, separated by space from the section name,
// in the section header, like in the example below:
//
// [section "subsection"]
//
// All the other lines (and the remainder of the line after the section header)
// are recognized as option variables, in the form "name = value" (or just name,
// which is a short-hand to say that the variable is the boolean "true").
// The variable names are case-insensitive, allow only alphanumeric characters
// and -, and must start with an alphabetic character:
//
// [section "subsection1"]
// option1 = value1
// option2
// [section "subsection2"]
// option3 = value2
type Section struct {
Name string
Options Options
Subsections Subsections
}
type Subsection struct {
Name string
Options Options
}
type Sections []*Section
func (s Sections) GoString() string {
var strs []string
for _, ss := range s {
strs = append(strs, fmt.Sprintf("%#v", ss))
}
return strings.Join(strs, ", ")
}
type Subsections []*Subsection
func (s Subsections) GoString() string {
var strs []string
for _, ss := range s {
strs = append(strs, fmt.Sprintf("%#v", ss))
}
return strings.Join(strs, ", ")
}
// IsName checks if the provided name is equal to the Section name, case-insensitively.
func (s *Section) IsName(name string) bool {
return strings.EqualFold(s.Name, name)
}
// Subsection returns a Subsection from the specified Section. If the
// Subsection does not exist, a new one is created and added to the Section.
func (s *Section) Subsection(name string) *Subsection {
for i := len(s.Subsections) - 1; i >= 0; i-- {
ss := s.Subsections[i]
if ss.IsName(name) {
return ss
}
}
ss := &Subsection{Name: name}
s.Subsections = append(s.Subsections, ss)
return ss
}
// HasSubsection checks if the Section has a Subsection with the specified name.
func (s *Section) HasSubsection(name string) bool {
for _, ss := range s.Subsections {
if ss.IsName(name) {
return true
}
}
return false
}
// RemoveSubsection removes a subsection from a Section.
func (s *Section) RemoveSubsection(name string) *Section {
result := Subsections{}
for _, s := range s.Subsections {
if !s.IsName(name) {
result = append(result, s)
}
}
s.Subsections = result
return s
}
// Option returns the value for the specified key. An empty string is returned
// if the key does not exist.
func (s *Section) Option(key string) string {
return s.Options.Get(key)
}
// OptionAll returns all possible values for an option with the specified key.
// If the option does not exist, an empty slice will be returned.
func (s *Section) OptionAll(key string) []string {
return s.Options.GetAll(key)
}
// HasOption checks if the Section has an Option with the given key.
func (s *Section) HasOption(key string) bool {
return s.Options.Has(key)
}
// AddOption adds a new Option to the Section. The updated Section is returned.
func (s *Section) AddOption(key string, value string) *Section {
s.Options = s.Options.withAddedOption(key, value)
return s
}
// SetOption adds a new Option to the Section. If the option already exists, it is replaced.
// The updated Section is returned.
func (s *Section) SetOption(key string, value string) *Section {
s.Options = s.Options.withSettedOption(key, value)
return s
}
// RemoveOption removes the option with the specified key. The updated Section is returned.
func (s *Section) RemoveOption(key string) *Section {
s.Options = s.Options.withoutOption(key)
return s
}
// IsName checks if the name of the subsection is exactly the specified name.
func (s *Subsection) IsName(name string) bool {
return s.Name == name
}
// Option returns the value of the option with the specified key. If the option
// does not exist, an empty string will be returned.
func (s *Subsection) Option(key string) string {
return s.Options.Get(key)
}
// OptionAll returns all possible values for an option with the specified key.
// If the option does not exist, an empty slice will be returned.
func (s *Subsection) OptionAll(key string) []string {
return s.Options.GetAll(key)
}
// HasOption checks if the Subsection has an Option with the given key.
func (s *Subsection) HasOption(key string) bool {
return s.Options.Has(key)
}
// AddOption adds a new Option to the Subsection. The updated Subsection is returned.
func (s *Subsection) AddOption(key string, value string) *Subsection {
s.Options = s.Options.withAddedOption(key, value)
return s
}
// SetOption adds a new Option to the Subsection. If the option already exists, it is replaced.
// The updated Subsection is returned.
func (s *Subsection) SetOption(key string, value ...string) *Subsection {
s.Options = s.Options.withSettedOption(key, value...)
return s
}
// RemoveOption removes the option with the specified key. The updated Subsection is returned.
func (s *Subsection) RemoveOption(key string) *Subsection {
s.Options = s.Options.withoutOption(key)
return s
}

View File

@ -0,0 +1,97 @@
package diff
import "github.com/go-git/go-git/v5/plumbing/color"
// A ColorKey is a key into a ColorConfig map and also equal to the key in the
// diff.color subsection of the config. See
// https://github.com/git/git/blob/v2.26.2/diff.c#L83-L106.
type ColorKey string
// ColorKeys.
const (
Context ColorKey = "context"
Meta ColorKey = "meta"
Frag ColorKey = "frag"
Old ColorKey = "old"
New ColorKey = "new"
Commit ColorKey = "commit"
Whitespace ColorKey = "whitespace"
Func ColorKey = "func"
OldMoved ColorKey = "oldMoved"
OldMovedAlternative ColorKey = "oldMovedAlternative"
OldMovedDimmed ColorKey = "oldMovedDimmed"
OldMovedAlternativeDimmed ColorKey = "oldMovedAlternativeDimmed"
NewMoved ColorKey = "newMoved"
NewMovedAlternative ColorKey = "newMovedAlternative"
NewMovedDimmed ColorKey = "newMovedDimmed"
NewMovedAlternativeDimmed ColorKey = "newMovedAlternativeDimmed"
ContextDimmed ColorKey = "contextDimmed"
OldDimmed ColorKey = "oldDimmed"
NewDimmed ColorKey = "newDimmed"
ContextBold ColorKey = "contextBold"
OldBold ColorKey = "oldBold"
NewBold ColorKey = "newBold"
)
// A ColorConfig is a color configuration. A nil or empty ColorConfig
// corresponds to no color.
type ColorConfig map[ColorKey]string
// A ColorConfigOption sets an option on a ColorConfig.
type ColorConfigOption func(ColorConfig)
// WithColor sets the color for key.
func WithColor(key ColorKey, color string) ColorConfigOption {
return func(cc ColorConfig) {
cc[key] = color
}
}
// defaultColorConfig is the default color configuration. See
// https://github.com/git/git/blob/v2.26.2/diff.c#L57-L81.
var defaultColorConfig = ColorConfig{
Context: color.Normal,
Meta: color.Bold,
Frag: color.Cyan,
Old: color.Red,
New: color.Green,
Commit: color.Yellow,
Whitespace: color.BgRed,
Func: color.Normal,
OldMoved: color.BoldMagenta,
OldMovedAlternative: color.BoldBlue,
OldMovedDimmed: color.Faint,
OldMovedAlternativeDimmed: color.FaintItalic,
NewMoved: color.BoldCyan,
NewMovedAlternative: color.BoldYellow,
NewMovedDimmed: color.Faint,
NewMovedAlternativeDimmed: color.FaintItalic,
ContextDimmed: color.Faint,
OldDimmed: color.FaintRed,
NewDimmed: color.FaintGreen,
ContextBold: color.Bold,
OldBold: color.BoldRed,
NewBold: color.BoldGreen,
}
// NewColorConfig returns a new ColorConfig.
func NewColorConfig(options ...ColorConfigOption) ColorConfig {
cc := make(ColorConfig)
for key, value := range defaultColorConfig {
cc[key] = value
}
for _, option := range options {
option(cc)
}
return cc
}
// Reset returns the ANSI escape sequence to reset the color with key set from
// cc. If no color was set then no reset is needed so it returns the empty
// string.
func (cc ColorConfig) Reset(key ColorKey) string {
if cc[key] == "" {
return ""
}
return color.Reset
}

View File

@ -0,0 +1,58 @@
package diff
import (
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
)
// Operation defines the operation of a diff item.
type Operation int
const (
// Equal item represents an equals diff.
Equal Operation = iota
// Add item represents an insert diff.
Add
// Delete item represents a delete diff.
Delete
)
// Patch represents a collection of steps to transform several files.
type Patch interface {
// FilePatches returns a slice of patches per file.
FilePatches() []FilePatch
// Message returns an optional message that can be at the top of the
// Patch representation.
Message() string
}
// FilePatch represents the necessary steps to transform one file into another.
type FilePatch interface {
// IsBinary returns true if this patch is representing a binary file.
IsBinary() bool
// Files returns the from and to Files, with all the necessary metadata
// about them. If the patch creates a new file, "from" will be nil.
// If the patch deletes a file, "to" will be nil.
Files() (from, to File)
// Chunks returns a slice of ordered changes to transform "from" File into
// "to" File. If the file is a binary one, Chunks will be empty.
Chunks() []Chunk
}
// File contains all the file metadata necessary to print some patch formats.
type File interface {
// Hash returns the File Hash.
Hash() plumbing.Hash
// Mode returns the FileMode.
Mode() filemode.FileMode
// Path returns the complete Path to the file, including the filename.
Path() string
}
// Chunk represents a portion of a file transformation into another.
type Chunk interface {
// Content contains the portion of the file.
Content() string
// Type contains the Operation to do with this Chunk.
Type() Operation
}

View File

@ -0,0 +1,395 @@
package diff
import (
"fmt"
"io"
"regexp"
"strconv"
"strings"
"github.com/go-git/go-git/v5/plumbing"
)
// DefaultContextLines is the default number of context lines.
const DefaultContextLines = 3
var (
splitLinesRegexp = regexp.MustCompile(`[^\n]*(\n|$)`)
operationChar = map[Operation]byte{
Add: '+',
Delete: '-',
Equal: ' ',
}
operationColorKey = map[Operation]ColorKey{
Add: New,
Delete: Old,
Equal: Context,
}
)
// UnifiedEncoder encodes a unified diff into the provided Writer. It does not
// support similarity index for renames or sorting hash representations.
type UnifiedEncoder struct {
io.Writer
// contextLines is the count of unchanged lines that will appear surrounding
// a change.
contextLines int
// srcPrefix and dstPrefix are prepended to file paths when encoding a diff.
srcPrefix string
dstPrefix string
// colorConfig is the color configuration. The default is no color.
color ColorConfig
}
// NewUnifiedEncoder returns a new UnifiedEncoder that writes to w.
func NewUnifiedEncoder(w io.Writer, contextLines int) *UnifiedEncoder {
return &UnifiedEncoder{
Writer: w,
srcPrefix: "a/",
dstPrefix: "b/",
contextLines: contextLines,
}
}
// SetColor sets e's color configuration and returns e.
func (e *UnifiedEncoder) SetColor(colorConfig ColorConfig) *UnifiedEncoder {
e.color = colorConfig
return e
}
// SetSrcPrefix sets e's srcPrefix and returns e.
func (e *UnifiedEncoder) SetSrcPrefix(prefix string) *UnifiedEncoder {
e.srcPrefix = prefix
return e
}
// SetDstPrefix sets e's dstPrefix and returns e.
func (e *UnifiedEncoder) SetDstPrefix(prefix string) *UnifiedEncoder {
e.dstPrefix = prefix
return e
}
// Encode encodes patch.
func (e *UnifiedEncoder) Encode(patch Patch) error {
sb := &strings.Builder{}
if message := patch.Message(); message != "" {
sb.WriteString(message)
if !strings.HasSuffix(message, "\n") {
sb.WriteByte('\n')
}
}
for _, filePatch := range patch.FilePatches() {
e.writeFilePatchHeader(sb, filePatch)
g := newHunksGenerator(filePatch.Chunks(), e.contextLines)
for _, hunk := range g.Generate() {
hunk.writeTo(sb, e.color)
}
}
_, err := e.Write([]byte(sb.String()))
return err
}
func (e *UnifiedEncoder) writeFilePatchHeader(sb *strings.Builder, filePatch FilePatch) {
from, to := filePatch.Files()
if from == nil && to == nil {
return
}
isBinary := filePatch.IsBinary()
var lines []string
switch {
case from != nil && to != nil:
hashEquals := from.Hash() == to.Hash()
lines = append(lines,
fmt.Sprintf("diff --git %s%s %s%s",
e.srcPrefix, from.Path(), e.dstPrefix, to.Path()),
)
if from.Mode() != to.Mode() {
lines = append(lines,
fmt.Sprintf("old mode %o", from.Mode()),
fmt.Sprintf("new mode %o", to.Mode()),
)
}
if from.Path() != to.Path() {
lines = append(lines,
fmt.Sprintf("rename from %s", from.Path()),
fmt.Sprintf("rename to %s", to.Path()),
)
}
if from.Mode() != to.Mode() && !hashEquals {
lines = append(lines,
fmt.Sprintf("index %s..%s", from.Hash(), to.Hash()),
)
} else if !hashEquals {
lines = append(lines,
fmt.Sprintf("index %s..%s %o", from.Hash(), to.Hash(), from.Mode()),
)
}
if !hashEquals {
lines = e.appendPathLines(lines, e.srcPrefix+from.Path(), e.dstPrefix+to.Path(), isBinary)
}
case from == nil:
lines = append(lines,
fmt.Sprintf("diff --git %s %s", e.srcPrefix+to.Path(), e.dstPrefix+to.Path()),
fmt.Sprintf("new file mode %o", to.Mode()),
fmt.Sprintf("index %s..%s", plumbing.ZeroHash, to.Hash()),
)
lines = e.appendPathLines(lines, "/dev/null", e.dstPrefix+to.Path(), isBinary)
case to == nil:
lines = append(lines,
fmt.Sprintf("diff --git %s %s", e.srcPrefix+from.Path(), e.dstPrefix+from.Path()),
fmt.Sprintf("deleted file mode %o", from.Mode()),
fmt.Sprintf("index %s..%s", from.Hash(), plumbing.ZeroHash),
)
lines = e.appendPathLines(lines, e.srcPrefix+from.Path(), "/dev/null", isBinary)
}
sb.WriteString(e.color[Meta])
sb.WriteString(lines[0])
for _, line := range lines[1:] {
sb.WriteByte('\n')
sb.WriteString(line)
}
sb.WriteString(e.color.Reset(Meta))
sb.WriteByte('\n')
}
func (e *UnifiedEncoder) appendPathLines(lines []string, fromPath, toPath string, isBinary bool) []string {
if isBinary {
return append(lines,
fmt.Sprintf("Binary files %s and %s differ", fromPath, toPath),
)
}
return append(lines,
fmt.Sprintf("--- %s", fromPath),
fmt.Sprintf("+++ %s", toPath),
)
}
type hunksGenerator struct {
fromLine, toLine int
ctxLines int
chunks []Chunk
current *hunk
hunks []*hunk
beforeContext, afterContext []string
}
func newHunksGenerator(chunks []Chunk, ctxLines int) *hunksGenerator {
return &hunksGenerator{
chunks: chunks,
ctxLines: ctxLines,
}
}
func (g *hunksGenerator) Generate() []*hunk {
for i, chunk := range g.chunks {
lines := splitLines(chunk.Content())
nLines := len(lines)
switch chunk.Type() {
case Equal:
g.fromLine += nLines
g.toLine += nLines
g.processEqualsLines(lines, i)
case Delete:
if nLines != 0 {
g.fromLine++
}
g.processHunk(i, chunk.Type())
g.fromLine += nLines - 1
g.current.AddOp(chunk.Type(), lines...)
case Add:
if nLines != 0 {
g.toLine++
}
g.processHunk(i, chunk.Type())
g.toLine += nLines - 1
g.current.AddOp(chunk.Type(), lines...)
}
if i == len(g.chunks)-1 && g.current != nil {
g.hunks = append(g.hunks, g.current)
}
}
return g.hunks
}
func (g *hunksGenerator) processHunk(i int, op Operation) {
if g.current != nil {
return
}
var ctxPrefix string
linesBefore := len(g.beforeContext)
if linesBefore > g.ctxLines {
ctxPrefix = g.beforeContext[linesBefore-g.ctxLines-1]
g.beforeContext = g.beforeContext[linesBefore-g.ctxLines:]
linesBefore = g.ctxLines
}
g.current = &hunk{ctxPrefix: strings.TrimSuffix(ctxPrefix, "\n")}
g.current.AddOp(Equal, g.beforeContext...)
switch op {
case Delete:
g.current.fromLine, g.current.toLine =
g.addLineNumbers(g.fromLine, g.toLine, linesBefore, i, Add)
case Add:
g.current.toLine, g.current.fromLine =
g.addLineNumbers(g.toLine, g.fromLine, linesBefore, i, Delete)
}
g.beforeContext = nil
}
// addLineNumbers obtains the starting line numbers for a new hunk.
func (g *hunksGenerator) addLineNumbers(la, lb int, linesBefore int, i int, op Operation) (cla, clb int) {
cla = la - linesBefore
// we need to search for a reference for the next diff
switch {
case linesBefore != 0 && g.ctxLines != 0:
if lb > g.ctxLines {
clb = lb - g.ctxLines + 1
} else {
clb = 1
}
case g.ctxLines == 0:
clb = lb
case i != len(g.chunks)-1:
next := g.chunks[i+1]
if next.Type() == op || next.Type() == Equal {
// the next diff will fall within this hunk
clb = lb + 1
}
}
return
}
func (g *hunksGenerator) processEqualsLines(ls []string, i int) {
if g.current == nil {
g.beforeContext = append(g.beforeContext, ls...)
return
}
g.afterContext = append(g.afterContext, ls...)
if len(g.afterContext) <= g.ctxLines*2 && i != len(g.chunks)-1 {
g.current.AddOp(Equal, g.afterContext...)
g.afterContext = nil
} else {
ctxLines := g.ctxLines
if ctxLines > len(g.afterContext) {
ctxLines = len(g.afterContext)
}
g.current.AddOp(Equal, g.afterContext[:ctxLines]...)
g.hunks = append(g.hunks, g.current)
g.current = nil
g.beforeContext = g.afterContext[ctxLines:]
g.afterContext = nil
}
}
func splitLines(s string) []string {
out := splitLinesRegexp.FindAllString(s, -1)
if out[len(out)-1] == "" {
out = out[:len(out)-1]
}
return out
}
type hunk struct {
fromLine int
toLine int
fromCount int
toCount int
ctxPrefix string
ops []*op
}
func (h *hunk) writeTo(sb *strings.Builder, color ColorConfig) {
sb.WriteString(color[Frag])
sb.WriteString("@@ -")
if h.fromCount == 1 {
sb.WriteString(strconv.Itoa(h.fromLine))
} else {
sb.WriteString(strconv.Itoa(h.fromLine))
sb.WriteByte(',')
sb.WriteString(strconv.Itoa(h.fromCount))
}
sb.WriteString(" +")
if h.toCount == 1 {
sb.WriteString(strconv.Itoa(h.toLine))
} else {
sb.WriteString(strconv.Itoa(h.toLine))
sb.WriteByte(',')
sb.WriteString(strconv.Itoa(h.toCount))
}
sb.WriteString(" @@")
sb.WriteString(color.Reset(Frag))
if h.ctxPrefix != "" {
sb.WriteByte(' ')
sb.WriteString(color[Func])
sb.WriteString(h.ctxPrefix)
sb.WriteString(color.Reset(Func))
}
sb.WriteByte('\n')
for _, op := range h.ops {
op.writeTo(sb, color)
}
}
func (h *hunk) AddOp(t Operation, ss ...string) {
n := len(ss)
switch t {
case Add:
h.toCount += n
case Delete:
h.fromCount += n
case Equal:
h.toCount += n
h.fromCount += n
}
for _, s := range ss {
h.ops = append(h.ops, &op{s, t})
}
}
type op struct {
text string
t Operation
}
func (o *op) writeTo(sb *strings.Builder, color ColorConfig) {
colorKey := operationColorKey[o.t]
sb.WriteString(color[colorKey])
sb.WriteByte(operationChar[o.t])
if strings.HasSuffix(o.text, "\n") {
sb.WriteString(strings.TrimSuffix(o.text, "\n"))
} else {
sb.WriteString(o.text + "\n\\ No newline at end of file")
}
sb.WriteString(color.Reset(colorKey))
sb.WriteByte('\n')
}

View File

@ -0,0 +1,144 @@
package gitignore
import (
"bufio"
"bytes"
"io"
"os"
"strings"
"github.com/go-git/go-billy/v5"
"github.com/go-git/go-git/v5/internal/path_util"
"github.com/go-git/go-git/v5/plumbing/format/config"
gioutil "github.com/go-git/go-git/v5/utils/ioutil"
)
const (
commentPrefix = "#"
coreSection = "core"
excludesfile = "excludesfile"
gitDir = ".git"
gitignoreFile = ".gitignore"
gitconfigFile = ".gitconfig"
systemFile = "/etc/gitconfig"
infoExcludeFile = gitDir + "/info/exclude"
)
// readIgnoreFile reads a specific git ignore file.
func readIgnoreFile(fs billy.Filesystem, path []string, ignoreFile string) (ps []Pattern, err error) {
ignoreFile, _ = path_util.ReplaceTildeWithHome(ignoreFile)
f, err := fs.Open(fs.Join(append(path, ignoreFile)...))
if err == nil {
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
s := scanner.Text()
if !strings.HasPrefix(s, commentPrefix) && len(strings.TrimSpace(s)) > 0 {
ps = append(ps, ParsePattern(s, path))
}
}
} else if !os.IsNotExist(err) {
return nil, err
}
return
}
// ReadPatterns reads the .git/info/exclude and then the gitignore patterns,
// recursively traversing the directory structure. The result is in
// ascending order of priority (patterns read later have higher priority).
func ReadPatterns(fs billy.Filesystem, path []string) (ps []Pattern, err error) {
ps, _ = readIgnoreFile(fs, path, infoExcludeFile)
subps, _ := readIgnoreFile(fs, path, gitignoreFile)
ps = append(ps, subps...)
var fis []os.FileInfo
fis, err = fs.ReadDir(fs.Join(path...))
if err != nil {
return
}
for _, fi := range fis {
if fi.IsDir() && fi.Name() != gitDir {
var subps []Pattern
subps, err = ReadPatterns(fs, append(path, fi.Name()))
if err != nil {
return
}
if len(subps) > 0 {
ps = append(ps, subps...)
}
}
}
return
}
func loadPatterns(fs billy.Filesystem, path string) (ps []Pattern, err error) {
f, err := fs.Open(path)
if err != nil {
if os.IsNotExist(err) {
return nil, nil
}
return nil, err
}
defer gioutil.CheckClose(f, &err)
b, err := io.ReadAll(f)
if err != nil {
return
}
d := config.NewDecoder(bytes.NewBuffer(b))
raw := config.New()
if err = d.Decode(raw); err != nil {
return
}
s := raw.Section(coreSection)
efo := s.Options.Get(excludesfile)
if efo == "" {
return nil, nil
}
ps, err = readIgnoreFile(fs, nil, efo)
if os.IsNotExist(err) {
return nil, nil
}
return
}
// LoadGlobalPatterns loads gitignore patterns from the gitignore file
// declared in a user's ~/.gitconfig file. If the ~/.gitconfig file does not
// exist the function will return nil. If the core.excludesfile property
// is not declared, the function will return nil. If the file pointed to by
// the core.excludesfile property does not exist, the function will return nil.
//
// The function assumes fs is rooted at the root filesystem.
func LoadGlobalPatterns(fs billy.Filesystem) (ps []Pattern, err error) {
home, err := os.UserHomeDir()
if err != nil {
return
}
return loadPatterns(fs, fs.Join(home, gitconfigFile))
}
// LoadSystemPatterns loads gitignore patterns from the gitignore file
// declared in a system's /etc/gitconfig file. If the /etc/gitconfig file does
// not exist the function will return nil. If the core.excludesfile property
// is not declared, the function will return nil. If the file pointed to by
// the core.excludesfile property does not exist, the function will return nil.
//
// The function assumes fs is rooted at the root filesystem.
func LoadSystemPatterns(fs billy.Filesystem) (ps []Pattern, err error) {
return loadPatterns(fs, systemFile)
}
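
A minimal usage sketch (not part of the vendored sources), assuming go-billy's osfs package for the filesystem and a worktree rooted in the current directory; the path components passed to Match are illustrative.

package main

import (
    "fmt"

    "github.com/go-git/go-billy/v5/osfs"
    "github.com/go-git/go-git/v5/plumbing/format/gitignore"
)

func main() {
    fs := osfs.New(".") // worktree root
    // Collect .git/info/exclude plus every .gitignore found in the tree.
    ps, err := gitignore.ReadPatterns(fs, nil)
    if err != nil {
        panic(err)
    }
    m := gitignore.NewMatcher(ps)
    // Paths are passed as slices of components, not as strings.
    fmt.Println(m.Match([]string{"vendor", "generated.go"}, false))
}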

View File

@ -0,0 +1,70 @@
// Package gitignore implements matching file system paths to gitignore patterns that
// can be automatically read from a git repository tree in the order of definition
// priorities. It supports all pattern formats as specified in the original gitignore
// documentation, copied below:
//
// Pattern format
// ==============
//
// - A blank line matches no files, so it can serve as a separator for readability.
//
// - A line starting with # serves as a comment. Put a backslash ("\") in front of
// the first hash for patterns that begin with a hash.
//
// - Trailing spaces are ignored unless they are quoted with backslash ("\").
//
// - An optional prefix "!" which negates the pattern; any matching file excluded
// by a previous pattern will become included again. It is not possible to
// re-include a file if a parent directory of that file is excluded.
// Git doesn't list excluded directories for performance reasons, so
// any patterns on contained files have no effect, no matter where they are
// defined. Put a backslash ("\") in front of the first "!" for patterns
// that begin with a literal "!", for example, "\!important!.txt".
//
// - If the pattern ends with a slash, it is removed for the purpose of the
// following description, but it would only find a match with a directory.
// In other words, foo/ will match a directory foo and paths underneath it,
// but will not match a regular file or a symbolic link foo (this is consistent
// with the way how pathspec works in general in Git).
//
// - If the pattern does not contain a slash /, Git treats it as a shell glob
// pattern and checks for a match against the pathname relative to the location
// of the .gitignore file (relative to the toplevel of the work tree if not
// from a .gitignore file).
//
// - Otherwise, Git treats the pattern as a shell glob suitable for consumption
// by fnmatch(3) with the FNM_PATHNAME flag: wildcards in the pattern will
// not match a / in the pathname. For example, "Documentation/*.html" matches
// "Documentation/git.html" but not "Documentation/ppc/ppc.html" or
// "tools/perf/Documentation/perf.html".
//
// - A leading slash matches the beginning of the pathname. For example,
// "/*.c" matches "cat-file.c" but not "mozilla-sha1/sha1.c".
//
// Two consecutive asterisks ("**") in patterns matched against full pathname
// may have special meaning:
//
// - A leading "**" followed by a slash means match in all directories.
// For example, "**/foo" matches file or directory "foo" anywhere, the same as
// pattern "foo". "**/foo/bar" matches file or directory "bar"
// anywhere that is directly under directory "foo".
//
// - A trailing "/**" matches everything inside. For example, "abc/**" matches
// all files inside directory "abc", relative to the location of the
// .gitignore file, with infinite depth.
//
// - A slash followed by two consecutive asterisks then a slash matches
// zero or more directories. For example, "a/**/b" matches "a/b", "a/x/b",
// "a/x/y/b" and so on.
//
// - Other consecutive asterisks are considered invalid.
//
// Copyright and license
// =====================
//
// Copyright (c) Oleg Sklyar, Silvertern and source{d}
//
// The package code was donated to source{d} to include, modify and develop
// further as a part of the `go-git` project, release it on the license of
// the whole project or delete it from the project.
package gitignore
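
A short sketch (not part of the vendored sources) exercising a few of the documented formats through ParsePattern; the pattern strings and paths are illustrative.

package main

import (
    "fmt"

    "github.com/go-git/go-git/v5/plumbing/format/gitignore"
)

func main() {
    // "**/foo" matches "foo" in any directory.
    p := gitignore.ParsePattern("**/foo", nil)
    fmt.Println(p.Match([]string{"a", "b", "foo"}, false)) // 1 (Exclude)

    // A leading slash anchors the pattern at the .gitignore location.
    p = gitignore.ParsePattern("/*.c", nil)
    fmt.Println(p.Match([]string{"cat-file.c"}, false))             // 1 (Exclude)
    fmt.Println(p.Match([]string{"mozilla-sha1", "sha1.c"}, false)) // 0 (NoMatch)

    // A trailing slash restricts the pattern to directories.
    p = gitignore.ParsePattern("logs/", nil)
    fmt.Println(p.Match([]string{"logs"}, true))  // 1 (Exclude)
    fmt.Println(p.Match([]string{"logs"}, false)) // 0 (NoMatch)
}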

View File

@ -0,0 +1,30 @@
package gitignore
// Matcher defines a global multi-pattern matcher for gitignore patterns
type Matcher interface {
// Match matches patterns in the order of priorities. As soon as an inclusion or
// exclusion is found, no further matching is performed.
Match(path []string, isDir bool) bool
}
// NewMatcher constructs a new global matcher. Patterns must be given in the order of
// increasing priority. That is, the most generic settings files first, then the
// content of the repo .gitignore, then the content of .gitignore files further
// down the path of the repo, and finally the content of command line arguments.
func NewMatcher(ps []Pattern) Matcher {
return &matcher{ps}
}
type matcher struct {
patterns []Pattern
}
func (m *matcher) Match(path []string, isDir bool) bool {
n := len(m.patterns)
for i := n - 1; i >= 0; i-- {
if match := m.patterns[i].Match(path, isDir); match > NoMatch {
return match == Exclude
}
}
return false
}
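
A small sketch (not part of the vendored sources) of the priority order described above: the matcher walks the patterns from last to first, so a later negation re-includes a file excluded by an earlier pattern.

package main

import (
    "fmt"

    "github.com/go-git/go-git/v5/plumbing/format/gitignore"
)

func main() {
    // Patterns are given in increasing priority.
    m := gitignore.NewMatcher([]gitignore.Pattern{
        gitignore.ParsePattern("*.log", nil),
        gitignore.ParsePattern("!keep.log", nil),
    })
    fmt.Println(m.Match([]string{"debug.log"}, false)) // true: excluded
    fmt.Println(m.Match([]string{"keep.log"}, false))  // false: re-included by the negation
}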

View File

@ -0,0 +1,155 @@
package gitignore
import (
"path/filepath"
"strings"
)
// MatchResult defines outcomes of a match, no match, exclusion or inclusion.
type MatchResult int
const (
// NoMatch defines the no match outcome of a match check
NoMatch MatchResult = iota
// Exclude defines an exclusion of a file as a result of a match check
Exclude
// Include defines an explicit inclusion of a file as a result of a match check
Include
)
const (
inclusionPrefix = "!"
zeroToManyDirs = "**"
patternDirSep = "/"
)
// Pattern defines a single gitignore pattern.
type Pattern interface {
// Match matches the given path to the pattern.
Match(path []string, isDir bool) MatchResult
}
type pattern struct {
domain []string
pattern []string
inclusion bool
dirOnly bool
isGlob bool
}
// ParsePattern parses a gitignore pattern string into the Pattern structure.
func ParsePattern(p string, domain []string) Pattern {
// storing domain, copy it to ensure it isn't changed externally
domain = append([]string(nil), domain...)
res := pattern{domain: domain}
if strings.HasPrefix(p, inclusionPrefix) {
res.inclusion = true
p = p[1:]
}
if !strings.HasSuffix(p, "\\ ") {
p = strings.TrimRight(p, " ")
}
if strings.HasSuffix(p, patternDirSep) {
res.dirOnly = true
p = p[:len(p)-1]
}
if strings.Contains(p, patternDirSep) {
res.isGlob = true
}
res.pattern = strings.Split(p, patternDirSep)
return &res
}
func (p *pattern) Match(path []string, isDir bool) MatchResult {
if len(path) <= len(p.domain) {
return NoMatch
}
for i, e := range p.domain {
if path[i] != e {
return NoMatch
}
}
path = path[len(p.domain):]
if p.isGlob && !p.globMatch(path, isDir) {
return NoMatch
} else if !p.isGlob && !p.simpleNameMatch(path, isDir) {
return NoMatch
}
if p.inclusion {
return Include
} else {
return Exclude
}
}
func (p *pattern) simpleNameMatch(path []string, isDir bool) bool {
for i, name := range path {
if match, err := filepath.Match(p.pattern[0], name); err != nil {
return false
} else if !match {
continue
}
if p.dirOnly && !isDir && i == len(path)-1 {
return false
}
return true
}
return false
}
func (p *pattern) globMatch(path []string, isDir bool) bool {
matched := false
canTraverse := false
for i, pattern := range p.pattern {
if pattern == "" {
canTraverse = false
continue
}
if pattern == zeroToManyDirs {
if i == len(p.pattern)-1 {
break
}
canTraverse = true
continue
}
if strings.Contains(pattern, zeroToManyDirs) {
return false
}
if len(path) == 0 {
return false
}
if canTraverse {
canTraverse = false
for len(path) > 0 {
e := path[0]
path = path[1:]
if match, err := filepath.Match(pattern, e); err != nil {
return false
} else if match {
matched = true
break
} else if len(path) == 0 {
// if nothing left then fail
matched = false
}
}
} else {
if match, err := filepath.Match(pattern, path[0]); err != nil || !match {
return false
}
matched = true
path = path[1:]
}
}
if matched && p.dirOnly && !isDir && len(path) == 0 {
matched = false
}
return matched
}

View File

@ -0,0 +1,178 @@
package idxfile
import (
"bufio"
"bytes"
"errors"
"io"
"github.com/go-git/go-git/v5/plumbing/hash"
"github.com/go-git/go-git/v5/utils/binary"
)
var (
// ErrUnsupportedVersion is returned by Decode when the idx file version
// is not supported.
ErrUnsupportedVersion = errors.New("unsupported version")
// ErrMalformedIdxFile is returned by Decode when the idx file is corrupted.
ErrMalformedIdxFile = errors.New("malformed IDX file")
)
const (
fanout = 256
objectIDLength = hash.Size
)
// Decoder reads and decodes idx files from an input stream.
type Decoder struct {
*bufio.Reader
}
// NewDecoder builds a new idx stream decoder, that reads from r.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{bufio.NewReader(r)}
}
// Decode reads from the stream and decodes the content into the MemoryIndex struct.
func (d *Decoder) Decode(idx *MemoryIndex) error {
if err := validateHeader(d); err != nil {
return err
}
flow := []func(*MemoryIndex, io.Reader) error{
readVersion,
readFanout,
readObjectNames,
readCRC32,
readOffsets,
readChecksums,
}
for _, f := range flow {
if err := f(idx, d); err != nil {
return err
}
}
return nil
}
func validateHeader(r io.Reader) error {
var h = make([]byte, 4)
if _, err := io.ReadFull(r, h); err != nil {
return err
}
if !bytes.Equal(h, idxHeader) {
return ErrMalformedIdxFile
}
return nil
}
func readVersion(idx *MemoryIndex, r io.Reader) error {
v, err := binary.ReadUint32(r)
if err != nil {
return err
}
if v > VersionSupported {
return ErrUnsupportedVersion
}
idx.Version = v
return nil
}
func readFanout(idx *MemoryIndex, r io.Reader) error {
for k := 0; k < fanout; k++ {
n, err := binary.ReadUint32(r)
if err != nil {
return err
}
idx.Fanout[k] = n
idx.FanoutMapping[k] = noMapping
}
return nil
}
func readObjectNames(idx *MemoryIndex, r io.Reader) error {
for k := 0; k < fanout; k++ {
var buckets uint32
if k == 0 {
buckets = idx.Fanout[k]
} else {
buckets = idx.Fanout[k] - idx.Fanout[k-1]
}
if buckets == 0 {
continue
}
idx.FanoutMapping[k] = len(idx.Names)
nameLen := int(buckets * objectIDLength)
bin := make([]byte, nameLen)
if _, err := io.ReadFull(r, bin); err != nil {
return err
}
idx.Names = append(idx.Names, bin)
idx.Offset32 = append(idx.Offset32, make([]byte, buckets*4))
idx.CRC32 = append(idx.CRC32, make([]byte, buckets*4))
}
return nil
}
func readCRC32(idx *MemoryIndex, r io.Reader) error {
for k := 0; k < fanout; k++ {
if pos := idx.FanoutMapping[k]; pos != noMapping {
if _, err := io.ReadFull(r, idx.CRC32[pos]); err != nil {
return err
}
}
}
return nil
}
func readOffsets(idx *MemoryIndex, r io.Reader) error {
var o64cnt int
for k := 0; k < fanout; k++ {
if pos := idx.FanoutMapping[k]; pos != noMapping {
if _, err := io.ReadFull(r, idx.Offset32[pos]); err != nil {
return err
}
for p := 0; p < len(idx.Offset32[pos]); p += 4 {
if idx.Offset32[pos][p]&(byte(1)<<7) > 0 {
o64cnt++
}
}
}
}
if o64cnt > 0 {
idx.Offset64 = make([]byte, o64cnt*8)
if _, err := io.ReadFull(r, idx.Offset64); err != nil {
return err
}
}
return nil
}
func readChecksums(idx *MemoryIndex, r io.Reader) error {
if _, err := io.ReadFull(r, idx.PackfileChecksum[:]); err != nil {
return err
}
if _, err := io.ReadFull(r, idx.IdxChecksum[:]); err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,128 @@
// Package idxfile implements encoding and decoding of packfile idx files.
//
// == Original (version 1) pack-*.idx files have the following format:
//
// - The header consists of 256 4-byte network byte order
// integers. N-th entry of this table records the number of
// objects in the corresponding pack, the first byte of whose
// object name is less than or equal to N. This is called the
// 'first-level fan-out' table.
//
// - The header is followed by sorted 24-byte entries, one entry
// per object in the pack. Each entry is:
//
// 4-byte network byte order integer, recording where the
// object is stored in the packfile as the offset from the
// beginning.
//
// 20-byte object name.
//
// - The file is concluded with a trailer:
//
// A copy of the 20-byte SHA1 checksum at the end of
// corresponding packfile.
//
// 20-byte SHA1-checksum of all of the above.
//
// Pack Idx file:
//
// -- +--------------------------------+
// fanout | fanout[0] = 2 (for example) |-.
// table +--------------------------------+ |
// | fanout[1] | |
// +--------------------------------+ |
// | fanout[2] | |
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
// | fanout[255] = total objects |---.
// -- +--------------------------------+ | |
// main | offset | | |
// index | object name 00XXXXXXXXXXXXXXXX | | |
// tab +--------------------------------+ | |
// | offset | | |
// | object name 00XXXXXXXXXXXXXXXX | | |
// +--------------------------------+<+ |
// .-| offset | |
// | | object name 01XXXXXXXXXXXXXXXX | |
// | +--------------------------------+ |
// | | offset | |
// | | object name 01XXXXXXXXXXXXXXXX | |
// | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
// | | offset | |
// | | object name FFXXXXXXXXXXXXXXXX | |
// --| +--------------------------------+<--+
// trailer | | packfile checksum |
// | +--------------------------------+
// | | idxfile checksum |
// | +--------------------------------+
// .---------.
// |
// Pack file entry: <+
//
// packed object header:
// 1-byte size extension bit (MSB)
// type (next 3 bit)
// size0 (lower 4-bit)
// n-byte sizeN (as long as MSB is set, each 7-bit)
// size0..sizeN form 4+7+7+..+7 bit integer, size0
// is the least significant part, and sizeN is the
// most significant part.
// packed object data:
// If it is not DELTA, then deflated bytes (the size above
// is the size before compression).
// If it is REF_DELTA, then
// 20-byte base object name SHA1 (the size above is the
// size of the delta data that follows).
// delta data, deflated.
// If it is OFS_DELTA, then
// n-byte offset (see below) interpreted as a negative
// offset from the type-byte of the header of the
// ofs-delta entry (the size above is the size of
// the delta data that follows).
// delta data, deflated.
//
// offset encoding:
// n bytes with MSB set in all but the last one.
// The offset is then the number constructed by
// concatenating the lower 7 bit of each byte, and
// for n >= 2 adding 2^7 + 2^14 + ... + 2^(7*(n-1))
// to the result.
//
// == Version 2 pack-*.idx files support packs larger than 4 GiB, and
// have some other reorganizations. They have the format:
//
// - A 4-byte magic number '\377tOc' which is an unreasonable
// fanout[0] value.
//
// - A 4-byte version number (= 2)
//
// - A 256-entry fan-out table just like v1.
//
// - A table of sorted 20-byte SHA1 object names. These are
// packed together without offset values to reduce the cache
// footprint of the binary search for a specific object name.
//
// - A table of 4-byte CRC32 values of the packed object data.
// This is new in v2 so compressed data can be copied directly
// from pack to pack during repacking without undetected
// data corruption.
//
// - A table of 4-byte offset values (in network byte order).
// These are usually 31-bit pack file offsets, but large
// offsets are encoded as an index into the next table with
// the msbit set.
//
// - A table of 8-byte offset entries (empty for pack files less
// than 2 GiB). Pack files are organized with heavily used
// objects toward the front, so most object references should
// not need to refer to this table.
//
// - The same trailer as a v1 pack file:
//
// A copy of the 20-byte SHA1 checksum at the end of
// corresponding packfile.
//
// 20-byte SHA1-checksum of all of the above.
//
// Source:
// https://www.kernel.org/pub/software/scm/git/docs/v1.7.5/technical/pack-format.txt
package idxfile
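
A decoding sketch (not part of the vendored sources); the .idx path and the object hash are placeholders.

package main

import (
    "fmt"
    "os"

    "github.com/go-git/go-git/v5/plumbing"
    "github.com/go-git/go-git/v5/plumbing/format/idxfile"
)

func main() {
    f, err := os.Open(".git/objects/pack/pack-example.idx") // placeholder path
    if err != nil {
        panic(err)
    }
    defer f.Close()

    idx := idxfile.NewMemoryIndex()
    if err := idxfile.NewDecoder(f).Decode(idx); err != nil {
        panic(err)
    }

    n, _ := idx.Count()
    fmt.Println("objects in pack:", n)

    // Look up the packfile offset of a known object (placeholder hash).
    h := plumbing.NewHash("9a0b1c2d3e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b")
    if offset, err := idx.FindOffset(h); err == nil {
        fmt.Println("offset:", offset)
    }
}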

View File

@ -0,0 +1,141 @@
package idxfile
import (
"io"
"github.com/go-git/go-git/v5/plumbing/hash"
"github.com/go-git/go-git/v5/utils/binary"
)
// Encoder writes MemoryIndex structs to an output stream.
type Encoder struct {
io.Writer
hash hash.Hash
}
// NewEncoder returns a new stream encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
h := hash.New(hash.CryptoType)
mw := io.MultiWriter(w, h)
return &Encoder{mw, h}
}
// Encode encodes a MemoryIndex to the encoder writer.
func (e *Encoder) Encode(idx *MemoryIndex) (int, error) {
flow := []func(*MemoryIndex) (int, error){
e.encodeHeader,
e.encodeFanout,
e.encodeHashes,
e.encodeCRC32,
e.encodeOffsets,
e.encodeChecksums,
}
sz := 0
for _, f := range flow {
i, err := f(idx)
sz += i
if err != nil {
return sz, err
}
}
return sz, nil
}
func (e *Encoder) encodeHeader(idx *MemoryIndex) (int, error) {
c, err := e.Write(idxHeader)
if err != nil {
return c, err
}
return c + 4, binary.WriteUint32(e, idx.Version)
}
func (e *Encoder) encodeFanout(idx *MemoryIndex) (int, error) {
for _, c := range idx.Fanout {
if err := binary.WriteUint32(e, c); err != nil {
return 0, err
}
}
return fanout * 4, nil
}
func (e *Encoder) encodeHashes(idx *MemoryIndex) (int, error) {
var size int
for k := 0; k < fanout; k++ {
pos := idx.FanoutMapping[k]
if pos == noMapping {
continue
}
n, err := e.Write(idx.Names[pos])
if err != nil {
return size, err
}
size += n
}
return size, nil
}
func (e *Encoder) encodeCRC32(idx *MemoryIndex) (int, error) {
var size int
for k := 0; k < fanout; k++ {
pos := idx.FanoutMapping[k]
if pos == noMapping {
continue
}
n, err := e.Write(idx.CRC32[pos])
if err != nil {
return size, err
}
size += n
}
return size, nil
}
func (e *Encoder) encodeOffsets(idx *MemoryIndex) (int, error) {
var size int
for k := 0; k < fanout; k++ {
pos := idx.FanoutMapping[k]
if pos == noMapping {
continue
}
n, err := e.Write(idx.Offset32[pos])
if err != nil {
return size, err
}
size += n
}
if len(idx.Offset64) > 0 {
n, err := e.Write(idx.Offset64)
if err != nil {
return size, err
}
size += n
}
return size, nil
}
func (e *Encoder) encodeChecksums(idx *MemoryIndex) (int, error) {
if _, err := e.Write(idx.PackfileChecksum[:]); err != nil {
return 0, err
}
copy(idx.IdxChecksum[:], e.hash.Sum(nil)[:hash.Size])
if _, err := e.Write(idx.IdxChecksum[:]); err != nil {
return 0, err
}
return hash.HexSize, nil
}

View File

@ -0,0 +1,347 @@
package idxfile
import (
"bytes"
"io"
"sort"
encbin "encoding/binary"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/hash"
)
const (
// VersionSupported is the only idx version supported.
VersionSupported = 2
noMapping = -1
)
var (
idxHeader = []byte{255, 't', 'O', 'c'}
)
// Index represents an index of a packfile.
type Index interface {
// Contains checks whether the given hash is in the index.
Contains(h plumbing.Hash) (bool, error)
// FindOffset finds the offset in the packfile for the object with
// the given hash.
FindOffset(h plumbing.Hash) (int64, error)
// FindCRC32 finds the CRC32 of the object with the given hash.
FindCRC32(h plumbing.Hash) (uint32, error)
// FindHash finds the hash for the object with the given offset.
FindHash(o int64) (plumbing.Hash, error)
// Count returns the number of entries in the index.
Count() (int64, error)
// Entries returns an iterator to retrieve all index entries.
Entries() (EntryIter, error)
// EntriesByOffset returns an iterator to retrieve all index entries ordered
// by offset.
EntriesByOffset() (EntryIter, error)
}
// MemoryIndex is the in memory representation of an idx file.
type MemoryIndex struct {
Version uint32
Fanout [256]uint32
// FanoutMapping maps the position in the fanout table to the position
// in the Names, Offset32 and CRC32 slices. This improves the memory
// usage by not needing an array with unnecessary empty slots.
FanoutMapping [256]int
Names [][]byte
Offset32 [][]byte
CRC32 [][]byte
Offset64 []byte
PackfileChecksum [hash.Size]byte
IdxChecksum [hash.Size]byte
offsetHash map[int64]plumbing.Hash
offsetHashIsFull bool
}
var _ Index = (*MemoryIndex)(nil)
// NewMemoryIndex returns an instance of a new MemoryIndex.
func NewMemoryIndex() *MemoryIndex {
return &MemoryIndex{}
}
func (idx *MemoryIndex) findHashIndex(h plumbing.Hash) (int, bool) {
k := idx.FanoutMapping[h[0]]
if k == noMapping {
return 0, false
}
if len(idx.Names) <= k {
return 0, false
}
data := idx.Names[k]
high := uint64(len(idx.Offset32[k])) >> 2
if high == 0 {
return 0, false
}
low := uint64(0)
for {
mid := (low + high) >> 1
offset := mid * objectIDLength
cmp := bytes.Compare(h[:], data[offset:offset+objectIDLength])
if cmp < 0 {
high = mid
} else if cmp == 0 {
return int(mid), true
} else {
low = mid + 1
}
if low >= high {
break
}
}
return 0, false
}
// Contains implements the Index interface.
func (idx *MemoryIndex) Contains(h plumbing.Hash) (bool, error) {
_, ok := idx.findHashIndex(h)
return ok, nil
}
// FindOffset implements the Index interface.
func (idx *MemoryIndex) FindOffset(h plumbing.Hash) (int64, error) {
if len(idx.FanoutMapping) <= int(h[0]) {
return 0, plumbing.ErrObjectNotFound
}
k := idx.FanoutMapping[h[0]]
i, ok := idx.findHashIndex(h)
if !ok {
return 0, plumbing.ErrObjectNotFound
}
offset := idx.getOffset(k, i)
if !idx.offsetHashIsFull {
// Save the offset for reverse lookup
if idx.offsetHash == nil {
idx.offsetHash = make(map[int64]plumbing.Hash)
}
idx.offsetHash[int64(offset)] = h
}
return int64(offset), nil
}
const isO64Mask = uint64(1) << 31
func (idx *MemoryIndex) getOffset(firstLevel, secondLevel int) uint64 {
offset := secondLevel << 2
ofs := encbin.BigEndian.Uint32(idx.Offset32[firstLevel][offset : offset+4])
if (uint64(ofs) & isO64Mask) != 0 {
offset := 8 * (uint64(ofs) & ^isO64Mask)
n := encbin.BigEndian.Uint64(idx.Offset64[offset : offset+8])
return n
}
return uint64(ofs)
}
// FindCRC32 implements the Index interface.
func (idx *MemoryIndex) FindCRC32(h plumbing.Hash) (uint32, error) {
k := idx.FanoutMapping[h[0]]
i, ok := idx.findHashIndex(h)
if !ok {
return 0, plumbing.ErrObjectNotFound
}
return idx.getCRC32(k, i), nil
}
func (idx *MemoryIndex) getCRC32(firstLevel, secondLevel int) uint32 {
offset := secondLevel << 2
return encbin.BigEndian.Uint32(idx.CRC32[firstLevel][offset : offset+4])
}
// FindHash implements the Index interface.
func (idx *MemoryIndex) FindHash(o int64) (plumbing.Hash, error) {
var hash plumbing.Hash
var ok bool
if idx.offsetHash != nil {
if hash, ok = idx.offsetHash[o]; ok {
return hash, nil
}
}
// Lazily generate the reverse offset/hash map if required.
if !idx.offsetHashIsFull || idx.offsetHash == nil {
if err := idx.genOffsetHash(); err != nil {
return plumbing.ZeroHash, err
}
hash, ok = idx.offsetHash[o]
}
if !ok {
return plumbing.ZeroHash, plumbing.ErrObjectNotFound
}
return hash, nil
}
// genOffsetHash generates the offset/hash mapping for reverse search.
func (idx *MemoryIndex) genOffsetHash() error {
count, err := idx.Count()
if err != nil {
return err
}
idx.offsetHash = make(map[int64]plumbing.Hash, count)
idx.offsetHashIsFull = true
var hash plumbing.Hash
i := uint32(0)
for firstLevel, fanoutValue := range idx.Fanout {
mappedFirstLevel := idx.FanoutMapping[firstLevel]
for secondLevel := uint32(0); i < fanoutValue; i++ {
copy(hash[:], idx.Names[mappedFirstLevel][secondLevel*objectIDLength:])
offset := int64(idx.getOffset(mappedFirstLevel, int(secondLevel)))
idx.offsetHash[offset] = hash
secondLevel++
}
}
return nil
}
// Count implements the Index interface.
func (idx *MemoryIndex) Count() (int64, error) {
return int64(idx.Fanout[fanout-1]), nil
}
// Entries implements the Index interface.
func (idx *MemoryIndex) Entries() (EntryIter, error) {
return &idxfileEntryIter{idx, 0, 0, 0}, nil
}
// EntriesByOffset implements the Index interface.
func (idx *MemoryIndex) EntriesByOffset() (EntryIter, error) {
count, err := idx.Count()
if err != nil {
return nil, err
}
iter := &idxfileEntryOffsetIter{
entries: make(entriesByOffset, count),
}
entries, err := idx.Entries()
if err != nil {
return nil, err
}
for pos := 0; int64(pos) < count; pos++ {
entry, err := entries.Next()
if err != nil {
return nil, err
}
iter.entries[pos] = entry
}
sort.Sort(iter.entries)
return iter, nil
}
// EntryIter is an iterator that will return the entries in a packfile index.
type EntryIter interface {
// Next returns the next entry in the packfile index.
Next() (*Entry, error)
// Close closes the iterator.
Close() error
}
type idxfileEntryIter struct {
idx *MemoryIndex
total int
firstLevel, secondLevel int
}
func (i *idxfileEntryIter) Next() (*Entry, error) {
for {
if i.firstLevel >= fanout {
return nil, io.EOF
}
if i.total >= int(i.idx.Fanout[i.firstLevel]) {
i.firstLevel++
i.secondLevel = 0
continue
}
mappedFirstLevel := i.idx.FanoutMapping[i.firstLevel]
entry := new(Entry)
copy(entry.Hash[:], i.idx.Names[mappedFirstLevel][i.secondLevel*objectIDLength:])
entry.Offset = i.idx.getOffset(mappedFirstLevel, i.secondLevel)
entry.CRC32 = i.idx.getCRC32(mappedFirstLevel, i.secondLevel)
i.secondLevel++
i.total++
return entry, nil
}
}
func (i *idxfileEntryIter) Close() error {
i.firstLevel = fanout
return nil
}
// Entry is the in memory representation of an object entry in the idx file.
type Entry struct {
Hash plumbing.Hash
CRC32 uint32
Offset uint64
}
type idxfileEntryOffsetIter struct {
entries entriesByOffset
pos int
}
func (i *idxfileEntryOffsetIter) Next() (*Entry, error) {
if i.pos >= len(i.entries) {
return nil, io.EOF
}
entry := i.entries[i.pos]
i.pos++
return entry, nil
}
func (i *idxfileEntryOffsetIter) Close() error {
i.pos = len(i.entries) + 1
return nil
}
type entriesByOffset []*Entry
func (o entriesByOffset) Len() int {
return len(o)
}
func (o entriesByOffset) Less(i int, j int) bool {
return o[i].Offset < o[j].Offset
}
func (o entriesByOffset) Swap(i int, j int) {
o[i], o[j] = o[j], o[i]
}

View File

@ -0,0 +1,193 @@
package idxfile
import (
"bytes"
"fmt"
"math"
"sort"
"sync"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/binary"
)
// objects implements sort.Interface and uses hash as sorting key.
type objects []Entry
// Writer implements a packfile Observer interface and is used to generate
// indexes.
type Writer struct {
m sync.Mutex
count uint32
checksum plumbing.Hash
objects objects
offset64 uint32
finished bool
index *MemoryIndex
added map[plumbing.Hash]struct{}
}
// Index returns a previously created MemoryIndex or creates a new one if
// needed.
func (w *Writer) Index() (*MemoryIndex, error) {
w.m.Lock()
defer w.m.Unlock()
if w.index == nil {
return w.createIndex()
}
return w.index, nil
}
// Add appends new object data.
func (w *Writer) Add(h plumbing.Hash, pos uint64, crc uint32) {
w.m.Lock()
defer w.m.Unlock()
if w.added == nil {
w.added = make(map[plumbing.Hash]struct{})
}
if _, ok := w.added[h]; !ok {
w.added[h] = struct{}{}
w.objects = append(w.objects, Entry{h, crc, pos})
}
}
func (w *Writer) Finished() bool {
return w.finished
}
// OnHeader implements packfile.Observer interface.
func (w *Writer) OnHeader(count uint32) error {
w.count = count
w.objects = make(objects, 0, count)
return nil
}
// OnInflatedObjectHeader implements packfile.Observer interface.
func (w *Writer) OnInflatedObjectHeader(t plumbing.ObjectType, objSize int64, pos int64) error {
return nil
}
// OnInflatedObjectContent implements packfile.Observer interface.
func (w *Writer) OnInflatedObjectContent(h plumbing.Hash, pos int64, crc uint32, _ []byte) error {
w.Add(h, uint64(pos), crc)
return nil
}
// OnFooter implements packfile.Observer interface.
func (w *Writer) OnFooter(h plumbing.Hash) error {
w.checksum = h
w.finished = true
_, err := w.createIndex()
return err
}
// createIndex returns a MemoryIndex filled with the information gathered by
// the observer callbacks.
func (w *Writer) createIndex() (*MemoryIndex, error) {
if !w.finished {
return nil, fmt.Errorf("the index still hasn't finished building")
}
idx := new(MemoryIndex)
w.index = idx
sort.Sort(w.objects)
// unmap all fans by default
for i := range idx.FanoutMapping {
idx.FanoutMapping[i] = noMapping
}
buf := new(bytes.Buffer)
last := -1
bucket := -1
for i, o := range w.objects {
fan := o.Hash[0]
// fill the gaps between fans
for j := last + 1; j < int(fan); j++ {
idx.Fanout[j] = uint32(i)
}
// update the number of objects for this position
idx.Fanout[fan] = uint32(i + 1)
// we move from one bucket to another, update counters and allocate
// memory
if last != int(fan) {
bucket++
idx.FanoutMapping[fan] = bucket
last = int(fan)
idx.Names = append(idx.Names, make([]byte, 0))
idx.Offset32 = append(idx.Offset32, make([]byte, 0))
idx.CRC32 = append(idx.CRC32, make([]byte, 0))
}
idx.Names[bucket] = append(idx.Names[bucket], o.Hash[:]...)
offset := o.Offset
if offset > math.MaxInt32 {
var err error
offset, err = w.addOffset64(offset)
if err != nil {
return nil, err
}
}
buf.Truncate(0)
if err := binary.WriteUint32(buf, uint32(offset)); err != nil {
return nil, err
}
idx.Offset32[bucket] = append(idx.Offset32[bucket], buf.Bytes()...)
buf.Truncate(0)
if err := binary.WriteUint32(buf, o.CRC32); err != nil {
return nil, err
}
idx.CRC32[bucket] = append(idx.CRC32[bucket], buf.Bytes()...)
}
for j := last + 1; j < 256; j++ {
idx.Fanout[j] = uint32(len(w.objects))
}
idx.Version = VersionSupported
idx.PackfileChecksum = w.checksum
return idx, nil
}
func (w *Writer) addOffset64(pos uint64) (uint64, error) {
buf := new(bytes.Buffer)
if err := binary.WriteUint64(buf, pos); err != nil {
return 0, err
}
w.index.Offset64 = append(w.index.Offset64, buf.Bytes()...)
index := uint64(w.offset64 | (1 << 31))
w.offset64++
return index, nil
}
func (o objects) Len() int {
return len(o)
}
func (o objects) Less(i int, j int) bool {
cmp := bytes.Compare(o[i].Hash[:], o[j].Hash[:])
return cmp < 0
}
func (o objects) Swap(i int, j int) {
o[i], o[j] = o[j], o[i]
}
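
A sketch (not part of the vendored sources) driving the Writer callbacks by hand; go-git's packfile parser normally invokes them while scanning a pack, and the hashes, offset and CRC below are placeholders.

package main

import (
    "bytes"
    "fmt"

    "github.com/go-git/go-git/v5/plumbing"
    "github.com/go-git/go-git/v5/plumbing/format/idxfile"
)

func main() {
    w := new(idxfile.Writer)

    // Announce one object, register it, then close with the pack checksum.
    _ = w.OnHeader(1)
    w.Add(plumbing.NewHash("9a0b1c2d3e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b"), 12, 0xcafebabe)
    if err := w.OnFooter(plumbing.NewHash("0123456789abcdef0123456789abcdef01234567")); err != nil {
        panic(err)
    }

    idx, err := w.Index()
    if err != nil {
        panic(err)
    }

    // The resulting MemoryIndex can be serialized with the Encoder above.
    var buf bytes.Buffer
    if _, err := idxfile.NewEncoder(&buf).Encode(idx); err != nil {
        panic(err)
    }
    fmt.Println("idx bytes:", buf.Len())
}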

View File

@ -0,0 +1,478 @@
package index
import (
"bufio"
"bytes"
"errors"
"io"
"strconv"
"time"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/hash"
"github.com/go-git/go-git/v5/utils/binary"
)
var (
// DecodeVersionSupported is the range of supported index versions
DecodeVersionSupported = struct{ Min, Max uint32 }{Min: 2, Max: 4}
// ErrMalformedSignature is returned by Decode when the index header file is
// malformed
ErrMalformedSignature = errors.New("malformed index signature file")
// ErrInvalidChecksum is returned by Decode if the SHA1 hash does not match
// the read content
ErrInvalidChecksum = errors.New("invalid checksum")
errUnknownExtension = errors.New("unknown extension")
)
const (
entryHeaderLength = 62
entryExtended = 0x4000
entryValid = 0x8000
nameMask = 0xfff
intentToAddMask = 1 << 13
skipWorkTreeMask = 1 << 14
)
// A Decoder reads and decodes index files from an input stream.
type Decoder struct {
r io.Reader
hash hash.Hash
lastEntry *Entry
extReader *bufio.Reader
}
// NewDecoder returns a new decoder that reads from r.
func NewDecoder(r io.Reader) *Decoder {
h := hash.New(hash.CryptoType)
return &Decoder{
r: io.TeeReader(r, h),
hash: h,
extReader: bufio.NewReader(nil),
}
}
// Decode reads the whole index object from its input and stores it in the
// value pointed to by idx.
func (d *Decoder) Decode(idx *Index) error {
var err error
idx.Version, err = validateHeader(d.r)
if err != nil {
return err
}
entryCount, err := binary.ReadUint32(d.r)
if err != nil {
return err
}
if err := d.readEntries(idx, int(entryCount)); err != nil {
return err
}
return d.readExtensions(idx)
}
func (d *Decoder) readEntries(idx *Index, count int) error {
for i := 0; i < count; i++ {
e, err := d.readEntry(idx)
if err != nil {
return err
}
d.lastEntry = e
idx.Entries = append(idx.Entries, e)
}
return nil
}
func (d *Decoder) readEntry(idx *Index) (*Entry, error) {
e := &Entry{}
var msec, mnsec, sec, nsec uint32
var flags uint16
flow := []interface{}{
&sec, &nsec,
&msec, &mnsec,
&e.Dev,
&e.Inode,
&e.Mode,
&e.UID,
&e.GID,
&e.Size,
&e.Hash,
&flags,
}
if err := binary.Read(d.r, flow...); err != nil {
return nil, err
}
read := entryHeaderLength
if sec != 0 || nsec != 0 {
e.CreatedAt = time.Unix(int64(sec), int64(nsec))
}
if msec != 0 || mnsec != 0 {
e.ModifiedAt = time.Unix(int64(msec), int64(mnsec))
}
e.Stage = Stage(flags>>12) & 0x3
if flags&entryExtended != 0 {
extended, err := binary.ReadUint16(d.r)
if err != nil {
return nil, err
}
read += 2
e.IntentToAdd = extended&intentToAddMask != 0
e.SkipWorktree = extended&skipWorkTreeMask != 0
}
if err := d.readEntryName(idx, e, flags); err != nil {
return nil, err
}
return e, d.padEntry(idx, e, read)
}
func (d *Decoder) readEntryName(idx *Index, e *Entry, flags uint16) error {
var name string
var err error
switch idx.Version {
case 2, 3:
len := flags & nameMask
name, err = d.doReadEntryName(len)
case 4:
name, err = d.doReadEntryNameV4()
default:
return ErrUnsupportedVersion
}
if err != nil {
return err
}
e.Name = name
return nil
}
func (d *Decoder) doReadEntryNameV4() (string, error) {
l, err := binary.ReadVariableWidthInt(d.r)
if err != nil {
return "", err
}
var base string
if d.lastEntry != nil {
base = d.lastEntry.Name[:len(d.lastEntry.Name)-int(l)]
}
name, err := binary.ReadUntil(d.r, '\x00')
if err != nil {
return "", err
}
return base + string(name), nil
}
func (d *Decoder) doReadEntryName(len uint16) (string, error) {
name := make([]byte, len)
_, err := io.ReadFull(d.r, name)
return string(name), err
}
// Index entries are padded out to the next 8 byte alignment
// for historical reasons related to how C Git read the files.
func (d *Decoder) padEntry(idx *Index, e *Entry, read int) error {
if idx.Version == 4 {
return nil
}
entrySize := read + len(e.Name)
padLen := 8 - entrySize%8
_, err := io.CopyN(io.Discard, d.r, int64(padLen))
return err
}
func (d *Decoder) readExtensions(idx *Index) error {
// TODO: support 'Split index' and 'Untracked cache' extensions, taking into
// account that they are not supported by jgit or libgit
var expected []byte
var err error
var header [4]byte
for {
expected = d.hash.Sum(nil)
var n int
if n, err = io.ReadFull(d.r, header[:]); err != nil {
if n == 0 {
err = io.EOF
}
break
}
err = d.readExtension(idx, header[:])
if err != nil {
break
}
}
if err != errUnknownExtension {
return err
}
return d.readChecksum(expected, header)
}
func (d *Decoder) readExtension(idx *Index, header []byte) error {
switch {
case bytes.Equal(header, treeExtSignature):
r, err := d.getExtensionReader()
if err != nil {
return err
}
idx.Cache = &Tree{}
d := &treeExtensionDecoder{r}
if err := d.Decode(idx.Cache); err != nil {
return err
}
case bytes.Equal(header, resolveUndoExtSignature):
r, err := d.getExtensionReader()
if err != nil {
return err
}
idx.ResolveUndo = &ResolveUndo{}
d := &resolveUndoDecoder{r}
if err := d.Decode(idx.ResolveUndo); err != nil {
return err
}
case bytes.Equal(header, endOfIndexEntryExtSignature):
r, err := d.getExtensionReader()
if err != nil {
return err
}
idx.EndOfIndexEntry = &EndOfIndexEntry{}
d := &endOfIndexEntryDecoder{r}
if err := d.Decode(idx.EndOfIndexEntry); err != nil {
return err
}
default:
return errUnknownExtension
}
return nil
}
func (d *Decoder) getExtensionReader() (*bufio.Reader, error) {
len, err := binary.ReadUint32(d.r)
if err != nil {
return nil, err
}
d.extReader.Reset(&io.LimitedReader{R: d.r, N: int64(len)})
return d.extReader, nil
}
func (d *Decoder) readChecksum(expected []byte, alreadyRead [4]byte) error {
var h plumbing.Hash
copy(h[:4], alreadyRead[:])
if _, err := io.ReadFull(d.r, h[4:]); err != nil {
return err
}
if !bytes.Equal(h[:], expected) {
return ErrInvalidChecksum
}
return nil
}
func validateHeader(r io.Reader) (version uint32, err error) {
var s = make([]byte, 4)
if _, err := io.ReadFull(r, s); err != nil {
return 0, err
}
if !bytes.Equal(s, indexSignature) {
return 0, ErrMalformedSignature
}
version, err = binary.ReadUint32(r)
if err != nil {
return 0, err
}
if version < DecodeVersionSupported.Min || version > DecodeVersionSupported.Max {
return 0, ErrUnsupportedVersion
}
return
}
type treeExtensionDecoder struct {
r *bufio.Reader
}
func (d *treeExtensionDecoder) Decode(t *Tree) error {
for {
e, err := d.readEntry()
if err != nil {
if err == io.EOF {
return nil
}
return err
}
if e == nil {
continue
}
t.Entries = append(t.Entries, *e)
}
}
func (d *treeExtensionDecoder) readEntry() (*TreeEntry, error) {
e := &TreeEntry{}
path, err := binary.ReadUntil(d.r, '\x00')
if err != nil {
return nil, err
}
e.Path = string(path)
count, err := binary.ReadUntil(d.r, ' ')
if err != nil {
return nil, err
}
i, err := strconv.Atoi(string(count))
if err != nil {
return nil, err
}
// An entry can be in an invalidated state and is represented by having a
// negative number in the entry_count field.
if i == -1 {
return nil, nil
}
e.Entries = i
trees, err := binary.ReadUntil(d.r, '\n')
if err != nil {
return nil, err
}
i, err = strconv.Atoi(string(trees))
if err != nil {
return nil, err
}
e.Trees = i
_, err = io.ReadFull(d.r, e.Hash[:])
if err != nil {
return nil, err
}
return e, nil
}
type resolveUndoDecoder struct {
r *bufio.Reader
}
func (d *resolveUndoDecoder) Decode(ru *ResolveUndo) error {
for {
e, err := d.readEntry()
if err != nil {
if err == io.EOF {
return nil
}
return err
}
ru.Entries = append(ru.Entries, *e)
}
}
func (d *resolveUndoDecoder) readEntry() (*ResolveUndoEntry, error) {
e := &ResolveUndoEntry{
Stages: make(map[Stage]plumbing.Hash),
}
path, err := binary.ReadUntil(d.r, '\x00')
if err != nil {
return nil, err
}
e.Path = string(path)
for i := 0; i < 3; i++ {
if err := d.readStage(e, Stage(i+1)); err != nil {
return nil, err
}
}
for s := range e.Stages {
var hash plumbing.Hash
if _, err := io.ReadFull(d.r, hash[:]); err != nil {
return nil, err
}
e.Stages[s] = hash
}
return e, nil
}
func (d *resolveUndoDecoder) readStage(e *ResolveUndoEntry, s Stage) error {
ascii, err := binary.ReadUntil(d.r, '\x00')
if err != nil {
return err
}
stage, err := strconv.ParseInt(string(ascii), 8, 64)
if err != nil {
return err
}
if stage != 0 {
e.Stages[s] = plumbing.ZeroHash
}
return nil
}
type endOfIndexEntryDecoder struct {
r *bufio.Reader
}
func (d *endOfIndexEntryDecoder) Decode(e *EndOfIndexEntry) error {
var err error
e.Offset, err = binary.ReadUint32(d.r)
if err != nil {
return err
}
_, err = io.ReadFull(d.r, e.Hash[:])
return err
}

View File

@ -0,0 +1,360 @@
// Package index implements encoding and decoding of index format files.
//
// Git index format
// ================
//
// == The Git index file has the following format
//
// All binary numbers are in network byte order. Version 2 is described
// here unless stated otherwise.
//
// - A 12-byte header consisting of
//
// 4-byte signature:
// The signature is { 'D', 'I', 'R', 'C' } (stands for "dircache")
//
// 4-byte version number:
// The current supported versions are 2, 3 and 4.
//
// 32-bit number of index entries.
//
// - A number of sorted index entries (see below).
//
// - Extensions
//
// Extensions are identified by signature. Optional extensions can
// be ignored if Git does not understand them.
//
// Git currently supports cached tree and resolve undo extensions.
//
// 4-byte extension signature. If the first byte is 'A'..'Z' the
// extension is optional and can be ignored.
//
// 32-bit size of the extension
//
// Extension data
//
// - 160-bit SHA-1 over the content of the index file before this
// checksum.
//
// == Index entry
//
// Index entries are sorted in ascending order on the name field,
// interpreted as a string of unsigned bytes (i.e. memcmp() order, no
// localization, no special casing of directory separator '/'). Entries
// with the same name are sorted by their stage field.
//
// 32-bit ctime seconds, the last time a file's metadata changed
// this is stat(2) data
//
// 32-bit ctime nanosecond fractions
// this is stat(2) data
//
// 32-bit mtime seconds, the last time a file's data changed
// this is stat(2) data
//
// 32-bit mtime nanosecond fractions
// this is stat(2) data
//
// 32-bit dev
// this is stat(2) data
//
// 32-bit ino
// this is stat(2) data
//
// 32-bit mode, split into (high to low bits)
//
// 4-bit object type
// valid values in binary are 1000 (regular file), 1010 (symbolic link)
// and 1110 (gitlink)
//
// 3-bit unused
//
// 9-bit unix permission. Only 0755 and 0644 are valid for regular files.
// Symbolic links and gitlinks have value 0 in this field.
//
// 32-bit uid
// this is stat(2) data
//
// 32-bit gid
// this is stat(2) data
//
// 32-bit file size
// This is the on-disk size from stat(2), truncated to 32-bit.
//
// 160-bit SHA-1 for the represented object
//
// A 16-bit 'flags' field split into (high to low bits)
//
// 1-bit assume-valid flag
//
// 1-bit extended flag (must be zero in version 2)
//
// 2-bit stage (during merge)
//
// 12-bit name length if the length is less than 0xFFF; otherwise 0xFFF
// is stored in this field.
//
// (Version 3 or later) A 16-bit field, only applicable if the
// "extended flag" above is 1, split into (high to low bits).
//
// 1-bit reserved for future
//
// 1-bit skip-worktree flag (used by sparse checkout)
//
// 1-bit intent-to-add flag (used by "git add -N")
//
// 13-bit unused, must be zero
//
// Entry path name (variable length) relative to top level directory
// (without leading slash). '/' is used as path separator. The special
// path components ".", ".." and ".git" (without quotes) are disallowed.
// Trailing slash is also disallowed.
//
// The exact encoding is undefined, but the '.' and '/' characters
// are encoded in 7-bit ASCII and the encoding cannot contain a NUL
// byte (iow, this is a UNIX pathname).
//
// (Version 4) In version 4, the entry path name is prefix-compressed
// relative to the path name for the previous entry (the very first
// entry is encoded as if the path name for the previous entry is an
// empty string). At the beginning of an entry, an integer N in the
// variable width encoding (the same encoding as the offset is encoded
// for OFS_DELTA pack entries; see pack-format.txt) is stored, followed
// by a NUL-terminated string S. Removing N bytes from the end of the
// path name for the previous entry, and replacing it with the string S
// yields the path name for this entry.
//
// 1-8 nul bytes as necessary to pad the entry to a multiple of eight bytes
// while keeping the name NUL-terminated.
//
// (Version 4) In version 4, the padding after the pathname does not
// exist.
//
// Interpretation of index entries in split index mode is completely
// different. See below for details.
//
// == Extensions
//
// === Cached tree
//
// Cached tree extension contains pre-computed hashes for trees that can
// be derived from the index. It helps speed up tree object generation
// from index for a new commit.
//
// When a path is updated in index, the path must be invalidated and
// removed from tree cache.
//
// The signature for this extension is { 'T', 'R', 'E', 'E' }.
//
// A series of entries fill the entire extension; each of which
// consists of:
//
// - NUL-terminated path component (relative to its parent directory);
//
// - ASCII decimal number of entries in the index that is covered by the
// tree this entry represents (entry_count);
//
// - A space (ASCII 32);
//
// - ASCII decimal number that represents the number of subtrees this
// tree has;
//
// - A newline (ASCII 10); and
//
// - 160-bit object name for the object that would result from writing
// this span of index as a tree.
//
// An entry can be in an invalidated state and is represented by having
// a negative number in the entry_count field. In this case, there is no
// object name and the next entry starts immediately after the newline.
// When writing an invalid entry, -1 should always be used as entry_count.
//
// The entries are written out in the top-down, depth-first order. The
// first entry represents the root level of the repository, followed by the
// first subtree--let's call this A--of the root level (with its name
// relative to the root level), followed by the first subtree of A (with
// its name relative to A), ...
//
// === Resolve undo
//
// A conflict is represented in the index as a set of higher stage entries.
// When a conflict is resolved (e.g. with "git add path"), these higher
// stage entries will be removed and a stage-0 entry with proper resolution
// is added.
//
// When these higher stage entries are removed, they are saved in the
// resolve undo extension, so that conflicts can be recreated (e.g. with
// "git checkout -m"), in case users want to redo a conflict resolution
// from scratch.
//
// The signature for this extension is { 'R', 'E', 'U', 'C' }.
//
// A series of entries fill the entire extension; each of which
// consists of:
//
// - NUL-terminated pathname the entry describes (relative to the root of
// the repository, i.e. full pathname);
//
// - Three NUL-terminated ASCII octal numbers, entry mode of entries in
// stage 1 to 3 (a missing stage is represented by "0" in this field);
// and
//
// - At most three 160-bit object names of the entry in stages from 1 to 3
// (nothing is written for a missing stage).
//
// === Split index
//
// In split index mode, the majority of index entries could be stored
// in a separate file. This extension records the changes to be made on
// top of that to produce the final index.
//
// The signature for this extension is { 'l', 'i', 'n', 'k' }.
//
// The extension consists of:
//
// - 160-bit SHA-1 of the shared index file. The shared index file path
// is $GIT_DIR/sharedindex.<SHA-1>. If all 160 bits are zero, the
// index does not require a shared index file.
//
// - An ewah-encoded delete bitmap, each bit represents an entry in the
// shared index. If a bit is set, its corresponding entry in the
// shared index will be removed from the final index. Note, because
// a delete operation changes index entry positions, but we do need
// original positions in replace phase, it's best to just mark
// entries for removal, then do a mass deletion after replacement.
//
// - An ewah-encoded replace bitmap, each bit represents an entry in
// the shared index. If a bit is set, its corresponding entry in the
// shared index will be replaced with an entry in this index
// file. All replaced entries are stored in sorted order in this
// index. The first "1" bit in the replace bitmap corresponds to the
// first index entry, the second "1" bit to the second entry and so
// on. Replaced entries may have empty path names to save space.
//
// The remaining index entries after replaced ones will be added to the
// final index. These added entries are also sorted by entry name then
// stage.
//
// == Untracked cache
//
// Untracked cache saves the untracked file list and necessary data to
// verify the cache. The signature for this extension is { 'U', 'N',
// 'T', 'R' }.
//
// The extension starts with
//
// - A sequence of NUL-terminated strings, preceded by the size of the
// sequence in variable width encoding. Each string describes the
// environment where the cache can be used.
//
// - Stat data of $GIT_DIR/info/exclude. See "Index entry" section from
// ctime field until "file size".
//
// - Stat data of plumbing.excludesfile
//
// - 32-bit dir_flags (see struct dir_struct)
//
// - 160-bit SHA-1 of $GIT_DIR/info/exclude. Null SHA-1 means the file
// does not exist.
//
// - 160-bit SHA-1 of plumbing.excludesfile. Null SHA-1 means the file does
// not exist.
//
// - NUL-terminated string of per-dir exclude file name. This usually
// is ".gitignore".
//
// - The number of following directory blocks, variable width
// encoding. If this number is zero, the extension ends here with a
// following NUL.
//
// - A number of directory blocks in depth-first-search order, each
// consists of
//
// - The number of untracked entries, variable width encoding.
//
// - The number of sub-directory blocks, variable width encoding.
//
// - The directory name terminated by NUL.
//
// - A number of untracked file/dir names terminated by NUL.
//
// The remaining data of each directory block is grouped by type:
//
// - An ewah bitmap, the n-th bit marks whether the n-th directory has
// valid untracked cache entries.
//
// - An ewah bitmap, the n-th bit records "check-only" bit of
// read_directory_recursive() for the n-th directory.
//
// - An ewah bitmap, the n-th bit indicates whether SHA-1 and stat data
// is valid for the n-th directory and exists in the next data.
//
// - An array of stat data. The n-th data corresponds with the n-th
// "one" bit in the previous ewah bitmap.
//
// - An array of SHA-1. The n-th SHA-1 corresponds with the n-th "one" bit
// in the previous ewah bitmap.
//
// - One NUL.
//
// == File System Monitor cache
//
// The file system monitor cache tracks files for which the core.fsmonitor
// hook has told us about changes. The signature for this extension is
// { 'F', 'S', 'M', 'N' }.
//
// The extension starts with
//
// - 32-bit version number: the current supported version is 1.
//
// - 64-bit time: the extension data reflects all changes through the given
// time which is stored as the nanoseconds elapsed since midnight,
// January 1, 1970.
//
// - 32-bit bitmap size: the size of the CE_FSMONITOR_VALID bitmap.
//
// - An ewah bitmap, the n-th bit indicates whether the n-th index entry
// is not CE_FSMONITOR_VALID.
//
// == End of Index Entry
//
// The End of Index Entry (EOIE) is used to locate the end of the variable
// length index entries and the beginning of the extensions. Code can take
// advantage of this to quickly locate the index extensions without having
// to parse through all of the index entries.
//
// Because it must be able to be loaded before the variable length cache
// entries and other index extensions, this extension must be written last.
// The signature for this extension is { 'E', 'O', 'I', 'E' }.
//
// The extension consists of:
//
// - 32-bit offset to the end of the index entries
//
// - 160-bit SHA-1 over the extension types and their sizes (but not
// their contents). E.g. if we have "TREE" extension that is N-bytes
// long, "REUC" extension that is M-bytes long, followed by "EOIE",
// then the hash would be:
//
// SHA-1("TREE" + <binary representation of N> +
// "REUC" + <binary representation of M>)
//
// == Index Entry Offset Table
//
// The Index Entry Offset Table (IEOT) is used to help address the CPU
// cost of loading the index by enabling multi-threading the process of
// converting cache entries from the on-disk format to the in-memory format.
// The signature for this extension is { 'I', 'E', 'O', 'T' }.
//
// The extension consists of:
//
// - 32-bit version (currently 1)
//
// - A number of index offset entries each consisting of:
//
// - 32-bit offset from the beginning of the file to the first cache entry
// in this block of entries.
//
// - 32-bit count of cache entries in this block
package index
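
A decoding sketch (not part of the vendored sources) that reads an on-disk index file and lists its entries; the path is a placeholder.

package main

import (
    "fmt"
    "os"

    "github.com/go-git/go-git/v5/plumbing/format/index"
)

func main() {
    f, err := os.Open(".git/index") // placeholder path
    if err != nil {
        panic(err)
    }
    defer f.Close()

    idx := &index.Index{}
    if err := index.NewDecoder(f).Decode(idx); err != nil {
        panic(err)
    }

    fmt.Println("index version:", idx.Version)
    for _, e := range idx.Entries {
        fmt.Printf("%s %s stage=%d\n", e.Hash, e.Name, e.Stage)
    }
}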

View File

@ -0,0 +1,165 @@
package index
import (
"bytes"
"errors"
"io"
"sort"
"time"
"github.com/go-git/go-git/v5/plumbing/hash"
"github.com/go-git/go-git/v5/utils/binary"
)
var (
// EncodeVersionSupported is the maximum index version supported by Encode
EncodeVersionSupported uint32 = 3
// ErrInvalidTimestamp is returned by Encode if the Index contains an Entry
// with negative timestamp values
ErrInvalidTimestamp = errors.New("negative timestamps are not allowed")
)
// An Encoder writes an Index to an output stream.
type Encoder struct {
w io.Writer
hash hash.Hash
}
// NewEncoder returns a new encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
h := hash.New(hash.CryptoType)
mw := io.MultiWriter(w, h)
return &Encoder{mw, h}
}
// Encode writes the Index to the stream of the encoder.
func (e *Encoder) Encode(idx *Index) error {
// TODO: support v4
// TODO: support extensions
if idx.Version > EncodeVersionSupported {
return ErrUnsupportedVersion
}
if err := e.encodeHeader(idx); err != nil {
return err
}
if err := e.encodeEntries(idx); err != nil {
return err
}
return e.encodeFooter()
}
func (e *Encoder) encodeHeader(idx *Index) error {
return binary.Write(e.w,
indexSignature,
idx.Version,
uint32(len(idx.Entries)),
)
}
func (e *Encoder) encodeEntries(idx *Index) error {
sort.Sort(byName(idx.Entries))
for _, entry := range idx.Entries {
if err := e.encodeEntry(entry); err != nil {
return err
}
entryLength := entryHeaderLength
if entry.IntentToAdd || entry.SkipWorktree {
entryLength += 2
}
wrote := entryLength + len(entry.Name)
if err := e.padEntry(wrote); err != nil {
return err
}
}
return nil
}
func (e *Encoder) encodeEntry(entry *Entry) error {
sec, nsec, err := e.timeToUint32(&entry.CreatedAt)
if err != nil {
return err
}
msec, mnsec, err := e.timeToUint32(&entry.ModifiedAt)
if err != nil {
return err
}
flags := uint16(entry.Stage&0x3) << 12
if l := len(entry.Name); l < nameMask {
flags |= uint16(l)
} else {
flags |= nameMask
}
flow := []interface{}{
sec, nsec,
msec, mnsec,
entry.Dev,
entry.Inode,
entry.Mode,
entry.UID,
entry.GID,
entry.Size,
entry.Hash[:],
}
flagsFlow := []interface{}{flags}
if entry.IntentToAdd || entry.SkipWorktree {
var extendedFlags uint16
if entry.IntentToAdd {
extendedFlags |= intentToAddMask
}
if entry.SkipWorktree {
extendedFlags |= skipWorkTreeMask
}
flagsFlow = []interface{}{flags | entryExtended, extendedFlags}
}
flow = append(flow, flagsFlow...)
if err := binary.Write(e.w, flow...); err != nil {
return err
}
return binary.Write(e.w, []byte(entry.Name))
}
func (e *Encoder) timeToUint32(t *time.Time) (uint32, uint32, error) {
if t.IsZero() {
return 0, 0, nil
}
if t.Unix() < 0 || t.UnixNano() < 0 {
return 0, 0, ErrInvalidTimestamp
}
return uint32(t.Unix()), uint32(t.Nanosecond()), nil
}
func (e *Encoder) padEntry(wrote int) error {
padLen := 8 - wrote%8
_, err := e.w.Write(bytes.Repeat([]byte{'\x00'}, padLen))
return err
}
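// For illustration (assuming entryHeaderLength is 62, the size of the fixed
// on-disk fields): an entry named "a.txt" writes 62+5 = 67 bytes, so padEntry
// adds 8-67%8 = 5 NUL bytes and the entry ends on a 72-byte boundary, a
// multiple of 8 as the index format requires.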
func (e *Encoder) encodeFooter() error {
return binary.Write(e.w, e.hash.Sum(nil))
}
type byName []*Entry
func (l byName) Len() int { return len(l) }
func (l byName) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
func (l byName) Less(i, j int) bool { return l[i].Name < l[j].Name }
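// A minimal usage sketch (illustrative, not part of this file): encoding an
// in-memory Index into a buffer; it assumes idx is an *Index built elsewhere.
//
//     var buf bytes.Buffer
//     enc := NewEncoder(&buf)
//     if err := enc.Encode(idx); err != nil {
//         // handle ErrUnsupportedVersion, ErrInvalidTimestamp, ...
//     }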

View File

@ -0,0 +1,231 @@
package index
import (
"bytes"
"errors"
"fmt"
"path/filepath"
"strings"
"time"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
)
var (
// ErrUnsupportedVersion is returned by Decode when the index file version
// is not supported.
ErrUnsupportedVersion = errors.New("unsupported version")
// ErrEntryNotFound is returned by Index.Entry, if an entry is not found.
ErrEntryNotFound = errors.New("entry not found")
indexSignature = []byte{'D', 'I', 'R', 'C'}
treeExtSignature = []byte{'T', 'R', 'E', 'E'}
resolveUndoExtSignature = []byte{'R', 'E', 'U', 'C'}
endOfIndexEntryExtSignature = []byte{'E', 'O', 'I', 'E'}
)
// Stage represents the stage of an entry during a merge
type Stage int
const (
// Merged is the default stage, fully merged
Merged Stage = 1
// AncestorMode is the base revision
AncestorMode Stage = 1
// OurMode is the first tree revision, ours
OurMode Stage = 2
// TheirMode is the second tree revision, theirs
TheirMode Stage = 3
)
// Index contains the information about which objects are currently checked out
// in the worktree, along with stat information about the working files. Changes
// in the worktree are detected using this Index. The Index is also used during merges.
type Index struct {
// Version is the index version
Version uint32
// Entries is the collection of entries represented by this Index. The order
// of this collection is not guaranteed
Entries []*Entry
// Cache represents the 'Cached tree' extension
Cache *Tree
// ResolveUndo represents the 'Resolve undo' extension
ResolveUndo *ResolveUndo
// EndOfIndexEntry represents the 'End of Index Entry' extension
EndOfIndexEntry *EndOfIndexEntry
}
// Add creates a new Entry and returns it. The caller should first check that
// another entry with the same path does not exist.
func (i *Index) Add(path string) *Entry {
e := &Entry{
Name: filepath.ToSlash(path),
}
i.Entries = append(i.Entries, e)
return e
}
// Entry returns the entry that matches the given path, if any.
func (i *Index) Entry(path string) (*Entry, error) {
path = filepath.ToSlash(path)
for _, e := range i.Entries {
if e.Name == path {
return e, nil
}
}
return nil, ErrEntryNotFound
}
// Remove removes the entry that matches the given path and returns the deleted entry.
func (i *Index) Remove(path string) (*Entry, error) {
path = filepath.ToSlash(path)
for index, e := range i.Entries {
if e.Name == path {
i.Entries = append(i.Entries[:index], i.Entries[index+1:]...)
return e, nil
}
}
return nil, ErrEntryNotFound
}
// Glob returns all entries matching pattern, or nil if there is no matching
// entry. The syntax of patterns is the same as in filepath.Glob.
func (i *Index) Glob(pattern string) (matches []*Entry, err error) {
pattern = filepath.ToSlash(pattern)
for _, e := range i.Entries {
m, err := match(pattern, e.Name)
if err != nil {
return nil, err
}
if m {
matches = append(matches, e)
}
}
return
}
// String is equivalent to `git ls-files --stage --debug`
func (i *Index) String() string {
buf := bytes.NewBuffer(nil)
for _, e := range i.Entries {
buf.WriteString(e.String())
}
return buf.String()
}
// Entry represents a single file (or stage of a file) in the cache. An entry
// represents exactly one stage of a file. If a file path is unmerged then
// multiple Entry instances may appear for the same path name.
type Entry struct {
// Hash is the SHA1 of the represented file
Hash plumbing.Hash
// Name is the Entry path name relative to top level directory
Name string
// CreatedAt time when the tracked path was created
CreatedAt time.Time
// ModifiedAt time when the tracked path was changed
ModifiedAt time.Time
// Dev and Inode of the tracked path
Dev, Inode uint32
// Mode of the path
Mode filemode.FileMode
// UID and GID, userid and group id of the owner
UID, GID uint32
// Size is the length in bytes for regular files
Size uint32
// Stage, during a merge, defines which stage this entry represents
// https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging
Stage Stage
// SkipWorktree used in sparse checkouts
// https://git-scm.com/docs/git-read-tree#_sparse_checkout
SkipWorktree bool
// IntentToAdd records only the fact that the path will be added later
// https://git-scm.com/docs/git-add ("git add -N")
IntentToAdd bool
}
func (e Entry) String() string {
buf := bytes.NewBuffer(nil)
fmt.Fprintf(buf, "%06o %s %d\t%s\n", e.Mode, e.Hash, e.Stage, e.Name)
fmt.Fprintf(buf, " ctime: %d:%d\n", e.CreatedAt.Unix(), e.CreatedAt.Nanosecond())
fmt.Fprintf(buf, " mtime: %d:%d\n", e.ModifiedAt.Unix(), e.ModifiedAt.Nanosecond())
fmt.Fprintf(buf, " dev: %d\tino: %d\n", e.Dev, e.Inode)
fmt.Fprintf(buf, " uid: %d\tgid: %d\n", e.UID, e.GID)
fmt.Fprintf(buf, " size: %d\tflags: %x\n", e.Size, 0)
return buf.String()
}
// Tree contains pre-computed hashes for trees that can be derived from the
// index. It helps speed up tree object generation from index for a new commit.
type Tree struct {
Entries []TreeEntry
}
// TreeEntry is an entry of a cached Tree
type TreeEntry struct {
// Path component (relative to its parent directory)
Path string
// Entries is the number of entries in the index that are covered by the tree
// this entry represents.
Entries int
// Trees is the number of subtrees this tree has
Trees int
// Hash object name for the object that would result from writing this span
// of index as a tree.
Hash plumbing.Hash
}
// ResolveUndo represents the resolve undo extension. When a conflict is
// resolved (e.g. with "git add path"), the higher stage entries are removed
// and a stage-0 entry with the proper resolution is added; the removed higher
// stage entries are saved in this extension.
type ResolveUndo struct {
Entries []ResolveUndoEntry
}
// ResolveUndoEntry contains the information about a conflict when it is resolved
type ResolveUndoEntry struct {
Path string
Stages map[Stage]plumbing.Hash
}
// EndOfIndexEntry is the End of Index Entry (EOIE) extension, which is used to
// locate the end of the variable length index entries and the beginning of the
// extensions. Code can take advantage of this to quickly locate the index
// extensions without having to parse through all of the index entries.
//
// Because it must be able to be loaded before the variable length cache
// entries and other index extensions, this extension must be written last.
type EndOfIndexEntry struct {
// Offset to the end of the index entries
Offset uint32
// Hash is a SHA-1 over the extension types and their sizes (but not
// their contents).
Hash plumbing.Hash
}
// SkipUnless applies patterns in the form of A, A/B, A/B/C to the index,
// marking every entry that does not match any of the patterns as SkipWorktree
// so it is not checked out
func (i *Index) SkipUnless(patterns []string) {
for _, e := range i.Entries {
var include bool
for _, pattern := range patterns {
if strings.HasPrefix(e.Name, pattern) {
include = true
break
}
}
if !include {
e.SkipWorktree = true
}
}
}
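// For illustration (hypothetical paths): given an index with the entries
// "docs/readme.md", "src/main.go" and "vendor/lib.go",
//
//     idx.SkipUnless([]string{"docs", "src"})
//
// leaves the first two entries untouched and sets SkipWorktree on
// "vendor/lib.go", so only the matching paths are checked out.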

View File

@ -0,0 +1,186 @@
package index
import (
"path/filepath"
"runtime"
"unicode/utf8"
)
// match is filepath.Match with support for matching full paths, not only filenames
// code from:
// https://github.com/golang/go/blob/39852bf4cce6927e01d0136c7843f65a801738cb/src/path/filepath/match.go#L44-L224
func match(pattern, name string) (matched bool, err error) {
Pattern:
for len(pattern) > 0 {
var star bool
var chunk string
star, chunk, pattern = scanChunk(pattern)
// Look for match at current position.
t, ok, err := matchChunk(chunk, name)
// if we're the last chunk, make sure we've exhausted the name
// otherwise we'll give a false result even if we could still match
// using the star
if ok && (len(t) == 0 || len(pattern) > 0) {
name = t
continue
}
if err != nil {
return false, err
}
if star {
// Look for match skipping i+1 bytes.
// Unlike filepath.Match, '*' here may also skip '/' bytes, so a
// pattern can match across path separators.
for i := 0; i < len(name); i++ {
t, ok, err := matchChunk(chunk, name[i+1:])
if ok {
// if we're the last chunk, make sure we exhausted the name
if len(pattern) == 0 && len(t) > 0 {
continue
}
name = t
continue Pattern
}
if err != nil {
return false, err
}
}
}
return false, nil
}
return len(name) == 0, nil
}
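// For illustration of the difference from the standard library:
//
//     ok, _ := match("foo/*", "foo/bar/baz.go") // ok == true
//
// whereas filepath.Match returns false for the same arguments, because its
// '*' never matches a path separator.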
// scanChunk gets the next segment of pattern, which is a non-star string
// possibly preceded by a star.
func scanChunk(pattern string) (star bool, chunk, rest string) {
for len(pattern) > 0 && pattern[0] == '*' {
pattern = pattern[1:]
star = true
}
inrange := false
var i int
Scan:
for i = 0; i < len(pattern); i++ {
switch pattern[i] {
case '\\':
if runtime.GOOS != "windows" {
// error check handled in matchChunk: bad pattern.
if i+1 < len(pattern) {
i++
}
}
case '[':
inrange = true
case ']':
inrange = false
case '*':
if !inrange {
break Scan
}
}
}
return star, pattern[0:i], pattern[i:]
}
// matchChunk checks whether chunk matches the beginning of s.
// If so, it returns the remainder of s (after the match).
// Chunk is all single-character operators: literals, char classes, and ?.
func matchChunk(chunk, s string) (rest string, ok bool, err error) {
for len(chunk) > 0 {
if len(s) == 0 {
return
}
switch chunk[0] {
case '[':
// character class
r, n := utf8.DecodeRuneInString(s)
s = s[n:]
chunk = chunk[1:]
// We can't end right after '[', we're expecting at least
// a closing bracket and possibly a caret.
if len(chunk) == 0 {
err = filepath.ErrBadPattern
return
}
// possibly negated
negated := chunk[0] == '^'
if negated {
chunk = chunk[1:]
}
// parse all ranges
match := false
nrange := 0
for {
if len(chunk) > 0 && chunk[0] == ']' && nrange > 0 {
chunk = chunk[1:]
break
}
var lo, hi rune
if lo, chunk, err = getEsc(chunk); err != nil {
return
}
hi = lo
if chunk[0] == '-' {
if hi, chunk, err = getEsc(chunk[1:]); err != nil {
return
}
}
if lo <= r && r <= hi {
match = true
}
nrange++
}
if match == negated {
return
}
case '?':
_, n := utf8.DecodeRuneInString(s)
s = s[n:]
chunk = chunk[1:]
case '\\':
if runtime.GOOS != "windows" {
chunk = chunk[1:]
if len(chunk) == 0 {
err = filepath.ErrBadPattern
return
}
}
fallthrough
default:
if chunk[0] != s[0] {
return
}
s = s[1:]
chunk = chunk[1:]
}
}
return s, true, nil
}
// getEsc gets a possibly-escaped character from chunk, for a character class.
func getEsc(chunk string) (r rune, nchunk string, err error) {
if len(chunk) == 0 || chunk[0] == '-' || chunk[0] == ']' {
err = filepath.ErrBadPattern
return
}
if chunk[0] == '\\' && runtime.GOOS != "windows" {
chunk = chunk[1:]
if len(chunk) == 0 {
err = filepath.ErrBadPattern
return
}
}
r, n := utf8.DecodeRuneInString(chunk)
if r == utf8.RuneError && n == 1 {
err = filepath.ErrBadPattern
}
nchunk = chunk[n:]
if len(nchunk) == 0 {
err = filepath.ErrBadPattern
}
return
}

View File

@ -0,0 +1,2 @@
// Package objfile implements encoding and decoding of object files.
package objfile

View File

@ -0,0 +1,117 @@
package objfile
import (
"errors"
"io"
"strconv"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/format/packfile"
"github.com/go-git/go-git/v5/utils/sync"
)
var (
ErrClosed = errors.New("objfile: already closed")
ErrHeader = errors.New("objfile: invalid header")
ErrNegativeSize = errors.New("objfile: negative object size")
)
// Reader reads and decodes compressed objfile data from a provided io.Reader.
// Reader implements io.ReadCloser. Close should be called when finished with
// the Reader. Close will not close the underlying io.Reader.
type Reader struct {
multi io.Reader
zlib io.Reader
zlibref sync.ZLibReader
hasher plumbing.Hasher
}
// NewReader returns a new Reader reading from r.
func NewReader(r io.Reader) (*Reader, error) {
zlib, err := sync.GetZlibReader(r)
if err != nil {
return nil, packfile.ErrZLib.AddDetails(err.Error())
}
return &Reader{
zlib: zlib.Reader,
zlibref: zlib,
}, nil
}
// Header reads the type and the size of the object, and prepares the reader to read the object contents
func (r *Reader) Header() (t plumbing.ObjectType, size int64, err error) {
var raw []byte
raw, err = r.readUntil(' ')
if err != nil {
return
}
t, err = plumbing.ParseObjectType(string(raw))
if err != nil {
return
}
raw, err = r.readUntil(0)
if err != nil {
return
}
size, err = strconv.ParseInt(string(raw), 10, 64)
if err != nil {
err = ErrHeader
return
}
defer r.prepareForRead(t, size)
return
}
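// For illustration: the decompressed stream of a loose object starts with an
// ASCII header such as "blob 12\x00", followed by the 12 bytes of content.
// Header parses exactly that: the type up to the space, then the decimal size
// up to the NUL byte.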
// readUntil reads one byte at a time from the underlying zlib stream until it
// encounters delim or an error.
func (r *Reader) readUntil(delim byte) ([]byte, error) {
var buf [1]byte
value := make([]byte, 0, 16)
for {
if n, err := r.zlib.Read(buf[:]); err != nil && (err != io.EOF || n == 0) {
if err == io.EOF {
return nil, ErrHeader
}
return nil, err
}
if buf[0] == delim {
return value, nil
}
value = append(value, buf[0])
}
}
func (r *Reader) prepareForRead(t plumbing.ObjectType, size int64) {
r.hasher = plumbing.NewHasher(t, size)
r.multi = io.TeeReader(r.zlib, r.hasher)
}
// Read reads len(p) bytes into p from the object data stream. It returns
// the number of bytes read (0 <= n <= len(p)) and any error encountered. Even
// if Read returns n < len(p), it may use all of p as scratch space during the
// call.
//
// If Read encounters the end of the data stream it will return err == io.EOF,
// either in the current call if n > 0 or in a subsequent call.
func (r *Reader) Read(p []byte) (n int, err error) {
return r.multi.Read(p)
}
// Hash returns the hash of the object data stream that has been read so far.
func (r *Reader) Hash() plumbing.Hash {
return r.hasher.Sum()
}
// Close releases any resources consumed by the Reader. Calling Close does not
// close the wrapped io.Reader originally passed to NewReader.
func (r *Reader) Close() error {
sync.PutZlibReader(r.zlibref)
return nil
}

View File

@ -0,0 +1,112 @@
package objfile
import (
"compress/zlib"
"errors"
"io"
"strconv"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/sync"
)
var (
ErrOverflow = errors.New("objfile: declared data length exceeded (overflow)")
)
// Writer writes and encodes data in compressed objfile format to a provided
// io.Writer. Close should be called when finished with the Writer. Close will
// not close the underlying io.Writer.
type Writer struct {
raw io.Writer
hasher plumbing.Hasher
multi io.Writer
zlib *zlib.Writer
closed bool
pending int64 // number of unwritten bytes
}
// NewWriter returns a new Writer writing to w.
//
// The returned Writer implements io.WriteCloser. Close should be called when
// finished with the Writer. Close will not close the underlying io.Writer.
func NewWriter(w io.Writer) *Writer {
zlib := sync.GetZlibWriter(w)
return &Writer{
raw: w,
zlib: zlib,
}
}
// WriteHeader writes the type and the size and prepares to accept the object's
// contents. If an invalid t is provided, plumbing.ErrInvalidType is returned. If a
// negative size is provided, ErrNegativeSize is returned.
func (w *Writer) WriteHeader(t plumbing.ObjectType, size int64) error {
if !t.Valid() {
return plumbing.ErrInvalidType
}
if size < 0 {
return ErrNegativeSize
}
b := t.Bytes()
b = append(b, ' ')
b = append(b, []byte(strconv.FormatInt(size, 10))...)
b = append(b, 0)
defer w.prepareForWrite(t, size)
_, err := w.zlib.Write(b)
return err
}
func (w *Writer) prepareForWrite(t plumbing.ObjectType, size int64) {
w.pending = size
w.hasher = plumbing.NewHasher(t, size)
w.multi = io.MultiWriter(w.zlib, w.hasher)
}
// Write writes the object's contents. Write returns the error ErrOverflow if
// more than size bytes are written after WriteHeader.
func (w *Writer) Write(p []byte) (n int, err error) {
if w.closed {
return 0, ErrClosed
}
overwrite := false
if int64(len(p)) > w.pending {
p = p[0:w.pending]
overwrite = true
}
n, err = w.multi.Write(p)
w.pending -= int64(n)
if err == nil && overwrite {
err = ErrOverflow
return
}
return
}
// Hash returns the hash of the object data stream that has been written so far.
// It can be called before or after Close.
func (w *Writer) Hash() plumbing.Hash {
return w.hasher.Sum() // Not yet closed, return hash of data written so far
}
// Close releases any resources consumed by the Writer.
//
// Calling Close does not close the wrapped io.Writer originally passed to
// NewWriter.
func (w *Writer) Close() error {
defer sync.PutZlibWriter(w.zlib)
if err := w.zlib.Close(); err != nil {
return err
}
w.closed = true
return nil
}
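// A minimal usage sketch (illustrative): writing a 12-byte blob, where out is
// any io.Writer.
//
//     w := NewWriter(out)
//     _ = w.WriteHeader(plumbing.BlobObject, 12)
//     _, _ = w.Write([]byte("hello world\n"))
//     id := w.Hash() // hash of the encoded object
//     _ = w.Close()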

View File

@ -0,0 +1,60 @@
package packfile
import (
"io"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
)
var signature = []byte{'P', 'A', 'C', 'K'}
const (
// VersionSupported is the packfile version supported by this package
VersionSupported uint32 = 2
firstLengthBits = uint8(4) // the first byte of an object header has 4 bits to store the length
lengthBits = uint8(7) // subsequent bytes have 7 bits to store the length
maskFirstLength = 15 // 0000 1111
maskContinue = 0x80 // 1000 0000
maskLength = uint8(127) // 0111 1111
maskType = uint8(112) // 0111 0000
)
// UpdateObjectStorage updates the storer with the objects in the given
// packfile.
func UpdateObjectStorage(s storer.Storer, packfile io.Reader) error {
if pw, ok := s.(storer.PackfileWriter); ok {
return WritePackfileToObjectStorage(pw, packfile)
}
p, err := NewParserWithStorage(NewScanner(packfile), s)
if err != nil {
return err
}
_, err = p.Parse()
return err
}
// WritePackfileToObjectStorage writes all the packfile objects into the given
// object storage.
func WritePackfileToObjectStorage(
sw storer.PackfileWriter,
packfile io.Reader,
) (err error) {
w, err := sw.PackfileWriter()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
var n int64
n, err = io.Copy(w, packfile)
if err == nil && n == 0 {
return ErrEmptyPackfile
}
return err
}
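// A minimal usage sketch (illustrative): st is any storer.Storer (for example
// go-git's in-memory storage) and r is an io.Reader over a pack stream.
//
//     if err := UpdateObjectStorage(st, r); err != nil {
//         // handle error (e.g. ErrEmptyPackfile)
//     }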

View File

@ -0,0 +1,297 @@
package packfile
const blksz = 16
const maxChainLength = 64
// deltaIndex is a modified version of JGit's DeltaIndex adapted to our current
// design.
type deltaIndex struct {
table []int
entries []int
mask int
}
func (idx *deltaIndex) init(buf []byte) {
scanner := newDeltaIndexScanner(buf, len(buf))
idx.mask = scanner.mask
idx.table = scanner.table
idx.entries = make([]int, countEntries(scanner)+1)
idx.copyEntries(scanner)
}
// findMatch returns the offset in src where the block starting at tgtOffset
// is found, and the length of the match. A length of 0 means there was no
// match. A length of -1 means the src length is lower than blksz; any other
// positive length is the length of the match in bytes.
func (idx *deltaIndex) findMatch(src, tgt []byte, tgtOffset int) (srcOffset, l int) {
if len(tgt) < tgtOffset+s {
return 0, len(tgt) - tgtOffset
}
if len(src) < blksz {
return 0, -1
}
if len(tgt) >= tgtOffset+s && len(src) >= blksz {
h := hashBlock(tgt, tgtOffset)
tIdx := h & idx.mask
eIdx := idx.table[tIdx]
if eIdx != 0 {
srcOffset = idx.entries[eIdx]
} else {
return
}
l = matchLength(src, tgt, tgtOffset, srcOffset)
}
return
}
func matchLength(src, tgt []byte, otgt, osrc int) (l int) {
lensrc := len(src)
lentgt := len(tgt)
for (osrc < lensrc && otgt < lentgt) && src[osrc] == tgt[otgt] {
l++
osrc++
otgt++
}
return
}
func countEntries(scan *deltaIndexScanner) (cnt int) {
// Figure out exactly how many entries we need. As we do the
// enumeration truncate any delta chains longer than what we
// are willing to scan during encode. This keeps the encode
// logic linear in the size of the input rather than quadratic.
for i := 0; i < len(scan.table); i++ {
h := scan.table[i]
if h == 0 {
continue
}
size := 0
for {
size++
if size == maxChainLength {
scan.next[h] = 0
break
}
h = scan.next[h]
if h == 0 {
break
}
}
cnt += size
}
return
}
func (idx *deltaIndex) copyEntries(scanner *deltaIndexScanner) {
// Rebuild the entries list from the scanner, positioning all
// blocks in the same hash chain next to each other. We can
// then later discard the next list, along with the scanner.
//
next := 1
for i := 0; i < len(idx.table); i++ {
h := idx.table[i]
if h == 0 {
continue
}
idx.table[i] = next
for {
idx.entries[next] = scanner.entries[h]
next++
h = scanner.next[h]
if h == 0 {
break
}
}
}
}
type deltaIndexScanner struct {
table []int
entries []int
next []int
mask int
count int
}
func newDeltaIndexScanner(buf []byte, size int) *deltaIndexScanner {
size -= size % blksz
worstCaseBlockCnt := size / blksz
if worstCaseBlockCnt < 1 {
return new(deltaIndexScanner)
}
tableSize := tableSize(worstCaseBlockCnt)
scanner := &deltaIndexScanner{
table: make([]int, tableSize),
mask: tableSize - 1,
entries: make([]int, worstCaseBlockCnt+1),
next: make([]int, worstCaseBlockCnt+1),
}
scanner.scan(buf, size)
return scanner
}
// slightly modified version of JGit's DeltaIndexScanner. We store the offset on the entries
// instead of the entries and the key, so we avoid operations to retrieve the offset later, as
// we don't use the key.
// See: https://github.com/eclipse/jgit/blob/005e5feb4ecd08c4e4d141a38b9e7942accb3212/org.eclipse.jgit/src/org/eclipse/jgit/internal/storage/pack/DeltaIndexScanner.java
func (s *deltaIndexScanner) scan(buf []byte, end int) {
lastHash := 0
ptr := end - blksz
for {
key := hashBlock(buf, ptr)
tIdx := key & s.mask
head := s.table[tIdx]
if head != 0 && lastHash == key {
s.entries[head] = ptr
} else {
s.count++
eIdx := s.count
s.entries[eIdx] = ptr
s.next[eIdx] = head
s.table[tIdx] = eIdx
}
lastHash = key
ptr -= blksz
if 0 > ptr {
break
}
}
}
func tableSize(worstCaseBlockCnt int) int {
shift := 32 - leadingZeros(uint32(worstCaseBlockCnt))
sz := 1 << uint(shift-1)
if sz < worstCaseBlockCnt {
sz <<= 1
}
return sz
}
// use https://golang.org/pkg/math/bits/#LeadingZeros32 in the future
func leadingZeros(x uint32) (n int) {
if x >= 1<<16 {
x >>= 16
n = 16
}
if x >= 1<<8 {
x >>= 8
n += 8
}
n += int(len8tab[x])
return 32 - n
}
var len8tab = [256]uint8{
0x00, 0x01, 0x02, 0x02, 0x03, 0x03, 0x03, 0x03, 0x04, 0x04, 0x04, 0x04, 0x04, 0x04, 0x04, 0x04,
0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05,
0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06,
0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06,
0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08, 0x08,
}
func hashBlock(raw []byte, ptr int) int {
// The first 4 steps collapse out into a 4 byte big-endian decode,
// with a larger right shift as we combined shift lefts together.
//
hash := ((uint32(raw[ptr]) & 0xff) << 24) |
((uint32(raw[ptr+1]) & 0xff) << 16) |
((uint32(raw[ptr+2]) & 0xff) << 8) |
(uint32(raw[ptr+3]) & 0xff)
hash ^= T[hash>>31]
hash = ((hash << 8) | (uint32(raw[ptr+4]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+5]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+6]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+7]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+8]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+9]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+10]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+11]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+12]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+13]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+14]) & 0xff)) ^ T[hash>>23]
hash = ((hash << 8) | (uint32(raw[ptr+15]) & 0xff)) ^ T[hash>>23]
return int(hash)
}
var T = []uint32{0x00000000, 0xd4c6b32d, 0x7d4bd577,
0xa98d665a, 0x2e5119c3, 0xfa97aaee, 0x531accb4, 0x87dc7f99,
0x5ca23386, 0x886480ab, 0x21e9e6f1, 0xf52f55dc, 0x72f32a45,
0xa6359968, 0x0fb8ff32, 0xdb7e4c1f, 0x6d82d421, 0xb944670c,
0x10c90156, 0xc40fb27b, 0x43d3cde2, 0x97157ecf, 0x3e981895,
0xea5eabb8, 0x3120e7a7, 0xe5e6548a, 0x4c6b32d0, 0x98ad81fd,
0x1f71fe64, 0xcbb74d49, 0x623a2b13, 0xb6fc983e, 0x0fc31b6f,
0xdb05a842, 0x7288ce18, 0xa64e7d35, 0x219202ac, 0xf554b181,
0x5cd9d7db, 0x881f64f6, 0x536128e9, 0x87a79bc4, 0x2e2afd9e,
0xfaec4eb3, 0x7d30312a, 0xa9f68207, 0x007be45d, 0xd4bd5770,
0x6241cf4e, 0xb6877c63, 0x1f0a1a39, 0xcbcca914, 0x4c10d68d,
0x98d665a0, 0x315b03fa, 0xe59db0d7, 0x3ee3fcc8, 0xea254fe5,
0x43a829bf, 0x976e9a92, 0x10b2e50b, 0xc4745626, 0x6df9307c,
0xb93f8351, 0x1f8636de, 0xcb4085f3, 0x62cde3a9, 0xb60b5084,
0x31d72f1d, 0xe5119c30, 0x4c9cfa6a, 0x985a4947, 0x43240558,
0x97e2b675, 0x3e6fd02f, 0xeaa96302, 0x6d751c9b, 0xb9b3afb6,
0x103ec9ec, 0xc4f87ac1, 0x7204e2ff, 0xa6c251d2, 0x0f4f3788,
0xdb8984a5, 0x5c55fb3c, 0x88934811, 0x211e2e4b, 0xf5d89d66,
0x2ea6d179, 0xfa606254, 0x53ed040e, 0x872bb723, 0x00f7c8ba,
0xd4317b97, 0x7dbc1dcd, 0xa97aaee0, 0x10452db1, 0xc4839e9c,
0x6d0ef8c6, 0xb9c84beb, 0x3e143472, 0xead2875f, 0x435fe105,
0x97995228, 0x4ce71e37, 0x9821ad1a, 0x31accb40, 0xe56a786d,
0x62b607f4, 0xb670b4d9, 0x1ffdd283, 0xcb3b61ae, 0x7dc7f990,
0xa9014abd, 0x008c2ce7, 0xd44a9fca, 0x5396e053, 0x8750537e,
0x2edd3524, 0xfa1b8609, 0x2165ca16, 0xf5a3793b, 0x5c2e1f61,
0x88e8ac4c, 0x0f34d3d5, 0xdbf260f8, 0x727f06a2, 0xa6b9b58f,
0x3f0c6dbc, 0xebcade91, 0x4247b8cb, 0x96810be6, 0x115d747f,
0xc59bc752, 0x6c16a108, 0xb8d01225, 0x63ae5e3a, 0xb768ed17,
0x1ee58b4d, 0xca233860, 0x4dff47f9, 0x9939f4d4, 0x30b4928e,
0xe47221a3, 0x528eb99d, 0x86480ab0, 0x2fc56cea, 0xfb03dfc7,
0x7cdfa05e, 0xa8191373, 0x01947529, 0xd552c604, 0x0e2c8a1b,
0xdaea3936, 0x73675f6c, 0xa7a1ec41, 0x207d93d8, 0xf4bb20f5,
0x5d3646af, 0x89f0f582, 0x30cf76d3, 0xe409c5fe, 0x4d84a3a4,
0x99421089, 0x1e9e6f10, 0xca58dc3d, 0x63d5ba67, 0xb713094a,
0x6c6d4555, 0xb8abf678, 0x11269022, 0xc5e0230f, 0x423c5c96,
0x96faefbb, 0x3f7789e1, 0xebb13acc, 0x5d4da2f2, 0x898b11df,
0x20067785, 0xf4c0c4a8, 0x731cbb31, 0xa7da081c, 0x0e576e46,
0xda91dd6b, 0x01ef9174, 0xd5292259, 0x7ca44403, 0xa862f72e,
0x2fbe88b7, 0xfb783b9a, 0x52f55dc0, 0x8633eeed, 0x208a5b62,
0xf44ce84f, 0x5dc18e15, 0x89073d38, 0x0edb42a1, 0xda1df18c,
0x739097d6, 0xa75624fb, 0x7c2868e4, 0xa8eedbc9, 0x0163bd93,
0xd5a50ebe, 0x52797127, 0x86bfc20a, 0x2f32a450, 0xfbf4177d,
0x4d088f43, 0x99ce3c6e, 0x30435a34, 0xe485e919, 0x63599680,
0xb79f25ad, 0x1e1243f7, 0xcad4f0da, 0x11aabcc5, 0xc56c0fe8,
0x6ce169b2, 0xb827da9f, 0x3ffba506, 0xeb3d162b, 0x42b07071,
0x9676c35c, 0x2f49400d, 0xfb8ff320, 0x5202957a, 0x86c42657,
0x011859ce, 0xd5deeae3, 0x7c538cb9, 0xa8953f94, 0x73eb738b,
0xa72dc0a6, 0x0ea0a6fc, 0xda6615d1, 0x5dba6a48, 0x897cd965,
0x20f1bf3f, 0xf4370c12, 0x42cb942c, 0x960d2701, 0x3f80415b,
0xeb46f276, 0x6c9a8def, 0xb85c3ec2, 0x11d15898, 0xc517ebb5,
0x1e69a7aa, 0xcaaf1487, 0x632272dd, 0xb7e4c1f0, 0x3038be69,
0xe4fe0d44, 0x4d736b1e, 0x99b5d833,
}

View File

@ -0,0 +1,369 @@
package packfile
import (
"sort"
"sync"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
const (
// maxDepth limits how many delta-on-delta steps we can do.
// 50 is the default value used in JGit
maxDepth = int64(50)
)
// applyDelta is the set of object types that we should apply deltas
var applyDelta = map[plumbing.ObjectType]bool{
plumbing.BlobObject: true,
plumbing.TreeObject: true,
}
type deltaSelector struct {
storer storer.EncodedObjectStorer
}
func newDeltaSelector(s storer.EncodedObjectStorer) *deltaSelector {
return &deltaSelector{s}
}
// ObjectsToPack creates a list of ObjectToPack from the hashes
// provided, creating deltas when suitable, using a specific
// internal logic. `packWindow` specifies the size of the sliding
// window used to compare objects for delta compression; 0 turns off
// delta compression entirely.
func (dw *deltaSelector) ObjectsToPack(
hashes []plumbing.Hash,
packWindow uint,
) ([]*ObjectToPack, error) {
otp, err := dw.objectsToPack(hashes, packWindow)
if err != nil {
return nil, err
}
if packWindow == 0 {
return otp, nil
}
dw.sort(otp)
var objectGroups [][]*ObjectToPack
var prev *ObjectToPack
i := -1
for _, obj := range otp {
if prev == nil || prev.Type() != obj.Type() {
objectGroups = append(objectGroups, []*ObjectToPack{obj})
i++
prev = obj
} else {
objectGroups[i] = append(objectGroups[i], obj)
}
}
var wg sync.WaitGroup
var once sync.Once
for _, objs := range objectGroups {
objs := objs
wg.Add(1)
go func() {
if walkErr := dw.walk(objs, packWindow); walkErr != nil {
once.Do(func() {
err = walkErr
})
}
wg.Done()
}()
}
wg.Wait()
if err != nil {
return nil, err
}
return otp, nil
}
func (dw *deltaSelector) objectsToPack(
hashes []plumbing.Hash,
packWindow uint,
) ([]*ObjectToPack, error) {
var objectsToPack []*ObjectToPack
for _, h := range hashes {
var o plumbing.EncodedObject
var err error
if packWindow == 0 {
o, err = dw.encodedObject(h)
} else {
o, err = dw.encodedDeltaObject(h)
}
if err != nil {
return nil, err
}
otp := newObjectToPack(o)
if _, ok := o.(plumbing.DeltaObject); ok {
otp.CleanOriginal()
}
objectsToPack = append(objectsToPack, otp)
}
if packWindow == 0 {
return objectsToPack, nil
}
if err := dw.fixAndBreakChains(objectsToPack); err != nil {
return nil, err
}
return objectsToPack, nil
}
func (dw *deltaSelector) encodedDeltaObject(h plumbing.Hash) (plumbing.EncodedObject, error) {
edos, ok := dw.storer.(storer.DeltaObjectStorer)
if !ok {
return dw.encodedObject(h)
}
return edos.DeltaObject(plumbing.AnyObject, h)
}
func (dw *deltaSelector) encodedObject(h plumbing.Hash) (plumbing.EncodedObject, error) {
return dw.storer.EncodedObject(plumbing.AnyObject, h)
}
func (dw *deltaSelector) fixAndBreakChains(objectsToPack []*ObjectToPack) error {
m := make(map[plumbing.Hash]*ObjectToPack, len(objectsToPack))
for _, otp := range objectsToPack {
m[otp.Hash()] = otp
}
for _, otp := range objectsToPack {
if err := dw.fixAndBreakChainsOne(m, otp); err != nil {
return err
}
}
return nil
}
func (dw *deltaSelector) fixAndBreakChainsOne(objectsToPack map[plumbing.Hash]*ObjectToPack, otp *ObjectToPack) error {
if !otp.Object.Type().IsDelta() {
return nil
}
// Initial ObjectToPack instances might have a delta assigned to Object
// but no actual base initially. Once Base is assigned to a delta, it means
// we already fixed it.
if otp.Base != nil {
return nil
}
do, ok := otp.Object.(plumbing.DeltaObject)
if !ok {
// if this is not a DeltaObject, then we cannot retrieve its base,
// so we have to break the delta chain here.
return dw.undeltify(otp)
}
base, ok := objectsToPack[do.BaseHash()]
if !ok {
// The base of the delta is not in our list of objects to pack, so
// we break the chain.
return dw.undeltify(otp)
}
if err := dw.fixAndBreakChainsOne(objectsToPack, base); err != nil {
return err
}
otp.SetDelta(base, otp.Object)
return nil
}
func (dw *deltaSelector) restoreOriginal(otp *ObjectToPack) error {
if otp.Original != nil {
return nil
}
if !otp.Object.Type().IsDelta() {
return nil
}
obj, err := dw.encodedObject(otp.Hash())
if err != nil {
return err
}
otp.SetOriginal(obj)
return nil
}
// undeltify undeltifies an *ObjectToPack by retrieving the original object from
// the storer and resetting it.
func (dw *deltaSelector) undeltify(otp *ObjectToPack) error {
if err := dw.restoreOriginal(otp); err != nil {
return err
}
otp.Object = otp.Original
otp.Depth = 0
return nil
}
func (dw *deltaSelector) sort(objectsToPack []*ObjectToPack) {
sort.Sort(byTypeAndSize(objectsToPack))
}
func (dw *deltaSelector) walk(
objectsToPack []*ObjectToPack,
packWindow uint,
) error {
indexMap := make(map[plumbing.Hash]*deltaIndex)
for i := 0; i < len(objectsToPack); i++ {
// Clean up the index map and reconstructed delta objects for anything
// outside our pack window, to save memory.
if i > int(packWindow) {
obj := objectsToPack[i-int(packWindow)]
delete(indexMap, obj.Hash())
if obj.IsDelta() {
obj.SaveOriginalMetadata()
obj.CleanOriginal()
}
}
target := objectsToPack[i]
// If we already have a delta, we don't try to find a new one for this
// object. This happens when a delta is set to be reused from an existing
// packfile.
if target.IsDelta() {
continue
}
// We only want to create deltas from specific types.
if !applyDelta[target.Type()] {
continue
}
for j := i - 1; j >= 0 && i-j < int(packWindow); j-- {
base := objectsToPack[j]
// Objects must use only the same type as their delta base.
// Since objectsToPack is sorted by type and size, once we find
// a different type, we know we won't find more of them.
if base.Type() != target.Type() {
break
}
if err := dw.tryToDeltify(indexMap, base, target); err != nil {
return err
}
}
}
return nil
}
func (dw *deltaSelector) tryToDeltify(indexMap map[plumbing.Hash]*deltaIndex, base, target *ObjectToPack) error {
// Original object might not be present if we're reusing a delta, so we
// ensure it is restored.
if err := dw.restoreOriginal(target); err != nil {
return err
}
if err := dw.restoreOriginal(base); err != nil {
return err
}
// If the sizes are radically different, this is a bad pairing.
if target.Size() < base.Size()>>4 {
return nil
}
msz := dw.deltaSizeLimit(
target.Object.Size(),
base.Depth,
target.Depth,
target.IsDelta(),
)
// Nearly impossible to fit useful delta.
if msz <= 8 {
return nil
}
// If we have to insert a lot to make this work, find another.
if base.Size()-target.Size() > msz {
return nil
}
if _, ok := indexMap[base.Hash()]; !ok {
indexMap[base.Hash()] = new(deltaIndex)
}
// Now we can generate the delta using originals
delta, err := getDelta(indexMap[base.Hash()], base.Original, target.Original)
if err != nil {
return err
}
// use the delta only if it is smaller than the allowed size limit
if delta.Size() < msz {
target.SetDelta(base, delta)
}
return nil
}
func (dw *deltaSelector) deltaSizeLimit(targetSize int64, baseDepth int,
targetDepth int, targetDelta bool) int64 {
if !targetDelta {
// Any delta should be no more than 50% of the original size
// (for text files deflate of whole form should shrink 50%).
n := targetSize >> 1
// Evenly distribute delta size limits over allowed depth.
// If src is non-delta (depth = 0), delta <= 50% of original.
// If src is almost at limit (9/10), delta <= 10% of original.
return n * (maxDepth - int64(baseDepth)) / maxDepth
}
// With a delta base chosen any new delta must be "better".
// Retain the distribution described above.
d := int64(targetDepth)
n := targetSize
// If target depth is bigger than maxDepth, this delta is not suitable to be used.
if d >= maxDepth {
return 0
}
// If src is whole (depth=0) and base is near limit (depth=9/10)
// any delta using src can be 10x larger and still be better.
//
// If src is near limit (depth=9/10) and base is whole (depth=0)
// a new delta dependent on src must be 1/10th the size.
return n * (maxDepth - int64(baseDepth)) / (maxDepth - d)
}
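// For illustration: for a non-delta target of 1,000 bytes (targetDelta ==
// false), a base at depth 0 allows a delta of up to 500 bytes
// (n = 500, 500*(50-0)/50), while a base already at depth 25 only allows
// 250 bytes (500*(50-25)/50).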
type byTypeAndSize []*ObjectToPack
func (a byTypeAndSize) Len() int { return len(a) }
func (a byTypeAndSize) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byTypeAndSize) Less(i, j int) bool {
if a[i].Type() < a[j].Type() {
return false
}
if a[i].Type() > a[j].Type() {
return true
}
return a[i].Size() > a[j].Size()
}

View File

@ -0,0 +1,204 @@
package packfile
import (
"bytes"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
// See https://github.com/jelmer/dulwich/blob/master/dulwich/pack.py and
// https://github.com/tarruda/node-git-core/blob/master/src/js/delta.js
// for more info
const (
// Standard chunk size used to generate fingerprints
s = 16
// https://github.com/git/git/blob/f7466e94375b3be27f229c78873f0acf8301c0a5/diff-delta.c#L428
// Max size of a copy operation (64KB).
maxCopySize = 64 * 1024
// Min size of a copy operation.
minCopySize = 4
)
// GetDelta returns an EncodedObject of type OFSDeltaObject. Base and target
// objects will be loaded into memory in order to create the delta object.
// To generate the target again, you will need the obtained delta and the base object.
// An error is returned if the base or target object cannot be read.
func GetDelta(base, target plumbing.EncodedObject) (plumbing.EncodedObject, error) {
return getDelta(new(deltaIndex), base, target)
}
func getDelta(index *deltaIndex, base, target plumbing.EncodedObject) (o plumbing.EncodedObject, err error) {
br, err := base.Reader()
if err != nil {
return nil, err
}
defer ioutil.CheckClose(br, &err)
tr, err := target.Reader()
if err != nil {
return nil, err
}
defer ioutil.CheckClose(tr, &err)
bb := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(bb)
_, err = bb.ReadFrom(br)
if err != nil {
return nil, err
}
tb := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(tb)
_, err = tb.ReadFrom(tr)
if err != nil {
return nil, err
}
db := diffDelta(index, bb.Bytes(), tb.Bytes())
delta := &plumbing.MemoryObject{}
_, err = delta.Write(db)
if err != nil {
return nil, err
}
delta.SetSize(int64(len(db)))
delta.SetType(plumbing.OFSDeltaObject)
return delta, nil
}
// DiffDelta returns the delta that transforms src into tgt.
func DiffDelta(src, tgt []byte) []byte {
return diffDelta(new(deltaIndex), src, tgt)
}
func diffDelta(index *deltaIndex, src []byte, tgt []byte) []byte {
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
buf.Write(deltaEncodeSize(len(src)))
buf.Write(deltaEncodeSize(len(tgt)))
if len(index.entries) == 0 {
index.init(src)
}
ibuf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(ibuf)
for i := 0; i < len(tgt); i++ {
offset, l := index.findMatch(src, tgt, i)
if l == 0 {
// couldn't find a match, just write the current byte and continue
ibuf.WriteByte(tgt[i])
} else if l < 0 {
// src is less than blksz, copy the rest of the target to avoid
// calls to findMatch
for ; i < len(tgt); i++ {
ibuf.WriteByte(tgt[i])
}
} else if l < s {
// remaining target is less than blksz, copy what's left of it
// and avoid calls to findMatch
for j := i; j < i+l; j++ {
ibuf.WriteByte(tgt[j])
}
i += l - 1
} else {
encodeInsertOperation(ibuf, buf)
rl := l
aOffset := offset
for rl > 0 {
if rl < maxCopySize {
buf.Write(encodeCopyOperation(aOffset, rl))
break
}
buf.Write(encodeCopyOperation(aOffset, maxCopySize))
rl -= maxCopySize
aOffset += maxCopySize
}
i += l - 1
}
}
encodeInsertOperation(ibuf, buf)
// buf.Bytes() is only valid until the next modifying operation on the buffer. Copy it.
return append([]byte{}, buf.Bytes()...)
}
func encodeInsertOperation(ibuf, buf *bytes.Buffer) {
if ibuf.Len() == 0 {
return
}
b := ibuf.Bytes()
s := ibuf.Len()
o := 0
for {
if s <= 127 {
break
}
buf.WriteByte(byte(127))
buf.Write(b[o : o+127])
s -= 127
o += 127
}
buf.WriteByte(byte(s))
buf.Write(b[o : o+s])
ibuf.Reset()
}
func deltaEncodeSize(size int) []byte {
var ret []byte
c := size & 0x7f
size >>= 7
for {
if size == 0 {
break
}
ret = append(ret, byte(c|0x80))
c = size & 0x7f
size >>= 7
}
ret = append(ret, byte(c))
return ret
}
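// For illustration: deltaEncodeSize(1000) produces the bytes 0xE8 0x07.
// Each byte carries 7 bits of the size, least significant group first, and
// the 0x80 continuation bit marks every byte except the last:
// 1000 = 104 + 7*128, so the first byte is 104|0x80 = 0xE8 and the second is 7.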
func encodeCopyOperation(offset, length int) []byte {
code := 0x80
var opcodes []byte
var i uint
for i = 0; i < 4; i++ {
f := 0xff << (i * 8)
if offset&f != 0 {
opcodes = append(opcodes, byte(offset&f>>(i*8)))
code |= 0x01 << i
}
}
for i = 0; i < 3; i++ {
f := 0xff << (i * 8)
if length&f != 0 {
opcodes = append(opcodes, byte(length&f>>(i*8)))
code |= 0x10 << i
}
}
return append([]byte{byte(code)}, opcodes...)
}
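// For illustration: encodeCopyOperation(0x12345, 100) returns
// 0x97 0x45 0x23 0x01 0x64. The opcode 0x97 = 0x80 | 0x01|0x02|0x04 (three
// offset bytes follow) | 0x10 (one length byte follows); the offset bytes
// 0x45 0x23 0x01 encode 0x012345 little-endian and 0x64 encodes the length 100.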

View File

@ -0,0 +1,38 @@
// Package packfile implements encoding and decoding of packfile format.
//
// == pack-*.pack files have the following format:
//
// - A header appears at the beginning and consists of the following:
//
// 4-byte signature:
// The signature is: {'P', 'A', 'C', 'K'}
//
// 4-byte version number (network byte order):
// GIT currently accepts version number 2 or 3 but
// generates version 2 only.
//
// 4-byte number of objects contained in the pack (network byte order)
//
// Observation: we cannot have more than 4G versions ;-) and
// more than 4G objects in a pack.
//
// - The header is followed by number of object entries, each of
// which looks like this:
//
// (undeltified representation)
// n-byte type and length (3-bit type, (n-1)*7+4-bit length)
// compressed data
//
// (deltified representation)
// n-byte type and length (3-bit type, (n-1)*7+4-bit length)
// 20-byte base object name
// compressed delta data
//
// Observation: length of each object is encoded in a variable
// length format and is not constrained to 32-bit or anything.
//
// - The trailer records 20-byte SHA1 checksum of all of the above.
//
// Source:
// https://www.kernel.org/pub/software/scm/git/docs/v1.7.5/technical/pack-protocol.txt
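//
// For illustration: a commit object (type 1) of 1000 bytes is encoded as the
// two header bytes 0x98 0x3E. The first byte 0x98 = binary 1001 1000 carries
// the continuation bit, the 3-bit type (001) and the low 4 bits of the length
// (1000 & 0xF = 8); the second byte 0x3E carries the remaining bits
// (1000 >> 4 = 62).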
package packfile

View File

@ -0,0 +1,221 @@
package packfile
import (
"compress/zlib"
"fmt"
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/hash"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/binary"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// Encoder gets the data from the storage and writes it into the writer in PACK
// format
type Encoder struct {
selector *deltaSelector
w *offsetWriter
zw *zlib.Writer
hasher plumbing.Hasher
useRefDeltas bool
}
// NewEncoder creates a new packfile encoder using a specific Writer and
// EncodedObjectStorer. By default deltas used to generate the packfile will be
// OFSDeltaObject. To use Reference deltas, set useRefDeltas to true.
func NewEncoder(w io.Writer, s storer.EncodedObjectStorer, useRefDeltas bool) *Encoder {
h := plumbing.Hasher{
Hash: hash.New(hash.CryptoType),
}
mw := io.MultiWriter(w, h)
ow := newOffsetWriter(mw)
zw := zlib.NewWriter(mw)
return &Encoder{
selector: newDeltaSelector(s),
w: ow,
zw: zw,
hasher: h,
useRefDeltas: useRefDeltas,
}
}
// Encode creates a packfile containing all the objects referenced in
// hashes and writes it to the writer in the Encoder. `packWindow`
// specifies the size of the sliding window used to compare objects
// for delta compression; 0 turns off delta compression entirely.
func (e *Encoder) Encode(
hashes []plumbing.Hash,
packWindow uint,
) (plumbing.Hash, error) {
objects, err := e.selector.ObjectsToPack(hashes, packWindow)
if err != nil {
return plumbing.ZeroHash, err
}
return e.encode(objects)
}
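// A minimal usage sketch (illustrative): s is a storer.EncodedObjectStorer,
// hashes lists the objects to pack, and a packWindow of 10 is a hypothetical
// choice.
//
//     e := NewEncoder(w, s, false)
//     checksum, err := e.Encode(hashes, 10)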
func (e *Encoder) encode(objects []*ObjectToPack) (plumbing.Hash, error) {
if err := e.head(len(objects)); err != nil {
return plumbing.ZeroHash, err
}
for _, o := range objects {
if err := e.entry(o); err != nil {
return plumbing.ZeroHash, err
}
}
return e.footer()
}
func (e *Encoder) head(numEntries int) error {
return binary.Write(
e.w,
signature,
int32(VersionSupported),
int32(numEntries),
)
}
func (e *Encoder) entry(o *ObjectToPack) (err error) {
if o.WantWrite() {
// A cycle exists in this delta chain. This should only occur if a
// selected object representation disappeared during writing
// (for example due to a concurrent repack) and a different base
// was chosen, forcing a cycle. Select something other than a
// delta, and write this object.
e.selector.restoreOriginal(o)
o.BackToOriginal()
}
if o.IsWritten() {
return nil
}
o.MarkWantWrite()
if err := e.writeBaseIfDelta(o); err != nil {
return err
}
// We need to check if we already wrote that object due to a cyclic delta chain
if o.IsWritten() {
return nil
}
o.Offset = e.w.Offset()
if o.IsDelta() {
if err := e.writeDeltaHeader(o); err != nil {
return err
}
} else {
if err := e.entryHead(o.Type(), o.Size()); err != nil {
return err
}
}
e.zw.Reset(e.w)
defer ioutil.CheckClose(e.zw, &err)
or, err := o.Object.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(or, &err)
_, err = io.Copy(e.zw, or)
return err
}
func (e *Encoder) writeBaseIfDelta(o *ObjectToPack) error {
if o.IsDelta() && !o.Base.IsWritten() {
// We must write base first
return e.entry(o.Base)
}
return nil
}
func (e *Encoder) writeDeltaHeader(o *ObjectToPack) error {
// Write offset deltas by default
t := plumbing.OFSDeltaObject
if e.useRefDeltas {
t = plumbing.REFDeltaObject
}
if err := e.entryHead(t, o.Object.Size()); err != nil {
return err
}
if e.useRefDeltas {
return e.writeRefDeltaHeader(o.Base.Hash())
} else {
return e.writeOfsDeltaHeader(o)
}
}
func (e *Encoder) writeRefDeltaHeader(base plumbing.Hash) error {
return binary.Write(e.w, base)
}
func (e *Encoder) writeOfsDeltaHeader(o *ObjectToPack) error {
// for OFS_DELTA, offset of the base is interpreted as negative offset
// relative to the type-byte of the header of the ofs-delta entry.
relativeOffset := o.Offset - o.Base.Offset
if relativeOffset <= 0 {
return fmt.Errorf("bad offset for OFS_DELTA entry: %d", relativeOffset)
}
return binary.WriteVariableWidthInt(e.w, relativeOffset)
}
func (e *Encoder) entryHead(typeNum plumbing.ObjectType, size int64) error {
t := int64(typeNum)
header := []byte{}
c := (t << firstLengthBits) | (size & maskFirstLength)
size >>= firstLengthBits
for {
if size == 0 {
break
}
header = append(header, byte(c|maskContinue))
c = size & int64(maskLength)
size >>= lengthBits
}
header = append(header, byte(c))
_, err := e.w.Write(header)
return err
}
func (e *Encoder) footer() (plumbing.Hash, error) {
h := e.hasher.Sum()
return h, binary.Write(e.w, h)
}
type offsetWriter struct {
w io.Writer
offset int64
}
func newOffsetWriter(w io.Writer) *offsetWriter {
return &offsetWriter{w: w}
}
func (ow *offsetWriter) Write(p []byte) (n int, err error) {
n, err = ow.w.Write(p)
ow.offset += int64(n)
return n, err
}
func (ow *offsetWriter) Offset() int64 {
return ow.offset
}

View File

@ -0,0 +1,30 @@
package packfile
import "fmt"
// Error specifies errors returned during packfile parsing.
type Error struct {
reason, details string
}
// NewError returns a new error.
func NewError(reason string) *Error {
return &Error{reason: reason}
}
// Error returns a text representation of the error.
func (e *Error) Error() string {
if e.details == "" {
return e.reason
}
return fmt.Sprintf("%s: %s", e.reason, e.details)
}
// AddDetails adds details to an error, with additional text.
func (e *Error) AddDetails(format string, args ...interface{}) *Error {
return &Error{
reason: e.reason,
details: fmt.Sprintf(format, args...),
}
}

View File

@ -0,0 +1,119 @@
package packfile
import (
"io"
billy "github.com/go-git/go-billy/v5"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/cache"
"github.com/go-git/go-git/v5/plumbing/format/idxfile"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// FSObject is an object from the packfile on the filesystem.
type FSObject struct {
hash plumbing.Hash
offset int64
size int64
typ plumbing.ObjectType
index idxfile.Index
fs billy.Filesystem
path string
cache cache.Object
largeObjectThreshold int64
}
// NewFSObject creates a new filesystem object.
func NewFSObject(
hash plumbing.Hash,
finalType plumbing.ObjectType,
offset int64,
contentSize int64,
index idxfile.Index,
fs billy.Filesystem,
path string,
cache cache.Object,
largeObjectThreshold int64,
) *FSObject {
return &FSObject{
hash: hash,
offset: offset,
size: contentSize,
typ: finalType,
index: index,
fs: fs,
path: path,
cache: cache,
largeObjectThreshold: largeObjectThreshold,
}
}
// Reader implements the plumbing.EncodedObject interface.
func (o *FSObject) Reader() (io.ReadCloser, error) {
obj, ok := o.cache.Get(o.hash)
if ok && obj != o {
reader, err := obj.Reader()
if err != nil {
return nil, err
}
return reader, nil
}
f, err := o.fs.Open(o.path)
if err != nil {
return nil, err
}
p := NewPackfileWithCache(o.index, nil, f, o.cache, o.largeObjectThreshold)
if o.largeObjectThreshold > 0 && o.size > o.largeObjectThreshold {
// We have a big object
h, err := p.objectHeaderAtOffset(o.offset)
if err != nil {
return nil, err
}
r, err := p.getReaderDirect(h)
if err != nil {
_ = f.Close()
return nil, err
}
return ioutil.NewReadCloserWithCloser(r, f.Close), nil
}
r, err := p.getObjectContent(o.offset)
if err != nil {
_ = f.Close()
return nil, err
}
if err := f.Close(); err != nil {
return nil, err
}
return r, nil
}
// SetSize implements the plumbing.EncodedObject interface. This method
// is a noop.
func (o *FSObject) SetSize(int64) {}
// SetType implements the plumbing.EncodedObject interface. This method is
// a noop.
func (o *FSObject) SetType(plumbing.ObjectType) {}
// Hash implements the plumbing.EncodedObject interface.
func (o *FSObject) Hash() plumbing.Hash { return o.hash }
// Size implements the plumbing.EncodedObject interface.
func (o *FSObject) Size() int64 { return o.size }
// Type implements the plumbing.EncodedObject interface.
func (o *FSObject) Type() plumbing.ObjectType {
return o.typ
}
// Writer implements the plumbing.EncodedObject interface. This method always
// returns a nil writer.
func (o *FSObject) Writer() (io.WriteCloser, error) {
return nil, nil
}

View File

@ -0,0 +1,164 @@
package packfile
import (
"github.com/go-git/go-git/v5/plumbing"
)
// ObjectToPack is a representation of an object that is going to be written
// into a packfile.
type ObjectToPack struct {
// The main object to pack, it could be any object, including deltas
Object plumbing.EncodedObject
// Base is the object that a delta is based on (it could also be another delta).
// If the main object is not a delta, Base will be nil
Base *ObjectToPack
// Original is the object that we can generate by applying the delta to
// Base, or the same object as Object in the case of a non-delta
// object.
Original plumbing.EncodedObject
// Depth is the number of deltas that need to be resolved to obtain Original
// (delta based on delta based on ...)
Depth int
// Offset is the offset in the pack once the object has been written,
// or 0 if it has not been written yet
Offset int64
// Information from the original object
resolvedOriginal bool
originalType plumbing.ObjectType
originalSize int64
originalHash plumbing.Hash
}
// newObjectToPack creates a correct ObjectToPack based on a non-delta object
func newObjectToPack(o plumbing.EncodedObject) *ObjectToPack {
return &ObjectToPack{
Object: o,
Original: o,
}
}
// newDeltaObjectToPack creates a correct ObjectToPack for a delta object, based on
// its base (which could be another delta), the delta target (in this case called original),
// and the delta Object itself
func newDeltaObjectToPack(base *ObjectToPack, original, delta plumbing.EncodedObject) *ObjectToPack {
return &ObjectToPack{
Object: delta,
Base: base,
Original: original,
Depth: base.Depth + 1,
}
}
// BackToOriginal converts this ObjectToPack back to a non-deltified object if it was one
func (o *ObjectToPack) BackToOriginal() {
if o.IsDelta() && o.Original != nil {
o.Object = o.Original
o.Base = nil
o.Depth = 0
}
}
// IsWritten reports whether this ObjectToPack has
// already been written into the packfile
func (o *ObjectToPack) IsWritten() bool {
return o.Offset > 1
}
// MarkWantWrite marks this ObjectToPack as WantWrite
// to avoid delta chain loops
func (o *ObjectToPack) MarkWantWrite() {
o.Offset = 1
}
// WantWrite checks if this ObjectToPack was marked as WantWrite before
func (o *ObjectToPack) WantWrite() bool {
return o.Offset == 1
}
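// For illustration, Offset doubles as a small state machine: 0 means the
// object has not been visited yet, 1 (set by MarkWantWrite) means it is
// currently being written and guards against delta-chain cycles, and any
// value greater than 1 is the real offset of the already-written entry in
// the pack.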
// SetOriginal sets Original and saves its size, type and hash. If obj
// is nil, Original is set to nil but the previously resolved values are kept
func (o *ObjectToPack) SetOriginal(obj plumbing.EncodedObject) {
o.Original = obj
o.SaveOriginalMetadata()
}
// SaveOriginalMetadata saves size, type and hash of Original object
func (o *ObjectToPack) SaveOriginalMetadata() {
if o.Original != nil {
o.originalSize = o.Original.Size()
o.originalType = o.Original.Type()
o.originalHash = o.Original.Hash()
o.resolvedOriginal = true
}
}
// CleanOriginal sets Original to nil
func (o *ObjectToPack) CleanOriginal() {
o.Original = nil
}
func (o *ObjectToPack) Type() plumbing.ObjectType {
if o.Original != nil {
return o.Original.Type()
}
if o.resolvedOriginal {
return o.originalType
}
if o.Base != nil {
return o.Base.Type()
}
if o.Object != nil {
return o.Object.Type()
}
panic("cannot get type")
}
func (o *ObjectToPack) Hash() plumbing.Hash {
if o.Original != nil {
return o.Original.Hash()
}
if o.resolvedOriginal {
return o.originalHash
}
do, ok := o.Object.(plumbing.DeltaObject)
if ok {
return do.ActualHash()
}
panic("cannot get hash")
}
func (o *ObjectToPack) Size() int64 {
if o.Original != nil {
return o.Original.Size()
}
if o.resolvedOriginal {
return o.originalSize
}
do, ok := o.Object.(plumbing.DeltaObject)
if ok {
return do.ActualSize()
}
panic("cannot get ObjectToPack size")
}
func (o *ObjectToPack) IsDelta() bool {
return o.Base != nil
}
func (o *ObjectToPack) SetDelta(base *ObjectToPack, delta plumbing.EncodedObject) {
o.Object = delta
o.Base = base
o.Depth = base.Depth + 1
}

View File

@ -0,0 +1,641 @@
package packfile
import (
"bytes"
"fmt"
"io"
"os"
billy "github.com/go-git/go-billy/v5"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/cache"
"github.com/go-git/go-git/v5/plumbing/format/idxfile"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
var (
// ErrInvalidObject is returned by Decode when an invalid object is
// found in the packfile.
ErrInvalidObject = NewError("invalid git object")
// ErrZLib is returned by Decode when there was an error unzipping
// the packfile contents.
ErrZLib = NewError("zlib reading error")
)
// When reading small objects from packfile it is beneficial to do so at
// once to exploit the buffered I/O. In many cases the objects are so small
// that they were already loaded to memory when the object header was
// loaded from the packfile. Wrapping in FSObject would cause this buffered
// data to be thrown away and then re-read later, with the additional
// seeking causing reloads from disk. Objects smaller than this threshold
// are now always read into memory and stored in cache instead of being
// wrapped in FSObject.
const smallObjectThreshold = 16 * 1024
// Packfile allows retrieving information from inside a packfile.
type Packfile struct {
idxfile.Index
fs billy.Filesystem
file billy.File
s *Scanner
deltaBaseCache cache.Object
offsetToType map[int64]plumbing.ObjectType
largeObjectThreshold int64
}
// NewPackfileWithCache creates a new Packfile with the given object cache.
// If the filesystem is provided, the packfile will return FSObjects, otherwise
// it will return MemoryObjects.
func NewPackfileWithCache(
index idxfile.Index,
fs billy.Filesystem,
file billy.File,
cache cache.Object,
largeObjectThreshold int64,
) *Packfile {
s := NewScanner(file)
return &Packfile{
index,
fs,
file,
s,
cache,
make(map[int64]plumbing.ObjectType),
largeObjectThreshold,
}
}
// NewPackfile returns a packfile representation for the given packfile file
// and packfile idx.
// If the filesystem is provided, the packfile will return FSObjects, otherwise
// it will return MemoryObjects.
func NewPackfile(index idxfile.Index, fs billy.Filesystem, file billy.File, largeObjectThreshold int64) *Packfile {
return NewPackfileWithCache(index, fs, file, cache.NewObjectLRUDefault(), largeObjectThreshold)
}
// Get retrieves the encoded object in the packfile with the given hash.
func (p *Packfile) Get(h plumbing.Hash) (plumbing.EncodedObject, error) {
offset, err := p.FindOffset(h)
if err != nil {
return nil, err
}
return p.objectAtOffset(offset, h)
}
// GetByOffset retrieves the encoded object from the packfile at the given
// offset.
func (p *Packfile) GetByOffset(o int64) (plumbing.EncodedObject, error) {
hash, err := p.FindHash(o)
if err != nil {
return nil, err
}
return p.objectAtOffset(o, hash)
}
// GetSizeByOffset retrieves the size of the encoded object from the
// packfile with the given offset.
func (p *Packfile) GetSizeByOffset(o int64) (size int64, err error) {
if _, err := p.s.SeekFromStart(o); err != nil {
if err == io.EOF || isInvalid(err) {
return 0, plumbing.ErrObjectNotFound
}
return 0, err
}
h, err := p.nextObjectHeader()
if err != nil {
return 0, err
}
return p.getObjectSize(h)
}
func (p *Packfile) objectHeaderAtOffset(offset int64) (*ObjectHeader, error) {
h, err := p.s.SeekObjectHeader(offset)
p.s.pendingObject = nil
return h, err
}
func (p *Packfile) nextObjectHeader() (*ObjectHeader, error) {
h, err := p.s.NextObjectHeader()
p.s.pendingObject = nil
return h, err
}
func (p *Packfile) getDeltaObjectSize(buf *bytes.Buffer) int64 {
delta := buf.Bytes()
_, delta = decodeLEB128(delta) // skip src size
sz, _ := decodeLEB128(delta)
return int64(sz)
}
func (p *Packfile) getObjectSize(h *ObjectHeader) (int64, error) {
switch h.Type {
case plumbing.CommitObject, plumbing.TreeObject, plumbing.BlobObject, plumbing.TagObject:
return h.Length, nil
case plumbing.REFDeltaObject, plumbing.OFSDeltaObject:
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
if _, _, err := p.s.NextObject(buf); err != nil {
return 0, err
}
return p.getDeltaObjectSize(buf), nil
default:
return 0, ErrInvalidObject.AddDetails("type %q", h.Type)
}
}
func (p *Packfile) getObjectType(h *ObjectHeader) (typ plumbing.ObjectType, err error) {
switch h.Type {
case plumbing.CommitObject, plumbing.TreeObject, plumbing.BlobObject, plumbing.TagObject:
return h.Type, nil
case plumbing.REFDeltaObject, plumbing.OFSDeltaObject:
var offset int64
if h.Type == plumbing.REFDeltaObject {
offset, err = p.FindOffset(h.Reference)
if err != nil {
return
}
} else {
offset = h.OffsetReference
}
if baseType, ok := p.offsetToType[offset]; ok {
typ = baseType
} else {
h, err = p.objectHeaderAtOffset(offset)
if err != nil {
return
}
typ, err = p.getObjectType(h)
if err != nil {
return
}
}
default:
err = ErrInvalidObject.AddDetails("type %q", h.Type)
}
p.offsetToType[h.Offset] = typ
return
}
func (p *Packfile) objectAtOffset(offset int64, hash plumbing.Hash) (plumbing.EncodedObject, error) {
if obj, ok := p.cacheGet(hash); ok {
return obj, nil
}
h, err := p.objectHeaderAtOffset(offset)
if err != nil {
if err == io.EOF || isInvalid(err) {
return nil, plumbing.ErrObjectNotFound
}
return nil, err
}
return p.getNextObject(h, hash)
}
func (p *Packfile) getNextObject(h *ObjectHeader, hash plumbing.Hash) (plumbing.EncodedObject, error) {
var err error
// If we have no filesystem, we will return a MemoryObject instead
// of an FSObject.
if p.fs == nil {
return p.getNextMemoryObject(h)
}
// If the object is small enough then read it completely into memory now since
// it is already read from disk into buffer anyway. For delta objects we want
// to perform the optimization too, but we have to be careful about applying
// small deltas on big objects.
var size int64
if h.Length <= smallObjectThreshold {
if h.Type != plumbing.OFSDeltaObject && h.Type != plumbing.REFDeltaObject {
return p.getNextMemoryObject(h)
}
// For delta objects we read the delta data and apply the small object
// optimization only if the expanded version of the object still meets
// the small object threshold condition.
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
if _, _, err := p.s.NextObject(buf); err != nil {
return nil, err
}
size = p.getDeltaObjectSize(buf)
if size <= smallObjectThreshold {
var obj = new(plumbing.MemoryObject)
obj.SetSize(size)
if h.Type == plumbing.REFDeltaObject {
err = p.fillREFDeltaObjectContentWithBuffer(obj, h.Reference, buf)
} else {
err = p.fillOFSDeltaObjectContentWithBuffer(obj, h.OffsetReference, buf)
}
return obj, err
}
} else {
size, err = p.getObjectSize(h)
if err != nil {
return nil, err
}
}
typ, err := p.getObjectType(h)
if err != nil {
return nil, err
}
p.offsetToType[h.Offset] = typ
return NewFSObject(
hash,
typ,
h.Offset,
size,
p.Index,
p.fs,
p.file.Name(),
p.deltaBaseCache,
p.largeObjectThreshold,
), nil
}
func (p *Packfile) getObjectContent(offset int64) (io.ReadCloser, error) {
h, err := p.objectHeaderAtOffset(offset)
if err != nil {
return nil, err
}
// getObjectContent is called from FSObject, so we have to explicitly
// get a memory object here to avoid a recursive cycle
obj, err := p.getNextMemoryObject(h)
if err != nil {
return nil, err
}
return obj.Reader()
}
func asyncReader(p *Packfile) (io.ReadCloser, error) {
reader := ioutil.NewReaderUsingReaderAt(p.file, p.s.r.offset)
zr, err := sync.GetZlibReader(reader)
if err != nil {
return nil, fmt.Errorf("zlib reset error: %s", err)
}
return ioutil.NewReadCloserWithCloser(zr.Reader, func() error {
sync.PutZlibReader(zr)
return nil
}), nil
}
func (p *Packfile) getReaderDirect(h *ObjectHeader) (io.ReadCloser, error) {
switch h.Type {
case plumbing.CommitObject, plumbing.TreeObject, plumbing.BlobObject, plumbing.TagObject:
return asyncReader(p)
case plumbing.REFDeltaObject:
deltaRc, err := asyncReader(p)
if err != nil {
return nil, err
}
r, err := p.readREFDeltaObjectContent(h, deltaRc)
if err != nil {
return nil, err
}
return r, nil
case plumbing.OFSDeltaObject:
deltaRc, err := asyncReader(p)
if err != nil {
return nil, err
}
r, err := p.readOFSDeltaObjectContent(h, deltaRc)
if err != nil {
return nil, err
}
return r, nil
default:
return nil, ErrInvalidObject.AddDetails("type %q", h.Type)
}
}
func (p *Packfile) getNextMemoryObject(h *ObjectHeader) (plumbing.EncodedObject, error) {
var obj = new(plumbing.MemoryObject)
obj.SetSize(h.Length)
obj.SetType(h.Type)
var err error
switch h.Type {
case plumbing.CommitObject, plumbing.TreeObject, plumbing.BlobObject, plumbing.TagObject:
err = p.fillRegularObjectContent(obj)
case plumbing.REFDeltaObject:
err = p.fillREFDeltaObjectContent(obj, h.Reference)
case plumbing.OFSDeltaObject:
err = p.fillOFSDeltaObjectContent(obj, h.OffsetReference)
default:
err = ErrInvalidObject.AddDetails("type %q", h.Type)
}
if err != nil {
return nil, err
}
p.offsetToType[h.Offset] = obj.Type()
return obj, nil
}
func (p *Packfile) fillRegularObjectContent(obj plumbing.EncodedObject) (err error) {
w, err := obj.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
_, _, err = p.s.NextObject(w)
p.cachePut(obj)
return err
}
func (p *Packfile) fillREFDeltaObjectContent(obj plumbing.EncodedObject, ref plumbing.Hash) error {
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
_, _, err := p.s.NextObject(buf)
if err != nil {
return err
}
return p.fillREFDeltaObjectContentWithBuffer(obj, ref, buf)
}
func (p *Packfile) readREFDeltaObjectContent(h *ObjectHeader, deltaRC io.Reader) (io.ReadCloser, error) {
var err error
base, ok := p.cacheGet(h.Reference)
if !ok {
base, err = p.Get(h.Reference)
if err != nil {
return nil, err
}
}
return ReaderFromDelta(base, deltaRC)
}
func (p *Packfile) fillREFDeltaObjectContentWithBuffer(obj plumbing.EncodedObject, ref plumbing.Hash, buf *bytes.Buffer) error {
var err error
base, ok := p.cacheGet(ref)
if !ok {
base, err = p.Get(ref)
if err != nil {
return err
}
}
obj.SetType(base.Type())
err = ApplyDelta(obj, base, buf.Bytes())
p.cachePut(obj)
return err
}
func (p *Packfile) fillOFSDeltaObjectContent(obj plumbing.EncodedObject, offset int64) error {
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
_, _, err := p.s.NextObject(buf)
if err != nil {
return err
}
return p.fillOFSDeltaObjectContentWithBuffer(obj, offset, buf)
}
func (p *Packfile) readOFSDeltaObjectContent(h *ObjectHeader, deltaRC io.Reader) (io.ReadCloser, error) {
hash, err := p.FindHash(h.OffsetReference)
if err != nil {
return nil, err
}
base, err := p.objectAtOffset(h.OffsetReference, hash)
if err != nil {
return nil, err
}
return ReaderFromDelta(base, deltaRC)
}
func (p *Packfile) fillOFSDeltaObjectContentWithBuffer(obj plumbing.EncodedObject, offset int64, buf *bytes.Buffer) error {
hash, err := p.FindHash(offset)
if err != nil {
return err
}
base, err := p.objectAtOffset(offset, hash)
if err != nil {
return err
}
obj.SetType(base.Type())
err = ApplyDelta(obj, base, buf.Bytes())
p.cachePut(obj)
return err
}
func (p *Packfile) cacheGet(h plumbing.Hash) (plumbing.EncodedObject, bool) {
if p.deltaBaseCache == nil {
return nil, false
}
return p.deltaBaseCache.Get(h)
}
func (p *Packfile) cachePut(obj plumbing.EncodedObject) {
if p.deltaBaseCache == nil {
return
}
p.deltaBaseCache.Put(obj)
}
// GetAll returns an iterator with all encoded objects in the packfile.
// The iterator returned is not thread-safe; it should be used in the same
// thread as the Packfile instance.
func (p *Packfile) GetAll() (storer.EncodedObjectIter, error) {
return p.GetByType(plumbing.AnyObject)
}
// GetByType returns all the objects of the given type.
func (p *Packfile) GetByType(typ plumbing.ObjectType) (storer.EncodedObjectIter, error) {
switch typ {
case plumbing.AnyObject,
plumbing.BlobObject,
plumbing.TreeObject,
plumbing.CommitObject,
plumbing.TagObject:
entries, err := p.EntriesByOffset()
if err != nil {
return nil, err
}
return &objectIter{
// The easiest way to provide an object decoder is just to pass a Packfile
// instance. To not mess with the seeks, it's a new instance with a
// different scanner but the same cache and offset-to-hash map, for
// reusing as much cache as possible.
p: p,
iter: entries,
typ: typ,
}, nil
default:
return nil, plumbing.ErrInvalidType
}
}
// ID returns the ID of the packfile, which is the checksum at the end of it.
func (p *Packfile) ID() (plumbing.Hash, error) {
prev, err := p.file.Seek(-20, io.SeekEnd)
if err != nil {
return plumbing.ZeroHash, err
}
var hash plumbing.Hash
if _, err := io.ReadFull(p.file, hash[:]); err != nil {
return plumbing.ZeroHash, err
}
if _, err := p.file.Seek(prev, io.SeekStart); err != nil {
return plumbing.ZeroHash, err
}
return hash, nil
}
// Scanner returns the packfile's Scanner
func (p *Packfile) Scanner() *Scanner {
return p.s
}
// Close the packfile and its resources.
func (p *Packfile) Close() error {
closer, ok := p.file.(io.Closer)
if !ok {
return nil
}
return closer.Close()
}
type objectIter struct {
p *Packfile
typ plumbing.ObjectType
iter idxfile.EntryIter
}
func (i *objectIter) Next() (plumbing.EncodedObject, error) {
for {
e, err := i.iter.Next()
if err != nil {
return nil, err
}
if i.typ != plumbing.AnyObject {
if typ, ok := i.p.offsetToType[int64(e.Offset)]; ok {
if typ != i.typ {
continue
}
} else if obj, ok := i.p.cacheGet(e.Hash); ok {
if obj.Type() != i.typ {
i.p.offsetToType[int64(e.Offset)] = obj.Type()
continue
}
return obj, nil
} else {
h, err := i.p.objectHeaderAtOffset(int64(e.Offset))
if err != nil {
return nil, err
}
if h.Type == plumbing.REFDeltaObject || h.Type == plumbing.OFSDeltaObject {
typ, err := i.p.getObjectType(h)
if err != nil {
return nil, err
}
if typ != i.typ {
i.p.offsetToType[int64(e.Offset)] = typ
continue
}
// getObjectType will seek in the file so we cannot use getNextObject safely
return i.p.objectAtOffset(int64(e.Offset), e.Hash)
} else {
if h.Type != i.typ {
i.p.offsetToType[int64(e.Offset)] = h.Type
continue
}
return i.p.getNextObject(h, e.Hash)
}
}
}
obj, err := i.p.objectAtOffset(int64(e.Offset), e.Hash)
if err != nil {
return nil, err
}
return obj, nil
}
}
func (i *objectIter) ForEach(f func(plumbing.EncodedObject) error) error {
for {
o, err := i.Next()
if err != nil {
if err == io.EOF {
return nil
}
return err
}
if err := f(o); err != nil {
return err
}
}
}
func (i *objectIter) Close() {
i.iter.Close()
}
// isInvalid checks whether an error is an os.PathError with an os.ErrInvalid
// error inside. It also checks for the Windows error, which is different from
// os.ErrInvalid.
func isInvalid(err error) bool {
pe, ok := err.(*os.PathError)
if !ok {
return false
}
errstr := pe.Err.Error()
return errstr == errInvalidUnix || errstr == errInvalidWindows
}
// errInvalidWindows is the Windows equivalent to os.ErrInvalid
const errInvalidWindows = "The parameter is incorrect."
var errInvalidUnix = os.ErrInvalid.Error()
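// Illustrative sketch (editor's addition, not part of the vendored source):
// opening a packfile together with its idx and iterating over every object.
// The repository path and the "pack-xxx" file names are placeholders, and
// osfs is just one possible billy.Filesystem; treat this as a sketch under
// those assumptions.
package main

import (
	"fmt"

	"github.com/go-git/go-billy/v5/osfs"
	"github.com/go-git/go-git/v5/plumbing"
	"github.com/go-git/go-git/v5/plumbing/format/idxfile"
	"github.com/go-git/go-git/v5/plumbing/format/packfile"
)

func main() {
	fs := osfs.New("/tmp/repo/.git")

	idxFile, err := fs.Open("objects/pack/pack-xxx.idx")
	if err != nil {
		panic(err)
	}
	idx := idxfile.NewMemoryIndex()
	if err := idxfile.NewDecoder(idxFile).Decode(idx); err != nil {
		panic(err)
	}

	packFile, err := fs.Open("objects/pack/pack-xxx.pack")
	if err != nil {
		panic(err)
	}

	// Passing fs means FSObjects can be returned; 0 disables the
	// large-object threshold.
	p := packfile.NewPackfile(idx, fs, packFile, 0)
	defer p.Close()

	iter, err := p.GetAll()
	if err != nil {
		panic(err)
	}
	_ = iter.ForEach(func(obj plumbing.EncodedObject) error {
		fmt.Println(obj.Hash(), obj.Type(), obj.Size())
		return nil
	})
}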

View File

@ -0,0 +1,611 @@
package packfile
import (
"bytes"
"errors"
"fmt"
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/cache"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
var (
// ErrReferenceDeltaNotFound is returned when the reference delta is not
// found.
ErrReferenceDeltaNotFound = errors.New("reference delta not found")
// ErrNotSeekableSource is returned when the source for the parser is not
// seekable and a storage was not provided, so it can't be parsed.
ErrNotSeekableSource = errors.New("parser source is not seekable and storage was not provided")
// ErrDeltaNotCached is returned when the delta could not be found in cache.
ErrDeltaNotCached = errors.New("delta could not be found in cache")
)
// Observer interface is implemented by index encoders.
type Observer interface {
// OnHeader is called when a new packfile is opened.
OnHeader(count uint32) error
// OnInflatedObjectHeader is called for each object header read.
OnInflatedObjectHeader(t plumbing.ObjectType, objSize int64, pos int64) error
// OnInflatedObjectContent is called for each decoded object.
OnInflatedObjectContent(h plumbing.Hash, pos int64, crc uint32, content []byte) error
// OnFooter is called when decoding is done.
OnFooter(h plumbing.Hash) error
}
// Parser decodes a packfile and calls any observer associated with it. It is
// used to generate indexes.
type Parser struct {
storage storer.EncodedObjectStorer
scanner *Scanner
count uint32
oi []*objectInfo
oiByHash map[plumbing.Hash]*objectInfo
oiByOffset map[int64]*objectInfo
checksum plumbing.Hash
cache *cache.BufferLRU
// delta content by offset, only used if source is not seekable
deltas map[int64][]byte
ob []Observer
}
// NewParser creates a new Parser. The Scanner source must be seekable.
// If it's not, NewParserWithStorage should be used instead.
func NewParser(scanner *Scanner, ob ...Observer) (*Parser, error) {
return NewParserWithStorage(scanner, nil, ob...)
}
// NewParserWithStorage creates a new Parser. The scanner source must either
// be seekable or a storage must be provided.
func NewParserWithStorage(
scanner *Scanner,
storage storer.EncodedObjectStorer,
ob ...Observer,
) (*Parser, error) {
if !scanner.IsSeekable && storage == nil {
return nil, ErrNotSeekableSource
}
var deltas map[int64][]byte
if !scanner.IsSeekable {
deltas = make(map[int64][]byte)
}
return &Parser{
storage: storage,
scanner: scanner,
ob: ob,
count: 0,
cache: cache.NewBufferLRUDefault(),
deltas: deltas,
}, nil
}
func (p *Parser) forEachObserver(f func(o Observer) error) error {
for _, o := range p.ob {
if err := f(o); err != nil {
return err
}
}
return nil
}
func (p *Parser) onHeader(count uint32) error {
return p.forEachObserver(func(o Observer) error {
return o.OnHeader(count)
})
}
func (p *Parser) onInflatedObjectHeader(
t plumbing.ObjectType,
objSize int64,
pos int64,
) error {
return p.forEachObserver(func(o Observer) error {
return o.OnInflatedObjectHeader(t, objSize, pos)
})
}
func (p *Parser) onInflatedObjectContent(
h plumbing.Hash,
pos int64,
crc uint32,
content []byte,
) error {
return p.forEachObserver(func(o Observer) error {
return o.OnInflatedObjectContent(h, pos, crc, content)
})
}
func (p *Parser) onFooter(h plumbing.Hash) error {
return p.forEachObserver(func(o Observer) error {
return o.OnFooter(h)
})
}
// Parse starts the decoding phase of the packfile.
func (p *Parser) Parse() (plumbing.Hash, error) {
if err := p.init(); err != nil {
return plumbing.ZeroHash, err
}
if err := p.indexObjects(); err != nil {
return plumbing.ZeroHash, err
}
var err error
p.checksum, err = p.scanner.Checksum()
if err != nil && err != io.EOF {
return plumbing.ZeroHash, err
}
if err := p.resolveDeltas(); err != nil {
return plumbing.ZeroHash, err
}
if err := p.onFooter(p.checksum); err != nil {
return plumbing.ZeroHash, err
}
return p.checksum, nil
}
func (p *Parser) init() error {
_, c, err := p.scanner.Header()
if err != nil {
return err
}
if err := p.onHeader(c); err != nil {
return err
}
p.count = c
p.oiByHash = make(map[plumbing.Hash]*objectInfo, p.count)
p.oiByOffset = make(map[int64]*objectInfo, p.count)
p.oi = make([]*objectInfo, p.count)
return nil
}
type objectHeaderWriter func(typ plumbing.ObjectType, sz int64) error
type lazyObjectWriter interface {
// LazyWriter enables an object to be lazily written.
// It returns:
// - w: a writer to receive the object's content.
// - lwh: a func to write the object header.
// - err: any error from the initial writer creation process.
//
// Note that if the object header is not written BEFORE the writer
// is used, this will result in an invalid object.
LazyWriter() (w io.WriteCloser, lwh objectHeaderWriter, err error)
}
func (p *Parser) indexObjects() error {
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
for i := uint32(0); i < p.count; i++ {
oh, err := p.scanner.NextObjectHeader()
if err != nil {
return err
}
delta := false
var ota *objectInfo
switch t := oh.Type; t {
case plumbing.OFSDeltaObject:
delta = true
parent, ok := p.oiByOffset[oh.OffsetReference]
if !ok {
return plumbing.ErrObjectNotFound
}
ota = newDeltaObject(oh.Offset, oh.Length, t, parent)
parent.Children = append(parent.Children, ota)
case plumbing.REFDeltaObject:
delta = true
parent, ok := p.oiByHash[oh.Reference]
if !ok {
// can't find referenced object in this pack file
// this must be a "thin" pack.
parent = &objectInfo{ //Placeholder parent
SHA1: oh.Reference,
ExternalRef: true, // mark as an external reference that must be resolved
Type: plumbing.AnyObject,
DiskType: plumbing.AnyObject,
}
p.oiByHash[oh.Reference] = parent
}
ota = newDeltaObject(oh.Offset, oh.Length, t, parent)
parent.Children = append(parent.Children, ota)
default:
ota = newBaseObject(oh.Offset, oh.Length, t)
}
hasher := plumbing.NewHasher(oh.Type, oh.Length)
writers := []io.Writer{hasher}
var obj *plumbing.MemoryObject
// Lazy writing is only available for non-delta objects.
if p.storage != nil && !delta {
// When a storage is set and supports lazy writing,
// use that instead of creating a memory object.
if low, ok := p.storage.(lazyObjectWriter); ok {
ow, lwh, err := low.LazyWriter()
if err != nil {
return err
}
if err = lwh(oh.Type, oh.Length); err != nil {
return err
}
defer ow.Close()
writers = append(writers, ow)
} else {
obj = new(plumbing.MemoryObject)
obj.SetSize(oh.Length)
obj.SetType(oh.Type)
writers = append(writers, obj)
}
}
if delta && !p.scanner.IsSeekable {
buf.Reset()
buf.Grow(int(oh.Length))
writers = append(writers, buf)
}
mw := io.MultiWriter(writers...)
_, crc, err := p.scanner.NextObject(mw)
if err != nil {
return err
}
// Non-delta objects need to be added to the storage. This
// is only required when lazy writing is not supported.
if obj != nil {
if _, err := p.storage.SetEncodedObject(obj); err != nil {
return err
}
}
ota.Crc32 = crc
ota.Length = oh.Length
if !delta {
sha1 := hasher.Sum()
// Move children of placeholder parent into actual parent, in case this
// was a non-external delta reference.
if placeholder, ok := p.oiByHash[sha1]; ok {
ota.Children = placeholder.Children
for _, c := range ota.Children {
c.Parent = ota
}
}
ota.SHA1 = sha1
p.oiByHash[ota.SHA1] = ota
}
if delta && !p.scanner.IsSeekable {
data := buf.Bytes()
p.deltas[oh.Offset] = make([]byte, len(data))
copy(p.deltas[oh.Offset], data)
}
p.oiByOffset[oh.Offset] = ota
p.oi[i] = ota
}
return nil
}
func (p *Parser) resolveDeltas() error {
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
for _, obj := range p.oi {
buf.Reset()
buf.Grow(int(obj.Length))
err := p.get(obj, buf)
if err != nil {
return err
}
if err := p.onInflatedObjectHeader(obj.Type, obj.Length, obj.Offset); err != nil {
return err
}
if err := p.onInflatedObjectContent(obj.SHA1, obj.Offset, obj.Crc32, nil); err != nil {
return err
}
if !obj.IsDelta() && len(obj.Children) > 0 {
// Dealing with an io.ReaderAt object means we can
// create it once and reuse it across all children.
r := bytes.NewReader(buf.Bytes())
for _, child := range obj.Children {
// Even though we are discarding the output, we still need to read it
// so that the scanner can advance to the next object, and the SHA1 can be
// calculated.
if err := p.resolveObject(io.Discard, child, r); err != nil {
return err
}
p.resolveExternalRef(child)
}
// Remove the delta from the cache.
if obj.DiskType.IsDelta() && !p.scanner.IsSeekable {
delete(p.deltas, obj.Offset)
}
}
}
return nil
}
func (p *Parser) resolveExternalRef(o *objectInfo) {
if ref, ok := p.oiByHash[o.SHA1]; ok && ref.ExternalRef {
p.oiByHash[o.SHA1] = o
o.Children = ref.Children
for _, c := range o.Children {
c.Parent = o
}
}
}
func (p *Parser) get(o *objectInfo, buf *bytes.Buffer) (err error) {
if !o.ExternalRef { // skip cache check for placeholder parents
b, ok := p.cache.Get(o.Offset)
if ok {
_, err := buf.Write(b)
return err
}
}
// If it's not in the cache and is not a delta, we can try to find it in the
// storage, if there is one. External refs must enter here.
if p.storage != nil && !o.Type.IsDelta() {
var e plumbing.EncodedObject
e, err = p.storage.EncodedObject(plumbing.AnyObject, o.SHA1)
if err != nil {
return err
}
o.Type = e.Type()
var r io.ReadCloser
r, err = e.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
_, err = buf.ReadFrom(io.LimitReader(r, e.Size()))
return err
}
if o.ExternalRef {
// we were not able to resolve a ref in a thin pack
return ErrReferenceDeltaNotFound
}
if o.DiskType.IsDelta() {
b := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(b)
buf.Grow(int(o.Length))
err := p.get(o.Parent, b)
if err != nil {
return err
}
err = p.resolveObject(buf, o, bytes.NewReader(b.Bytes()))
if err != nil {
return err
}
} else {
err := p.readData(buf, o)
if err != nil {
return err
}
}
// If the scanner is seekable, caching this data into
// memory by offset seems wasteful.
// There is a trade-off to be considered here in terms
// of execution time vs memory consumption.
//
// TODO: improve seekable execution time, so that we can
// skip this cache.
if len(o.Children) > 0 {
data := make([]byte, buf.Len())
copy(data, buf.Bytes())
p.cache.Put(o.Offset, data)
}
return nil
}
// resolveObject resolves an object from base, using information
// provided by o.
//
// This call has the side-effect of changing field values
// from the object info o:
// - Type: OFSDeltaObject may become the target type (e.g. Blob).
// - Size: The size may be updated with the target size.
// - Hash: Zero hashes will be calculated as part of the object
// resolution. Hence this process can't be avoided even when w
// is io.Discard.
//
// base must be an io.ReaderAt, which is a requirement from
// patchDeltaStream. The main reason is that resolving a
// delta object may require going back and forth within base,
// which is not supported by io.Reader.
func (p *Parser) resolveObject(
w io.Writer,
o *objectInfo,
base io.ReaderAt,
) error {
if !o.DiskType.IsDelta() {
return nil
}
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
err := p.readData(buf, o)
if err != nil {
return err
}
writers := []io.Writer{w}
var obj *plumbing.MemoryObject
var lwh objectHeaderWriter
if p.storage != nil {
if low, ok := p.storage.(lazyObjectWriter); ok {
ow, wh, err := low.LazyWriter()
if err != nil {
return err
}
lwh = wh
defer ow.Close()
writers = append(writers, ow)
} else {
obj = new(plumbing.MemoryObject)
ow, err := obj.Writer()
if err != nil {
return err
}
writers = append(writers, ow)
}
}
mw := io.MultiWriter(writers...)
err = applyPatchBase(o, base, buf, mw, lwh)
if err != nil {
return err
}
if obj != nil {
obj.SetType(o.Type)
obj.SetSize(o.Size()) // Size here is correct as it was populated by applyPatchBase.
if _, err := p.storage.SetEncodedObject(obj); err != nil {
return err
}
}
return err
}
func (p *Parser) readData(w io.Writer, o *objectInfo) error {
if !p.scanner.IsSeekable && o.DiskType.IsDelta() {
data, ok := p.deltas[o.Offset]
if !ok {
return ErrDeltaNotCached
}
_, err := w.Write(data)
return err
}
if _, err := p.scanner.SeekObjectHeader(o.Offset); err != nil {
return err
}
if _, _, err := p.scanner.NextObject(w); err != nil {
return err
}
return nil
}
// applyPatchBase applies the patch to target.
//
// Note that ota will be updated based on the description in resolveObject.
func applyPatchBase(ota *objectInfo, base io.ReaderAt, delta io.Reader, target io.Writer, wh objectHeaderWriter) error {
if target == nil {
return fmt.Errorf("cannot apply patch against nil target")
}
typ := ota.Type
if ota.SHA1 == plumbing.ZeroHash {
typ = ota.Parent.Type
}
sz, h, err := patchDeltaWriter(target, base, delta, typ, wh)
if err != nil {
return err
}
if ota.SHA1 == plumbing.ZeroHash {
ota.Type = typ
ota.Length = int64(sz)
ota.SHA1 = h
}
return nil
}
func getSHA1(t plumbing.ObjectType, data []byte) (plumbing.Hash, error) {
hasher := plumbing.NewHasher(t, int64(len(data)))
if _, err := hasher.Write(data); err != nil {
return plumbing.ZeroHash, err
}
return hasher.Sum(), nil
}
type objectInfo struct {
Offset int64
Length int64
Type plumbing.ObjectType
DiskType plumbing.ObjectType
ExternalRef bool // indicates this is an external reference in a thin pack file
Crc32 uint32
Parent *objectInfo
Children []*objectInfo
SHA1 plumbing.Hash
}
func newBaseObject(offset, length int64, t plumbing.ObjectType) *objectInfo {
return newDeltaObject(offset, length, t, nil)
}
func newDeltaObject(
offset, length int64,
t plumbing.ObjectType,
parent *objectInfo,
) *objectInfo {
obj := &objectInfo{
Offset: offset,
Length: length,
Type: t,
DiskType: t,
Crc32: 0,
Parent: parent,
}
return obj
}
func (o *objectInfo) IsDelta() bool {
return o.Type.IsDelta()
}
func (o *objectInfo) Size() int64 {
return o.Length
}
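// Illustrative sketch (editor's addition, not part of the vendored source):
// generating an index for a packfile stream by attaching an Observer to the
// Parser. The assumption that idxfile.Writer implements Observer mirrors how
// go-git builds .idx files; the file name is a placeholder.
package main

import (
	"fmt"
	"os"

	"github.com/go-git/go-git/v5/plumbing/format/idxfile"
	"github.com/go-git/go-git/v5/plumbing/format/packfile"
)

func main() {
	f, err := os.Open("pack-xxx.pack") // placeholder path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	w := new(idxfile.Writer) // collects parser events and builds the index
	parser, err := packfile.NewParser(packfile.NewScanner(f), w)
	if err != nil {
		panic(err)
	}

	checksum, err := parser.Parse()
	if err != nil {
		panic(err)
	}
	fmt.Println("pack checksum:", checksum)

	idx, err := w.Index() // the in-memory index, ready to be encoded to disk
	if err != nil {
		panic(err)
	}
	_ = idx
}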

View File

@ -0,0 +1,526 @@
package packfile
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"math"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
// See https://github.com/git/git/blob/49fa3dc76179e04b0833542fa52d0f287a4955ac/delta.h
// https://github.com/git/git/blob/c2c5f6b1e479f2c38e0e01345350620944e3527f/patch-delta.c,
// and https://github.com/tarruda/node-git-core/blob/master/src/js/delta.js
// for details about the delta format.
var (
ErrInvalidDelta = errors.New("invalid delta")
ErrDeltaCmd = errors.New("wrong delta command")
)
const (
payload = 0x7f // 0111 1111
continuation = 0x80 // 1000 0000
)
type offset struct {
mask byte
shift uint
}
var offsets = []offset{
{mask: 0x01, shift: 0},
{mask: 0x02, shift: 8},
{mask: 0x04, shift: 16},
{mask: 0x08, shift: 24},
}
var sizes = []offset{
{mask: 0x10, shift: 0},
{mask: 0x20, shift: 8},
{mask: 0x40, shift: 16},
}
// ApplyDelta writes to target the result of applying the modification deltas in delta to base.
func ApplyDelta(target, base plumbing.EncodedObject, delta []byte) (err error) {
r, err := base.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
w, err := target.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
buf := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(buf)
_, err = buf.ReadFrom(r)
if err != nil {
return err
}
src := buf.Bytes()
dst := sync.GetBytesBuffer()
defer sync.PutBytesBuffer(dst)
err = patchDelta(dst, src, delta)
if err != nil {
return err
}
target.SetSize(int64(dst.Len()))
b := sync.GetByteSlice()
_, err = io.CopyBuffer(w, dst, *b)
sync.PutByteSlice(b)
return err
}
// PatchDelta returns the result of applying the modification deltas in delta to src.
// An error will be returned if delta is corrupted (ErrInvalidDelta) or an action command
// is not copy from source or copy from delta (ErrDeltaCmd).
func PatchDelta(src, delta []byte) ([]byte, error) {
b := &bytes.Buffer{}
if err := patchDelta(b, src, delta); err != nil {
return nil, err
}
return b.Bytes(), nil
}
func ReaderFromDelta(base plumbing.EncodedObject, deltaRC io.Reader) (io.ReadCloser, error) {
deltaBuf := bufio.NewReaderSize(deltaRC, 1024)
srcSz, err := decodeLEB128ByteReader(deltaBuf)
if err != nil {
if err == io.EOF {
return nil, ErrInvalidDelta
}
return nil, err
}
if srcSz != uint(base.Size()) {
return nil, ErrInvalidDelta
}
targetSz, err := decodeLEB128ByteReader(deltaBuf)
if err != nil {
if err == io.EOF {
return nil, ErrInvalidDelta
}
return nil, err
}
remainingTargetSz := targetSz
dstRd, dstWr := io.Pipe()
go func() {
baseRd, err := base.Reader()
if err != nil {
_ = dstWr.CloseWithError(ErrInvalidDelta)
return
}
defer baseRd.Close()
baseBuf := bufio.NewReader(baseRd)
basePos := uint(0)
for {
cmd, err := deltaBuf.ReadByte()
if err == io.EOF {
_ = dstWr.CloseWithError(ErrInvalidDelta)
return
}
if err != nil {
_ = dstWr.CloseWithError(err)
return
}
switch {
case isCopyFromSrc(cmd):
offset, err := decodeOffsetByteReader(cmd, deltaBuf)
if err != nil {
_ = dstWr.CloseWithError(err)
return
}
sz, err := decodeSizeByteReader(cmd, deltaBuf)
if err != nil {
_ = dstWr.CloseWithError(err)
return
}
if invalidSize(sz, targetSz) ||
invalidOffsetSize(offset, sz, srcSz) {
_ = dstWr.Close()
return
}
discard := offset - basePos
if basePos > offset {
_ = baseRd.Close()
baseRd, err = base.Reader()
if err != nil {
_ = dstWr.CloseWithError(ErrInvalidDelta)
return
}
baseBuf.Reset(baseRd)
discard = offset
}
for discard > math.MaxInt32 {
n, err := baseBuf.Discard(math.MaxInt32)
if err != nil {
_ = dstWr.CloseWithError(err)
return
}
basePos += uint(n)
discard -= uint(n)
}
for discard > 0 {
n, err := baseBuf.Discard(int(discard))
if err != nil {
_ = dstWr.CloseWithError(err)
return
}
basePos += uint(n)
discard -= uint(n)
}
if _, err := io.Copy(dstWr, io.LimitReader(baseBuf, int64(sz))); err != nil {
_ = dstWr.CloseWithError(err)
return
}
remainingTargetSz -= sz
basePos += sz
case isCopyFromDelta(cmd):
sz := uint(cmd) // cmd is the size itself
if invalidSize(sz, targetSz) {
_ = dstWr.CloseWithError(ErrInvalidDelta)
return
}
if _, err := io.Copy(dstWr, io.LimitReader(deltaBuf, int64(sz))); err != nil {
_ = dstWr.CloseWithError(err)
return
}
remainingTargetSz -= sz
default:
_ = dstWr.CloseWithError(ErrDeltaCmd)
return
}
if remainingTargetSz <= 0 {
_ = dstWr.Close()
return
}
}
}()
return dstRd, nil
}
func patchDelta(dst *bytes.Buffer, src, delta []byte) error {
if len(delta) < minCopySize {
return ErrInvalidDelta
}
srcSz, delta := decodeLEB128(delta)
if srcSz != uint(len(src)) {
return ErrInvalidDelta
}
targetSz, delta := decodeLEB128(delta)
remainingTargetSz := targetSz
var cmd byte
dst.Grow(int(targetSz))
for {
if len(delta) == 0 {
return ErrInvalidDelta
}
cmd = delta[0]
delta = delta[1:]
switch {
case isCopyFromSrc(cmd):
var offset, sz uint
var err error
offset, delta, err = decodeOffset(cmd, delta)
if err != nil {
return err
}
sz, delta, err = decodeSize(cmd, delta)
if err != nil {
return err
}
if invalidSize(sz, targetSz) ||
invalidOffsetSize(offset, sz, srcSz) {
break
}
dst.Write(src[offset : offset+sz])
remainingTargetSz -= sz
case isCopyFromDelta(cmd):
sz := uint(cmd) // cmd is the size itself
if invalidSize(sz, targetSz) {
return ErrInvalidDelta
}
if uint(len(delta)) < sz {
return ErrInvalidDelta
}
dst.Write(delta[0:sz])
remainingTargetSz -= sz
delta = delta[sz:]
default:
return ErrDeltaCmd
}
if remainingTargetSz <= 0 {
break
}
}
return nil
}
func patchDeltaWriter(dst io.Writer, base io.ReaderAt, delta io.Reader,
typ plumbing.ObjectType, writeHeader objectHeaderWriter) (uint, plumbing.Hash, error) {
deltaBuf := bufio.NewReaderSize(delta, 1024)
srcSz, err := decodeLEB128ByteReader(deltaBuf)
if err != nil {
if err == io.EOF {
return 0, plumbing.ZeroHash, ErrInvalidDelta
}
return 0, plumbing.ZeroHash, err
}
if r, ok := base.(*bytes.Reader); ok && srcSz != uint(r.Size()) {
return 0, plumbing.ZeroHash, ErrInvalidDelta
}
targetSz, err := decodeLEB128ByteReader(deltaBuf)
if err != nil {
if err == io.EOF {
return 0, plumbing.ZeroHash, ErrInvalidDelta
}
return 0, plumbing.ZeroHash, err
}
// If the header still needs to be written, the caller will provide
// an objectHeaderWriter. This seems to be the case when
// dealing with thin-packs.
if writeHeader != nil {
err = writeHeader(typ, int64(targetSz))
if err != nil {
return 0, plumbing.ZeroHash, fmt.Errorf("could not lazy write header: %w", err)
}
}
remainingTargetSz := targetSz
hasher := plumbing.NewHasher(typ, int64(targetSz))
mw := io.MultiWriter(dst, hasher)
bufp := sync.GetByteSlice()
defer sync.PutByteSlice(bufp)
sr := io.NewSectionReader(base, int64(0), int64(srcSz))
// Keep both the io.LimitedReader types, so we can reset N.
baselr := io.LimitReader(sr, 0).(*io.LimitedReader)
deltalr := io.LimitReader(deltaBuf, 0).(*io.LimitedReader)
for {
buf := *bufp
cmd, err := deltaBuf.ReadByte()
if err == io.EOF {
return 0, plumbing.ZeroHash, ErrInvalidDelta
}
if err != nil {
return 0, plumbing.ZeroHash, err
}
if isCopyFromSrc(cmd) {
offset, err := decodeOffsetByteReader(cmd, deltaBuf)
if err != nil {
return 0, plumbing.ZeroHash, err
}
sz, err := decodeSizeByteReader(cmd, deltaBuf)
if err != nil {
return 0, plumbing.ZeroHash, err
}
if invalidSize(sz, targetSz) ||
invalidOffsetSize(offset, sz, srcSz) {
return 0, plumbing.ZeroHash, err
}
if _, err := sr.Seek(int64(offset), io.SeekStart); err != nil {
return 0, plumbing.ZeroHash, err
}
baselr.N = int64(sz)
if _, err := io.CopyBuffer(mw, baselr, buf); err != nil {
return 0, plumbing.ZeroHash, err
}
remainingTargetSz -= sz
} else if isCopyFromDelta(cmd) {
sz := uint(cmd) // cmd is the size itself
if invalidSize(sz, targetSz) {
return 0, plumbing.ZeroHash, ErrInvalidDelta
}
deltalr.N = int64(sz)
if _, err := io.CopyBuffer(mw, deltalr, buf); err != nil {
return 0, plumbing.ZeroHash, err
}
remainingTargetSz -= sz
} else {
return 0, plumbing.ZeroHash, err
}
if remainingTargetSz <= 0 {
break
}
}
return targetSz, hasher.Sum(), nil
}
// Decodes a number encoded as an unsigned LEB128 at the start of some
// binary data and returns the decoded number and the rest of the
// stream.
//
// This must be called twice on the delta data buffer, first to get the
// expected source buffer size, and again to get the target buffer size.
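//
// As an illustrative example (editor's note): the bytes 0xe5 0x8e 0x26 decode
// to 624485, since 0x65 + (0x0e << 7) + (0x26 << 14) = 101 + 1792 + 622592.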
func decodeLEB128(input []byte) (uint, []byte) {
var num, sz uint
var b byte
for {
b = input[sz]
num |= (uint(b) & payload) << (sz * 7) // concatenate 7-bit chunks
sz++
if uint(b)&continuation == 0 || sz == uint(len(input)) {
break
}
}
return num, input[sz:]
}
func decodeLEB128ByteReader(input io.ByteReader) (uint, error) {
var num, sz uint
for {
b, err := input.ReadByte()
if err != nil {
return 0, err
}
num |= (uint(b) & payload) << (sz * 7) // concatenate 7-bit chunks
sz++
if uint(b)&continuation == 0 {
break
}
}
return num, nil
}
func isCopyFromSrc(cmd byte) bool {
return (cmd & continuation) != 0
}
func isCopyFromDelta(cmd byte) bool {
return (cmd&continuation) == 0 && cmd != 0
}
func decodeOffsetByteReader(cmd byte, delta io.ByteReader) (uint, error) {
var offset uint
for _, o := range offsets {
if (cmd & o.mask) != 0 {
next, err := delta.ReadByte()
if err != nil {
return 0, err
}
offset |= uint(next) << o.shift
}
}
return offset, nil
}
func decodeOffset(cmd byte, delta []byte) (uint, []byte, error) {
var offset uint
for _, o := range offsets {
if (cmd & o.mask) != 0 {
if len(delta) == 0 {
return 0, nil, ErrInvalidDelta
}
offset |= uint(delta[0]) << o.shift
delta = delta[1:]
}
}
return offset, delta, nil
}
func decodeSizeByteReader(cmd byte, delta io.ByteReader) (uint, error) {
var sz uint
for _, s := range sizes {
if (cmd & s.mask) != 0 {
next, err := delta.ReadByte()
if err != nil {
return 0, err
}
sz |= uint(next) << s.shift
}
}
if sz == 0 {
sz = maxCopySize
}
return sz, nil
}
func decodeSize(cmd byte, delta []byte) (uint, []byte, error) {
var sz uint
for _, s := range sizes {
if (cmd & s.mask) != 0 {
if len(delta) == 0 {
return 0, nil, ErrInvalidDelta
}
sz |= uint(delta[0]) << s.shift
delta = delta[1:]
}
}
if sz == 0 {
sz = maxCopySize
}
return sz, delta, nil
}
func invalidSize(sz, targetSz uint) bool {
return sz > targetSz
}
func invalidOffsetSize(offset, sz, srcSz uint) bool {
return sumOverflows(offset, sz) ||
offset+sz > srcSz
}
func sumOverflows(a, b uint) bool {
return a+b < a
}
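// Illustrative sketch (editor's addition, not part of the vendored source):
// a hand-built delta applied with PatchDelta. The header encodes source size
// 11 and target size 12; 0x90 0x06 copies 6 bytes from offset 0 of the source,
// and 0x06 inserts the next 6 literal bytes taken from the delta itself.
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/format/packfile"
)

func main() {
	src := []byte("hello world")

	delta := []byte{
		0x0b,       // source size: 11 (LEB128)
		0x0c,       // target size: 12 (LEB128)
		0x90, 0x06, // copy from source: offset 0, size 6 -> "hello "
		0x06,       // copy from delta: the next 6 bytes are literal data
		'g', 'o', 'p', 'h', 'e', 'r',
	}

	out, err := packfile.PatchDelta(src, delta)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out) // hello gopher
}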

View File

@ -0,0 +1,474 @@
package packfile
import (
"bufio"
"bytes"
"fmt"
"hash"
"hash/crc32"
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/binary"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
var (
// ErrEmptyPackfile is returned by ReadHeader when no data is found in the packfile
ErrEmptyPackfile = NewError("empty packfile")
// ErrBadSignature is returned by ReadHeader when the signature in the packfile is incorrect.
ErrBadSignature = NewError("malformed pack file signature")
// ErrUnsupportedVersion is returned by ReadHeader when the packfile version is
// different than VersionSupported.
ErrUnsupportedVersion = NewError("unsupported packfile version")
// ErrSeekNotSupported is returned if seeking is not supported
ErrSeekNotSupported = NewError("not seek support")
)
// ObjectHeader contains the information related to the object; this
// information is collected from the bytes preceding the object's content.
type ObjectHeader struct {
Type plumbing.ObjectType
Offset int64
Length int64
Reference plumbing.Hash
OffsetReference int64
}
type Scanner struct {
r *scannerReader
crc hash.Hash32
// pendingObject is used to detect if an object has been read, or is still
// waiting to be read
pendingObject *ObjectHeader
version, objects uint32
// IsSeekable says whether this scanner can Seek or not; to have a seekable
// Scanner, an r implementing io.Seeker is required
IsSeekable bool
}
// NewScanner returns a new Scanner based on a reader. If the given reader
// implements io.ReadSeeker, the Scanner will also be seekable
func NewScanner(r io.Reader) *Scanner {
_, ok := r.(io.ReadSeeker)
crc := crc32.NewIEEE()
return &Scanner{
r: newScannerReader(r, crc),
crc: crc,
IsSeekable: ok,
}
}
func (s *Scanner) Reset(r io.Reader) {
_, ok := r.(io.ReadSeeker)
s.r.Reset(r)
s.crc.Reset()
s.IsSeekable = ok
s.pendingObject = nil
s.version = 0
s.objects = 0
}
// Header reads the whole packfile header (signature, version and object count).
// It returns the version and the object count and performs checks on the
// validity of the signature and the version fields.
func (s *Scanner) Header() (version, objects uint32, err error) {
if s.version != 0 {
return s.version, s.objects, nil
}
sig, err := s.readSignature()
if err != nil {
if err == io.EOF {
err = ErrEmptyPackfile
}
return
}
if !s.isValidSignature(sig) {
err = ErrBadSignature
return
}
version, err = s.readVersion()
s.version = version
if err != nil {
return
}
if !s.isSupportedVersion(version) {
err = ErrUnsupportedVersion.AddDetails("%d", version)
return
}
objects, err = s.readCount()
s.objects = objects
return
}
// readSignature reads and returns the signature field of the packfile.
func (s *Scanner) readSignature() ([]byte, error) {
var sig = make([]byte, 4)
if _, err := io.ReadFull(s.r, sig); err != nil {
return []byte{}, err
}
return sig, nil
}
// isValidSignature returns whether sig is a valid packfile signature.
func (s *Scanner) isValidSignature(sig []byte) bool {
return bytes.Equal(sig, signature)
}
// readVersion reads and returns the version field of a packfile.
func (s *Scanner) readVersion() (uint32, error) {
return binary.ReadUint32(s.r)
}
// isSupportedVersion returns whether version v is supported by the parser.
// The current supported version is VersionSupported, defined above.
func (s *Scanner) isSupportedVersion(v uint32) bool {
return v == VersionSupported
}
// readCount reads and returns the count of objects field of a packfile.
func (s *Scanner) readCount() (uint32, error) {
return binary.ReadUint32(s.r)
}
// SeekObjectHeader seeks to the specified offset and returns the ObjectHeader
// for the next object in the reader
func (s *Scanner) SeekObjectHeader(offset int64) (*ObjectHeader, error) {
// if seeking we assume that you are not interested in the header
if s.version == 0 {
s.version = VersionSupported
}
if _, err := s.r.Seek(offset, io.SeekStart); err != nil {
return nil, err
}
h, err := s.nextObjectHeader()
if err != nil {
return nil, err
}
h.Offset = offset
return h, nil
}
// NextObjectHeader returns the ObjectHeader for the next object in the reader
func (s *Scanner) NextObjectHeader() (*ObjectHeader, error) {
if err := s.doPending(); err != nil {
return nil, err
}
offset, err := s.r.Seek(0, io.SeekCurrent)
if err != nil {
return nil, err
}
h, err := s.nextObjectHeader()
if err != nil {
return nil, err
}
h.Offset = offset
return h, nil
}
// nextObjectHeader returns the ObjectHeader for the next object in the reader
// without the Offset field
func (s *Scanner) nextObjectHeader() (*ObjectHeader, error) {
s.r.Flush()
s.crc.Reset()
h := &ObjectHeader{}
s.pendingObject = h
var err error
h.Offset, err = s.r.Seek(0, io.SeekCurrent)
if err != nil {
return nil, err
}
h.Type, h.Length, err = s.readObjectTypeAndLength()
if err != nil {
return nil, err
}
switch h.Type {
case plumbing.OFSDeltaObject:
no, err := binary.ReadVariableWidthInt(s.r)
if err != nil {
return nil, err
}
h.OffsetReference = h.Offset - no
case plumbing.REFDeltaObject:
var err error
h.Reference, err = binary.ReadHash(s.r)
if err != nil {
return nil, err
}
}
return h, nil
}
func (s *Scanner) doPending() error {
if s.version == 0 {
var err error
s.version, s.objects, err = s.Header()
if err != nil {
return err
}
}
return s.discardObjectIfNeeded()
}
func (s *Scanner) discardObjectIfNeeded() error {
if s.pendingObject == nil {
return nil
}
h := s.pendingObject
n, _, err := s.NextObject(io.Discard)
if err != nil {
return err
}
if n != h.Length {
return fmt.Errorf(
"error discarding object, discarded %d, expected %d",
n, h.Length,
)
}
return nil
}
// readObjectTypeAndLength reads and returns the object type and the
// length field from an object entry in a packfile.
func (s *Scanner) readObjectTypeAndLength() (plumbing.ObjectType, int64, error) {
t, c, err := s.readType()
if err != nil {
return t, 0, err
}
l, err := s.readLength(c)
return t, l, err
}
func (s *Scanner) readType() (plumbing.ObjectType, byte, error) {
var c byte
var err error
if c, err = s.r.ReadByte(); err != nil {
return plumbing.ObjectType(0), 0, err
}
typ := parseType(c)
return typ, c, nil
}
func parseType(b byte) plumbing.ObjectType {
return plumbing.ObjectType((b & maskType) >> firstLengthBits)
}
// the length is encoded in the last 4 bits of the first byte and in
// the last 7 bits of subsequent bytes. The last byte has a 0 MSB.
func (s *Scanner) readLength(first byte) (int64, error) {
length := int64(first & maskFirstLength)
c := first
shift := firstLengthBits
var err error
for c&maskContinue > 0 {
if c, err = s.r.ReadByte(); err != nil {
return 0, err
}
length += int64(c&maskLength) << shift
shift += lengthBits
}
return length, nil
}
// NextObject writes the content of the next object into the writer, and returns
// the number of bytes written, the CRC32 of the content and an error, if any
func (s *Scanner) NextObject(w io.Writer) (written int64, crc32 uint32, err error) {
s.pendingObject = nil
written, err = s.copyObject(w)
s.r.Flush()
crc32 = s.crc.Sum32()
s.crc.Reset()
return
}
// ReadObject returns a reader for the object content and an error
func (s *Scanner) ReadObject() (io.ReadCloser, error) {
s.pendingObject = nil
zr, err := sync.GetZlibReader(s.r)
if err != nil {
return nil, fmt.Errorf("zlib reset error: %s", err)
}
return ioutil.NewReadCloserWithCloser(zr.Reader, func() error {
sync.PutZlibReader(zr)
return nil
}), nil
}
// copyObject reads and writes a non-deltified object
// from its zlib stream in an object entry in the packfile.
func (s *Scanner) copyObject(w io.Writer) (n int64, err error) {
zr, err := sync.GetZlibReader(s.r)
defer sync.PutZlibReader(zr)
if err != nil {
return 0, fmt.Errorf("zlib reset error: %s", err)
}
defer ioutil.CheckClose(zr.Reader, &err)
buf := sync.GetByteSlice()
n, err = io.CopyBuffer(w, zr.Reader, *buf)
sync.PutByteSlice(buf)
return
}
// SeekFromStart sets a new offset from start, returns the old position before
// the change.
func (s *Scanner) SeekFromStart(offset int64) (previous int64, err error) {
// if seeking we assume that you are not interested in the header
if s.version == 0 {
s.version = VersionSupported
}
previous, err = s.r.Seek(0, io.SeekCurrent)
if err != nil {
return -1, err
}
_, err = s.r.Seek(offset, io.SeekStart)
return previous, err
}
// Checksum returns the checksum of the packfile
func (s *Scanner) Checksum() (plumbing.Hash, error) {
err := s.discardObjectIfNeeded()
if err != nil {
return plumbing.ZeroHash, err
}
return binary.ReadHash(s.r)
}
// Close reads the reader until io.EOF
func (s *Scanner) Close() error {
buf := sync.GetByteSlice()
_, err := io.CopyBuffer(io.Discard, s.r, *buf)
sync.PutByteSlice(buf)
return err
}
// Flush is a no-op (deprecated)
func (s *Scanner) Flush() error {
return nil
}
// scannerReader has the following characteristics:
// - Provides an io.ReadSeeker impl for bufio.Reader, when the underlying
// reader supports it.
// - Keeps track of the current read position, for when the underlying reader
// isn't an io.ReadSeeker, but we still want to know the current offset.
// - Writes to the hash writer what it reads, with the aid of a smaller buffer.
// The buffer helps avoid a performance penalty for performing small writes
// to the crc32 hash writer.
type scannerReader struct {
reader io.Reader
crc io.Writer
rbuf *bufio.Reader
wbuf *bufio.Writer
offset int64
}
func newScannerReader(r io.Reader, h io.Writer) *scannerReader {
sr := &scannerReader{
rbuf: bufio.NewReader(nil),
wbuf: bufio.NewWriterSize(nil, 64),
crc: h,
}
sr.Reset(r)
return sr
}
func (r *scannerReader) Reset(reader io.Reader) {
r.reader = reader
r.rbuf.Reset(r.reader)
r.wbuf.Reset(r.crc)
r.offset = 0
if seeker, ok := r.reader.(io.ReadSeeker); ok {
r.offset, _ = seeker.Seek(0, io.SeekCurrent)
}
}
func (r *scannerReader) Read(p []byte) (n int, err error) {
n, err = r.rbuf.Read(p)
r.offset += int64(n)
if _, err := r.wbuf.Write(p[:n]); err != nil {
return n, err
}
return
}
func (r *scannerReader) ReadByte() (b byte, err error) {
b, err = r.rbuf.ReadByte()
if err == nil {
r.offset++
return b, r.wbuf.WriteByte(b)
}
return
}
func (r *scannerReader) Flush() error {
return r.wbuf.Flush()
}
// Seek seeks to a location. If the underlying reader is not an io.ReadSeeker,
// then only whence=io.SeekCurrent is supported, any other operation fails.
func (r *scannerReader) Seek(offset int64, whence int) (int64, error) {
var err error
if seeker, ok := r.reader.(io.ReadSeeker); !ok {
if whence != io.SeekCurrent || offset != 0 {
return -1, ErrSeekNotSupported
}
} else {
if whence == io.SeekCurrent && offset == 0 {
return r.offset, nil
}
r.offset, err = seeker.Seek(offset, whence)
r.rbuf.Reset(r.reader)
}
return r.offset, err
}
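// Illustrative sketch (editor's addition, not part of the vendored source):
// walking a packfile stream with the Scanner alone: read the header first,
// then one NextObjectHeader/NextObject pair per entry. The path is a
// placeholder.
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/go-git/go-git/v5/plumbing/format/packfile"
)

func main() {
	f, err := os.Open("pack-xxx.pack") // placeholder path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	s := packfile.NewScanner(f)
	version, objects, err := s.Header()
	if err != nil {
		panic(err)
	}
	fmt.Println("version:", version, "objects:", objects)

	for i := uint32(0); i < objects; i++ {
		h, err := s.NextObjectHeader()
		if err != nil {
			panic(err)
		}
		// Inflate the entry; io.Discard keeps the sketch small.
		n, crc, err := s.NextObject(io.Discard)
		if err != nil {
			panic(err)
		}
		fmt.Println(h.Type, h.Offset, n, crc)
	}
}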

View File

@ -0,0 +1,126 @@
// Package pktline implements reading payloads from pkt-lines and encoding
// pkt-lines from payloads.
package pktline
import (
"bytes"
"errors"
"fmt"
"io"
"github.com/go-git/go-git/v5/utils/trace"
)
// An Encoder writes pkt-lines to an output stream.
type Encoder struct {
w io.Writer
}
const (
// MaxPayloadSize is the maximum payload size of a pkt-line in bytes.
MaxPayloadSize = 65516
// For compatibility with the canonical Git implementation, accept longer pkt-lines
OversizePayloadMax = 65520
)
var (
// FlushPkt are the contents of a flush-pkt pkt-line.
FlushPkt = []byte{'0', '0', '0', '0'}
// Flush is the payload to use with the Encode method to encode a flush-pkt.
Flush = []byte{}
// FlushString is the payload to use with the EncodeString method to encode a flush-pkt.
FlushString = ""
// ErrPayloadTooLong is returned by the Encode methods when any of the
// provided payloads is bigger than MaxPayloadSize.
ErrPayloadTooLong = errors.New("payload is too long")
)
// NewEncoder returns a new encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: w,
}
}
// Flush encodes a flush-pkt to the output stream.
func (e *Encoder) Flush() error {
defer trace.Packet.Print("packet: > 0000")
_, err := e.w.Write(FlushPkt)
return err
}
// Encode encodes a pkt-line with the payload specified and writes it to
// the output stream. If several payloads are specified, each of them
// is streamed in its own pkt-line.
func (e *Encoder) Encode(payloads ...[]byte) error {
for _, p := range payloads {
if err := e.encodeLine(p); err != nil {
return err
}
}
return nil
}
func (e *Encoder) encodeLine(p []byte) error {
if len(p) > MaxPayloadSize {
return ErrPayloadTooLong
}
if bytes.Equal(p, Flush) {
return e.Flush()
}
n := len(p) + 4
defer trace.Packet.Printf("packet: > %04x %s", n, p)
if _, err := e.w.Write(asciiHex16(n)); err != nil {
return err
}
_, err := e.w.Write(p)
return err
}
// Returns the hexadecimal ASCII representation of the 16 least
// significant bits of n. The length of the returned slice will always
// be 4. Example: if n is 1234 (0x4d2), the return value will be
// []byte{'0', '4', 'd', '2'}.
func asciiHex16(n int) []byte {
var ret [4]byte
ret[0] = byteToASCIIHex(byte(n & 0xf000 >> 12))
ret[1] = byteToASCIIHex(byte(n & 0x0f00 >> 8))
ret[2] = byteToASCIIHex(byte(n & 0x00f0 >> 4))
ret[3] = byteToASCIIHex(byte(n & 0x000f))
return ret[:]
}
// turns a byte into its hexadecimal ascii representation. Example:
// from 11 (0xb) to 'b'.
func byteToASCIIHex(n byte) byte {
if n < 10 {
return '0' + n
}
return 'a' - 10 + n
}
// EncodeString works similarly to Encode, but payloads are specified as strings.
func (e *Encoder) EncodeString(payloads ...string) error {
for _, p := range payloads {
if err := e.Encode([]byte(p)); err != nil {
return err
}
}
return nil
}
// Encodef encodes a single pkt-line with the payload formatted as
// the format specifier. The rest of the arguments will be used in
// the format string.
func (e *Encoder) Encodef(format string, a ...interface{}) error {
return e.EncodeString(
fmt.Sprintf(format, a...),
)
}
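// Illustrative sketch (editor's addition, not part of the vendored source):
// encoding two pkt-lines followed by a flush-pkt. Each pkt-line is a 4-digit
// hex length (payload length + 4) followed by the payload; a flush-pkt is the
// literal "0000".
package main

import (
	"bytes"
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/format/pktline"
)

func main() {
	var buf bytes.Buffer
	e := pktline.NewEncoder(&buf)

	if err := e.EncodeString("hello\n", "world\n"); err != nil {
		panic(err)
	}
	if err := e.Flush(); err != nil {
		panic(err)
	}

	fmt.Printf("%q\n", buf.String()) // "000ahello\n000aworld\n0000"
}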

View File

@ -0,0 +1,51 @@
package pktline
import (
"bytes"
"errors"
"io"
"strings"
)
var (
// ErrInvalidErrorLine is returned by Decode when the packet line is not an
// error line.
ErrInvalidErrorLine = errors.New("expected an error-line")
errPrefix = []byte("ERR ")
)
// ErrorLine is a packet line that contains an error message.
// Once this packet is sent by client or server, the data transfer process is
// terminated.
// See https://git-scm.com/docs/pack-protocol#_pkt_line_format
type ErrorLine struct {
Text string
}
// Error implements the error interface.
func (e *ErrorLine) Error() string {
return e.Text
}
// Encode encodes the ErrorLine into a packet line.
func (e *ErrorLine) Encode(w io.Writer) error {
p := NewEncoder(w)
return p.Encodef("%s%s\n", string(errPrefix), e.Text)
}
// Decode decodes a packet line into an ErrorLine.
func (e *ErrorLine) Decode(r io.Reader) error {
s := NewScanner(r)
if !s.Scan() {
return s.Err()
}
line := s.Bytes()
if !bytes.HasPrefix(line, errPrefix) {
return ErrInvalidErrorLine
}
e.Text = strings.TrimSpace(string(line[4:]))
return nil
}

View File

@ -0,0 +1,146 @@
package pktline
import (
"bytes"
"errors"
"io"
"strings"
"github.com/go-git/go-git/v5/utils/trace"
)
const (
lenSize = 4
)
// ErrInvalidPktLen is returned by Err() when an invalid pkt-len is found.
var ErrInvalidPktLen = errors.New("invalid pkt-len found")
// Scanner provides a convenient interface for reading the payloads of a
// series of pkt-lines. It takes an io.Reader providing the source,
// which then can be tokenized through repeated calls to the Scan
// method.
//
// After each Scan call, the Bytes method will return the payload of the
// corresponding pkt-line on a shared buffer, which will be 65516 bytes
// or smaller. Flush pkt-lines are represented by empty byte slices.
//
// Scanning stops at EOF or the first I/O error.
type Scanner struct {
r io.Reader // The reader provided by the client
err error // Sticky error
payload []byte // Last pkt-payload
len [lenSize]byte // Last pkt-len
}
// NewScanner returns a new Scanner to read from r.
func NewScanner(r io.Reader) *Scanner {
return &Scanner{
r: r,
}
}
// Err returns the first error encountered by the Scanner.
func (s *Scanner) Err() error {
return s.err
}
// Scan advances the Scanner to the next pkt-line, whose payload will
// then be available through the Bytes method. Scanning stops at EOF
// or the first I/O error. After Scan returns false, the Err method
// will return any error that occurred during scanning, except that if
// it was io.EOF, Err will return nil.
func (s *Scanner) Scan() bool {
var l int
l, s.err = s.readPayloadLen()
if s.err == io.EOF {
s.err = nil
return false
}
if s.err != nil {
return false
}
if cap(s.payload) < l {
s.payload = make([]byte, 0, l)
}
if _, s.err = io.ReadFull(s.r, s.payload[:l]); s.err != nil {
return false
}
s.payload = s.payload[:l]
trace.Packet.Printf("packet: < %04x %s", l, s.payload)
if bytes.HasPrefix(s.payload, errPrefix) {
s.err = &ErrorLine{
Text: strings.TrimSpace(string(s.payload[4:])),
}
return false
}
return true
}
// Bytes returns the most recent payload generated by a call to Scan.
// The underlying array may point to data that will be overwritten by a
// subsequent call to Scan. It does no allocation.
func (s *Scanner) Bytes() []byte {
return s.payload
}
// Method readPayloadLen returns the payload length by reading the
// pkt-len and subtracting the pkt-len size.
func (s *Scanner) readPayloadLen() (int, error) {
if _, err := io.ReadFull(s.r, s.len[:]); err != nil {
if err == io.ErrUnexpectedEOF {
return 0, ErrInvalidPktLen
}
return 0, err
}
n, err := hexDecode(s.len)
if err != nil {
return 0, err
}
switch {
case n == 0:
return 0, nil
case n <= lenSize:
return 0, ErrInvalidPktLen
case n > OversizePayloadMax+lenSize:
return 0, ErrInvalidPktLen
default:
return n - lenSize, nil
}
}
// Turns the hexadecimal representation of a number in a byte slice into
// a number. This function substitutes for strconv.ParseUint(string(buf), 16,
// 16) and/or hex.Decode, to avoid generating new strings, thus helping the
// GC.
func hexDecode(buf [lenSize]byte) (int, error) {
var ret int
for i := 0; i < lenSize; i++ {
n, err := asciiHexToByte(buf[i])
if err != nil {
return 0, ErrInvalidPktLen
}
ret = 16*ret + int(n)
}
return ret, nil
}
// turns the hexadecimal ascii representation of a byte into its
// numerical value. Example: from 'b' to 11 (0xb).
func asciiHexToByte(b byte) (byte, error) {
switch {
case b >= '0' && b <= '9':
return b - '0', nil
case b >= 'a' && b <= 'f':
return b - 'a' + 10, nil
default:
return 0, ErrInvalidPktLen
}
}
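// Illustrative sketch (editor's addition, not part of the vendored source):
// reading pkt-lines back with the Scanner. The flush-pkt shows up as an empty
// payload, and scanning stops at EOF.
package main

import (
	"fmt"
	"strings"

	"github.com/go-git/go-git/v5/plumbing/format/pktline"
)

func main() {
	in := strings.NewReader("000ahello\n000aworld\n0000")
	s := pktline.NewScanner(in)
	for s.Scan() {
		fmt.Printf("%q\n", s.Bytes()) // "hello\n", "world\n", ""
	}
	if err := s.Err(); err != nil {
		panic(err)
	}
}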

84
vendor/github.com/go-git/go-git/v5/plumbing/hash.go generated vendored Normal file
View File

@ -0,0 +1,84 @@
package plumbing
import (
"bytes"
"encoding/hex"
"sort"
"strconv"
"github.com/go-git/go-git/v5/plumbing/hash"
)
// Hash represents SHA1 hashed content
type Hash [hash.Size]byte
// ZeroHash is Hash with value zero
var ZeroHash Hash
// ComputeHash computes the hash for a given ObjectType and content
func ComputeHash(t ObjectType, content []byte) Hash {
h := NewHasher(t, int64(len(content)))
h.Write(content)
return h.Sum()
}
// NewHash returns a new Hash from a hexadecimal hash representation
func NewHash(s string) Hash {
b, _ := hex.DecodeString(s)
var h Hash
copy(h[:], b)
return h
}
func (h Hash) IsZero() bool {
var empty Hash
return h == empty
}
func (h Hash) String() string {
return hex.EncodeToString(h[:])
}
type Hasher struct {
hash.Hash
}
func NewHasher(t ObjectType, size int64) Hasher {
h := Hasher{hash.New(hash.CryptoType)}
h.Write(t.Bytes())
h.Write([]byte(" "))
h.Write([]byte(strconv.FormatInt(size, 10)))
h.Write([]byte{0})
return h
}
func (h Hasher) Sum() (hash Hash) {
copy(hash[:], h.Hash.Sum(nil))
return
}
// HashesSort sorts a slice of Hashes in increasing order.
func HashesSort(a []Hash) {
sort.Sort(HashSlice(a))
}
// HashSlice attaches the methods of sort.Interface to []Hash, sorting in
// increasing order.
type HashSlice []Hash
func (p HashSlice) Len() int { return len(p) }
func (p HashSlice) Less(i, j int) bool { return bytes.Compare(p[i][:], p[j][:]) < 0 }
func (p HashSlice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
// IsHash returns true if the given string is a valid hash.
func IsHash(s string) bool {
switch len(s) {
case hash.HexSize:
_, err := hex.DecodeString(s)
return err == nil
default:
return false
}
}
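A small sketch (not part of the vendored file) exercising ComputeHash, NewHash and IsHash from the file above:

package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	content := []byte("hello\n")
	h := plumbing.ComputeHash(plumbing.BlobObject, content)
	fmt.Println("blob hash:", h)                            // hex via Hash.String()
	fmt.Println("is zero:", h.IsZero())                     // false
	fmt.Println("valid hex:", plumbing.IsHash(h.String())) // true

	// Round-trip through the hexadecimal representation.
	again := plumbing.NewHash(h.String())
	fmt.Println("round trip equal:", again == h) // true
}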

View File

@ -0,0 +1,60 @@
// Package hash provides a way of managing the
// underlying hash implementations used across go-git.
package hash
import (
"crypto"
"fmt"
"hash"
"github.com/pjbgf/sha1cd"
)
// algos is a map of hash algorithms.
var algos = map[crypto.Hash]func() hash.Hash{}
func init() {
reset()
}
// reset resets the default algos value. Can be used after running tests
// that registers new algorithms to avoid side effects.
func reset() {
algos[crypto.SHA1] = sha1cd.New
algos[crypto.SHA256] = crypto.SHA256.New
}
// RegisterHash allows the hash algorithm used to be overridden.
// This ensures that the hash selection for go-git is explicit when
// overriding the default value.
func RegisterHash(h crypto.Hash, f func() hash.Hash) error {
if f == nil {
return fmt.Errorf("cannot register hash: f is nil")
}
switch h {
case crypto.SHA1:
algos[h] = f
case crypto.SHA256:
algos[h] = f
default:
return fmt.Errorf("unsupported hash function: %v", h)
}
return nil
}
// Hash is the same as hash.Hash. This allows consumers
// to avoid having to import this package alongside "hash".
type Hash interface {
hash.Hash
}
// New returns a new Hash for the given hash function.
// It panics if the hash function is not registered.
func New(h crypto.Hash) Hash {
hh, ok := algos[h]
if !ok {
panic(fmt.Sprintf("hash algorithm not registered: %v", h))
}
return hh()
}
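A hedged sketch (not part of the vendored file) of overriding the default SHA-1 implementation with RegisterHash; swapping in the standard library crypto/sha1 here only illustrates the mechanism, it is not a recommendation:

package main

import (
	"crypto"
	"crypto/sha1"
	"fmt"

	githash "github.com/go-git/go-git/v5/plumbing/hash"
)

func main() {
	// Replace the default collision-detecting SHA-1 with the standard library one.
	if err := githash.RegisterHash(crypto.SHA1, sha1.New); err != nil {
		fmt.Println("register failed:", err)
		return
	}
	h := githash.New(crypto.SHA1) // would panic if the algorithm were not registered
	h.Write([]byte("some content"))
	fmt.Printf("%x\n", h.Sum(nil))
}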

View File

@ -0,0 +1,15 @@
//go:build !sha256
// +build !sha256
package hash
import "crypto"
const (
// CryptoType defines what hash algorithm is being used.
CryptoType = crypto.SHA1
// Size defines the number of bytes the hash yields.
Size = 20
// HexSize defines the string length of the hash when represented in hexadecimal.
HexSize = 40
)

View File

@ -0,0 +1,15 @@
//go:build sha256
// +build sha256
package hash
import "crypto"
const (
// CryptoType defines what hash algorithm is being used.
CryptoType = crypto.SHA256
// Size defines the number of bytes the hash yields.
Size = 32
// HexSize defines the string length of the hash when represented in hexadecimal.
HexSize = 64
)

72
vendor/github.com/go-git/go-git/v5/plumbing/memory.go generated vendored Normal file
View File

@ -0,0 +1,72 @@
package plumbing
import (
"bytes"
"io"
)
// MemoryObject is an in-memory EncodedObject implementation.
type MemoryObject struct {
t ObjectType
h Hash
cont []byte
sz int64
}
// Hash returns the object Hash. The hash is calculated on the fly the first
// time it is called; all subsequent calls return the same Hash even if the
// type or the content have changed since. The Hash is only generated if the
// size of the content matches the declared object size.
func (o *MemoryObject) Hash() Hash {
if o.h == ZeroHash && int64(len(o.cont)) == o.sz {
o.h = ComputeHash(o.t, o.cont)
}
return o.h
}
// Type returns the ObjectType
func (o *MemoryObject) Type() ObjectType { return o.t }
// SetType sets the ObjectType
func (o *MemoryObject) SetType(t ObjectType) { o.t = t }
// Size returns the size of the object
func (o *MemoryObject) Size() int64 { return o.sz }
// SetSize sets the object size; content of the given size should be written
// afterwards
func (o *MemoryObject) SetSize(s int64) { o.sz = s }
// Reader returns an io.ReadCloser used to read the object's content.
//
// For a MemoryObject, this reader is seekable.
func (o *MemoryObject) Reader() (io.ReadCloser, error) {
return nopCloser{bytes.NewReader(o.cont)}, nil
}
// Writer returns an io.WriteCloser used to write the object's content.
func (o *MemoryObject) Writer() (io.WriteCloser, error) {
return o, nil
}
func (o *MemoryObject) Write(p []byte) (n int, err error) {
o.cont = append(o.cont, p...)
o.sz = int64(len(o.cont))
return len(p), nil
}
// Close releases any resources consumed by the object when it is acting as a
// ObjectWriter.
func (o *MemoryObject) Close() error { return nil }
// nopCloser exposes the extra methods of bytes.Reader while nopping Close().
//
// This allows clients to attempt seeking in a cached Blob's Reader.
type nopCloser struct {
*bytes.Reader
}
// Close does nothing.
func (nc nopCloser) Close() error { return nil }
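A minimal sketch (not part of the vendored file) building a blob in memory with MemoryObject and reading it back:

package main

import (
	"fmt"
	"io"

	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	obj := &plumbing.MemoryObject{}
	obj.SetType(plumbing.BlobObject)

	w, _ := obj.Writer()
	w.Write([]byte("hello go-git\n"))
	w.Close()

	// Size now matches the written content, so Hash() computes the object id.
	fmt.Println("type:", obj.Type(), "size:", obj.Size(), "hash:", obj.Hash())

	r, _ := obj.Reader()
	defer r.Close()
	data, _ := io.ReadAll(r)
	fmt.Printf("content: %q\n", string(data))
}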

111
vendor/github.com/go-git/go-git/v5/plumbing/object.go generated vendored Normal file
View File

@ -0,0 +1,111 @@
// Package plumbing implements the core interfaces and structs used by go-git.
package plumbing
import (
"errors"
"io"
)
var (
ErrObjectNotFound = errors.New("object not found")
// ErrInvalidType is returned when an invalid object type is provided.
ErrInvalidType = errors.New("invalid object type")
)
// EncodedObject is a generic representation of any git object.
type EncodedObject interface {
Hash() Hash
Type() ObjectType
SetType(ObjectType)
Size() int64
SetSize(int64)
Reader() (io.ReadCloser, error)
Writer() (io.WriteCloser, error)
}
// DeltaObject is an EncodedObject representing a delta.
type DeltaObject interface {
EncodedObject
// BaseHash returns the hash of the object used as base for this delta.
BaseHash() Hash
// ActualHash returns the hash of the object after applying the delta.
ActualHash() Hash
// Size returns the size of the object after applying the delta.
ActualSize() int64
}
// ObjectType is the internal object type.
// Integer values from 0 to 7 map to those exposed by git.
// AnyObject is used to represent any of them.
type ObjectType int8
const (
InvalidObject ObjectType = 0
CommitObject ObjectType = 1
TreeObject ObjectType = 2
BlobObject ObjectType = 3
TagObject ObjectType = 4
// 5 reserved for future expansion
OFSDeltaObject ObjectType = 6
REFDeltaObject ObjectType = 7
AnyObject ObjectType = -127
)
func (t ObjectType) String() string {
switch t {
case CommitObject:
return "commit"
case TreeObject:
return "tree"
case BlobObject:
return "blob"
case TagObject:
return "tag"
case OFSDeltaObject:
return "ofs-delta"
case REFDeltaObject:
return "ref-delta"
case AnyObject:
return "any"
default:
return "unknown"
}
}
func (t ObjectType) Bytes() []byte {
return []byte(t.String())
}
// Valid returns true if t is a valid ObjectType.
func (t ObjectType) Valid() bool {
return t >= CommitObject && t <= REFDeltaObject
}
// IsDelta returns true for any ObjectType that represents a delta (i.e.
// REFDeltaObject or OFSDeltaObject).
func (t ObjectType) IsDelta() bool {
return t == REFDeltaObject || t == OFSDeltaObject
}
// ParseObjectType parses a string representation of ObjectType. It returns an
// error on parse failure.
func ParseObjectType(value string) (typ ObjectType, err error) {
switch value {
case "commit":
typ = CommitObject
case "tree":
typ = TreeObject
case "blob":
typ = BlobObject
case "tag":
typ = TagObject
case "ofs-delta":
typ = OFSDeltaObject
case "ref-delta":
typ = REFDeltaObject
default:
err = ErrInvalidType
}
return
}
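A short sketch (not part of the vendored file) of the ObjectType helpers defined above:

package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	t, err := plumbing.ParseObjectType("commit")
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(t, "valid:", t.Valid(), "delta:", t.IsDelta()) // commit valid: true delta: false

	if _, err := plumbing.ParseObjectType("not-a-type"); err != nil {
		fmt.Println("expected failure:", err) // invalid object type
	}
}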

View File

@ -0,0 +1,144 @@
package object
import (
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// Blob is used to store arbitrary data - it is generally a file.
type Blob struct {
// Hash of the blob.
Hash plumbing.Hash
// Size of the (uncompressed) blob.
Size int64
obj plumbing.EncodedObject
}
// GetBlob gets a blob from an object storer and decodes it.
func GetBlob(s storer.EncodedObjectStorer, h plumbing.Hash) (*Blob, error) {
o, err := s.EncodedObject(plumbing.BlobObject, h)
if err != nil {
return nil, err
}
return DecodeBlob(o)
}
// DecodeBlob decodes an encoded object into a *Blob.
func DecodeBlob(o plumbing.EncodedObject) (*Blob, error) {
b := &Blob{}
if err := b.Decode(o); err != nil {
return nil, err
}
return b, nil
}
// ID returns the object ID of the blob. The returned value will always match
// the current value of Blob.Hash.
//
// ID is present to fulfill the Object interface.
func (b *Blob) ID() plumbing.Hash {
return b.Hash
}
// Type returns the type of object. It always returns plumbing.BlobObject.
//
// Type is present to fulfill the Object interface.
func (b *Blob) Type() plumbing.ObjectType {
return plumbing.BlobObject
}
// Decode transforms a plumbing.EncodedObject into a Blob struct.
func (b *Blob) Decode(o plumbing.EncodedObject) error {
if o.Type() != plumbing.BlobObject {
return ErrUnsupportedObject
}
b.Hash = o.Hash()
b.Size = o.Size()
b.obj = o
return nil
}
// Encode transforms a Blob into a plumbing.EncodedObject.
func (b *Blob) Encode(o plumbing.EncodedObject) (err error) {
o.SetType(plumbing.BlobObject)
w, err := o.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
r, err := b.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
_, err = io.Copy(w, r)
return err
}
// Reader returns a reader allowing access to the contents of the blob.
func (b *Blob) Reader() (io.ReadCloser, error) {
return b.obj.Reader()
}
// BlobIter provides an iterator for a set of blobs.
type BlobIter struct {
storer.EncodedObjectIter
s storer.EncodedObjectStorer
}
// NewBlobIter takes a storer.EncodedObjectStorer and a
// storer.EncodedObjectIter and returns a *BlobIter that iterates over all
// blobs contained in the storer.EncodedObjectIter.
//
// Any non-blob object returned by the storer.EncodedObjectIter is skipped.
func NewBlobIter(s storer.EncodedObjectStorer, iter storer.EncodedObjectIter) *BlobIter {
return &BlobIter{iter, s}
}
// Next moves the iterator to the next blob and returns a pointer to it. If
// there are no more blobs, it returns io.EOF.
func (iter *BlobIter) Next() (*Blob, error) {
for {
obj, err := iter.EncodedObjectIter.Next()
if err != nil {
return nil, err
}
if obj.Type() != plumbing.BlobObject {
continue
}
return DecodeBlob(obj)
}
}
// ForEach calls the cb function for each blob contained in this iter until
// an error happens or the end of the iter is reached. If ErrStop is returned
// by cb, the iteration stops but no error is returned. The iterator is closed.
func (iter *BlobIter) ForEach(cb func(*Blob) error) error {
return iter.EncodedObjectIter.ForEach(func(obj plumbing.EncodedObject) error {
if obj.Type() != plumbing.BlobObject {
return nil
}
b, err := DecodeBlob(obj)
if err != nil {
return err
}
return cb(b)
})
}
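A usage sketch (not part of the vendored file); it assumes the go-git repository helpers git.PlainOpen and Repository.BlobObjects, which live elsewhere in this module, and "/path/to/repo" is a placeholder:

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	blobs, err := repo.BlobObjects() // *object.BlobIter over every blob in the store
	if err != nil {
		panic(err)
	}
	defer blobs.Close()
	_ = blobs.ForEach(func(b *object.Blob) error {
		fmt.Println(b.Hash, b.Size)
		return nil
	})
}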

View File

@ -0,0 +1,159 @@
package object
import (
"bytes"
"context"
"fmt"
"strings"
"github.com/go-git/go-git/v5/utils/merkletrie"
)
// Change values represent a detected change between two git trees. For
// modifications, From is the original status of the node and To is its
// final status. For insertions, From is the zero value and for
// deletions To is the zero value.
type Change struct {
From ChangeEntry
To ChangeEntry
}
var empty ChangeEntry
// Action returns the kind of action represented by the change, an
// insertion, a deletion or a modification.
func (c *Change) Action() (merkletrie.Action, error) {
if c.From == empty && c.To == empty {
return merkletrie.Action(0),
fmt.Errorf("malformed change: empty from and to")
}
if c.From == empty {
return merkletrie.Insert, nil
}
if c.To == empty {
return merkletrie.Delete, nil
}
return merkletrie.Modify, nil
}
// Files returns the files before and after a change.
// For insertions from will be nil. For deletions to will be nil.
func (c *Change) Files() (from, to *File, err error) {
action, err := c.Action()
if err != nil {
return
}
if action == merkletrie.Insert || action == merkletrie.Modify {
to, err = c.To.Tree.TreeEntryFile(&c.To.TreeEntry)
if !c.To.TreeEntry.Mode.IsFile() {
return nil, nil, nil
}
if err != nil {
return
}
}
if action == merkletrie.Delete || action == merkletrie.Modify {
from, err = c.From.Tree.TreeEntryFile(&c.From.TreeEntry)
if !c.From.TreeEntry.Mode.IsFile() {
return nil, nil, nil
}
if err != nil {
return
}
}
return
}
func (c *Change) String() string {
action, err := c.Action()
if err != nil {
return "malformed change"
}
return fmt.Sprintf("<Action: %s, Path: %s>", action, c.name())
}
// Patch returns a Patch with all the file changes in chunks. This
// representation can be used to create several diff outputs.
func (c *Change) Patch() (*Patch, error) {
return c.PatchContext(context.Background())
}
// PatchContext returns a Patch with all the file changes in chunks. This
// representation can be used to create several diff outputs.
// If the context expires, a non-nil error will be returned.
// The provided context must be non-nil.
func (c *Change) PatchContext(ctx context.Context) (*Patch, error) {
return getPatchContext(ctx, "", c)
}
func (c *Change) name() string {
if c.From != empty {
return c.From.Name
}
return c.To.Name
}
// ChangeEntry values represent a node that has suffered a change.
type ChangeEntry struct {
// Full path of the node using "/" as separator.
Name string
// Parent tree of the node that has changed.
Tree *Tree
// The entry of the node.
TreeEntry TreeEntry
}
// Changes represents a collection of changes between two git trees.
// Implements sort.Interface lexicographically over the path of the
// changed files.
type Changes []*Change
func (c Changes) Len() int {
return len(c)
}
func (c Changes) Swap(i, j int) {
c[i], c[j] = c[j], c[i]
}
func (c Changes) Less(i, j int) bool {
return strings.Compare(c[i].name(), c[j].name()) < 0
}
func (c Changes) String() string {
var buffer bytes.Buffer
buffer.WriteString("[")
comma := ""
for _, v := range c {
buffer.WriteString(comma)
buffer.WriteString(v.String())
comma = ", "
}
buffer.WriteString("]")
return buffer.String()
}
// Patch returns a Patch with all the changes in chunks. This
// representation can be used to create several diff outputs.
func (c Changes) Patch() (*Patch, error) {
return c.PatchContext(context.Background())
}
// PatchContext returns a Patch with all the changes in chunks. This
// representation can be used to create several diff outputs.
// If the context expires, a non-nil error will be returned.
// The provided context must be non-nil.
func (c Changes) PatchContext(ctx context.Context) (*Patch, error) {
return getPatchContext(ctx, "", c...)
}
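A small sketch (not part of the vendored file) showing how a Changes value, typically produced by DiffTree further below, can be summarized by its Action:

package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/object"
	"github.com/go-git/go-git/v5/utils/merkletrie"
)

// summarize prints one status letter per change, similar to `git diff --name-status`.
func summarize(changes object.Changes) error {
	for _, ch := range changes {
		action, err := ch.Action()
		if err != nil {
			return err
		}
		switch action {
		case merkletrie.Insert:
			fmt.Println("A", ch.To.Name)
		case merkletrie.Delete:
			fmt.Println("D", ch.From.Name)
		case merkletrie.Modify:
			fmt.Println("M", ch.To.Name)
		}
	}
	return nil
}

func main() {
	_ = summarize(nil) // an empty set of changes prints nothing
}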

View File

@ -0,0 +1,61 @@
package object
import (
"errors"
"fmt"
"github.com/go-git/go-git/v5/utils/merkletrie"
"github.com/go-git/go-git/v5/utils/merkletrie/noder"
)
// The following functions transform change types from the merkletrie
// package into change types from this package.
func newChange(c merkletrie.Change) (*Change, error) {
ret := &Change{}
var err error
if ret.From, err = newChangeEntry(c.From); err != nil {
return nil, fmt.Errorf("from field: %s", err)
}
if ret.To, err = newChangeEntry(c.To); err != nil {
return nil, fmt.Errorf("to field: %s", err)
}
return ret, nil
}
func newChangeEntry(p noder.Path) (ChangeEntry, error) {
if p == nil {
return empty, nil
}
asTreeNoder, ok := p.Last().(*treeNoder)
if !ok {
return ChangeEntry{}, errors.New("cannot transform non-TreeNoders")
}
return ChangeEntry{
Name: p.String(),
Tree: asTreeNoder.parent,
TreeEntry: TreeEntry{
Name: asTreeNoder.name,
Mode: asTreeNoder.mode,
Hash: asTreeNoder.hash,
},
}, nil
}
func newChanges(src merkletrie.Changes) (Changes, error) {
ret := make(Changes, len(src))
var err error
for i, e := range src {
ret[i], err = newChange(e)
if err != nil {
return nil, fmt.Errorf("change #%d: %s", i, err)
}
}
return ret, nil
}

View File

@ -0,0 +1,507 @@
package object
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"strings"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
const (
beginpgp string = "-----BEGIN PGP SIGNATURE-----"
endpgp string = "-----END PGP SIGNATURE-----"
headerpgp string = "gpgsig"
headerencoding string = "encoding"
// https://github.com/git/git/blob/bcb6cae2966cc407ca1afc77413b3ef11103c175/Documentation/gitformat-signature.txt#L153
// When a merge commit is created from a signed tag, the tag is embedded in
// the commit with the "mergetag" header.
headermergetag string = "mergetag"
defaultUtf8CommitMessageEncoding MessageEncoding = "UTF-8"
)
// Hash represents the hash of an object
type Hash plumbing.Hash
// MessageEncoding represents the encoding of a commit
type MessageEncoding string
// Commit points to a single tree, marking it as what the project looked like
// at a certain point in time. It contains meta-information about that point
// in time, such as a timestamp, the author of the changes since the last
// commit, a pointer to the previous commit(s), etc.
// http://shafiulazam.com/gitbook/1_the_git_object_model.html
type Commit struct {
// Hash of the commit object.
Hash plumbing.Hash
// Author is the original author of the commit.
Author Signature
// Committer is the one performing the commit, might be different from
// Author.
Committer Signature
// MergeTag is the embedded tag object when a merge commit is created by
// merging a signed tag.
MergeTag string
// PGPSignature is the PGP signature of the commit.
PGPSignature string
// Message is the commit message, contains arbitrary text.
Message string
// TreeHash is the hash of the root tree of the commit.
TreeHash plumbing.Hash
// ParentHashes are the hashes of the parent commits of the commit.
ParentHashes []plumbing.Hash
// Encoding is the encoding of the commit.
Encoding MessageEncoding
s storer.EncodedObjectStorer
}
// GetCommit gets a commit from an object storer and decodes it.
func GetCommit(s storer.EncodedObjectStorer, h plumbing.Hash) (*Commit, error) {
o, err := s.EncodedObject(plumbing.CommitObject, h)
if err != nil {
return nil, err
}
return DecodeCommit(s, o)
}
// DecodeCommit decodes an encoded object into a *Commit and associates it to
// the given object storer.
func DecodeCommit(s storer.EncodedObjectStorer, o plumbing.EncodedObject) (*Commit, error) {
c := &Commit{s: s}
if err := c.Decode(o); err != nil {
return nil, err
}
return c, nil
}
// Tree returns the Tree from the commit.
func (c *Commit) Tree() (*Tree, error) {
return GetTree(c.s, c.TreeHash)
}
// PatchContext returns the Patch between the current commit and the provided one.
// An error will be returned if the context expires. The provided context must be non-nil.
//
// NOTE: Since version 5.1.0 the renames are correctly handled, the settings
// used are the recommended options DefaultDiffTreeOptions.
func (c *Commit) PatchContext(ctx context.Context, to *Commit) (*Patch, error) {
fromTree, err := c.Tree()
if err != nil {
return nil, err
}
var toTree *Tree
if to != nil {
toTree, err = to.Tree()
if err != nil {
return nil, err
}
}
return fromTree.PatchContext(ctx, toTree)
}
// Patch returns the Patch between the current commit and the provided one.
//
// NOTE: Since version 5.1.0 the renames are correctly handled, the settings
// used are the recommended options DefaultDiffTreeOptions.
func (c *Commit) Patch(to *Commit) (*Patch, error) {
return c.PatchContext(context.Background(), to)
}
// Parents returns a CommitIter over the parent Commits.
func (c *Commit) Parents() CommitIter {
return NewCommitIter(c.s,
storer.NewEncodedObjectLookupIter(c.s, plumbing.CommitObject, c.ParentHashes),
)
}
// NumParents returns the number of parents in a commit.
func (c *Commit) NumParents() int {
return len(c.ParentHashes)
}
var ErrParentNotFound = errors.New("commit parent not found")
// Parent returns the ith parent of a commit.
func (c *Commit) Parent(i int) (*Commit, error) {
if len(c.ParentHashes) == 0 || i > len(c.ParentHashes)-1 {
return nil, ErrParentNotFound
}
return GetCommit(c.s, c.ParentHashes[i])
}
// File returns the file with the specified "path" in the commit and a
// nil error if the file exists. If the file does not exist, it returns
// a nil file and the ErrFileNotFound error.
func (c *Commit) File(path string) (*File, error) {
tree, err := c.Tree()
if err != nil {
return nil, err
}
return tree.File(path)
}
// Files returns a FileIter allowing iteration over the files in the commit's Tree.
func (c *Commit) Files() (*FileIter, error) {
tree, err := c.Tree()
if err != nil {
return nil, err
}
return tree.Files(), nil
}
// ID returns the object ID of the commit. The returned value will always match
// the current value of Commit.Hash.
//
// ID is present to fulfill the Object interface.
func (c *Commit) ID() plumbing.Hash {
return c.Hash
}
// Type returns the type of object. It always returns plumbing.CommitObject.
//
// Type is present to fulfill the Object interface.
func (c *Commit) Type() plumbing.ObjectType {
return plumbing.CommitObject
}
// Decode transforms a plumbing.EncodedObject into a Commit struct.
func (c *Commit) Decode(o plumbing.EncodedObject) (err error) {
if o.Type() != plumbing.CommitObject {
return ErrUnsupportedObject
}
c.Hash = o.Hash()
c.Encoding = defaultUtf8CommitMessageEncoding
reader, err := o.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(reader, &err)
r := sync.GetBufioReader(reader)
defer sync.PutBufioReader(r)
var message bool
var mergetag bool
var pgpsig bool
var msgbuf bytes.Buffer
for {
line, err := r.ReadBytes('\n')
if err != nil && err != io.EOF {
return err
}
if mergetag {
if len(line) > 0 && line[0] == ' ' {
line = bytes.TrimLeft(line, " ")
c.MergeTag += string(line)
continue
} else {
mergetag = false
}
}
if pgpsig {
if len(line) > 0 && line[0] == ' ' {
line = bytes.TrimLeft(line, " ")
c.PGPSignature += string(line)
continue
} else {
pgpsig = false
}
}
if !message {
line = bytes.TrimSpace(line)
if len(line) == 0 {
message = true
continue
}
split := bytes.SplitN(line, []byte{' '}, 2)
var data []byte
if len(split) == 2 {
data = split[1]
}
switch string(split[0]) {
case "tree":
c.TreeHash = plumbing.NewHash(string(data))
case "parent":
c.ParentHashes = append(c.ParentHashes, plumbing.NewHash(string(data)))
case "author":
c.Author.Decode(data)
case "committer":
c.Committer.Decode(data)
case headermergetag:
c.MergeTag += string(data) + "\n"
mergetag = true
case headerencoding:
c.Encoding = MessageEncoding(data)
case headerpgp:
c.PGPSignature += string(data) + "\n"
pgpsig = true
}
} else {
msgbuf.Write(line)
}
if err == io.EOF {
break
}
}
c.Message = msgbuf.String()
return nil
}
// Encode transforms a Commit into a plumbing.EncodedObject.
func (c *Commit) Encode(o plumbing.EncodedObject) error {
return c.encode(o, true)
}
// EncodeWithoutSignature exports a Commit into a plumbing.EncodedObject without the signature (corresponding to the payload of the PGP signature).
func (c *Commit) EncodeWithoutSignature(o plumbing.EncodedObject) error {
return c.encode(o, false)
}
func (c *Commit) encode(o plumbing.EncodedObject, includeSig bool) (err error) {
o.SetType(plumbing.CommitObject)
w, err := o.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
if _, err = fmt.Fprintf(w, "tree %s\n", c.TreeHash.String()); err != nil {
return err
}
for _, parent := range c.ParentHashes {
if _, err = fmt.Fprintf(w, "parent %s\n", parent.String()); err != nil {
return err
}
}
if _, err = fmt.Fprint(w, "author "); err != nil {
return err
}
if err = c.Author.Encode(w); err != nil {
return err
}
if _, err = fmt.Fprint(w, "\ncommitter "); err != nil {
return err
}
if err = c.Committer.Encode(w); err != nil {
return err
}
if c.MergeTag != "" {
if _, err = fmt.Fprint(w, "\n"+headermergetag+" "); err != nil {
return err
}
// Split tag information lines and re-write with a left padding and
// newline. Use join for this so it's clear that a newline should not be
// added after this section. The newline will be added either as part of
// the PGP signature or the commit message.
mergetag := strings.TrimSuffix(c.MergeTag, "\n")
lines := strings.Split(mergetag, "\n")
if _, err = fmt.Fprint(w, strings.Join(lines, "\n ")); err != nil {
return err
}
}
if string(c.Encoding) != "" && c.Encoding != defaultUtf8CommitMessageEncoding {
if _, err = fmt.Fprintf(w, "\n%s %s", headerencoding, c.Encoding); err != nil {
return err
}
}
if c.PGPSignature != "" && includeSig {
if _, err = fmt.Fprint(w, "\n"+headerpgp+" "); err != nil {
return err
}
// Split all the signature lines and re-write with a left padding and
// newline. Use join for this so it's clear that a newline should not be
// added after this section, as it will be added when the message is
// printed.
signature := strings.TrimSuffix(c.PGPSignature, "\n")
lines := strings.Split(signature, "\n")
if _, err = fmt.Fprint(w, strings.Join(lines, "\n ")); err != nil {
return err
}
}
if _, err = fmt.Fprintf(w, "\n\n%s", c.Message); err != nil {
return err
}
return err
}
// Stats returns the stats of a commit.
func (c *Commit) Stats() (FileStats, error) {
return c.StatsContext(context.Background())
}
// StatsContext returns the stats of a commit. An error will be returned if the
// context expires. The provided context must be non-nil.
func (c *Commit) StatsContext(ctx context.Context) (FileStats, error) {
fromTree, err := c.Tree()
if err != nil {
return nil, err
}
toTree := &Tree{}
if c.NumParents() != 0 {
firstParent, err := c.Parents().Next()
if err != nil {
return nil, err
}
toTree, err = firstParent.Tree()
if err != nil {
return nil, err
}
}
patch, err := toTree.PatchContext(ctx, fromTree)
if err != nil {
return nil, err
}
return getFileStatsFromFilePatches(patch.FilePatches()), nil
}
func (c *Commit) String() string {
return fmt.Sprintf(
"%s %s\nAuthor: %s\nDate: %s\n\n%s\n",
plumbing.CommitObject, c.Hash, c.Author.String(),
c.Author.When.Format(DateFormat), indent(c.Message),
)
}
// Verify performs PGP verification of the commit with a provided armored
// keyring and returns the openpgp.Entity associated with the verifying key on success.
func (c *Commit) Verify(armoredKeyRing string) (*openpgp.Entity, error) {
keyRingReader := strings.NewReader(armoredKeyRing)
keyring, err := openpgp.ReadArmoredKeyRing(keyRingReader)
if err != nil {
return nil, err
}
// Extract signature.
signature := strings.NewReader(c.PGPSignature)
encoded := &plumbing.MemoryObject{}
// Encode commit components, excluding signature and get a reader object.
if err := c.EncodeWithoutSignature(encoded); err != nil {
return nil, err
}
er, err := encoded.Reader()
if err != nil {
return nil, err
}
return openpgp.CheckArmoredDetachedSignature(keyring, er, signature, nil)
}
// Less defines a compare function to determine which commit is 'earlier' by:
// - First, compare Committer.When
// - If the Committer.When values are equal, compare Author.When
// - If the Author.When values are also equal, compare the hashes
func (c *Commit) Less(rhs *Commit) bool {
return c.Committer.When.Before(rhs.Committer.When) ||
(c.Committer.When.Equal(rhs.Committer.When) &&
(c.Author.When.Before(rhs.Author.When) ||
(c.Author.When.Equal(rhs.Author.When) && bytes.Compare(c.Hash[:], rhs.Hash[:]) < 0)))
}
func indent(t string) string {
var output []string
for _, line := range strings.Split(t, "\n") {
if len(line) != 0 {
line = " " + line
}
output = append(output, line)
}
return strings.Join(output, "\n")
}
// CommitIter is a generic closable interface for iterating over commits.
type CommitIter interface {
Next() (*Commit, error)
ForEach(func(*Commit) error) error
Close()
}
// storerCommitIter provides an iterator over the commits in an EncodedObjectStorer.
type storerCommitIter struct {
storer.EncodedObjectIter
s storer.EncodedObjectStorer
}
// NewCommitIter takes a storer.EncodedObjectStorer and a
// storer.EncodedObjectIter and returns a CommitIter that iterates over all
// commits contained in the storer.EncodedObjectIter.
//
// Any non-commit object returned by the storer.EncodedObjectIter is skipped.
func NewCommitIter(s storer.EncodedObjectStorer, iter storer.EncodedObjectIter) CommitIter {
return &storerCommitIter{iter, s}
}
// Next moves the iterator to the next commit and returns a pointer to it. If
// there are no more commits, it returns io.EOF.
func (iter *storerCommitIter) Next() (*Commit, error) {
obj, err := iter.EncodedObjectIter.Next()
if err != nil {
return nil, err
}
return DecodeCommit(iter.s, obj)
}
// ForEach calls the cb function for each commit contained in this iter until
// an error happens or the end of the iter is reached. If ErrStop is returned
// by cb, the iteration stops but no error is returned. The iterator is closed.
func (iter *storerCommitIter) ForEach(cb func(*Commit) error) error {
return iter.EncodedObjectIter.ForEach(func(obj plumbing.EncodedObject) error {
c, err := DecodeCommit(iter.s, obj)
if err != nil {
return err
}
return cb(c)
})
}
func (iter *storerCommitIter) Close() {
iter.EncodedObjectIter.Close()
}
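A usage sketch (not part of the vendored file); git.PlainOpen, Repository.Head and Repository.CommitObject come from the rest of the module, and the repository path is a placeholder:

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	commit, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}
	fmt.Print(commit) // uses the String method defined above

	// Walk only the direct parents.
	_ = commit.Parents().ForEach(func(p *object.Commit) error {
		fmt.Println("parent:", p.Hash)
		return nil
	})
}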

View File

@ -0,0 +1,327 @@
package object
import (
"container/list"
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/storage"
)
type commitPreIterator struct {
seenExternal map[plumbing.Hash]bool
seen map[plumbing.Hash]bool
stack []CommitIter
start *Commit
}
// NewCommitPreorderIter returns a CommitIter that walks the commit history,
// starting at the given commit and visiting its parents in pre-order.
// Each commit will be visited only once. Errors might be returned if the
// history cannot be traversed (e.g. missing objects). Commits in seenExternal
// are skipped, and ignore allows additional commits to be excluded from the
// iteration.
func NewCommitPreorderIter(
c *Commit,
seenExternal map[plumbing.Hash]bool,
ignore []plumbing.Hash,
) CommitIter {
seen := make(map[plumbing.Hash]bool)
for _, h := range ignore {
seen[h] = true
}
return &commitPreIterator{
seenExternal: seenExternal,
seen: seen,
stack: make([]CommitIter, 0),
start: c,
}
}
func (w *commitPreIterator) Next() (*Commit, error) {
var c *Commit
for {
if w.start != nil {
c = w.start
w.start = nil
} else {
current := len(w.stack) - 1
if current < 0 {
return nil, io.EOF
}
var err error
c, err = w.stack[current].Next()
if err == io.EOF {
w.stack = w.stack[:current]
continue
}
if err != nil {
return nil, err
}
}
if w.seen[c.Hash] || w.seenExternal[c.Hash] {
continue
}
w.seen[c.Hash] = true
if c.NumParents() > 0 {
w.stack = append(w.stack, filteredParentIter(c, w.seen))
}
return c, nil
}
}
func filteredParentIter(c *Commit, seen map[plumbing.Hash]bool) CommitIter {
var hashes []plumbing.Hash
for _, h := range c.ParentHashes {
if !seen[h] {
hashes = append(hashes, h)
}
}
return NewCommitIter(c.s,
storer.NewEncodedObjectLookupIter(c.s, plumbing.CommitObject, hashes),
)
}
func (w *commitPreIterator) ForEach(cb func(*Commit) error) error {
for {
c, err := w.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
err = cb(c)
if err == storer.ErrStop {
break
}
if err != nil {
return err
}
}
return nil
}
func (w *commitPreIterator) Close() {}
type commitPostIterator struct {
stack []*Commit
seen map[plumbing.Hash]bool
}
// NewCommitPostorderIter returns a CommitIter that walks the commit
// history like WalkCommitHistory but in post-order. This means that after
// walking a merge commit, the merged commit will be walked before the base
// it was merged on. This can be useful if you wish to see the history in
// chronological order. Ignore allows skipping some commits from the iteration.
func NewCommitPostorderIter(c *Commit, ignore []plumbing.Hash) CommitIter {
seen := make(map[plumbing.Hash]bool)
for _, h := range ignore {
seen[h] = true
}
return &commitPostIterator{
stack: []*Commit{c},
seen: seen,
}
}
func (w *commitPostIterator) Next() (*Commit, error) {
for {
if len(w.stack) == 0 {
return nil, io.EOF
}
c := w.stack[len(w.stack)-1]
w.stack = w.stack[:len(w.stack)-1]
if w.seen[c.Hash] {
continue
}
w.seen[c.Hash] = true
return c, c.Parents().ForEach(func(p *Commit) error {
w.stack = append(w.stack, p)
return nil
})
}
}
func (w *commitPostIterator) ForEach(cb func(*Commit) error) error {
for {
c, err := w.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
err = cb(c)
if err == storer.ErrStop {
break
}
if err != nil {
return err
}
}
return nil
}
func (w *commitPostIterator) Close() {}
// commitAllIterator is a commit iterator over all refs.
type commitAllIterator struct {
// currCommit points to the current commit.
currCommit *list.Element
}
// NewCommitAllIter returns a new commit iterator for all refs.
// repoStorer is a repo Storer used to get commits and references.
// commitIterFunc is a commit iterator function, used to iterate through ref commits in chosen order
func NewCommitAllIter(repoStorer storage.Storer, commitIterFunc func(*Commit) CommitIter) (CommitIter, error) {
commitsPath := list.New()
commitsLookup := make(map[plumbing.Hash]*list.Element)
head, err := storer.ResolveReference(repoStorer, plumbing.HEAD)
if err == nil {
err = addReference(repoStorer, commitIterFunc, head, commitsPath, commitsLookup)
}
if err != nil && err != plumbing.ErrReferenceNotFound {
return nil, err
}
// add all references along with the HEAD
refIter, err := repoStorer.IterReferences()
if err != nil {
return nil, err
}
defer refIter.Close()
for {
ref, err := refIter.Next()
if err == io.EOF {
break
}
if err == plumbing.ErrReferenceNotFound {
continue
}
if err != nil {
return nil, err
}
if err = addReference(repoStorer, commitIterFunc, ref, commitsPath, commitsLookup); err != nil {
return nil, err
}
}
return &commitAllIterator{commitsPath.Front()}, nil
}
func addReference(
repoStorer storage.Storer,
commitIterFunc func(*Commit) CommitIter,
ref *plumbing.Reference,
commitsPath *list.List,
commitsLookup map[plumbing.Hash]*list.Element) error {
_, exists := commitsLookup[ref.Hash()]
if exists {
// we already have it - skip the reference.
return nil
}
refCommit, _ := GetCommit(repoStorer, ref.Hash())
if refCommit == nil {
// if it's not a commit - skip it.
return nil
}
var (
refCommits []*Commit
parent *list.Element
)
// collect all ref commits to add
commitIter := commitIterFunc(refCommit)
for c, e := commitIter.Next(); e == nil; {
parent, exists = commitsLookup[c.Hash]
if exists {
break
}
refCommits = append(refCommits, c)
c, e = commitIter.Next()
}
commitIter.Close()
if parent == nil {
// common parent - not found
// add all commits to the path from this ref (maybe it's a HEAD and we don't have anything, yet)
for _, c := range refCommits {
parent = commitsPath.PushBack(c)
commitsLookup[c.Hash] = parent
}
} else {
// add ref's commits to the path in reverse order (from the latest)
for i := len(refCommits) - 1; i >= 0; i-- {
c := refCommits[i]
// insert before found common parent
parent = commitsPath.InsertBefore(c, parent)
commitsLookup[c.Hash] = parent
}
}
return nil
}
func (it *commitAllIterator) Next() (*Commit, error) {
if it.currCommit == nil {
return nil, io.EOF
}
c := it.currCommit.Value.(*Commit)
it.currCommit = it.currCommit.Next()
return c, nil
}
func (it *commitAllIterator) ForEach(cb func(*Commit) error) error {
for {
c, err := it.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
err = cb(c)
if err == storer.ErrStop {
break
}
if err != nil {
return err
}
}
return nil
}
func (it *commitAllIterator) Close() {
it.currCommit = nil
}
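A sketch (not part of the vendored file) counting every commit reachable from HEAD with NewCommitPreorderIter; the repository path is a placeholder:

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	head, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}

	iter := object.NewCommitPreorderIter(head, nil, nil)
	defer iter.Close()
	count := 0
	_ = iter.ForEach(func(*object.Commit) error {
		count++
		return nil
	})
	fmt.Println("reachable commits:", count)
}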

View File

@ -0,0 +1,100 @@
package object
import (
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
type bfsCommitIterator struct {
seenExternal map[plumbing.Hash]bool
seen map[plumbing.Hash]bool
queue []*Commit
}
// NewCommitIterBSF returns a CommitIter that walks the commit history,
// starting at the given commit and visiting its parents in breadth-first order.
// Each commit will be visited only once. Errors might be returned if the
// history cannot be traversed (e.g. missing objects). Commits in seenExternal
// are skipped, and ignore allows additional commits to be excluded from the
// iteration.
func NewCommitIterBSF(
c *Commit,
seenExternal map[plumbing.Hash]bool,
ignore []plumbing.Hash,
) CommitIter {
seen := make(map[plumbing.Hash]bool)
for _, h := range ignore {
seen[h] = true
}
return &bfsCommitIterator{
seenExternal: seenExternal,
seen: seen,
queue: []*Commit{c},
}
}
func (w *bfsCommitIterator) appendHash(store storer.EncodedObjectStorer, h plumbing.Hash) error {
if w.seen[h] || w.seenExternal[h] {
return nil
}
c, err := GetCommit(store, h)
if err != nil {
return err
}
w.queue = append(w.queue, c)
return nil
}
func (w *bfsCommitIterator) Next() (*Commit, error) {
var c *Commit
for {
if len(w.queue) == 0 {
return nil, io.EOF
}
c = w.queue[0]
w.queue = w.queue[1:]
if w.seen[c.Hash] || w.seenExternal[c.Hash] {
continue
}
w.seen[c.Hash] = true
for _, h := range c.ParentHashes {
err := w.appendHash(c.s, h)
if err != nil {
return nil, err
}
}
return c, nil
}
}
func (w *bfsCommitIterator) ForEach(cb func(*Commit) error) error {
for {
c, err := w.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
err = cb(c)
if err == storer.ErrStop {
break
}
if err != nil {
return err
}
}
return nil
}
func (w *bfsCommitIterator) Close() {}

View File

@ -0,0 +1,175 @@
package object
import (
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
// NewFilterCommitIter returns a CommitIter that walks the commit history,
// starting at the passed commit and visiting its parents in breadth-first order.
// The commits returned by the CommitIter will validate the passed CommitFilter.
// The history won't be traversed beyond a commit if isLimit is true for it.
// Each commit will be visited only once.
// If the commit history can not be traversed, or the Close() method is called,
// the CommitIter won't return more commits.
// If no isValid is passed, all ancestors of the from commit will be valid.
// If no isLimit is passed, all ancestors of all commits will be visited.
func NewFilterCommitIter(
from *Commit,
isValid *CommitFilter,
isLimit *CommitFilter,
) CommitIter {
var validFilter CommitFilter
if isValid == nil {
validFilter = func(_ *Commit) bool {
return true
}
} else {
validFilter = *isValid
}
var limitFilter CommitFilter
if isLimit == nil {
limitFilter = func(_ *Commit) bool {
return false
}
} else {
limitFilter = *isLimit
}
return &filterCommitIter{
isValid: validFilter,
isLimit: limitFilter,
visited: map[plumbing.Hash]struct{}{},
queue: []*Commit{from},
}
}
// CommitFilter returns a boolean for the passed Commit
type CommitFilter func(*Commit) bool
// filterCommitIter implements CommitIter
type filterCommitIter struct {
isValid CommitFilter
isLimit CommitFilter
visited map[plumbing.Hash]struct{}
queue []*Commit
lastErr error
}
// Next returns the next commit of the CommitIter.
// It will return io.EOF if there are no more commits to visit,
// or an error if the history could not be traversed.
func (w *filterCommitIter) Next() (*Commit, error) {
var commit *Commit
var err error
for {
commit, err = w.popNewFromQueue()
if err != nil {
return nil, w.close(err)
}
w.visited[commit.Hash] = struct{}{}
if !w.isLimit(commit) {
err = w.addToQueue(commit.s, commit.ParentHashes...)
if err != nil {
return nil, w.close(err)
}
}
if w.isValid(commit) {
return commit, nil
}
}
}
// ForEach runs the passed callback over each Commit returned by the CommitIter
// until the callback returns an error or there are no more commits to traverse.
func (w *filterCommitIter) ForEach(cb func(*Commit) error) error {
for {
commit, err := w.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
if err := cb(commit); err == storer.ErrStop {
break
} else if err != nil {
return err
}
}
return nil
}
// Error returns the error that caused the CommitIter to stop returning commits.
func (w *filterCommitIter) Error() error {
return w.lastErr
}
// Close closes the CommitIter
func (w *filterCommitIter) Close() {
w.visited = map[plumbing.Hash]struct{}{}
w.queue = []*Commit{}
w.isLimit = nil
w.isValid = nil
}
// close closes the CommitIter with an error
func (w *filterCommitIter) close(err error) error {
w.Close()
w.lastErr = err
return err
}
// popNewFromQueue returns the first new commit from the internal fifo queue,
// or an io.EOF error if the queue is empty
func (w *filterCommitIter) popNewFromQueue() (*Commit, error) {
var first *Commit
for {
if len(w.queue) == 0 {
if w.lastErr != nil {
return nil, w.lastErr
}
return nil, io.EOF
}
first = w.queue[0]
w.queue = w.queue[1:]
if _, ok := w.visited[first.Hash]; ok {
continue
}
return first, nil
}
}
// addToQueue adds the passed commits to the internal fifo queue if they weren't seen
// or returns an error if the passed hashes could not be used to get valid commits
func (w *filterCommitIter) addToQueue(
store storer.EncodedObjectStorer,
hashes ...plumbing.Hash,
) error {
for _, hash := range hashes {
if _, ok := w.visited[hash]; ok {
continue
}
commit, err := GetCommit(store, hash)
if err != nil {
return err
}
w.queue = append(w.queue, commit)
}
return nil
}
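A sketch (not part of the vendored file) that uses NewFilterCommitIter to yield only merge commits while still walking every ancestor; the repository path is a placeholder:

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	head, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}

	// isValid selects merge commits; a nil isLimit means the walk is never cut short.
	var isMerge object.CommitFilter = func(c *object.Commit) bool {
		return c.NumParents() > 1
	}
	iter := object.NewFilterCommitIter(head, &isMerge, nil)
	defer iter.Close()
	_ = iter.ForEach(func(c *object.Commit) error {
		fmt.Println("merge:", c.Hash)
		return nil
	})
}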

View File

@ -0,0 +1,103 @@
package object
import (
"io"
"github.com/emirpasic/gods/trees/binaryheap"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
type commitIteratorByCTime struct {
seenExternal map[plumbing.Hash]bool
seen map[plumbing.Hash]bool
heap *binaryheap.Heap
}
// NewCommitIterCTime returns a CommitIter that walks the commit history,
// starting at the given commit and visiting its parents while preserving
// committer-time order; this appears to be the closest order to `git log`.
// Each commit will be visited only once. Errors might be returned if the
// history cannot be traversed (e.g. missing objects). Commits in seenExternal
// are skipped, and ignore allows additional commits to be excluded from the
// iteration.
func NewCommitIterCTime(
c *Commit,
seenExternal map[plumbing.Hash]bool,
ignore []plumbing.Hash,
) CommitIter {
seen := make(map[plumbing.Hash]bool)
for _, h := range ignore {
seen[h] = true
}
heap := binaryheap.NewWith(func(a, b interface{}) int {
if a.(*Commit).Committer.When.Before(b.(*Commit).Committer.When) {
return 1
}
return -1
})
heap.Push(c)
return &commitIteratorByCTime{
seenExternal: seenExternal,
seen: seen,
heap: heap,
}
}
func (w *commitIteratorByCTime) Next() (*Commit, error) {
var c *Commit
for {
cIn, ok := w.heap.Pop()
if !ok {
return nil, io.EOF
}
c = cIn.(*Commit)
if w.seen[c.Hash] || w.seenExternal[c.Hash] {
continue
}
w.seen[c.Hash] = true
for _, h := range c.ParentHashes {
if w.seen[h] || w.seenExternal[h] {
continue
}
pc, err := GetCommit(c.s, h)
if err != nil {
return nil, err
}
w.heap.Push(pc)
}
return c, nil
}
}
func (w *commitIteratorByCTime) ForEach(cb func(*Commit) error) error {
for {
c, err := w.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
err = cb(c)
if err == storer.ErrStop {
break
}
if err != nil {
return err
}
}
return nil
}
func (w *commitIteratorByCTime) Close() {}

View File

@ -0,0 +1,65 @@
package object
import (
"io"
"time"
"github.com/go-git/go-git/v5/plumbing/storer"
)
type commitLimitIter struct {
sourceIter CommitIter
limitOptions LogLimitOptions
}
type LogLimitOptions struct {
Since *time.Time
Until *time.Time
}
func NewCommitLimitIterFromIter(commitIter CommitIter, limitOptions LogLimitOptions) CommitIter {
iterator := new(commitLimitIter)
iterator.sourceIter = commitIter
iterator.limitOptions = limitOptions
return iterator
}
func (c *commitLimitIter) Next() (*Commit, error) {
for {
commit, err := c.sourceIter.Next()
if err != nil {
return nil, err
}
if c.limitOptions.Since != nil && commit.Committer.When.Before(*c.limitOptions.Since) {
continue
}
if c.limitOptions.Until != nil && commit.Committer.When.After(*c.limitOptions.Until) {
continue
}
return commit, nil
}
}
func (c *commitLimitIter) ForEach(cb func(*Commit) error) error {
for {
commit, nextErr := c.Next()
if nextErr == io.EOF {
break
}
if nextErr != nil {
return nextErr
}
err := cb(commit)
if err == storer.ErrStop {
return nil
} else if err != nil {
return err
}
}
return nil
}
func (c *commitLimitIter) Close() {
c.sourceIter.Close()
}
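A sketch (not part of the vendored file) combining the pre-order walker with NewCommitLimitIterFromIter to keep only commits from the last month; the repository path is a placeholder:

package main

import (
	"fmt"
	"time"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	head, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}

	since := time.Now().AddDate(0, -1, 0) // only commits newer than one month
	base := object.NewCommitPreorderIter(head, nil, nil)
	iter := object.NewCommitLimitIterFromIter(base, object.LogLimitOptions{Since: &since})
	defer iter.Close()
	_ = iter.ForEach(func(c *object.Commit) error {
		fmt.Println(c.Hash, c.Committer.When.Format(time.RFC3339))
		return nil
	})
}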

View File

@ -0,0 +1,167 @@
package object
import (
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
type commitPathIter struct {
pathFilter func(string) bool
sourceIter CommitIter
currentCommit *Commit
checkParent bool
}
// NewCommitPathIterFromIter returns a commit iterator which performs a diff-tree
// between the successive trees returned by the given commit iterator. The purpose
// of this is to find the commits that explain how the files matching the path came
// to be. If checkParent is true, the function double-checks whether the potential
// parent (the next commit in the path) is one of the parents in the tree (this is
// used by `git log --all`). pathFilter takes a file path and returns true if the
// path should be included.
func NewCommitPathIterFromIter(pathFilter func(string) bool, commitIter CommitIter, checkParent bool) CommitIter {
iterator := new(commitPathIter)
iterator.sourceIter = commitIter
iterator.pathFilter = pathFilter
iterator.checkParent = checkParent
return iterator
}
// NewCommitFileIterFromIter is kept for compatibility, can be replaced with NewCommitPathIterFromIter
func NewCommitFileIterFromIter(fileName string, commitIter CommitIter, checkParent bool) CommitIter {
return NewCommitPathIterFromIter(
func(path string) bool {
return path == fileName
},
commitIter,
checkParent,
)
}
func (c *commitPathIter) Next() (*Commit, error) {
if c.currentCommit == nil {
var err error
c.currentCommit, err = c.sourceIter.Next()
if err != nil {
return nil, err
}
}
commit, commitErr := c.getNextFileCommit()
// Setting current-commit to nil to prevent unwanted states when errors are raised
if commitErr != nil {
c.currentCommit = nil
}
return commit, commitErr
}
func (c *commitPathIter) getNextFileCommit() (*Commit, error) {
var parentTree, currentTree *Tree
for {
// Parent-commit can be nil if the current-commit is the initial commit
parentCommit, parentCommitErr := c.sourceIter.Next()
if parentCommitErr != nil {
// If the parent-commit is beyond the initial commit, keep it nil
if parentCommitErr != io.EOF {
return nil, parentCommitErr
}
parentCommit = nil
}
if parentTree == nil {
var currTreeErr error
currentTree, currTreeErr = c.currentCommit.Tree()
if currTreeErr != nil {
return nil, currTreeErr
}
} else {
currentTree = parentTree
parentTree = nil
}
if parentCommit != nil {
var parentTreeErr error
parentTree, parentTreeErr = parentCommit.Tree()
if parentTreeErr != nil {
return nil, parentTreeErr
}
}
// Find diff between current and parent trees
changes, diffErr := DiffTree(currentTree, parentTree)
if diffErr != nil {
return nil, diffErr
}
found := c.hasFileChange(changes, parentCommit)
// Storing the current-commit in-case a change is found, and
// Updating the current-commit for the next-iteration
prevCommit := c.currentCommit
c.currentCommit = parentCommit
if found {
return prevCommit, nil
}
// If not matches found and if parent-commit is beyond the initial commit, then return with EOF
if parentCommit == nil {
return nil, io.EOF
}
}
}
func (c *commitPathIter) hasFileChange(changes Changes, parent *Commit) bool {
for _, change := range changes {
if !c.pathFilter(change.name()) {
continue
}
// filename matches, now check if source iterator contains all commits (from all refs)
if c.checkParent {
// Check if parent is beyond the initial commit
if parent == nil || isParentHash(parent.Hash, c.currentCommit) {
return true
}
continue
}
return true
}
return false
}
func isParentHash(hash plumbing.Hash, commit *Commit) bool {
for _, h := range commit.ParentHashes {
if h == hash {
return true
}
}
return false
}
func (c *commitPathIter) ForEach(cb func(*Commit) error) error {
for {
commit, nextErr := c.Next()
if nextErr == io.EOF {
break
}
if nextErr != nil {
return nextErr
}
err := cb(commit)
if err == storer.ErrStop {
return nil
} else if err != nil {
return err
}
}
return nil
}
func (c *commitPathIter) Close() {
c.sourceIter.Close()
}
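A sketch (not part of the vendored file) of NewCommitPathIterFromIter, listing the commits that touched a single file; the repository path and file name are placeholders:

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	head, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}

	source := object.NewCommitPreorderIter(head, nil, nil)
	iter := object.NewCommitPathIterFromIter(func(path string) bool {
		return path == "README.md" // placeholder file name
	}, source, false)
	defer iter.Close()
	_ = iter.ForEach(func(c *object.Commit) error {
		fmt.Println(c.Hash, c.Author.When)
		return nil
	})
}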

View File

@ -0,0 +1,98 @@
package object
import (
"bytes"
"context"
"github.com/go-git/go-git/v5/utils/merkletrie"
"github.com/go-git/go-git/v5/utils/merkletrie/noder"
)
// DiffTree compares the content and mode of the blobs found via two
// tree objects.
// DiffTree does not perform rename detection, use DiffTreeWithOptions
// instead to detect renames.
func DiffTree(a, b *Tree) (Changes, error) {
return DiffTreeContext(context.Background(), a, b)
}
// DiffTreeContext compares the content and mode of the blobs found via two
// tree objects. Provided context must be non-nil.
// An error will be returned if context expires.
func DiffTreeContext(ctx context.Context, a, b *Tree) (Changes, error) {
return DiffTreeWithOptions(ctx, a, b, nil)
}
// DiffTreeOptions are the configurable options when performing a diff tree.
type DiffTreeOptions struct {
// DetectRenames is whether the diff tree will use rename detection.
DetectRenames bool
// RenameScore is the threshold of similarity between files required to
// consider that a pair of delete and insert is a rename. The number must be
// between 0 and 100.
RenameScore uint
// RenameLimit is the maximum number of files that can be compared when
// detecting renames. The number of comparisons that have to be performed
// is equal to the number of deleted files * the number of added files.
// That means that if 100 files were deleted and 50 files were added, 5000
// file comparisons may be needed. So, if the rename limit is 50, the number
// of both deleted and added files needs to be equal to or less than 50.
// A value of 0 means no limit.
RenameLimit uint
// OnlyExactRenames performs only detection of exact renames and will not perform
// any detection of renames based on file similarity.
OnlyExactRenames bool
}
// DefaultDiffTreeOptions are the default and recommended options for the
// diff tree.
var DefaultDiffTreeOptions = &DiffTreeOptions{
DetectRenames: true,
RenameScore: 60,
RenameLimit: 0,
OnlyExactRenames: false,
}
// DiffTreeWithOptions compares the content and mode of the blobs found
// via two tree objects with the given options. The provided context
// must be non-nil.
// If no options are passed, no rename detection will be performed. The
// recommended options are DefaultDiffTreeOptions.
// An error will be returned if the context expires.
// This function will be deprecated and removed in v6, where the default
// behaviour of DiffTree will be to detect renames.
func DiffTreeWithOptions(
ctx context.Context,
a, b *Tree,
opts *DiffTreeOptions,
) (Changes, error) {
from := NewTreeRootNode(a)
to := NewTreeRootNode(b)
hashEqual := func(a, b noder.Hasher) bool {
return bytes.Equal(a.Hash(), b.Hash())
}
merkletrieChanges, err := merkletrie.DiffTreeContext(ctx, from, to, hashEqual)
if err != nil {
if err == merkletrie.ErrCanceled {
return nil, ErrCanceled
}
return nil, err
}
changes, err := newChanges(merkletrieChanges)
if err != nil {
return nil, err
}
if opts == nil {
opts = new(DiffTreeOptions)
}
if opts.DetectRenames {
return DetectRenames(changes, opts)
}
return changes, nil
}
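A sketch (not part of the vendored file) diffing HEAD against its first parent with the recommended DefaultDiffTreeOptions; the repository path is a placeholder, and a root commit would make Parent(0) fail:

package main

import (
	"context"
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	head, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}
	parent, err := head.Parent(0)
	if err != nil {
		panic(err) // e.g. ErrParentNotFound on a root commit
	}

	fromTree, err := parent.Tree()
	if err != nil {
		panic(err)
	}
	toTree, err := head.Tree()
	if err != nil {
		panic(err)
	}

	// Rename detection is enabled through the recommended defaults.
	changes, err := object.DiffTreeWithOptions(context.Background(), fromTree, toTree, object.DefaultDiffTreeOptions)
	if err != nil {
		panic(err)
	}
	for _, ch := range changes {
		fmt.Println(ch)
	}
}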

View File

@ -0,0 +1,137 @@
package object
import (
"bytes"
"io"
"strings"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/binary"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// File represents git file objects.
type File struct {
// Name is the path of the file. It might be relative to a tree,
// depending on the function that generates it.
Name string
// Mode is the file mode.
Mode filemode.FileMode
// Blob with the contents of the file.
Blob
}
// NewFile returns a File based on the given blob object
func NewFile(name string, m filemode.FileMode, b *Blob) *File {
return &File{Name: name, Mode: m, Blob: *b}
}
// Contents returns the contents of a file as a string.
func (f *File) Contents() (content string, err error) {
reader, err := f.Reader()
if err != nil {
return "", err
}
defer ioutil.CheckClose(reader, &err)
buf := new(bytes.Buffer)
if _, err := buf.ReadFrom(reader); err != nil {
return "", err
}
return buf.String(), nil
}
// IsBinary reports whether the file is binary.
func (f *File) IsBinary() (bin bool, err error) {
reader, err := f.Reader()
if err != nil {
return false, err
}
defer ioutil.CheckClose(reader, &err)
return binary.IsBinary(reader)
}
// Lines returns a slice of lines from the contents of a file, stripping
// all end-of-line characters. If the last line is empty (the content ends
// with an end of line), it is also removed.
func (f *File) Lines() ([]string, error) {
content, err := f.Contents()
if err != nil {
return nil, err
}
splits := strings.Split(content, "\n")
// remove the last line if it is empty
if splits[len(splits)-1] == "" {
return splits[:len(splits)-1], nil
}
return splits, nil
}
// FileIter provides an iterator for the files in a tree.
type FileIter struct {
s storer.EncodedObjectStorer
w TreeWalker
}
// NewFileIter takes a storer.EncodedObjectStorer and a Tree and returns a
// *FileIter that iterates over all files contained in the tree, recursively.
func NewFileIter(s storer.EncodedObjectStorer, t *Tree) *FileIter {
return &FileIter{s: s, w: *NewTreeWalker(t, true, nil)}
}
// Next moves the iterator to the next file and returns a pointer to it. If
// there are no more files, it returns io.EOF.
func (iter *FileIter) Next() (*File, error) {
for {
name, entry, err := iter.w.Next()
if err != nil {
return nil, err
}
if entry.Mode == filemode.Dir || entry.Mode == filemode.Submodule {
continue
}
blob, err := GetBlob(iter.s, entry.Hash)
if err != nil {
return nil, err
}
return NewFile(name, entry.Mode, blob), nil
}
}
// ForEach calls the cb function for each file contained in this iter until
// an error happens or the end of the iter is reached. If storer.ErrStop is
// returned by cb, the iteration stops but no error is returned. The iterator is closed.
func (iter *FileIter) ForEach(cb func(*File) error) error {
defer iter.Close()
for {
f, err := iter.Next()
if err != nil {
if err == io.EOF {
return nil
}
return err
}
if err := cb(f); err != nil {
if err == storer.ErrStop {
return nil
}
return err
}
}
}
func (iter *FileIter) Close() {
iter.w.Close()
}
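A sketch (not part of the vendored file) reading a file from the HEAD commit via Commit.File and File.Lines; the repository path and file name are placeholders:

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
)

func main() {
	repo, err := git.PlainOpen("/path/to/repo") // placeholder path
	if err != nil {
		panic(err)
	}
	ref, err := repo.Head()
	if err != nil {
		panic(err)
	}
	commit, err := repo.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}
	file, err := commit.File("README.md") // placeholder; ErrFileNotFound if missing
	if err != nil {
		panic(err)
	}
	lines, err := file.Lines()
	if err != nil {
		panic(err)
	}
	fmt.Println(file.Name, "has", len(lines), "lines")
}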

View File

@ -0,0 +1,210 @@
package object
import (
"fmt"
"sort"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
// errIsReachable is returned when the first commit is an ancestor of the second
// MergeBase mimics the behavior of `git merge-base actual other`, returning the
// best common ancestors between the current commit and the passed one.
// The best common ancestors can not be reached from other common ancestors.
func (c *Commit) MergeBase(other *Commit) ([]*Commit, error) {
// use sortedByCommitDateDesc strategy
sorted := sortByCommitDateDesc(c, other)
newer := sorted[0]
older := sorted[1]
newerHistory, err := ancestorsIndex(older, newer)
if err == errIsReachable {
return []*Commit{older}, nil
}
if err != nil {
return nil, err
}
var res []*Commit
inNewerHistory := isInIndexCommitFilter(newerHistory)
resIter := NewFilterCommitIter(older, &inNewerHistory, &inNewerHistory)
_ = resIter.ForEach(func(commit *Commit) error {
res = append(res, commit)
return nil
})
return Independents(res)
}
// IsAncestor returns true if the current commit is an ancestor of the passed one.
// It returns an error if the history is not traversable.
// It mimics the behavior of `git merge-base --is-ancestor actual other`.
func (c *Commit) IsAncestor(other *Commit) (bool, error) {
found := false
iter := NewCommitPreorderIter(other, nil, nil)
err := iter.ForEach(func(comm *Commit) error {
if comm.Hash != c.Hash {
return nil
}
found = true
return storer.ErrStop
})
return found, err
}
// ancestorsIndex returns a map with the ancestors of the starting commit if the
// excluded one is not one of them. It returns errIsReachable if the excluded commit
// is an ancestor of the starting one, or another error if the history is not traversable.
func ancestorsIndex(excluded, starting *Commit) (map[plumbing.Hash]struct{}, error) {
if excluded.Hash.String() == starting.Hash.String() {
return nil, errIsReachable
}
startingHistory := map[plumbing.Hash]struct{}{}
startingIter := NewCommitIterBSF(starting, nil, nil)
err := startingIter.ForEach(func(commit *Commit) error {
if commit.Hash == excluded.Hash {
return errIsReachable
}
startingHistory[commit.Hash] = struct{}{}
return nil
})
if err != nil {
return nil, err
}
return startingHistory, nil
}
// Independents returns the subset of the passed commits that are not reachable from the others.
// It mimics the behavior of `git merge-base --independent commit...`.
func Independents(commits []*Commit) ([]*Commit, error) {
// use sortedByCommitDateDesc strategy
candidates := sortByCommitDateDesc(commits...)
candidates = removeDuplicated(candidates)
seen := map[plumbing.Hash]struct{}{}
var isLimit CommitFilter = func(commit *Commit) bool {
_, ok := seen[commit.Hash]
return ok
}
if len(candidates) < 2 {
return candidates, nil
}
pos := 0
for {
from := candidates[pos]
others := remove(candidates, from)
fromHistoryIter := NewFilterCommitIter(from, nil, &isLimit)
err := fromHistoryIter.ForEach(func(fromAncestor *Commit) error {
for _, other := range others {
if fromAncestor.Hash == other.Hash {
candidates = remove(candidates, other)
others = remove(others, other)
}
}
if len(candidates) == 1 {
return storer.ErrStop
}
seen[fromAncestor.Hash] = struct{}{}
return nil
})
if err != nil {
return nil, err
}
nextPos := indexOf(candidates, from) + 1
if nextPos >= len(candidates) {
break
}
pos = nextPos
}
return candidates, nil
}
// sortByCommitDateDesc returns the passed commits, sorted by `committer.When desc`.
//
// This strategy tries to reduce the time needed to walk the history from one
// commit until reaching the others. It assumes that ancestors are usually
// committed before their descendants; that way `Independents(A^, A)` is
// processed as if it were `Independents(A, A^)`, so starting from `A` we reach
// `A^` much sooner than we would walking from `A^` down to the initial commit
// and then from `A` to `A^`.
func sortByCommitDateDesc(commits ...*Commit) []*Commit {
sorted := make([]*Commit, len(commits))
copy(sorted, commits)
sort.Slice(sorted, func(i, j int) bool {
return sorted[i].Committer.When.After(sorted[j].Committer.When)
})
return sorted
}
// indexOf returns the first position where target was found in the passed commits
func indexOf(commits []*Commit, target *Commit) int {
for i, commit := range commits {
if target.Hash == commit.Hash {
return i
}
}
return -1
}
// remove returns the passed commits excluding the commit toDelete
func remove(commits []*Commit, toDelete *Commit) []*Commit {
res := make([]*Commit, len(commits))
j := 0
for _, commit := range commits {
if commit.Hash == toDelete.Hash {
continue
}
res[j] = commit
j++
}
return res[:j]
}
// removeDuplicated removes duplicated commits from the passed slice of commits
func removeDuplicated(commits []*Commit) []*Commit {
seen := make(map[plumbing.Hash]struct{}, len(commits))
res := make([]*Commit, len(commits))
j := 0
for _, commit := range commits {
if _, ok := seen[commit.Hash]; ok {
continue
}
seen[commit.Hash] = struct{}{}
res[j] = commit
j++
}
return res[:j]
}
// isInIndexCommitFilter returns a commitFilter that returns true
// if the commit is in the passed index.
func isInIndexCommitFilter(index map[plumbing.Hash]struct{}) CommitFilter {
return func(c *Commit) bool {
_, ok := index[c.Hash]
return ok
}
}

View File

@ -0,0 +1,239 @@
// Package object contains implementations of all Git objects and utility
// functions to work with them.
package object
import (
"bytes"
"errors"
"fmt"
"io"
"strconv"
"time"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
)
// ErrUnsupportedObject is returned when an unsupported object type is being decoded.
var ErrUnsupportedObject = errors.New("unsupported object type")
// Object is a generic representation of any git object. It is implemented by
// Commit, Tree, Blob, and Tag, and includes the functions that are common to
// them.
//
// Object is returned when an object can be of any type. It is frequently used
// with a type cast to acquire the specific type of object:
//
// func process(obj Object) {
// switch o := obj.(type) {
// case *Commit:
// // o is a Commit
// case *Tree:
// // o is a Tree
// case *Blob:
// // o is a Blob
// case *Tag:
// // o is a Tag
// }
// }
//
// This interface is intentionally different from plumbing.EncodedObject, which
// is a lower level interface used by storage implementations to read and write
// objects in its encoded form.
type Object interface {
ID() plumbing.Hash
Type() plumbing.ObjectType
Decode(plumbing.EncodedObject) error
Encode(plumbing.EncodedObject) error
}
// GetObject gets an object from an object storer and decodes it.
func GetObject(s storer.EncodedObjectStorer, h plumbing.Hash) (Object, error) {
o, err := s.EncodedObject(plumbing.AnyObject, h)
if err != nil {
return nil, err
}
return DecodeObject(s, o)
}
// DecodeObject decodes an encoded object into an Object and associates it to
// the given object storer.
func DecodeObject(s storer.EncodedObjectStorer, o plumbing.EncodedObject) (Object, error) {
switch o.Type() {
case plumbing.CommitObject:
return DecodeCommit(s, o)
case plumbing.TreeObject:
return DecodeTree(s, o)
case plumbing.BlobObject:
return DecodeBlob(o)
case plumbing.TagObject:
return DecodeTag(s, o)
default:
return nil, plumbing.ErrInvalidType
}
}
// DateFormat is the format being used in the original git implementation
const DateFormat = "Mon Jan 02 15:04:05 2006 -0700"
// Signature is used to identify who created a commit or tag and when.
type Signature struct {
// Name represents a person's name. It is an arbitrary string.
Name string
// Email is an email, but it cannot be assumed to be well-formed.
Email string
// When is the timestamp of the signature.
When time.Time
}
// Decode decodes a byte slice into a signature
func (s *Signature) Decode(b []byte) {
open := bytes.LastIndexByte(b, '<')
close := bytes.LastIndexByte(b, '>')
if open == -1 || close == -1 {
return
}
if close < open {
return
}
s.Name = string(bytes.Trim(b[:open], " "))
s.Email = string(b[open+1 : close])
hasTime := close+2 < len(b)
if hasTime {
s.decodeTimeAndTimeZone(b[close+2:])
}
}
// Encode encodes a Signature into a writer.
func (s *Signature) Encode(w io.Writer) error {
if _, err := fmt.Fprintf(w, "%s <%s> ", s.Name, s.Email); err != nil {
return err
}
if err := s.encodeTimeAndTimeZone(w); err != nil {
return err
}
return nil
}
var timeZoneLength = 5
func (s *Signature) decodeTimeAndTimeZone(b []byte) {
space := bytes.IndexByte(b, ' ')
if space == -1 {
space = len(b)
}
ts, err := strconv.ParseInt(string(b[:space]), 10, 64)
if err != nil {
return
}
s.When = time.Unix(ts, 0).In(time.UTC)
var tzStart = space + 1
if tzStart >= len(b) || tzStart+timeZoneLength > len(b) {
return
}
timezone := string(b[tzStart : tzStart+timeZoneLength])
tzhours, err1 := strconv.ParseInt(timezone[0:3], 10, 64)
tzmins, err2 := strconv.ParseInt(timezone[3:], 10, 64)
if err1 != nil || err2 != nil {
return
}
if tzhours < 0 {
tzmins *= -1
}
tz := time.FixedZone("", int(tzhours*60*60+tzmins*60))
s.When = s.When.In(tz)
}
func (s *Signature) encodeTimeAndTimeZone(w io.Writer) error {
u := s.When.Unix()
if u < 0 {
u = 0
}
_, err := fmt.Fprintf(w, "%d %s", u, s.When.Format("-0700"))
return err
}
func (s *Signature) String() string {
return fmt.Sprintf("%s <%s>", s.Name, s.Email)
}
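// exampleSignatureRoundTrip is an editorial sketch, not part of the original
// source: it shows the wire layout handled by Decode and Encode, which is
// "Name <email> <unix-seconds> <zone>". The identity used here is made up.
func exampleSignatureRoundTrip() (string, error) {
	var sig Signature
	sig.Decode([]byte("Jane Doe <jane@example.com> 1578742465 +0100"))
	var buf bytes.Buffer
	if err := sig.Encode(&buf); err != nil {
		return "", err
	}
	// buf now holds "Jane Doe <jane@example.com> 1578742465 +0100" again.
	return buf.String(), nil
}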
// ObjectIter provides an iterator for a set of objects.
type ObjectIter struct {
storer.EncodedObjectIter
s storer.EncodedObjectStorer
}
// NewObjectIter takes a storer.EncodedObjectStorer and a
// storer.EncodedObjectIter and returns an *ObjectIter that iterates over all
// objects contained in the storer.EncodedObjectIter.
func NewObjectIter(s storer.EncodedObjectStorer, iter storer.EncodedObjectIter) *ObjectIter {
return &ObjectIter{iter, s}
}
// Next moves the iterator to the next object and returns a pointer to it. If
// there are no more objects, it returns io.EOF.
func (iter *ObjectIter) Next() (Object, error) {
for {
obj, err := iter.EncodedObjectIter.Next()
if err != nil {
return nil, err
}
o, err := iter.toObject(obj)
if err == plumbing.ErrInvalidType {
continue
}
if err != nil {
return nil, err
}
return o, nil
}
}
// ForEach calls the cb function for each object contained in this iter until
// an error happens or the end of the iter is reached. If ErrStop is returned
// by cb, the iteration stops but no error is returned. The iterator is closed.
func (iter *ObjectIter) ForEach(cb func(Object) error) error {
return iter.EncodedObjectIter.ForEach(func(obj plumbing.EncodedObject) error {
o, err := iter.toObject(obj)
if err == plumbing.ErrInvalidType {
return nil
}
if err != nil {
return err
}
return cb(o)
})
}
func (iter *ObjectIter) toObject(obj plumbing.EncodedObject) (Object, error) {
switch obj.Type() {
case plumbing.BlobObject:
blob := &Blob{}
return blob, blob.Decode(obj)
case plumbing.TreeObject:
tree := &Tree{s: iter.s}
return tree, tree.Decode(obj)
case plumbing.CommitObject:
commit := &Commit{}
return commit, commit.Decode(obj)
case plumbing.TagObject:
tag := &Tag{}
return tag, tag.Decode(obj)
default:
return nil, plumbing.ErrInvalidType
}
}

View File

@ -0,0 +1,337 @@
package object
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"strconv"
"strings"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
fdiff "github.com/go-git/go-git/v5/plumbing/format/diff"
"github.com/go-git/go-git/v5/utils/diff"
dmp "github.com/sergi/go-diff/diffmatchpatch"
)
var (
ErrCanceled = errors.New("operation canceled")
)
func getPatch(message string, changes ...*Change) (*Patch, error) {
ctx := context.Background()
return getPatchContext(ctx, message, changes...)
}
func getPatchContext(ctx context.Context, message string, changes ...*Change) (*Patch, error) {
var filePatches []fdiff.FilePatch
for _, c := range changes {
select {
case <-ctx.Done():
return nil, ErrCanceled
default:
}
fp, err := filePatchWithContext(ctx, c)
if err != nil {
return nil, err
}
filePatches = append(filePatches, fp)
}
return &Patch{message, filePatches}, nil
}
func filePatchWithContext(ctx context.Context, c *Change) (fdiff.FilePatch, error) {
from, to, err := c.Files()
if err != nil {
return nil, err
}
fromContent, fIsBinary, err := fileContent(from)
if err != nil {
return nil, err
}
toContent, tIsBinary, err := fileContent(to)
if err != nil {
return nil, err
}
if fIsBinary || tIsBinary {
return &textFilePatch{from: c.From, to: c.To}, nil
}
diffs := diff.Do(fromContent, toContent)
var chunks []fdiff.Chunk
for _, d := range diffs {
select {
case <-ctx.Done():
return nil, ErrCanceled
default:
}
var op fdiff.Operation
switch d.Type {
case dmp.DiffEqual:
op = fdiff.Equal
case dmp.DiffDelete:
op = fdiff.Delete
case dmp.DiffInsert:
op = fdiff.Add
}
chunks = append(chunks, &textChunk{d.Text, op})
}
return &textFilePatch{
chunks: chunks,
from: c.From,
to: c.To,
}, nil
}
func fileContent(f *File) (content string, isBinary bool, err error) {
if f == nil {
return
}
isBinary, err = f.IsBinary()
if err != nil || isBinary {
return
}
content, err = f.Contents()
return
}
// Patch is an implementation of fdiff.Patch interface
type Patch struct {
message string
filePatches []fdiff.FilePatch
}
func (p *Patch) FilePatches() []fdiff.FilePatch {
return p.filePatches
}
func (p *Patch) Message() string {
return p.message
}
func (p *Patch) Encode(w io.Writer) error {
ue := fdiff.NewUnifiedEncoder(w, fdiff.DefaultContextLines)
return ue.Encode(p)
}
func (p *Patch) Stats() FileStats {
return getFileStatsFromFilePatches(p.FilePatches())
}
func (p *Patch) String() string {
buf := bytes.NewBuffer(nil)
err := p.Encode(buf)
if err != nil {
return fmt.Sprintf("malformed patch: %s", err.Error())
}
return buf.String()
}
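// examplePatchStats is an editorial sketch, not part of the original source:
// it diffs two trees and prints the unified diff followed by a git-like
// "--stat" summary, assuming from and to are *Tree values resolved elsewhere.
func examplePatchStats(from, to *Tree) error {
	patch, err := from.Patch(to)
	if err != nil {
		return err
	}
	fmt.Print(patch.String())         // unified diff for every file patch
	fmt.Print(patch.Stats().String()) // one " name | n +++---" line per changed file
	return nil
}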
// changeEntryWrapper is an implementation of fdiff.File interface
type changeEntryWrapper struct {
ce ChangeEntry
}
func (f *changeEntryWrapper) Hash() plumbing.Hash {
if !f.ce.TreeEntry.Mode.IsFile() {
return plumbing.ZeroHash
}
return f.ce.TreeEntry.Hash
}
func (f *changeEntryWrapper) Mode() filemode.FileMode {
return f.ce.TreeEntry.Mode
}
func (f *changeEntryWrapper) Path() string {
if !f.ce.TreeEntry.Mode.IsFile() {
return ""
}
return f.ce.Name
}
func (f *changeEntryWrapper) Empty() bool {
return !f.ce.TreeEntry.Mode.IsFile()
}
// textFilePatch is an implementation of fdiff.FilePatch interface
type textFilePatch struct {
chunks []fdiff.Chunk
from, to ChangeEntry
}
func (tf *textFilePatch) Files() (from fdiff.File, to fdiff.File) {
f := &changeEntryWrapper{tf.from}
t := &changeEntryWrapper{tf.to}
if !f.Empty() {
from = f
}
if !t.Empty() {
to = t
}
return
}
func (tf *textFilePatch) IsBinary() bool {
return len(tf.chunks) == 0
}
func (tf *textFilePatch) Chunks() []fdiff.Chunk {
return tf.chunks
}
// textChunk is an implementation of fdiff.Chunk interface
type textChunk struct {
content string
op fdiff.Operation
}
func (t *textChunk) Content() string {
return t.content
}
func (t *textChunk) Type() fdiff.Operation {
return t.op
}
// FileStat stores the status of changes in content of a file.
type FileStat struct {
Name string
Addition int
Deletion int
}
func (fs FileStat) String() string {
return printStat([]FileStat{fs})
}
// FileStats is a collection of FileStat.
type FileStats []FileStat
func (fileStats FileStats) String() string {
return printStat(fileStats)
}
// printStat prints the stats of changes in content of files.
// Original implementation: https://github.com/git/git/blob/1a87c842ece327d03d08096395969aca5e0a6996/diff.c#L2615
// Parts of the output:
// <pad><filename><pad>|<pad><changeNumber><pad><+++/---><newline>
// example: " main.go | 10 +++++++--- "
func printStat(fileStats []FileStat) string {
maxGraphWidth := uint(53)
maxNameLen := 0
maxChangeLen := 0
scaleLinear := func(it, width, max uint) uint {
if it == 0 || max == 0 {
return 0
}
return 1 + (it * (width - 1) / max)
}
for _, fs := range fileStats {
if len(fs.Name) > maxNameLen {
maxNameLen = len(fs.Name)
}
changes := strconv.Itoa(fs.Addition + fs.Deletion)
if len(changes) > maxChangeLen {
maxChangeLen = len(changes)
}
}
result := ""
for _, fs := range fileStats {
add := uint(fs.Addition)
del := uint(fs.Deletion)
np := maxNameLen - len(fs.Name)
cp := maxChangeLen - len(strconv.Itoa(fs.Addition+fs.Deletion))
total := add + del
if total > maxGraphWidth {
add = scaleLinear(add, maxGraphWidth, total)
del = scaleLinear(del, maxGraphWidth, total)
}
adds := strings.Repeat("+", int(add))
dels := strings.Repeat("-", int(del))
namePad := strings.Repeat(" ", np)
changePad := strings.Repeat(" ", cp)
result += fmt.Sprintf(" %s%s | %s%d %s%s\n", fs.Name, namePad, changePad, total, adds, dels)
}
return result
}
func getFileStatsFromFilePatches(filePatches []fdiff.FilePatch) FileStats {
var fileStats FileStats
for _, fp := range filePatches {
// ignore empty patches (binary files, submodule refs updates)
if len(fp.Chunks()) == 0 {
continue
}
cs := FileStat{}
from, to := fp.Files()
if from == nil {
// New File is created.
cs.Name = to.Path()
} else if to == nil {
// File is deleted.
cs.Name = from.Path()
} else if from.Path() != to.Path() {
// File is renamed.
cs.Name = fmt.Sprintf("%s => %s", from.Path(), to.Path())
} else {
cs.Name = from.Path()
}
for _, chunk := range fp.Chunks() {
s := chunk.Content()
if len(s) == 0 {
continue
}
switch chunk.Type() {
case fdiff.Add:
cs.Addition += strings.Count(s, "\n")
if s[len(s)-1] != '\n' {
cs.Addition++
}
case fdiff.Delete:
cs.Deletion += strings.Count(s, "\n")
if s[len(s)-1] != '\n' {
cs.Deletion++
}
}
}
fileStats = append(fileStats, cs)
}
return fileStats
}

View File

@ -0,0 +1,816 @@
package object
import (
"errors"
"io"
"sort"
"strings"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/merkletrie"
)
// DetectRenames detects the renames in the given changes on two trees with
// the given options. It will return the given changes grouping additions and
// deletions into modifications when possible.
// If options is nil, the default diff tree options will be used.
func DetectRenames(
changes Changes,
opts *DiffTreeOptions,
) (Changes, error) {
if opts == nil {
opts = DefaultDiffTreeOptions
}
detector := &renameDetector{
renameScore: int(opts.RenameScore),
renameLimit: int(opts.RenameLimit),
onlyExact: opts.OnlyExactRenames,
}
for _, c := range changes {
action, err := c.Action()
if err != nil {
return nil, err
}
switch action {
case merkletrie.Insert:
detector.added = append(detector.added, c)
case merkletrie.Delete:
detector.deleted = append(detector.deleted, c)
default:
detector.modified = append(detector.modified, c)
}
}
return detector.detect()
}
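// exampleDetectRenames is an editorial sketch, not part of the original
// source: given the changes between two trees computed elsewhere (for example
// with Tree.Diff), it groups matching additions and deletions into rename
// modifications. The RenameScore of 60 is illustrative, not a default.
func exampleDetectRenames(changes Changes) (Changes, error) {
	opts := &DiffTreeOptions{
		RenameScore:      60,
		OnlyExactRenames: false,
	}
	return DetectRenames(changes, opts)
}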
// renameDetector will detect and resolve renames in a set of changes.
// see: https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/diff/RenameDetector.java
type renameDetector struct {
added []*Change
deleted []*Change
modified []*Change
renameScore int
renameLimit int
onlyExact bool
}
// detectExactRenames matches files that were deleted with files that were
// added where the hash is the same on both. If there are multiple targets,
// the one with the most similar path will be chosen as the rename and the
// rest will be kept as deletions or additions.
func (d *renameDetector) detectExactRenames() {
added := groupChangesByHash(d.added)
deletes := groupChangesByHash(d.deleted)
var uniqueAdds []*Change
var nonUniqueAdds [][]*Change
var addedLeft []*Change
for _, cs := range added {
if len(cs) == 1 {
uniqueAdds = append(uniqueAdds, cs[0])
} else {
nonUniqueAdds = append(nonUniqueAdds, cs)
}
}
for _, c := range uniqueAdds {
hash := changeHash(c)
deleted := deletes[hash]
if len(deleted) == 1 {
if sameMode(c, deleted[0]) {
d.modified = append(d.modified, &Change{From: deleted[0].From, To: c.To})
delete(deletes, hash)
} else {
addedLeft = append(addedLeft, c)
}
} else if len(deleted) > 1 {
bestMatch := bestNameMatch(c, deleted)
if bestMatch != nil && sameMode(c, bestMatch) {
d.modified = append(d.modified, &Change{From: bestMatch.From, To: c.To})
delete(deletes, hash)
var newDeletes = make([]*Change, 0, len(deleted)-1)
for _, d := range deleted {
if d != bestMatch {
newDeletes = append(newDeletes, d)
}
}
deletes[hash] = newDeletes
}
} else {
addedLeft = append(addedLeft, c)
}
}
for _, added := range nonUniqueAdds {
hash := changeHash(added[0])
deleted := deletes[hash]
if len(deleted) == 1 {
deleted := deleted[0]
bestMatch := bestNameMatch(deleted, added)
if bestMatch != nil && sameMode(deleted, bestMatch) {
d.modified = append(d.modified, &Change{From: deleted.From, To: bestMatch.To})
delete(deletes, hash)
for _, c := range added {
if c != bestMatch {
addedLeft = append(addedLeft, c)
}
}
} else {
addedLeft = append(addedLeft, added...)
}
} else if len(deleted) > 1 {
maxSize := len(deleted) * len(added)
if d.renameLimit > 0 && d.renameLimit < maxSize {
maxSize = d.renameLimit
}
matrix := make(similarityMatrix, 0, maxSize)
for delIdx, del := range deleted {
deletedName := changeName(del)
for addIdx, add := range added {
addedName := changeName(add)
score := nameSimilarityScore(addedName, deletedName)
matrix = append(matrix, similarityPair{added: addIdx, deleted: delIdx, score: score})
if len(matrix) >= maxSize {
break
}
}
if len(matrix) >= maxSize {
break
}
}
sort.Stable(matrix)
usedAdds := make(map[*Change]struct{})
usedDeletes := make(map[*Change]struct{})
for i := len(matrix) - 1; i >= 0; i-- {
del := deleted[matrix[i].deleted]
add := added[matrix[i].added]
if add == nil || del == nil {
// it was already matched
continue
}
usedAdds[add] = struct{}{}
usedDeletes[del] = struct{}{}
d.modified = append(d.modified, &Change{From: del.From, To: add.To})
added[matrix[i].added] = nil
deleted[matrix[i].deleted] = nil
}
for _, c := range added {
if _, ok := usedAdds[c]; !ok && c != nil {
addedLeft = append(addedLeft, c)
}
}
var newDeletes = make([]*Change, 0, len(deleted)-len(usedDeletes))
for _, c := range deleted {
if _, ok := usedDeletes[c]; !ok && c != nil {
newDeletes = append(newDeletes, c)
}
}
deletes[hash] = newDeletes
} else {
addedLeft = append(addedLeft, added...)
}
}
d.added = addedLeft
d.deleted = nil
for _, dels := range deletes {
d.deleted = append(d.deleted, dels...)
}
}
// detectContentRenames detects renames based on the similarity of the content
// in the files by building a matrix of pairs between sources and destinations
// and matching by the highest score.
// see: https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/diff/SimilarityRenameDetector.java
func (d *renameDetector) detectContentRenames() error {
cnt := max(len(d.added), len(d.deleted))
if d.renameLimit > 0 && cnt > d.renameLimit {
return nil
}
srcs, dsts := d.deleted, d.added
matrix, err := buildSimilarityMatrix(srcs, dsts, d.renameScore)
if err != nil {
return err
}
renames := make([]*Change, 0, min(len(matrix), len(dsts)))
// Match rename pairs on a first come, first serve basis until
// we have looked at everything that is above the minimum score.
for i := len(matrix) - 1; i >= 0; i-- {
pair := matrix[i]
src := srcs[pair.deleted]
dst := dsts[pair.added]
if dst == nil || src == nil {
// It was already matched before
continue
}
renames = append(renames, &Change{From: src.From, To: dst.To})
// Claim destination and source as matched
dsts[pair.added] = nil
srcs[pair.deleted] = nil
}
d.modified = append(d.modified, renames...)
d.added = compactChanges(dsts)
d.deleted = compactChanges(srcs)
return nil
}
func (d *renameDetector) detect() (Changes, error) {
if len(d.added) > 0 && len(d.deleted) > 0 {
d.detectExactRenames()
if !d.onlyExact {
if err := d.detectContentRenames(); err != nil {
return nil, err
}
}
}
result := make(Changes, 0, len(d.added)+len(d.deleted)+len(d.modified))
result = append(result, d.added...)
result = append(result, d.deleted...)
result = append(result, d.modified...)
sort.Stable(result)
return result, nil
}
func bestNameMatch(change *Change, changes []*Change) *Change {
var best *Change
var bestScore int
cname := changeName(change)
for _, c := range changes {
score := nameSimilarityScore(cname, changeName(c))
if score > bestScore {
bestScore = score
best = c
}
}
return best
}
func nameSimilarityScore(a, b string) int {
aDirLen := strings.LastIndexByte(a, '/') + 1
bDirLen := strings.LastIndexByte(b, '/') + 1
dirMin := min(aDirLen, bDirLen)
dirMax := max(aDirLen, bDirLen)
var dirScoreLtr, dirScoreRtl int
if dirMax == 0 {
dirScoreLtr = 100
dirScoreRtl = 100
} else {
var dirSim int
for ; dirSim < dirMin; dirSim++ {
if a[dirSim] != b[dirSim] {
break
}
}
dirScoreLtr = dirSim * 100 / dirMax
if dirScoreLtr == 100 {
dirScoreRtl = 100
} else {
for dirSim = 0; dirSim < dirMin; dirSim++ {
if a[aDirLen-1-dirSim] != b[bDirLen-1-dirSim] {
break
}
}
dirScoreRtl = dirSim * 100 / dirMax
}
}
fileMin := min(len(a)-aDirLen, len(b)-bDirLen)
fileMax := max(len(a)-aDirLen, len(b)-bDirLen)
fileSim := 0
for ; fileSim < fileMin; fileSim++ {
if a[len(a)-1-fileSim] != b[len(b)-1-fileSim] {
break
}
}
fileScore := fileSim * 100 / fileMax
return (((dirScoreLtr + dirScoreRtl) * 25) + (fileScore * 50)) / 100
}
func changeName(c *Change) string {
if c.To != empty {
return c.To.Name
}
return c.From.Name
}
func changeHash(c *Change) plumbing.Hash {
if c.To != empty {
return c.To.TreeEntry.Hash
}
return c.From.TreeEntry.Hash
}
func changeMode(c *Change) filemode.FileMode {
if c.To != empty {
return c.To.TreeEntry.Mode
}
return c.From.TreeEntry.Mode
}
func sameMode(a, b *Change) bool {
return changeMode(a) == changeMode(b)
}
func groupChangesByHash(changes []*Change) map[plumbing.Hash][]*Change {
var result = make(map[plumbing.Hash][]*Change)
for _, c := range changes {
hash := changeHash(c)
result[hash] = append(result[hash], c)
}
return result
}
type similarityMatrix []similarityPair
func (m similarityMatrix) Len() int { return len(m) }
func (m similarityMatrix) Swap(i, j int) { m[i], m[j] = m[j], m[i] }
func (m similarityMatrix) Less(i, j int) bool {
if m[i].score == m[j].score {
if m[i].added == m[j].added {
return m[i].deleted < m[j].deleted
}
return m[i].added < m[j].added
}
return m[i].score < m[j].score
}
type similarityPair struct {
// index of the added file
added int
// index of the deleted file
deleted int
// similarity score
score int
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
const maxMatrixSize = 10000
func buildSimilarityMatrix(srcs, dsts []*Change, renameScore int) (similarityMatrix, error) {
// Allocate for the worst-case scenario where every pair has a score
// that we need to consider. We might not need that many.
matrixSize := len(srcs) * len(dsts)
if matrixSize > maxMatrixSize {
matrixSize = maxMatrixSize
}
matrix := make(similarityMatrix, 0, matrixSize)
srcSizes := make([]int64, len(srcs))
dstSizes := make([]int64, len(dsts))
dstTooLarge := make(map[int]bool)
// Consider each pair of files; if the score is above the minimum
// threshold we need to record that score in the matrix so we can
// later find the best matches.
outerLoop:
for srcIdx, src := range srcs {
if changeMode(src) != filemode.Regular {
continue
}
// Declare the from file and the similarity index here to be able to
// reuse them inside the inner loop. The reason not to initialize them
// here is so we can skip the initialization in case they happen to
// not be needed later. They will be initialized inside the inner
// loop if and only if they're needed and reused in subsequent passes.
var from *File
var s *similarityIndex
var err error
for dstIdx, dst := range dsts {
if changeMode(dst) != filemode.Regular {
continue
}
if dstTooLarge[dstIdx] {
continue
}
var to *File
srcSize := srcSizes[srcIdx]
if srcSize == 0 {
from, _, err = src.Files()
if err != nil {
return nil, err
}
srcSize = from.Size + 1
srcSizes[srcIdx] = srcSize
}
dstSize := dstSizes[dstIdx]
if dstSize == 0 {
_, to, err = dst.Files()
if err != nil {
return nil, err
}
dstSize = to.Size + 1
dstSizes[dstIdx] = dstSize
}
min, max := srcSize, dstSize
if dstSize < srcSize {
min = dstSize
max = srcSize
}
if int(min*100/max) < renameScore {
// File sizes are too different to be a match
continue
}
if s == nil {
s, err = fileSimilarityIndex(from)
if err != nil {
if err == errIndexFull {
continue outerLoop
}
return nil, err
}
}
if to == nil {
_, to, err = dst.Files()
if err != nil {
return nil, err
}
}
di, err := fileSimilarityIndex(to)
if err != nil {
if err == errIndexFull {
dstTooLarge[dstIdx] = true
}
return nil, err
}
contentScore := s.score(di, 10000)
// The name score returns a value between 0 and 100, so we need to
// convert it to the same range as the content score.
nameScore := nameSimilarityScore(src.From.Name, dst.To.Name) * 100
score := (contentScore*99 + nameScore*1) / 10000
if score < renameScore {
continue
}
matrix = append(matrix, similarityPair{added: dstIdx, deleted: srcIdx, score: score})
}
}
sort.Stable(matrix)
return matrix, nil
}
func compactChanges(changes []*Change) []*Change {
var result []*Change
for _, c := range changes {
if c != nil {
result = append(result, c)
}
}
return result
}
const (
keyShift = 32
maxCountValue = (1 << keyShift) - 1
)
var errIndexFull = errors.New("index is full")
// similarityIndex is an index structure of lines/blocks in one file.
// This structure can be used to compute an approximation of the similarity
// between two files.
// To save space in memory, this index uses a space efficient encoding which
// will not exceed 1MiB per instance. The index starts out at a smaller size
// (closer to 2KiB), but may grow as more distinct blocks within the scanned
// file are discovered.
// see: https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/diff/SimilarityIndex.java
type similarityIndex struct {
hashed uint64
// number of non-zero entries in hashes
numHashes int
growAt int
hashes []keyCountPair
hashBits int
}
func fileSimilarityIndex(f *File) (*similarityIndex, error) {
idx := newSimilarityIndex()
if err := idx.hash(f); err != nil {
return nil, err
}
sort.Stable(keyCountPairs(idx.hashes))
return idx, nil
}
func newSimilarityIndex() *similarityIndex {
return &similarityIndex{
hashBits: 8,
hashes: make([]keyCountPair, 1<<8),
growAt: shouldGrowAt(8),
}
}
func (i *similarityIndex) hash(f *File) error {
isBin, err := f.IsBinary()
if err != nil {
return err
}
r, err := f.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
return i.hashContent(r, f.Size, isBin)
}
func (i *similarityIndex) hashContent(r io.Reader, size int64, isBin bool) error {
var buf = make([]byte, 4096)
var ptr, cnt int
remaining := size
for 0 < remaining {
hash := 5381
var blockHashedCnt uint64
// Hash one line or block, whatever happens first
n := int64(0)
for {
if ptr == cnt {
ptr = 0
var err error
cnt, err = io.ReadFull(r, buf)
if err != nil && err != io.ErrUnexpectedEOF {
return err
}
if cnt == 0 {
return io.EOF
}
}
n++
c := buf[ptr] & 0xff
ptr++
// Ignore CR in CRLF sequence if it's text
if !isBin && c == '\r' && ptr < cnt && buf[ptr] == '\n' {
continue
}
blockHashedCnt++
if c == '\n' {
break
}
hash = (hash << 5) + hash + int(c)
if n >= 64 || n >= remaining {
break
}
}
i.hashed += blockHashedCnt
if err := i.add(hash, blockHashedCnt); err != nil {
return err
}
remaining -= n
}
return nil
}
// score computes the similarity score between this index and another one.
// A region of a file is defined as a line in a text file or a fixed-size
// block in a binary file. To prepare an index, each region in the file is
// hashed; the values and counts of hashes are retained in a sorted table.
// Define the similarity fraction F as the count of matching regions between
// the two files divided by the maximum count of regions in either file.
// The similarity score is F multiplied by the maxScore constant, yielding a
// range [0, maxScore]. It is defined as maxScore for the degenerate case of
// two empty files.
// The similarity score is symmetrical; i.e. a.score(b) == b.score(a).
func (i *similarityIndex) score(other *similarityIndex, maxScore int) int {
var maxHashed = i.hashed
if maxHashed < other.hashed {
maxHashed = other.hashed
}
if maxHashed == 0 {
return maxScore
}
return int(i.common(other) * uint64(maxScore) / maxHashed)
}
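// Editorial worked example (not part of the original source), in the terms
// used above: if the larger of the two files has 60 hashed regions and the
// files share 30 matching regions, then F = 30/60 and
// score(other, 10000) == 5000 in either direction.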
func (i *similarityIndex) common(dst *similarityIndex) uint64 {
srcIdx, dstIdx := 0, 0
if i.numHashes == 0 || dst.numHashes == 0 {
return 0
}
var common uint64
srcKey, dstKey := i.hashes[srcIdx].key(), dst.hashes[dstIdx].key()
for {
if srcKey == dstKey {
srcCnt, dstCnt := i.hashes[srcIdx].count(), dst.hashes[dstIdx].count()
if srcCnt < dstCnt {
common += srcCnt
} else {
common += dstCnt
}
srcIdx++
if srcIdx == len(i.hashes) {
break
}
srcKey = i.hashes[srcIdx].key()
dstIdx++
if dstIdx == len(dst.hashes) {
break
}
dstKey = dst.hashes[dstIdx].key()
} else if srcKey < dstKey {
// Region of src that is not in dst
srcIdx++
if srcIdx == len(i.hashes) {
break
}
srcKey = i.hashes[srcIdx].key()
} else {
// Region of dst that is not in src
dstIdx++
if dstIdx == len(dst.hashes) {
break
}
dstKey = dst.hashes[dstIdx].key()
}
}
return common
}
func (i *similarityIndex) add(key int, cnt uint64) error {
key = int(uint32(key) * 0x9e370001 >> 1)
j := i.slot(key)
for {
v := i.hashes[j]
if v == 0 {
// It's an empty slot, so we can store it here.
if i.growAt <= i.numHashes {
if err := i.grow(); err != nil {
return err
}
j = i.slot(key)
continue
}
var err error
i.hashes[j], err = newKeyCountPair(key, cnt)
if err != nil {
return err
}
i.numHashes++
return nil
} else if v.key() == key {
// It's the same key, so increment the counter.
var err error
i.hashes[j], err = newKeyCountPair(key, v.count()+cnt)
return err
} else if j+1 >= len(i.hashes) {
j = 0
} else {
j++
}
}
}
type keyCountPair uint64
func newKeyCountPair(key int, cnt uint64) (keyCountPair, error) {
if cnt > maxCountValue {
return 0, errIndexFull
}
return keyCountPair((uint64(key) << keyShift) | cnt), nil
}
func (p keyCountPair) key() int {
return int(p >> keyShift)
}
func (p keyCountPair) count() uint64 {
return uint64(p) & maxCountValue
}
func (i *similarityIndex) slot(key int) int {
// We use 31 - hashBits because the upper bit was already forced
// to be 0 and we want the remaining high bits to be used as the
// table slot.
return int(uint32(key) >> uint(31-i.hashBits))
}
func shouldGrowAt(hashBits int) int {
return (1 << uint(hashBits)) * (hashBits - 3) / hashBits
}
func (i *similarityIndex) grow() error {
if i.hashBits == 30 {
return errIndexFull
}
old := i.hashes
i.hashBits++
i.growAt = shouldGrowAt(i.hashBits)
// TODO(erizocosmico): find a way to check if it will OOM and return
// errIndexFull instead.
i.hashes = make([]keyCountPair, 1<<uint(i.hashBits))
for _, v := range old {
if v != 0 {
j := i.slot(v.key())
for i.hashes[j] != 0 {
j++
if j >= len(i.hashes) {
j = 0
}
}
i.hashes[j] = v
}
}
return nil
}
type keyCountPairs []keyCountPair
func (p keyCountPairs) Len() int { return len(p) }
func (p keyCountPairs) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
func (p keyCountPairs) Less(i, j int) bool { return p[i] < p[j] }

View File

@ -0,0 +1,101 @@
package object
import "bytes"
const (
signatureTypeUnknown signatureType = iota
signatureTypeOpenPGP
signatureTypeX509
signatureTypeSSH
)
var (
// openPGPSignatureFormat is the format of an OpenPGP signature.
openPGPSignatureFormat = signatureFormat{
[]byte("-----BEGIN PGP SIGNATURE-----"),
[]byte("-----BEGIN PGP MESSAGE-----"),
}
// x509SignatureFormat is the format of an X509 signature, which is
// a PKCS#7 (S/MIME) signature.
x509SignatureFormat = signatureFormat{
[]byte("-----BEGIN CERTIFICATE-----"),
}
// sshSignatureFormat is the format of an SSH signature.
sshSignatureFormat = signatureFormat{
[]byte("-----BEGIN SSH SIGNATURE-----"),
}
)
var (
// knownSignatureFormats is a map of known signature formats, indexed by
// their signatureType.
knownSignatureFormats = map[signatureType]signatureFormat{
signatureTypeOpenPGP: openPGPSignatureFormat,
signatureTypeX509: x509SignatureFormat,
signatureTypeSSH: sshSignatureFormat,
}
)
// signatureType represents the type of the signature.
type signatureType int8
// signatureFormat represents the beginning of a signature.
type signatureFormat [][]byte
// typeForSignature returns the type of the signature based on its format.
func typeForSignature(b []byte) signatureType {
for t, i := range knownSignatureFormats {
for _, begin := range i {
if bytes.HasPrefix(b, begin) {
return t
}
}
}
return signatureTypeUnknown
}
// parseSignedBytes returns the position of the last signature block found in
// the given bytes. If no signature block is found, it returns -1.
//
// When multiple signature blocks are found, the position of the last one is
// returned. Any trailing bytes after this signature block start should be
// considered part of the signature.
//
// Given this, it would be safe to use the returned position to split the bytes
// into two parts: the first part containing the message, the second part
// containing the signature.
//
// Example:
//
// message := []byte(`Message with signature
//
// -----BEGIN SSH SIGNATURE-----
// ...`)
//
// var signature string
// if pos, _ := parseSignedBytes(message); pos != -1 {
// signature = string(message[pos:])
// message = message[:pos]
// }
//
// This logic is on par with git's gpg-interface.c:parse_signed_buffer().
// https://github.com/git/git/blob/7c2ef319c52c4997256f5807564523dfd4acdfc7/gpg-interface.c#L668
func parseSignedBytes(b []byte) (int, signatureType) {
var n, match = 0, -1
var t signatureType
for n < len(b) {
var i = b[n:]
if st := typeForSignature(i); st != signatureTypeUnknown {
match = n
t = st
}
if eol := bytes.IndexByte(i, '\n'); eol >= 0 {
n += eol + 1
continue
}
// If we reach this point, we've reached the end.
break
}
return match, t
}

View File

@ -0,0 +1,330 @@
package object
import (
"bytes"
"fmt"
"io"
"strings"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
// Tag represents an annotated tag object. It points to a single git object of
// any type, but tags typically are applied to commit or blob objects. It
// provides a reference that associates the target with a tag name. It also
// contains meta-information about the tag, including the tagger, tag date and
// message.
//
// Note that this is not used for lightweight tags.
//
// https://git-scm.com/book/en/v2/Git-Internals-Git-References#Tags
type Tag struct {
// Hash of the tag.
Hash plumbing.Hash
// Name of the tag.
Name string
// Tagger is the one who created the tag.
Tagger Signature
// Message is an arbitrary text message.
Message string
// PGPSignature is the PGP signature of the tag.
PGPSignature string
// TargetType is the object type of the target.
TargetType plumbing.ObjectType
// Target is the hash of the target object.
Target plumbing.Hash
s storer.EncodedObjectStorer
}
// GetTag gets a tag from an object storer and decodes it.
func GetTag(s storer.EncodedObjectStorer, h plumbing.Hash) (*Tag, error) {
o, err := s.EncodedObject(plumbing.TagObject, h)
if err != nil {
return nil, err
}
return DecodeTag(s, o)
}
// DecodeTag decodes an encoded object into a *Tag and associates it to the
// given object storer.
func DecodeTag(s storer.EncodedObjectStorer, o plumbing.EncodedObject) (*Tag, error) {
t := &Tag{s: s}
if err := t.Decode(o); err != nil {
return nil, err
}
return t, nil
}
// ID returns the object ID of the tag, not the object that the tag references.
// The returned value will always match the current value of Tag.Hash.
//
// ID is present to fulfill the Object interface.
func (t *Tag) ID() plumbing.Hash {
return t.Hash
}
// Type returns the type of object. It always returns plumbing.TagObject.
//
// Type is present to fulfill the Object interface.
func (t *Tag) Type() plumbing.ObjectType {
return plumbing.TagObject
}
// Decode transforms a plumbing.EncodedObject into a Tag struct.
func (t *Tag) Decode(o plumbing.EncodedObject) (err error) {
if o.Type() != plumbing.TagObject {
return ErrUnsupportedObject
}
t.Hash = o.Hash()
reader, err := o.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(reader, &err)
r := sync.GetBufioReader(reader)
defer sync.PutBufioReader(r)
for {
var line []byte
line, err = r.ReadBytes('\n')
if err != nil && err != io.EOF {
return err
}
line = bytes.TrimSpace(line)
if len(line) == 0 {
break // Start of message
}
split := bytes.SplitN(line, []byte{' '}, 2)
switch string(split[0]) {
case "object":
t.Target = plumbing.NewHash(string(split[1]))
case "type":
t.TargetType, err = plumbing.ParseObjectType(string(split[1]))
if err != nil {
return err
}
case "tag":
t.Name = string(split[1])
case "tagger":
t.Tagger.Decode(split[1])
}
if err == io.EOF {
return nil
}
}
data, err := io.ReadAll(r)
if err != nil {
return err
}
if sm, _ := parseSignedBytes(data); sm >= 0 {
t.PGPSignature = string(data[sm:])
data = data[:sm]
}
t.Message = string(data)
return nil
}
// Encode transforms a Tag into a plumbing.EncodedObject.
func (t *Tag) Encode(o plumbing.EncodedObject) error {
return t.encode(o, true)
}
// EncodeWithoutSignature exports a Tag into a plumbing.EncodedObject without the signature (corresponding to the payload of the PGP signature).
func (t *Tag) EncodeWithoutSignature(o plumbing.EncodedObject) error {
return t.encode(o, false)
}
func (t *Tag) encode(o plumbing.EncodedObject, includeSig bool) (err error) {
o.SetType(plumbing.TagObject)
w, err := o.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
if _, err = fmt.Fprintf(w,
"object %s\ntype %s\ntag %s\ntagger ",
t.Target.String(), t.TargetType.Bytes(), t.Name); err != nil {
return err
}
if err = t.Tagger.Encode(w); err != nil {
return err
}
if _, err = fmt.Fprint(w, "\n\n"); err != nil {
return err
}
if _, err = fmt.Fprint(w, t.Message); err != nil {
return err
}
// Note that this is highly sensitive to what is sent along in the message.
// The message *always* needs to end with a newline, or else the message and
// the signature will be concatenated into a corrupt object. Since this is a
// lower-level method, we assume you know what you are doing and have already
// prepared the message accordingly in the caller.
if includeSig {
if _, err = fmt.Fprint(w, t.PGPSignature); err != nil {
return err
}
}
return err
}
// Commit returns the commit pointed to by the tag. If the tag points to a
// different type of object ErrUnsupportedObject will be returned.
func (t *Tag) Commit() (*Commit, error) {
if t.TargetType != plumbing.CommitObject {
return nil, ErrUnsupportedObject
}
o, err := t.s.EncodedObject(plumbing.CommitObject, t.Target)
if err != nil {
return nil, err
}
return DecodeCommit(t.s, o)
}
// Tree returns the tree pointed to by the tag. If the tag points to a commit
// object the tree of that commit will be returned. If the tag does not point
// to a commit or tree object ErrUnsupportedObject will be returned.
func (t *Tag) Tree() (*Tree, error) {
switch t.TargetType {
case plumbing.CommitObject:
c, err := t.Commit()
if err != nil {
return nil, err
}
return c.Tree()
case plumbing.TreeObject:
return GetTree(t.s, t.Target)
default:
return nil, ErrUnsupportedObject
}
}
// Blob returns the blob pointed to by the tag. If the tag points to a
// different type of object ErrUnsupportedObject will be returned.
func (t *Tag) Blob() (*Blob, error) {
if t.TargetType != plumbing.BlobObject {
return nil, ErrUnsupportedObject
}
return GetBlob(t.s, t.Target)
}
// Object returns the object pointed to by the tag.
func (t *Tag) Object() (Object, error) {
o, err := t.s.EncodedObject(t.TargetType, t.Target)
if err != nil {
return nil, err
}
return DecodeObject(t.s, o)
}
// String returns the meta information contained in the tag as a formatted
// string.
func (t *Tag) String() string {
obj, _ := t.Object()
return fmt.Sprintf(
"%s %s\nTagger: %s\nDate: %s\n\n%s\n%s",
plumbing.TagObject, t.Name, t.Tagger.String(), t.Tagger.When.Format(DateFormat),
t.Message, objectAsString(obj),
)
}
// Verify performs PGP verification of the tag with a provided armored
// keyring and returns openpgp.Entity associated with verifying key on success.
func (t *Tag) Verify(armoredKeyRing string) (*openpgp.Entity, error) {
keyRingReader := strings.NewReader(armoredKeyRing)
keyring, err := openpgp.ReadArmoredKeyRing(keyRingReader)
if err != nil {
return nil, err
}
// Extract signature.
signature := strings.NewReader(t.PGPSignature)
encoded := &plumbing.MemoryObject{}
// Encode tag components, excluding signature and get a reader object.
if err := t.EncodeWithoutSignature(encoded); err != nil {
return nil, err
}
er, err := encoded.Reader()
if err != nil {
return nil, err
}
return openpgp.CheckArmoredDetachedSignature(keyring, er, signature, nil)
}
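// exampleVerifyTag is an editorial sketch, not part of the original source: it
// checks an annotated tag's PGP signature against an armored keyring, assuming
// armoredKeyRing contains the signer's exported public key.
func exampleVerifyTag(t *Tag, armoredKeyRing string) error {
	if _, err := t.Verify(armoredKeyRing); err != nil {
		return fmt.Errorf("tag %q: signature verification failed: %w", t.Name, err)
	}
	return nil
}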
// TagIter provides an iterator for a set of tags.
type TagIter struct {
storer.EncodedObjectIter
s storer.EncodedObjectStorer
}
// NewTagIter takes a storer.EncodedObjectStorer and a
// storer.EncodedObjectIter and returns a *TagIter that iterates over all
// tags contained in the storer.EncodedObjectIter.
//
// Any non-tag object returned by the storer.EncodedObjectIter is skipped.
func NewTagIter(s storer.EncodedObjectStorer, iter storer.EncodedObjectIter) *TagIter {
return &TagIter{iter, s}
}
// Next moves the iterator to the next tag and returns a pointer to it. If
// there are no more tags, it returns io.EOF.
func (iter *TagIter) Next() (*Tag, error) {
obj, err := iter.EncodedObjectIter.Next()
if err != nil {
return nil, err
}
return DecodeTag(iter.s, obj)
}
// ForEach calls the cb function for each tag contained in this iter until
// an error happens or the end of the iter is reached. If ErrStop is returned
// by cb, the iteration stops but no error is returned. The iterator is closed.
func (iter *TagIter) ForEach(cb func(*Tag) error) error {
return iter.EncodedObjectIter.ForEach(func(obj plumbing.EncodedObject) error {
t, err := DecodeTag(iter.s, obj)
if err != nil {
return err
}
return cb(t)
})
}
func objectAsString(obj Object) string {
switch o := obj.(type) {
case *Commit:
return o.String()
default:
return ""
}
}

View File

@ -0,0 +1,557 @@
package object
import (
"context"
"errors"
"fmt"
"io"
"path"
"path/filepath"
"sort"
"strings"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/sync"
)
const (
maxTreeDepth = 1024
startingStackSize = 8
)
// New errors defined by this package.
var (
ErrMaxTreeDepth = errors.New("maximum tree depth exceeded")
ErrFileNotFound = errors.New("file not found")
ErrDirectoryNotFound = errors.New("directory not found")
ErrEntryNotFound = errors.New("entry not found")
ErrEntriesNotSorted = errors.New("entries in tree are not sorted")
)
// Tree is basically like a directory - it references a bunch of other trees
// and/or blobs (i.e. files and sub-directories)
type Tree struct {
Entries []TreeEntry
Hash plumbing.Hash
s storer.EncodedObjectStorer
m map[string]*TreeEntry
t map[string]*Tree // tree path cache
}
// GetTree gets a tree from an object storer and decodes it.
func GetTree(s storer.EncodedObjectStorer, h plumbing.Hash) (*Tree, error) {
o, err := s.EncodedObject(plumbing.TreeObject, h)
if err != nil {
return nil, err
}
return DecodeTree(s, o)
}
// DecodeTree decodes an encoded object into a *Tree and associates it to the
// given object storer.
func DecodeTree(s storer.EncodedObjectStorer, o plumbing.EncodedObject) (*Tree, error) {
t := &Tree{s: s}
if err := t.Decode(o); err != nil {
return nil, err
}
return t, nil
}
// TreeEntry represents an entry in a Tree (a file, directory, or submodule).
type TreeEntry struct {
Name string
Mode filemode.FileMode
Hash plumbing.Hash
}
// File returns the hash of the file identified by the `path` argument.
// The path is interpreted as relative to the tree receiver.
func (t *Tree) File(path string) (*File, error) {
e, err := t.FindEntry(path)
if err != nil {
return nil, ErrFileNotFound
}
blob, err := GetBlob(t.s, e.Hash)
if err != nil {
if err == plumbing.ErrObjectNotFound {
return nil, ErrFileNotFound
}
return nil, err
}
return NewFile(path, e.Mode, blob), nil
}
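// exampleReadFile is an editorial sketch, not part of the original source: it
// looks up a path inside a tree and returns the blob contents as a string.
// The "docs/README.md" path is only illustrative.
func exampleReadFile(t *Tree) (string, error) {
	f, err := t.File("docs/README.md")
	if err != nil {
		return "", err // ErrFileNotFound when the path does not exist
	}
	return f.Contents()
}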
// Size returns the plaintext size of an object, without reading it
// into memory.
func (t *Tree) Size(path string) (int64, error) {
e, err := t.FindEntry(path)
if err != nil {
return 0, ErrEntryNotFound
}
return t.s.EncodedObjectSize(e.Hash)
}
// Tree returns the tree identified by the `path` argument.
// The path is interpreted as relative to the tree receiver.
func (t *Tree) Tree(path string) (*Tree, error) {
e, err := t.FindEntry(path)
if err != nil {
return nil, ErrDirectoryNotFound
}
tree, err := GetTree(t.s, e.Hash)
if err == plumbing.ErrObjectNotFound {
return nil, ErrDirectoryNotFound
}
return tree, err
}
// TreeEntryFile returns the *File for a given *TreeEntry.
func (t *Tree) TreeEntryFile(e *TreeEntry) (*File, error) {
blob, err := GetBlob(t.s, e.Hash)
if err != nil {
return nil, err
}
return NewFile(e.Name, e.Mode, blob), nil
}
// FindEntry searches for a TreeEntry in this tree or any of its subtrees.
func (t *Tree) FindEntry(path string) (*TreeEntry, error) {
if t.t == nil {
t.t = make(map[string]*Tree)
}
pathParts := strings.Split(path, "/")
startingTree := t
pathCurrent := ""
// search for the longest path in the tree path cache
for i := len(pathParts) - 1; i > 1; i-- {
path := filepath.Join(pathParts[:i]...)
tree, ok := t.t[path]
if ok {
startingTree = tree
pathParts = pathParts[i:]
pathCurrent = path
break
}
}
var tree *Tree
var err error
for tree = startingTree; len(pathParts) > 1; pathParts = pathParts[1:] {
if tree, err = tree.dir(pathParts[0]); err != nil {
return nil, err
}
pathCurrent = filepath.Join(pathCurrent, pathParts[0])
t.t[pathCurrent] = tree
}
return tree.entry(pathParts[0])
}
func (t *Tree) dir(baseName string) (*Tree, error) {
entry, err := t.entry(baseName)
if err != nil {
return nil, ErrDirectoryNotFound
}
obj, err := t.s.EncodedObject(plumbing.TreeObject, entry.Hash)
if err != nil {
return nil, err
}
tree := &Tree{s: t.s}
err = tree.Decode(obj)
return tree, err
}
func (t *Tree) entry(baseName string) (*TreeEntry, error) {
if t.m == nil {
t.buildMap()
}
entry, ok := t.m[baseName]
if !ok {
return nil, ErrEntryNotFound
}
return entry, nil
}
// Files returns a FileIter that allows iterating over the files in the Tree.
func (t *Tree) Files() *FileIter {
return NewFileIter(t.s, t)
}
// ID returns the object ID of the tree. The returned value will always match
// the current value of Tree.Hash.
//
// ID is present to fulfill the Object interface.
func (t *Tree) ID() plumbing.Hash {
return t.Hash
}
// Type returns the type of object. It always returns plumbing.TreeObject.
func (t *Tree) Type() plumbing.ObjectType {
return plumbing.TreeObject
}
// Decode transforms a plumbing.EncodedObject into a Tree struct.
func (t *Tree) Decode(o plumbing.EncodedObject) (err error) {
if o.Type() != plumbing.TreeObject {
return ErrUnsupportedObject
}
t.Hash = o.Hash()
if o.Size() == 0 {
return nil
}
t.Entries = nil
t.m = nil
reader, err := o.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(reader, &err)
r := sync.GetBufioReader(reader)
defer sync.PutBufioReader(r)
for {
str, err := r.ReadString(' ')
if err != nil {
if err == io.EOF {
break
}
return err
}
str = str[:len(str)-1] // strip last byte (' ')
mode, err := filemode.New(str)
if err != nil {
return err
}
name, err := r.ReadString(0)
if err != nil && err != io.EOF {
return err
}
var hash plumbing.Hash
if _, err = io.ReadFull(r, hash[:]); err != nil {
return err
}
baseName := name[:len(name)-1]
t.Entries = append(t.Entries, TreeEntry{
Hash: hash,
Mode: mode,
Name: baseName,
})
}
return nil
}
type TreeEntrySorter []TreeEntry
func (s TreeEntrySorter) Len() int {
return len(s)
}
func (s TreeEntrySorter) Less(i, j int) bool {
name1 := s[i].Name
name2 := s[j].Name
if s[i].Mode == filemode.Dir {
name1 += "/"
}
if s[j].Mode == filemode.Dir {
name2 += "/"
}
return name1 < name2
}
func (s TreeEntrySorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Encode transforms a Tree into a plumbing.EncodedObject.
func (t *Tree) Encode(o plumbing.EncodedObject) (err error) {
o.SetType(plumbing.TreeObject)
w, err := o.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
if !sort.IsSorted(TreeEntrySorter(t.Entries)) {
return ErrEntriesNotSorted
}
for _, entry := range t.Entries {
if strings.IndexByte(entry.Name, 0) != -1 {
return fmt.Errorf("malformed filename %q", entry.Name)
}
if _, err = fmt.Fprintf(w, "%o %s", entry.Mode, entry.Name); err != nil {
return err
}
if _, err = w.Write([]byte{0x00}); err != nil {
return err
}
if _, err = w.Write(entry.Hash[:]); err != nil {
return err
}
}
return err
}
func (t *Tree) buildMap() {
t.m = make(map[string]*TreeEntry)
for i := 0; i < len(t.Entries); i++ {
t.m[t.Entries[i].Name] = &t.Entries[i]
}
}
// Diff returns a list of changes between this tree and the provided one
func (t *Tree) Diff(to *Tree) (Changes, error) {
return t.DiffContext(context.Background(), to)
}
// DiffContext returns a list of changes between this tree and the provided one.
// An error will be returned if the context expires. The provided context must be non-nil.
//
// NOTE: Since version 5.1.0 renames are correctly handled; the settings
// used are the recommended options, DefaultDiffTreeOptions.
func (t *Tree) DiffContext(ctx context.Context, to *Tree) (Changes, error) {
return DiffTreeWithOptions(ctx, t, to, DefaultDiffTreeOptions)
}
// Patch returns a Patch with all the changes between this tree and the provided
// one, in chunks. This representation can be used to create several diff outputs.
func (t *Tree) Patch(to *Tree) (*Patch, error) {
return t.PatchContext(context.Background(), to)
}
// PatchContext returns a Patch with all the changes between this tree and the
// provided one, in chunks. This representation can be used to create several diff
// outputs. If context expires, an error will be returned. Provided context must
// be non-nil.
//
// NOTE: Since version 5.1.0 renames are correctly handled; the settings
// used are the recommended options, DefaultDiffTreeOptions.
func (t *Tree) PatchContext(ctx context.Context, to *Tree) (*Patch, error) {
changes, err := t.DiffContext(ctx, to)
if err != nil {
return nil, err
}
return changes.PatchContext(ctx)
}
// treeEntryIter facilitates iterating through the TreeEntry objects in a Tree.
type treeEntryIter struct {
t *Tree
pos int
}
func (iter *treeEntryIter) Next() (TreeEntry, error) {
if iter.pos >= len(iter.t.Entries) {
return TreeEntry{}, io.EOF
}
iter.pos++
return iter.t.Entries[iter.pos-1], nil
}
// TreeWalker provides a means of walking through all of the entries in a Tree.
type TreeWalker struct {
stack []*treeEntryIter
base string
recursive bool
seen map[plumbing.Hash]bool
s storer.EncodedObjectStorer
t *Tree
}
// NewTreeWalker returns a new TreeWalker for the given tree.
//
// It is the caller's responsibility to call Close() when finished with the
// tree walker.
func NewTreeWalker(t *Tree, recursive bool, seen map[plumbing.Hash]bool) *TreeWalker {
stack := make([]*treeEntryIter, 0, startingStackSize)
stack = append(stack, &treeEntryIter{t, 0})
return &TreeWalker{
stack: stack,
recursive: recursive,
seen: seen,
s: t.s,
t: t,
}
}
// Next returns the next object from the tree. Objects are returned in order
// and subtrees are included. After the last object has been returned further
// calls to Next() will return io.EOF.
//
// In the current implementation any objects which cannot be found in the
// underlying repository will be skipped automatically. It is possible that this
// may change in future versions.
func (w *TreeWalker) Next() (name string, entry TreeEntry, err error) {
var obj *Tree
for {
current := len(w.stack) - 1
if current < 0 {
// Nothing left on the stack so we're finished
err = io.EOF
return
}
if current > maxTreeDepth {
// We're probably following bad data or some self-referencing tree
err = ErrMaxTreeDepth
return
}
entry, err = w.stack[current].Next()
if err == io.EOF {
// Finished with the current tree, move back up to the parent
w.stack = w.stack[:current]
w.base, _ = path.Split(w.base)
w.base = strings.TrimSuffix(w.base, "/")
continue
}
if err != nil {
return
}
if w.seen[entry.Hash] {
continue
}
if entry.Mode == filemode.Dir {
obj, err = GetTree(w.s, entry.Hash)
}
name = simpleJoin(w.base, entry.Name)
if err != nil {
err = io.EOF
return
}
break
}
if !w.recursive {
return
}
if obj != nil {
w.stack = append(w.stack, &treeEntryIter{obj, 0})
w.base = simpleJoin(w.base, entry.Name)
}
return
}
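// exampleWalkTree is an editorial sketch, not part of the original source: it
// walks a tree recursively and prints the path and hash of every file entry.
func exampleWalkTree(t *Tree) error {
	w := NewTreeWalker(t, true, nil)
	defer w.Close()
	for {
		name, entry, err := w.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if entry.Mode.IsFile() {
			fmt.Println(name, entry.Hash)
		}
	}
}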
// Tree returns the tree that the tree walker most recently operated on.
func (w *TreeWalker) Tree() *Tree {
current := len(w.stack) - 1
if w.stack[current].pos == 0 {
current--
}
if current < 0 {
return nil
}
return w.stack[current].t
}
// Close releases any resources used by the TreeWalker.
func (w *TreeWalker) Close() {
w.stack = nil
}
// TreeIter provides an iterator for a set of trees.
type TreeIter struct {
storer.EncodedObjectIter
s storer.EncodedObjectStorer
}
// NewTreeIter takes a storer.EncodedObjectStorer and a
// storer.EncodedObjectIter and returns a *TreeIter that iterates over all
// trees contained in the storer.EncodedObjectIter.
//
// Any non-tree object returned by the storer.EncodedObjectIter is skipped.
func NewTreeIter(s storer.EncodedObjectStorer, iter storer.EncodedObjectIter) *TreeIter {
return &TreeIter{iter, s}
}
// Next moves the iterator to the next tree and returns a pointer to it. If
// there are no more trees, it returns io.EOF.
func (iter *TreeIter) Next() (*Tree, error) {
for {
obj, err := iter.EncodedObjectIter.Next()
if err != nil {
return nil, err
}
if obj.Type() != plumbing.TreeObject {
continue
}
return DecodeTree(iter.s, obj)
}
}
// ForEach calls the cb function for each tree contained in this iter until
// an error happens or the end of the iter is reached. If ErrStop is returned
// by cb, the iteration stops but no error is returned. The iterator is closed.
func (iter *TreeIter) ForEach(cb func(*Tree) error) error {
return iter.EncodedObjectIter.ForEach(func(obj plumbing.EncodedObject) error {
if obj.Type() != plumbing.TreeObject {
return nil
}
t, err := DecodeTree(iter.s, obj)
if err != nil {
return err
}
return cb(t)
})
}
func simpleJoin(parent, child string) string {
if len(parent) > 0 {
return parent + "/" + child
}
return child
}

View File

@ -0,0 +1,142 @@
package object
import (
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/utils/merkletrie/noder"
)
// A treenoder is a helper type that wraps git trees into merkletrie
// noders.
//
// As a merkletrie noder doesn't understand the concept of modes (e.g.
// file permissions), the treenoder includes the mode of the git tree in
// the hash, so changes in the modes will be detected as modifications
// to the file contents by the merkletrie difftree algorithm. This is
// consistent with how the "git diff-tree" command works.
type treeNoder struct {
parent *Tree // the root node is its own parent
name string // empty string for the root node
mode filemode.FileMode
hash plumbing.Hash
children []noder.Noder // memoized
}
// NewTreeRootNode returns the root node of a Tree
func NewTreeRootNode(t *Tree) noder.Noder {
if t == nil {
return &treeNoder{}
}
return &treeNoder{
parent: t,
name: "",
mode: filemode.Dir,
hash: t.Hash,
}
}
func (t *treeNoder) Skip() bool {
return false
}
func (t *treeNoder) isRoot() bool {
return t.name == ""
}
func (t *treeNoder) String() string {
return "treeNoder <" + t.name + ">"
}
func (t *treeNoder) Hash() []byte {
if t.mode == filemode.Deprecated {
return append(t.hash[:], filemode.Regular.Bytes()...)
}
return append(t.hash[:], t.mode.Bytes()...)
}
func (t *treeNoder) Name() string {
return t.name
}
func (t *treeNoder) IsDir() bool {
return t.mode == filemode.Dir
}
// Children will return the children of a treenoder as treenoders,
// building them from the children of the wrapped git tree.
func (t *treeNoder) Children() ([]noder.Noder, error) {
if t.mode != filemode.Dir {
return noder.NoChildren, nil
}
// children are memoized for efficiency
if t.children != nil {
return t.children, nil
}
// the parent of the returned children will be ourselves as a tree if
// we are not the root treenoder. The root is special as it
// is its own parent.
parent := t.parent
if !t.isRoot() {
var err error
if parent, err = t.parent.Tree(t.name); err != nil {
return nil, err
}
}
var err error
t.children, err = transformChildren(parent)
return t.children, err
}
// Returns the children of a tree as treenoders.
// Efficiency is key here.
func transformChildren(t *Tree) ([]noder.Noder, error) {
var err error
var e TreeEntry
// there will be more tree entries than children in the tree,
// due to submodules and empty directories, but I think it is still
// worth it to pre-allocate the whole array now, even if it is
// sometimes bigger than needed.
ret := make([]noder.Noder, 0, len(t.Entries))
walker := NewTreeWalker(t, false, nil) // don't recurse
// don't defer walker.Close() for efficiency reasons.
for {
_, e, err = walker.Next()
if err == io.EOF {
break
}
if err != nil {
walker.Close()
return nil, err
}
ret = append(ret, &treeNoder{
parent: t,
name: e.Name,
mode: e.Mode,
hash: e.Hash,
})
}
walker.Close()
return ret, nil
}
// len(t.tree.Entries) != the number of elements walked by the treewalker,
// apparently because of empty directories, submodules, etc., so we
// have to walk here.
func (t *treeNoder) NumChildren() (int, error) {
children, err := t.Children()
if err != nil {
return 0, err
}
return len(children), nil
}
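// exampleCountRootChildren is an editorial sketch, not part of the original
// source: it wraps a tree as a merkletrie noder and reports how many direct
// children the root has.
func exampleCountRootChildren(t *Tree) (int, error) {
	root := NewTreeRootNode(t)
	return root.NumChildren()
}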

Some files were not shown because too many files have changed in this diff