git-fetch
git-fetch--tool
git-fetch-pack
+git-filter-branch
git-findtags
git-fmt-merge-msg
git-for-each-ref
* Documentation updates
* User manual updates
- user-manual has better cross references.
- gitweb installation/deployment procedure is now documented.
-
description was given by the caller.
Also contains various documentation updates.
-
--- a/pico/pico.c
+++ b/pico/pico.c
@@ -219,7 +219,9 @@ PICO *pm;
- switch(pico_all_done){ /* prepare for/handle final events */
- case COMP_EXIT : /* already confirmed */
- packheader();
+ switch(pico_all_done){ /* prepare for/handle final events */
+ case COMP_EXIT : /* already confirmed */
+ packheader();
+#if 0
- stripwhitespace();
+ stripwhitespace();
+#endif
- c |= COMP_EXIT;
- break;
-
+ c |= COMP_EXIT;
+ break;
+
(Daniel Barkalow)
[gitlink-inlinemacro]
<a href="{target}.html">{target}{0?({0})}</a>
endif::backend-xhtml11[]
-
-
transfer.unpackLimit::
When `fetch.unpackLimit` or `receive.unpackLimit` are
not set, the value of this variable is used instead.
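For example, to give both fetch and receive the same fallback threshold, one could set it like this (a minimal sketch; the value 100 is an arbitrary example):
------------
$ git config transfer.unpackLimit 100
------------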
-
-
often the best way of explaining what is going on.
In normal life, most people wouldn't use the "core" git programs
directly, but rather script around them to make them more palatable.
Understanding the core git stuff may help some people get those scripts
done, though, and it may also be instructive in helping people
understand what it is that the higher-level helper scripts are actually
doing.
The core git is often called "plumbing", with the prettier user
interfaces on top of it called "porcelain". You may not want to use the
out empty, and the only thing you need to do is find yourself a
subdirectory that you want to use as a working tree - either an empty
one for a totally new project, or an existing working tree that you want
to import into git.
For our first example, we're going to start a totally new repository from
scratch, with no pre-existing files, and we'll call it `git-tutorial`.
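Getting there might look something like this (a sketch; `git init` and the older `git-init-db` spelling are synonyms):
----------------
$ mkdir git-tutorial
$ cd git-tutorial
$ git init
----------------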
and see two files:
----------------
.git/objects/55/7db03de997c86a4a028e1ebd3a1ceb225be238
.git/objects/f2/4c74a2e500f5ee1332c86b94199f52b1d1d962
----------------
you've only *told* git about them.
However, since git knows about them, you can now start using some of the
most basic git commands to manipulate the files or look at their status.
In particular, let's not even check the two files into git yet; we'll
start off by adding another line to `hello` first:
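For example (the exact text does not matter, any extra line will do):
----------------
$ echo 'some new line' >>hello
----------------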
Remember how we did the `git-update-index` on file `hello` and then we
changed `hello` afterward, and could compare the new state of `hello` with the
state we saved in the index file?
Further, remember how I said that `git-write-tree` writes the contents
of the *index* file to the tree, and thus what we just committed was in
between a committed *tree* and either the index file or the working
tree. In other words, `git-diff-index` wants a tree to be diffed
against, and before we did the commit, we couldn't do that, because we
didn't have anything to diff against.
But now we can do
----------------
(where `-p` has the same meaning as it did in `git-diff-files`), and it
will show us the same difference, but for a totally different reason.
Now we're comparing the working tree not against the index file,
but against the tree we just wrote. It just so happens that those two
are obviously the same, so we get the same result.
instead compare against just the index cache contents, and ignore the
current working tree state entirely. Since we just wrote the index
file to HEAD, doing `git-diff-index \--cached -p HEAD` should thus return
an empty set of differences, and that's exactly what it does.
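In other words, something like the following should produce no output at all (a sketch of the invocation just described):
----------------
$ git-diff-index --cached -p HEAD
----------------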
[NOTE]
================
----------------
and you will see exactly what has changed in the repository over its
short history.
[NOTE]
The `\--root` flag is a flag to `git-diff-tree` to tell it to
the working tree that it describes" may not be technically 100%
accurate, but it's a good model for all normal use.
This has two implications:
- if you grow bored with the tutorial repository you created (or you've
made a mistake and want to start all over), you can just do simple
the checked out files or even an index file, and will *only* contain the
actual core git files. Such a repository usually doesn't even have the
`.git` subdirectory, but has all the git files directly in the
repository.
To create your own local live copy of such a "raw" git repository, you'd
first create your own subdirectory for the project, and then copy the
$ rsync -rL rsync://rsync.kernel.org/pub/scm/git/git.git/ .git
----------------
followed by
----------------
$ git-read-tree HEAD
`-a` flag means "check out all files" (if you have a stale copy or an
older version of a checked out tree you may also need to add the `-f`
flag first, to tell git-checkout-index to *force* overwriting of any old
files).
Again, this can all be simplified with
which will end up doing all of the above for you.
You have now successfully copied somebody else's (mine) remote
repository, and checked it out.
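The single command that does all of the above is `git clone`; a rough equivalent of the steps shown here would be (a sketch, using the same rsync URL; `my-git` is just the directory to create):
----------------
$ git clone rsync://rsync.kernel.org/pub/scm/git/git.git/ my-git
----------------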
Creating a new branch
Branches in git are really nothing more than pointers into the git
object database from within the `.git/refs/` subdirectory, and as we
already discussed, the `HEAD` branch is nothing but a symlink to one of
these object pointers.
You can at any time create a new branch by just picking an arbitrary
point in the project history, and just writing the SHA1 name of that
object into a file under `.git/refs/heads/`. You can use any filename you
want (and indeed, subdirectories), but the convention is that the
"normal" branch is called `master`. That's just a convention, though,
and nothing enforces it.
To show that as an example, let's go back to the git-tutorial repository we
used earlier, and create a branch in it. You do that simply by
------------
will create a new branch based at the current `HEAD` position, and switch
to it.
[NOTE]
================================================
$ git branch <branchname> [startingpoint]
------------
which will simply _create_ the branch, but will not do anything further.
You can then later -- once you decide that you want to actually develop
on that branch -- switch to that branch with a regular `git checkout`
with the branchname as the argument.
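For example, using the `mybranch` name that the rest of this tutorial uses:
----------------
$ git branch mybranch
$ git checkout mybranch
----------------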
will show you graphically both of your branches (that's what the `\--all`
means: normally it will just show you your current `HEAD`) and their
histories. You can also see exactly how they came to be from a common
source.
Anyway, let's exit `gitk` (`^Q` or the File menu), and decide that we want
to merge the work we did on the `mybranch` branch into the `master`
file, which had no differences in the `mybranch` branch), and say:
----------------
 Auto-merging hello
 CONFLICT (content): Merge conflict in hello
Automatic merge failed; fix up by hand
----------------
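At this point `hello` contains the usual conflict markers. One minimal way to finish the job (a sketch, not the only way) is to edit `hello` by hand and then tell git about the result:
----------------
$ git update-index hello
$ git commit
----------------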
propagation to other publicly visible machines:
------------
$ git push master.kernel.org:/pub/scm/git/git.git/
------------
The output format from "git-diff-index", "git-diff-tree" and
"git-diff-files" are very similar.
These commands all compare two sets of things; what is
compared differs:
git-diff-index <tree-ish>::
--- a/describe.c
+++ b/describe.c
@@@ -98,20 -98,12 +98,20 @@@
- return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1;
+ return (a_date > b_date) ? -1 : (a_date == b_date) ? 0 : 1;
}
-
+
- static void describe(char *arg)
-static void describe(struct commit *cmit, int last_one)
++static void describe(char *arg, int last_one)
{
+ unsigned char sha1[20];
+ struct commit *cmit;
- struct commit_list *list;
- static int initialized = 0;
- struct commit_name *n;
-
+ struct commit_list *list;
+ static int initialized = 0;
+ struct commit_name *n;
+
+ if (get_sha1(arg, sha1) < 0)
+ usage(describe_usage);
+ cmit = lookup_commit_reference(sha1);
+ if (!cmit)
+ usage(describe_usage);
+
- if (!initialized) {
- initialized = 1;
- for_each_ref(get_name);
+ if (!initialized) {
+ initialized = 1;
+ for_each_ref(get_name);
------------
1. It is preceded with a "git diff" header, that looks like
two unresolved merge parents with the working tree file
(i.e. file1 is stage 2 aka "our version", file2 is stage 3 aka
"their version").
-
that matches other criteria, nothing is selected.
--find-copies-harder::
	For performance reasons, by default, -C option finds copies only
	if the original file of the copy was modified in the same
changeset. This flag makes the command
inspect unmodified files as candidates for the source of
copy. This is a very expensive operation for large
is controlled by giving the pathname parameters to the
git-diff-* commands on the command line. The pathspec is used
to limit the world diff operates in. It removes the filepairs
outside the specified set of pathnames. E.g. If the input set
of filepairs included:
------------------------------------------------
*.c
t
------------------------------------------------
-
+/*
+ CSS stylesheet for XHTML produced by DocBook XSL stylesheets.
+ Tested with XSL stylesheets 1.61.2, 1.67.2
+*/
+
+span.strong {
+ font-weight: bold;
+}
+
+body blockquote {
+ margin-top: .75em;
+ line-height: 1.5;
+ margin-bottom: .75em;
+}
+
+html body {
+ margin: 1em 5% 1em 5%;
+ line-height: 1.2;
+}
+
+body div {
+ margin: 0;
+}
+
+h1, h2, h3, h4, h5, h6,
+div.toc p b,
+div.list-of-figures p b,
+div.list-of-tables p b,
+div.abstract p.title
+{
+ color: #527bbd;
+ font-family: tahoma, verdana, sans-serif;
+}
+
+div.toc p:first-child,
+div.list-of-figures p:first-child,
+div.list-of-tables p:first-child,
+div.example p.title
+{
+ margin-bottom: 0.2em;
+}
+
+body h1 {
+ margin: .0em 0 0 -4%;
+ line-height: 1.3;
+ border-bottom: 2px solid silver;
+}
+
+body h2 {
+ margin: 0.5em 0 0 -4%;
+ line-height: 1.3;
+ border-bottom: 2px solid silver;
+}
+
+body h3 {
+ margin: .8em 0 0 -3%;
+ line-height: 1.3;
+}
+
+body h4 {
+ margin: .8em 0 0 -3%;
+ line-height: 1.3;
+}
+
+body h5 {
+ margin: .8em 0 0 -2%;
+ line-height: 1.3;
+}
+
+body h6 {
+ margin: .8em 0 0 -1%;
+ line-height: 1.3;
+}
+
+body hr {
+ border: none; /* Broken on IE6 */
+}
+div.footnotes hr {
+ border: 1px solid silver;
+}
+
+div.navheader th, div.navheader td, div.navfooter td {
+ font-family: sans-serif;
+ font-size: 0.9em;
+ font-weight: bold;
+ color: #527bbd;
+}
+div.navheader img, div.navfooter img {
+ border-style: none;
+}
+div.navheader a, div.navfooter a {
+ font-weight: normal;
+}
+div.navfooter hr {
+ border: 1px solid silver;
+}
+
+body td {
+ line-height: 1.2
+}
+
+body th {
+ line-height: 1.2;
+}
+
+ol {
+ line-height: 1.2;
+}
+
+ul, body dir, body menu {
+ line-height: 1.2;
+}
+
+html {
+ margin: 0;
+ padding: 0;
+}
+
+body h1, body h2, body h3, body h4, body h5, body h6 {
+ margin-left: 0
+}
+
+body pre {
+ margin: 0.5em 10% 0.5em 1em;
+ line-height: 1.0;
+ color: navy;
+}
+
+tt.literal, code.literal {
+ color: navy;
+}
+
+div.literallayout p {
+ padding: 0em;
+ margin: 0em;
+}
+
+div.literallayout {
+ font-family: monospace;
+# margin: 0.5em 10% 0.5em 1em;
+ margin: 0em;
+ color: navy;
+ border: 1px solid silver;
+ background: #f4f4f4;
+ padding: 0.5em;
+}
+
+.programlisting, .screen {
+ border: 1px solid silver;
+ background: #f4f4f4;
+ margin: 0.5em 10% 0.5em 0;
+ padding: 0.5em 1em;
+}
+
+div.sidebar {
+ background: #ffffee;
+ margin: 1.0em 10% 0.5em 0;
+ padding: 0.5em 1em;
+ border: 1px solid silver;
+}
+div.sidebar * { padding: 0; }
+div.sidebar div { margin: 0; }
+div.sidebar p.title {
+ font-family: sans-serif;
+ margin-top: 0.5em;
+ margin-bottom: 0.2em;
+}
+
+div.bibliomixed {
+ margin: 0.5em 5% 0.5em 1em;
+}
+
+div.glossary dt {
+ font-weight: bold;
+}
+div.glossary dd p {
+ margin-top: 0.2em;
+}
+
+dl {
+ margin: .8em 0;
+ line-height: 1.2;
+}
+
+dt {
+ margin-top: 0.5em;
+}
+
+dt span.term {
+ font-style: italic;
+}
+
+div.variablelist dd p {
+ margin-top: 0;
+}
+
+div.itemizedlist li, div.orderedlist li {
+ margin-left: -0.8em;
+ margin-top: 0.5em;
+}
+
+ul, ol {
+ list-style-position: outside;
+}
+
+div.sidebar ul, div.sidebar ol {
+ margin-left: 2.8em;
+}
+
+div.itemizedlist p.title,
+div.orderedlist p.title,
+div.variablelist p.title
+{
+ margin-bottom: -0.8em;
+}
+
+div.revhistory table {
+ border-collapse: collapse;
+ border: none;
+}
+div.revhistory th {
+ border: none;
+ color: #527bbd;
+ font-family: tahoma, verdana, sans-serif;
+}
+div.revhistory td {
+ border: 1px solid silver;
+}
+
+/* Keep TOC and index lines close together. */
+div.toc dl, div.toc dt,
+div.list-of-figures dl, div.list-of-figures dt,
+div.list-of-tables dl, div.list-of-tables dt,
+div.indexdiv dl, div.indexdiv dt
+{
+ line-height: normal;
+ margin-top: 0;
+ margin-bottom: 0;
+}
+
+/*
+ Table styling does not work because of overriding attributes in
+ generated HTML.
+*/
+div.table table,
+div.informaltable table
+{
+ margin-left: 0;
+ margin-right: 5%;
+ margin-bottom: 0.8em;
+}
+div.informaltable table
+{
+ margin-top: 0.4em
+}
+div.table thead,
+div.table tfoot,
+div.table tbody,
+div.informaltable thead,
+div.informaltable tfoot,
+div.informaltable tbody
+{
+ /* No effect in IE6. */
+ border-top: 2px solid #527bbd;
+ border-bottom: 2px solid #527bbd;
+}
+div.table thead, div.table tfoot,
+div.informaltable thead, div.informaltable tfoot
+{
+ font-weight: bold;
+}
+
+div.mediaobject img {
+ border: 1px solid silver;
+ margin-bottom: 0.8em;
+}
+div.figure p.title,
+div.table p.title
+{
+ margin-top: 1em;
+ margin-bottom: 0.4em;
+}
+
+@media print {
+ div.navheader, div.navfooter { display: none; }
+}
Deepen the history of a 'shallow' repository created by
`git clone` with `--depth=<depth>` option (see gitlink:git-clone[1])
by the specified number of commits.
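For example, one could create a shallow clone and later deepen it (a sketch; the kernel.org URL is only an illustration):
------------
$ git clone --depth=1 git://git.kernel.org/pub/scm/git/git.git
$ cd git
$ git fetch --depth=100
------------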
-
GIT
---
Part of the gitlink:git[7] suite
Imports a project from one or more Arch repositories. It will follow branches
and repositories within the namespaces defined by the <archive/branch>
parameters supplied. If it cannot find the remote branch a merge comes from
it will just import it as a regular commit. If it can find it, it will mark it
as a merge whenever possible (see discussion below).
The script expects you to provide the key roots where it can start the import
from an 'initial import' or 'tag' type of Arch commit. It will follow and
import new branches within the provided roots.
It expects to be dealing with one project only. If it sees
branches that have different roots, it will refuse to run. In that case,
edit your <archive/branch> parameters to define clearly the scope of the
import.
`git-archimport` uses `tla` extensively in the background to access the
Arch repository.
Make sure you have a recent version of `tla` available in the path. `tla` must
know about the repositories you pass to `git-archimport`.
For the initial import `git-archimport` expects to find itself in an empty
directory. To follow the development of a project that uses Arch, rerun
`git-archimport` with the same parameters as the initial import to perform
incremental imports.
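A sketch of what those two runs look like (the archive/branch name here is a made-up example):
------------
$ mkdir project && cd project
$ git-archimport -v jdoe@example.com--2005/project--devel--1.0
# ... some time later, to pick up the new Arch commits ...
$ git-archimport -v jdoe@example.com--2005/project--devel--1.0
------------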
While git-archimport will try to create sensible branch names for the
MERGES
------
Patch merge data from Arch is used to mark merges in git as well. git
does not care much about tracking patches, and only considers a merge when a
branch incorporates all the commits since the point they forked. The end result
is that git will have a good idea of how far branches have diverged. So the
import process does lose some patch-trading metadata.
Fortunately, when you try and merge branches imported from Arch,
git will find a good merge base, and it has a good chance of identifying
patches that have been traded out-of-sequence between the branches.
OPTIONS
-------
Display usage.
-v::
	Verbose output.
-T::
	Many tags. Will create a tag for every commit, reflecting the commit
name in the Arch repository.
-f::
<archive/branch>::
	Archive/branch identifier in a format that `tla log` understands.
Author
GIT
---
Part of the gitlink:git[7] suite
-
SYNOPSIS
--------
'git bisect' <subcommand> <options>
DESCRIPTION
-----------
GIT
---
Part of the gitlink:git[7] suite
-p <parent commit>::
Each '-p' indicates the id of a parent commit object.
-
+
Commit Information
------------------
GIT
---
Part of the gitlink:git[7] suite
DESCRIPTION
-----------
Exports a commit from GIT to a CVS checkout, making it easier
to merge patches from a git repository into a CVS repository.
Execute it from the root of the CVS working copy. GIT_DIR must be defined.
See examples below.
It does its best to do the safe thing: it will check that the files are
unchanged and up to date in the CVS checkout, and it will not autocommit
by default.
Supports file additions, removals, and commits that affect binary files.
If the commit is a merge commit, you must tell git-cvsexportcommit which parent
the changeset should be done against.
OPTIONS
-------
Force the parent commit, even if it is not a direct parent.
-m::
	Prepend the commit message with the provided prefix.
Useful for patch series and the like.
-u::
$ export GIT_DIR=~/project/.git
$ cd ~/project_cvs_checkout
$ git-cvsexportcommit -v <commit-sha1>
$ cvs commit -F .msg <files>
------------
Merge pending patches into CVS automatically -- only if you really know what you are doing ::
GIT
---
Part of the gitlink:git[7] suite
-
-d <CVSROOT>::
The root of the CVS archive. May be local (a simple path) or remote;
	currently, only the :local:, :ext: and :pserver: access methods
are supported. If not given, git-cvsimport will try to read it
from `CVS/Root`. If no such file exists, it checks for the
`CVSROOT` environment variable.
-k::
Kill keywords: will extract files with '-kk' from the CVS archive
to avoid noisy changesets. Highly recommended, but off by default
	to preserve compatibility with early imported trees.
-u::
Convert underscores in tag and branch names to dots.
Instead of calling cvsps, read the provided cvsps output file. Useful
for debugging or when cvsps is being handled outside cvsimport.
-m::
Attempt to detect merges based on the commit message. This option
	will enable default regexes that try to capture the source
	branch name from the commit message.
-M <regex>::
Attempt to detect merges based on the commit message with a custom
regex. It can be used with '-m' to also see the default regexes.
	You must escape forward slashes.
-S <regex>::
Skip paths matching the regex.
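Putting a few of the options above together, an import could look like this (a sketch; the CVS root and module name are placeholders):
------------
$ git-cvsimport -v -k -d :pserver:anonymous@cvs.example.org:/cvsroot mymodule
------------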
GIT
---
Part of the gitlink:git[7] suite
branch" respectively. With these options, diffs for
merged entries are not shown.
+
-The default is to diff against our branch (-2) and the
+The default is to diff against our branch (-2) and the
cleanly resolved paths. The option -0 can be given to
omit diff output for unmerged entries and just show "Unmerged".
GIT
---
Part of the gitlink:git[7] suite
-
nicer for the case where you just want to check where you are.
So doing a "git-diff-index --cached" is basically very useful when you are
-asking yourself "what have I already marked for being committed, and
+asking yourself "what have I already marked for being committed, and
what's the difference to a previous tree".
Non-cached Mode
GIT
---
Part of the gitlink:git[7] suite
[verse]
'git-format-patch' [-n | -k] [-o <dir> | --stdout] [--thread]
[--attach[=<boundary>] | --inline[=<boundary>]]
- [-s | --signoff] [<common diff options>] [--start-number <n>]
+ [-s | --signoff] [<common diff options>]
+ [--start-number <n>] [--numbered-files]
[--in-reply-to=Message-Id] [--suffix=.<sfx>]
[--ignore-if-in-upstream]
[--subject-prefix=Subject-Prefix]
The output of this command is convenient for e-mail submission or
for use with gitlink:git-am[1].
-Each output file is numbered sequentially from 1, and uses the
+By default, each output file is numbered sequentially from 1, and uses the
first line of the commit message (massaged for pathname safety) as
-the filename. The names of the output files are printed to standard
+the filename. With the --numbered-files option, the output file names
+will only be numbers, without the first line of the commit appended.
+The names of the output files are printed to standard
output, unless the --stdout option is specified.
If -o is specified, output files are created in <dir>. Otherwise
--start-number <n>::
Start numbering the patches at <n> instead of 1.
+--numbered-files::
+ Output file names will be a simple number sequence
+ without the default first line of the commit appended.
+ Mutually exclusive with the --stdout option.
+
-k|--keep-subject::
Do not strip/add '[PATCH]' from the first line of the
commit log message.
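For example, the difference between the default naming and --numbered-files looks like this (a sketch; `-3` limits the run to the three most recent commits):
------------
$ git-format-patch -3 -o patches/
$ git-format-patch -3 --numbered-files -o patches/
------------
The first call produces 0001-<subject>.patch style names; the second produces files named just 1, 2 and 3.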
GIT
---
Part of the gitlink:git[7] suite
object database. Reports its object ID to its standard output.
This is used by "git-cvsimport" to update the index
without modifying files in the work tree. When <type> is not
specified, it defaults to "blob".
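A typical invocation looks like this (a sketch; `hello.txt` is a placeholder, and `-w` is what actually writes the object rather than only computing its ID):
------------
$ git-hash-object -w -t blob hello.txt
------------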
OPTIONS
-------
GIT
---
Part of the gitlink:git[7] suite
A '<ref>' specification can be either a single pattern, or a pair
of such patterns separated by a colon ":" (this means that a ref name
-cannot have a colon in it). A single pattern '<name>' is just a
+cannot have a colon in it). A single pattern '<name>' is just a
shorthand for '<name>:<name>'.
Each pattern pair consists of the source side (before the colon)
GIT
---
Part of the gitlink:git[7] suite
-
This is a synonym for gitlink:git-init[1]. Please refer to the
documentation of that command.
-
GIT
---
Part of the gitlink:git[7] suite
This is modified MM in the branch B. # merge2
This is modified MM in the branch B. # current contents
-or
+or
torvalds@ppc970:~/merge-test> git-merge-index cat AA MM
cat: : No such file or directory
GIT
---
Part of the gitlink:git[7] suite
1. the results are updated both in the index file and in your
working tree,
2. index file is written out as a tree,
3. the tree gets committed, and
4. the `HEAD` pointer gets advanced.
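Steps 2.-4. correspond roughly to this plumbing sequence (only a sketch of the idea, not literally what the command runs):
------------
$ tree=$(git write-tree)
$ commit=$(echo 'commit message' | git commit-tree $tree -p HEAD)
$ git update-ref HEAD $commit
------------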
Because of 2., we require that the original state of the index
GIT
---
Part of the gitlink:git[7] suite
'xargs rm' if you are in the root of the repository.
git-pack-redundant accepts a list of objects on standard input. Any objects
given will be ignored when checking which packs are required. This makes the
following command useful when wanting to remove packs which contain unreachable
objects.
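One such pipeline could look like this (a sketch only; it feeds the object names that git-fsck reports as unreachable into git-pack-redundant and then deletes the packs it lists):
------------
$ git-fsck --full --unreachable | cut -d ' ' -f3 | \
	git-pack-redundant --all | xargs rm
------------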
GIT
---
Part of the gitlink:git[7] suite
`localhost` otherwise.
--subject::
	Specify the initial subject of the email thread.
Only necessary if --compose is also set. If --compose
is not set, this will be prompted for.
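For instance (a sketch; the address and patch file are placeholders):
------------
$ git-send-email --compose --subject 'First cut at feature X' \
	--to git@vger.kernel.org patches/0001-feature-x.patch
------------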
GIT
---
Part of the gitlink:git[7] suite
[verse]
'git-tag' [-a | -s | -u <key-id>] [-f] [-m <msg> | -F <file>] <name> [<head>]
'git-tag' -d <name>...
-'git-tag' -l [<pattern>]
+'git-tag' [-n [<num>]] -l [<pattern>]
'git-tag' -v <name>
DESCRIPTION
`-v <tag>` verifies the gpg signature of the tag.
-`-l <pattern>` lists tags that match the given pattern (or all
-if no pattern is given).
+`-l <pattern>` lists tags with names that match the given pattern
+(or all if no pattern is given).
OPTIONS
-------
-v::
Verify the gpg signature of given the tag
+-n <num>::
+ <num> specifies how many lines from the annotation, if any,
+ are printed when using -l.
+ The default is not to print any annotation lines.
+
-l <pattern>::
- List tags that match the given pattern (or all if no pattern is given).
+ List tags with names that match the given pattern (or all if no pattern is given).
-m <msg>::
Use the given tag message (instead of prompting)
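For example, to list the v1.* tags together with up to three lines of their annotation each (a sketch; per the synopsis above, <num> follows -n as a separate argument):
------------
$ git-tag -n 3 -l 'v1.*'
------------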
GIT
---
Part of the gitlink:git[7] suite
OPTIONS
-------
-n::
- Only list the objects that would be unpacked, don't actually unpack
- them.
+ Dry run. Check the pack file without actually unpacking
+ the objects.
-q::
The command usually shows percentage progress. This
GIT
---
Part of the gitlink:git[7] suite
-
--unmerged::
If --refresh finds unmerged changes in the index, the default
	behavior is to error out. This option makes git-update-index
continue anyway.
--ignore-missing::
--cacheinfo <mode> <object> <path>::
Directly insert the specified info into the index.
-
+
--index-info::
Read index information from stdin.
--chmod=(+|-)x::
	Set the execute permissions on the updated files.
--assume-unchanged, --no-assume-unchanged::
When these flags are specified, the object name recorded
<file>::
Files to act on.
Note that files beginning with '.' are discarded. This includes
	`./file` and `dir/./file`. If you don't want this, then use
cleaner names.
The same applies to directories ending '/' and paths with '//'
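Two of the options above in action (a sketch; `hello.sh` is a placeholder path):
------------
$ git-update-index --add hello.sh
$ git-update-index --chmod=+x hello.sh
------------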
GIT
---
Part of the gitlink:git[7] suite
On Sat, 13 Aug 2005, Linus Torvalds wrote:
> That's correct. Same things apply: you can move a patch over, and create a
> new one with a modified comment, but basically the _old_ commit will be
> immutable.
Let me clarify.
You can entirely _drop_ old branches, so commits may be immutable, but
nothing forces you to keep them. Of course, when you drop a commit, you'll
always end up dropping all the commits that depended on it, and if you
actually got somebody else to pull that commit you can't drop it from
_their_ repository, but undoing things is not impossible.
For example, let's say that you've made a mess of things: you've committed
# for reference
git branch broken
- # Reset the main branch to three parents back: this
+ # Reset the main branch to three parents back: this
# effectively undoes the three top commits
git reset HEAD^^^
git checkout -f
to see that everything looks sensible.
And then, you can just remove the broken branch if you decide you really
don't want it:
# remove 'broken' branch
# Prune old objects if you're really really sure
git prune
And yeah, I'm sure there are other ways of doing this. And as usual, the
above is totally untested, and I just wrote it down in this email, so if
I've done something wrong, you'll have to figure it out on your own ;)
Linus
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
-
-
> Dear diary, on Sun, Aug 14, 2005 at 09:57:13AM CEST, I got a letter
> where Junio C Hamano <junkio@cox.net> told me that...
>> Linus Torvalds <torvalds@osdl.org> writes:
>>
>> > Junio, maybe you want to talk about how you move patches from your "pu"
>> > branch to the real branches.
>>
> Actually, wouldn't this be also precisely for what StGIT is intended to?
Exactly my feeling. I was sort of waiting for Catalin to speak
where *your "master" head
upstream --> #1 --> #2 --> #3
- used \
+ used \
to be \--> #A --> #2' --> #3' --> #B --> #C
*upstream head
$ git fetch upstream
This leaves the updated upstream head in .git/FETCH_HEAD but
-does not touch your .git/HEAD nor .git/refs/heads/master.
+does not touch your .git/HEAD nor .git/refs/heads/master.
You run "git rebase" now.
$ git rebase FETCH_HEAD master
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
-
-
- This is still crude and does not protect against simultaneous
make invocations stomping on each other. I would need to add
some locking mechanism for this.
-
nor tag anymore, so remove them:
------------------------------------------------
-$ rm -f .git/refs/tags/pu-anchor
+$ rm -f .git/refs/tags/pu-anchor
$ git branch -d revert-c99
------------------------------------------------
"master"
o---o
- \ "topic"
+ \ "topic"
o---o---o---o---o---o
At this point, "topic" contains something I know I want, but it
$ git checkout -b topicA master
... pick and apply pieces from P.diff to build
... commits on topicA branch.
-
+
o---o---o
/ "topicA"
o---o"master"
- \ "topic"
+ \ "topic"
o---o---o---o---o---o
Before doing each commit on "topicA" HEAD, I run "diff HEAD"
/o---o---o
|/ "topicA"
o---o"master"
- \ "topic"
+ \ "topic"
o---o---o---o---o---o
After I am done, I'd try a pretend-merge between "topicA" and
/o---o---o----------'
|/ "topicA"
o---o"master"
- \ "topic"
+ \ "topic"
o---o---o---o---o---o
The last diff better not to show anything other than cleanups
"topicB"
o---o---o---o---o
- /
+ /
/o---o---o
|/ "topicA"
o---o"master"
-
$ git ls-remote git://127.0.0.1/rule-the-world.git
If this does not work, find out why, and submit a patch to this document.
-
If there is no `-s` option, a built-in list of strategies
is used instead (`git-merge-recursive` when merging a single
head, `git-merge-octopus` otherwise).
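For example, picking a strategy explicitly, shown with the explicit '<msg> HEAD <head>...' calling convention (a sketch; branch names are placeholders):
------------
$ git merge -s recursive 'Merge topic' HEAD topic
$ git merge -s octopus 'Merge several topics' HEAD topicA topicB
------------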
-
- '%Creset': reset color
- '%m': left, right or boundary mark
- '%n': newline
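For instance (a sketch):
------------
$ git log -1 --pretty=format:'%Cred%h%Creset %s%n'
------------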
-
command to re-code the commit log message in the encoding
preferred by the user. For non plumbing commands this
defaults to UTF-8.
-
+
Some short-cut notations are also supported.
+
-* `tag <tag>` means the same as `refs/tags/<tag>:refs/tags/<tag>`;
+* `tag <tag>` means the same as `refs/tags/<tag>:refs/tags/<tag>`;
it requests fetching everything up to the given tag.
* A parameter <ref> without a colon is equivalent to
<ref>: when pulling/fetching, so it merges <ref> into the current
This is similar to `info/grafts` but is internally used
and maintained by shallow clone mechanism. See `--depth`
option to gitlink:git-clone[1] and gitlink:git-fetch[1].
-
+--------------------------------+ |
main | offset | |
index | object name 00XXXXXXXXXXXXXXXX | |
-table +--------------------------------+ |
+table +--------------------------------+ |
| offset | |
| object name 00XXXXXXXXXXXXXXXX | |
+--------------------------------+ |
| +--------------------------------+
| | idxfile checksum |
| +--------------------------------+
- .-------.
+ .-------.
|
Pack file entry: <+
packed object header:
1-byte size extension bit (MSB)
type (next 3 bit)
- size0 (lower 4-bit)
+ size0 (lower 4-bit)
n-byte sizeN (as long as MSB is set, each 7-bit)
size0..sizeN form 4+7+7+..+7 bit integer, size0
is the least significant part, and sizeN is the
is the size before compression).
If it is DELTA, then
20-byte base object name SHA1 (the size above is the
- size of the delta data that follows).
+ size of the delta data that follows).
delta data, deflated.
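One way to see these entries for an existing pack, including each object's offset, is (a sketch):
------------
$ git-verify-pack -v .git/objects/pack/pack-*.idx
------------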
Date: Sat Dec 2 22:22:25 2006 -0800
[XFRM]: Fix aevent structuring to be more complete.
-
+
aevents can not uniquely identify an SA. We break the ABI with this
patch, but consensus is that since it is not yet utilized by any
(known) application then it is fine (better do it now than later).
-
+
Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
--- a/Documentation/networking/xfrm_sync.txt
+++ b/Documentation/networking/xfrm_sync.txt
@@ -47,10 +47,13 @@ aevent_id structure looks like:
-
+
struct xfrm_aevent_id {
struct xfrm_usersa_id sa_id;
+ xfrm_address_t saddr;
-------------------------------------------------
As a special shortcut,
-
+
-------------------------------------------------
$ git commit -a
-------------------------------------------------
Fortunately, git also keeps a log, called a "reflog", of all the
previous values of each branch. So in this case you can still find the
-old history using, for example,
+old history using, for example,
-------------------------------------------------
$ git log master@{1}
reference pointing to it, for example, a new branch:
------------------------------------------------
-$ git branch recovered-branch 7281251ddd
+$ git branch recovered-branch 7281251ddd
------------------------------------------------
Other types of dangling objects (blobs and trees) are also possible, and
you push
your personal repo ------------------> your public repo
- ^ |
+ ^ |
| |
| you pull | they pull
| |
\ \
a--b--c--m <-- mywork
................................................
-
+
However, if you prefer to keep the history in mywork a simple series of
commits without any merges, you may instead choose to use
gitlink:git-rebase[1]:
root objects together into one project by creating a commit object which
has two or more separate roots as its ultimate parents, that's probably
just going to confuse people. So aim for the notion of "one root object
-per project", even if git itself does not enforce that.
+per project", even if git itself does not enforce that.
A <<def_tag_object,"tag" object>> symbolically identifies and can be
used to sign other objects. It contains the identifier and type of
be validated by verifying that (a) their hashes match the content of the
file and (b) the object successfully inflates to a stream of bytes that
forms a sequence of <ascii type without space> + <space> + <ascii decimal
size> + <byte\0> + <binary object data>.
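To see that header construction at work, one can rebuild a blob's object name by hand; both commands below should print the same ID (a sketch; 'hello' plus the newline is 6 bytes):
------------
$ echo 'hello' >greeting
$ git-hash-object greeting
$ printf 'blob 6\0hello\n' | sha1sum
------------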
The structured objects can further have their structure and
connectivity to other objects verified. This is generally done with
known tree object, or update/compare it with a live tree that is being
developed. If you blow the directory cache away entirely, you generally
haven't lost any information as long as you have the name of the tree
-that it described.
+that it described.
At the same time, the index is at the same time also the
staging area for creating new trees, and creating a new tree always
work *purely* on the index file (showing the current state of the
index), but most operations move data to and from the index file. Either
from the database or from the working directory. Thus there are four
-main combinations:
+main combinations:
[[working-directory-to-index]]
working directory -> index
leaving _some_ of the new objects in the object database, but just
dangling and useless.
-Anyway, once you are sure that you're not interested in any dangling
+Anyway, once you are sure that you're not interested in any dangling
state, you can just prune all unreachable objects:
------------------------------------------------
repository - it's kind of like doing a filesystem fsck recovery: you
don't want to do that while the filesystem is mounted.
(The same is true of "git-fsck" itself, btw - but since
git-fsck never actually *changes* the repository, it just reports
on what it found, git-fsck itself is never "dangerous" to run.
Running it while somebody is actually changing the repository can cause
confusing and scary messages, but it won't actually do anything bad. In
contrast, running "git prune" while somebody is actively changing the
repository is a *BAD* idea).
[[birdview-on-the-source-code]]
echo >&2 "GIT_VERSION = $VN"
echo "GIT_VERSION = $VN" >$GVF
}
-
-
interactive tools. None of the core git stuff needs the wrapper,
it's just a convenient shorthand and while it is documented in some
places, you can always replace "git commit" with "git-commit"
- instead.
+ instead.
But let's face it, most of us don't have GNU interactive tools, and
even if we had it, we wouldn't know what it does. I don't think it
would instead give you a copy of what you see at:
http://www.kernel.org/pub/software/scm/git/docs/
-
git-convert-objects$X git-fetch-pack$X \
git-hash-object$X git-index-pack$X git-local-fetch$X \
git-fast-import$X \
- git-merge-base$X \
git-daemon$X \
git-merge-index$X git-mktag$X git-mktree$X git-patch-id$X \
git-peek-remote$X git-receive-pack$X \
clean:
rm -f *.o mozilla-sha1/*.o arm/*.o ppc/*.o compat/*.o xdiff/*.o \
- test-chmtime$X test-genrandom$X $(LIB_FILE) $(XDIFF_LIB)
+ $(LIB_FILE) $(XDIFF_LIB)
rm -f $(ALL_PROGRAMS) $(BUILT_INS) git$X
+ rm -f $(TEST_PROGRAMS)
rm -f *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags
rm -rf autom4te.cache
rm -f configure config.log config.mak.autogen config.mak.append config.status config.cache
void SHA1_Final(unsigned char *hash, SHA_CTX *c)
{
uint64_t bitlen;
- uint32_t bitlen_hi, bitlen_lo;
+ uint32_t bitlen_hi, bitlen_lo;
unsigned int i, offset, padlen;
unsigned char bits[8];
static const unsigned char padding[64] = { 0x80, };
bits[5] = bitlen_lo >> 16;
bits[6] = bitlen_lo >> 8;
bits[7] = bitlen_lo;
- SHA1_Update(c, bits, 8);
+ SHA1_Update(c, bits, 8);
for (i = 0; i < 5; i++) {
uint32_t v = c->hash[i];
.L_sha_K:
.word 0x5a827999, 0x6ed9eba1, 0x8f1bbcdc, 0xca62c1d6
-
return cmd_blame(argc + 1, nargv, prefix);
}
-
die("bad config variable '%s'", var);
}
-int git_branch_config(const char *var, const char *value)
+static int git_branch_config(const char *var, const char *value)
{
if (!strcmp(var, "color.branch")) {
branch_use_color = git_config_colorbool(var, value);
return git_default_config(var, value);
}
-const char *branch_get_color(enum color_branch ix)
+static const char *branch_get_color(enum color_branch ix)
{
if (branch_use_color)
return branch_colors[ix];
static char *config_repo;
static char *config_remote;
static const char *start_ref;
-static int start_len;
-static int base_len;
static int get_remote_branch_name(const char *value)
{
end = value + strlen(value);
- /* Try an exact match first. */
+ /*
+ * Try an exact match first. I.e. handle the case where the
+ * value is "$anything:refs/foo/bar/baz" and start_ref is exactly
+ * "refs/foo/bar/baz". Then the name at the remote is $anything.
+ */
if (!strcmp(colon + 1, start_ref)) {
- /* Truncate the value before the colon. */
+ /* Truncate the value before the colon. */
nfasprintf(&config_repo, "%.*s", colon - value, value);
return 1;
}
- /* Try with a wildcard match now. */
- if (end - value > 2 && end[-2] == '/' && end[-1] == '*' &&
- colon - value > 2 && colon[-2] == '/' && colon[-1] == '*' &&
- (end - 2) - (colon + 1) == base_len &&
- !strncmp(colon + 1, start_ref, base_len)) {
- /* Replace the star with the remote branch name. */
- nfasprintf(&config_repo, "%.*s%s",
- (colon - 2) - value, value,
- start_ref + base_len);
- return 1;
- }
+ /*
+ * Is this a wildcard match?
+ */
+ if ((end - 2 <= value) || end[-2] != '/' || end[-1] != '*' ||
+ (colon - 2 <= value) || colon[-2] != '/' || colon[-1] != '*')
+ return 0;
- return 0;
+ /*
+ * Value is "refs/foo/bar/<asterisk>:refs/baz/boa/<asterisk>"
+ * and start_ref begins with "refs/baz/boa/"; the name at the
+ * remote is refs/foo/bar/ with the remaining part of the
+ * start_ref. The length of the prefix on the RHS is (end -
+ * colon - 2), including the slash immediately before the
+ * asterisk.
+ */
+ if ((strlen(start_ref) < end - colon - 2) ||
+ memcmp(start_ref, colon + 1, end - colon - 2))
+ return 0; /* does not match prefix */
+
+ /* Replace the asterisk with the remote branch name. */
+ nfasprintf(&config_repo, "%.*s%s",
+ (colon - 1) - value, value,
+ start_ref + (end - colon - 2));
+ return 1;
}
static int get_remote_config(const char *key, const char *value)
return 0;
var = strrchr(key, '.');
- if (var == key + 6)
+ if (var == key + 6 || strcmp(var, ".fetch"))
return 0;
-
- if (!strcmp(var, ".fetch") && get_remote_branch_name(value))
+ /*
+ * Ok, we are looking at key == "remote.$foo.fetch";
+ */
+ if (get_remote_branch_name(value))
nfasprintf(&config_remote, "%.*s", var - (key + 7), key + 7);
return 0;
static void set_branch_defaults(const char *name, const char *real_ref)
{
- const char *slash = strrchr(real_ref, '/');
-
- if (!slash)
- return;
-
+ /*
+ * name is the name of new branch under refs/heads;
+ * real_ref is typically refs/remotes/$foo/$bar, where
+ * $foo is the remote name (there typically are no slashes)
+ * and $bar is the branch name we map from the remote
+ * (it could have slashes).
+ */
start_ref = real_ref;
- start_len = strlen(real_ref);
- base_len = slash - real_ref;
git_config(get_remote_config);
if (!config_repo && !config_remote &&
!prefixcmp(real_ref, "refs/heads/")) {
argc = setup_revisions(argc, argv, &rev, NULL);
for (i = 1; i < argc; i++) {
const char *arg = argv[i];
-
+
if (!strcmp(arg, "--cached"))
cached = 1;
else
if (!commit->parents && show_root)
printf("root %s\n", sha1_to_hex(commit->object.sha1));
if (!commit->date)
- printf("bad commit date in %s\n",
+ printf("bad commit date in %s\n",
sha1_to_hex(commit->object.sha1));
return 0;
}
heads = 0;
for (i = 1; i < argc; i++) {
- const char *arg = argv[i];
+ const char *arg = argv[i];
if (*arg == '-')
continue;
rev->always_show_header = 0;
for (i = 1; i < argc; i++) {
const char *arg = argv[i];
- if (!prefixcmp(arg, "--encoding=")) {
- arg += 11;
- if (strcmp(arg, "none"))
- git_log_output_encoding = xstrdup(arg);
- else
- git_log_output_encoding = "";
- } else if (!strcmp(arg, "--decorate")) {
+ if (!strcmp(arg, "--decorate")) {
if (!decorate)
for_each_ref(add_ref_decoration, NULL);
decorate = 1;
static FILE *realstdout = NULL;
static const char *output_directory = NULL;
-static int reopen_stdout(struct commit *commit, int nr, int keep_subject)
+static int reopen_stdout(struct commit *commit, int nr, int keep_subject,
+ int numbered_files)
{
char filename[PATH_MAX];
char *sol;
filename[len++] = '/';
}
- sprintf(filename + len, "%04d", nr);
- len = strlen(filename);
-
- sol = strstr(commit->buffer, "\n\n");
- if (sol) {
- int j, space = 1;
-
- sol += 2;
- /* strip [PATCH] or [PATCH blabla] */
- if (!keep_subject && !prefixcmp(sol, "[PATCH")) {
- char *eos = strchr(sol + 6, ']');
- if (eos) {
- while (isspace(*eos))
- eos++;
- sol = eos;
- }
- }
+ if (numbered_files) {
+ sprintf(filename + len, "%d", nr);
+ len = strlen(filename);
+
+ } else {
+ sprintf(filename + len, "%04d", nr);
+ len = strlen(filename);
- for (j = 0;
- j < FORMAT_PATCH_NAME_MAX - suffix_len - 5 &&
- len < sizeof(filename) - suffix_len &&
- sol[j] && sol[j] != '\n';
- j++) {
- if (istitlechar(sol[j])) {
- if (space) {
- filename[len++] = '-';
- space = 0;
+ sol = strstr(commit->buffer, "\n\n");
+ if (sol) {
+ int j, space = 1;
+
+ sol += 2;
+ /* strip [PATCH] or [PATCH blabla] */
+ if (!keep_subject && !prefixcmp(sol, "[PATCH")) {
+ char *eos = strchr(sol + 6, ']');
+ if (eos) {
+ while (isspace(*eos))
+ eos++;
+ sol = eos;
}
- filename[len++] = sol[j];
- if (sol[j] == '.')
- while (sol[j + 1] == '.')
- j++;
- } else
- space = 1;
+ }
+
+ for (j = 0;
+ j < FORMAT_PATCH_NAME_MAX - suffix_len - 5 &&
+ len < sizeof(filename) - suffix_len &&
+ sol[j] && sol[j] != '\n';
+ j++) {
+ if (istitlechar(sol[j])) {
+ if (space) {
+ filename[len++] = '-';
+ space = 0;
+ }
+ filename[len++] = sol[j];
+ if (sol[j] == '.')
+ while (sol[j + 1] == '.')
+ j++;
+ } else
+ space = 1;
+ }
+ while (filename[len - 1] == '.'
+ || filename[len - 1] == '-')
+ len--;
+ filename[len] = 0;
}
- while (filename[len - 1] == '.' || filename[len - 1] == '-')
- len--;
- filename[len] = 0;
+ if (len + suffix_len >= sizeof(filename))
+ return error("Patch pathname too long");
+ strcpy(filename + len, fmt_patch_suffix);
}
- if (len + suffix_len >= sizeof(filename))
- return error("Patch pathname too long");
- strcpy(filename + len, fmt_patch_suffix);
+
fprintf(realstdout, "%s\n", filename);
if (freopen(filename, "w", stdout) == NULL)
return error("Cannot open patch file %s",filename);
- return 0;
+ return 0;
}
static void get_patch_ids(struct rev_info *rev, struct patch_ids *ids, const char *prefix)
int numbered = 0;
int start_number = -1;
int keep_subject = 0;
+ int numbered_files = 0; /* _just_ numbers */
int subject_prefix = 0;
int ignore_if_in_upstream = 0;
int thread = 0;
numbered = 1;
else if (!prefixcmp(argv[i], "--start-number="))
start_number = strtol(argv[i] + 15, NULL, 10);
+ else if (!strcmp(argv[i], "--numbered-files"))
+ numbered_files = 1;
else if (!strcmp(argv[i], "--start-number")) {
i++;
if (i == argc)
die ("-n and -k are mutually exclusive.");
if (keep_subject && subject_prefix)
die ("--subject-prefix and -k are mutually exclusive.");
+ if (numbered_files && use_stdout)
+ die ("--numbered-files and --stdout are mutually exclusive.");
argc = setup_revisions(argc, argv, &rev, "HEAD");
if (argc > 1)
rev.message_id = message_id;
}
if (!use_stdout)
- if (reopen_stdout(commit, rev.nr, keep_subject))
+ if (reopen_stdout(commit, rev.nr, keep_subject,
+ numbered_files))
die("Failed to create output files");
shown = log_tree_commit(&rev, commit);
free(commit->buffer);
if (0 <= pos)
continue; /* exact match */
pos = -pos - 1;
- if (pos < active_nr) {
+ if (pos < active_nr) {
ce = active_cache[pos];
if (ce_namelen(ce) == len &&
!memcmp(ce->name, ent->name, len))
fprintf(fout, "\n");
}
-int mailinfo(FILE *in, FILE *out, int ks, const char *encoding,
- const char *msg, const char *patch)
+static int mailinfo(FILE *in, FILE *out, int ks, const char *encoding,
+ const char *msg, const char *patch)
{
keep_subject = ks;
metainfo_charset = encoding;
return ret;
}
-int split_mbox(const char *file, const char *dir, int allow_bare,
- int nr_prec, int skip)
+static int split_mbox(const char *file, const char *dir, int allow_bare,
+ int nr_prec, int skip)
{
char name[PATH_MAX];
int ret = -1;
[--stdout | base-name] [<ref-list | <object-list]";
struct object_entry {
- unsigned char sha1[20];
- uint32_t crc32; /* crc of raw pack data for this object */
- off_t offset; /* offset into the final pack file */
+ struct pack_idx_entry idx;
unsigned long size; /* uncompressed size */
+
unsigned int hash; /* name hint hash */
unsigned int depth; /* delta depth */
struct packed_git *in_pack; /* already in pack */
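
The hunk above is the heart of this refactoring: the three fields the pack index writer needs (sha1, crc32, offset) move into a shared `struct pack_idx_entry`, embedded as the first member `idx`, and the long run of mechanical `entry->sha1` to `entry->idx.sha1` (and offset/crc32) changes below follows from it. A minimal sketch of the resulting shape; the layout is taken from the hunks, everything else is an assumption.

----------------
#include <stdint.h>
#include <sys/types.h>	/* off_t */

/* Sketch: everything the shared index writer needs to know about one object. */
struct pack_idx_entry {
	unsigned char sha1[20];
	uint32_t crc32;		/* crc of raw pack data for this object */
	off_t offset;		/* offset into the final pack file */
};

/* pack-objects' private bookkeeping embeds it as the first member, so a
 * caller can simply hand &entry->idx to the common index-writing code. */
struct object_entry {
	struct pack_idx_entry idx;
	unsigned long size;	/* uncompressed size */
	/* ... delta and packing state elided ... */
};
----------------
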
static const char *pack_tmp_name, *idx_tmp_name;
static char tmpname[PATH_MAX];
static const char *base_name;
-static unsigned char pack_file_sha1[20];
static int progress = 1;
static int window = 10;
static uint32_t pack_size_limit;
{
unsigned long othersize, delta_size;
enum object_type type;
- void *otherbuf = read_sha1_file(entry->delta->sha1, &type, &othersize);
+ void *otherbuf = read_sha1_file(entry->delta->idx.sha1, &type, &othersize);
void *delta_buf;
if (!otherbuf)
- die("unable to read %s", sha1_to_hex(entry->delta->sha1));
+ die("unable to read %s", sha1_to_hex(entry->delta->idx.sha1));
delta_buf = diff_delta(otherbuf, othersize,
buf, size, &delta_size, 0);
if (!delta_buf || delta_size != entry->delta_size)
- die("delta size changed");
+ die("delta size changed");
free(buf);
free(otherbuf);
return delta_buf;
/* yes if unlimited packfile */
!pack_size_limit ? 1 :
/* no if base written to previous pack */
- entry->delta->offset == (off_t)-1 ? 0 :
+ entry->delta->idx.offset == (off_t)-1 ? 0 :
/* otherwise double-check written to this
* pack, like we do below
*/
- entry->delta->offset ? 1 : 0;
+ entry->delta->idx.offset ? 1 : 0;
if (!pack_to_stdout)
crc32_begin(f);
unsigned long maxsize;
void *out;
if (!usable_delta) {
- buf = read_sha1_file(entry->sha1, &obj_type, &size);
+ buf = read_sha1_file(entry->idx.sha1, &obj_type, &size);
if (!buf)
- die("unable to read %s", sha1_to_hex(entry->sha1));
+ die("unable to read %s", sha1_to_hex(entry->idx.sha1));
} else if (entry->delta_data) {
size = entry->delta_size;
buf = entry->delta_data;
entry->delta_data = NULL;
- obj_type = (allow_ofs_delta && entry->delta->offset) ?
+ obj_type = (allow_ofs_delta && entry->delta->idx.offset) ?
OBJ_OFS_DELTA : OBJ_REF_DELTA;
} else {
- buf = read_sha1_file(entry->sha1, &type, &size);
+ buf = read_sha1_file(entry->idx.sha1, &type, &size);
if (!buf)
- die("unable to read %s", sha1_to_hex(entry->sha1));
+ die("unable to read %s", sha1_to_hex(entry->idx.sha1));
buf = delta_against(buf, size, entry);
size = entry->delta_size;
- obj_type = (allow_ofs_delta && entry->delta->offset) ?
+ obj_type = (allow_ofs_delta && entry->delta->idx.offset) ?
OBJ_OFS_DELTA : OBJ_REF_DELTA;
}
/* compress the data to store and put compressed length in datalen */
* encoding of the relative offset for the delta
* base from this object's position in the pack.
*/
- off_t ofs = entry->offset - entry->delta->offset;
+ off_t ofs = entry->idx.offset - entry->delta->idx.offset;
unsigned pos = sizeof(dheader) - 1;
dheader[pos] = ofs & 127;
while (ofs >>= 7)
return 0;
}
sha1write(f, header, hdrlen);
- sha1write(f, entry->delta->sha1, 20);
+ sha1write(f, entry->delta->idx.sha1, 20);
hdrlen += 20;
} else {
if (limit && hdrlen + datalen + 20 >= limit) {
off_t offset;
if (entry->delta) {
- obj_type = (allow_ofs_delta && entry->delta->offset) ?
+ obj_type = (allow_ofs_delta && entry->delta->idx.offset) ?
OBJ_OFS_DELTA : OBJ_REF_DELTA;
reused_delta++;
}
datalen = revidx[1].offset - offset;
if (!pack_to_stdout && p->index_version > 1 &&
check_pack_crc(p, &w_curs, offset, datalen, revidx->nr))
- die("bad packed object CRC for %s", sha1_to_hex(entry->sha1));
+ die("bad packed object CRC for %s", sha1_to_hex(entry->idx.sha1));
offset += entry->in_pack_header_size;
datalen -= entry->in_pack_header_size;
if (obj_type == OBJ_OFS_DELTA) {
- off_t ofs = entry->offset - entry->delta->offset;
+ off_t ofs = entry->idx.offset - entry->delta->idx.offset;
unsigned pos = sizeof(dheader) - 1;
dheader[pos] = ofs & 127;
while (ofs >>= 7)
if (limit && hdrlen + 20 + datalen + 20 >= limit)
return 0;
sha1write(f, header, hdrlen);
- sha1write(f, entry->delta->sha1, 20);
+ sha1write(f, entry->delta->idx.sha1, 20);
hdrlen += 20;
} else {
if (limit && hdrlen + datalen + 20 >= limit)
if (!pack_to_stdout && p->index_version == 1 &&
check_pack_inflate(p, &w_curs, offset, datalen, entry->size))
- die("corrupt packed object for %s", sha1_to_hex(entry->sha1));
+ die("corrupt packed object for %s", sha1_to_hex(entry->idx.sha1));
copy_pack_data(f, p, &w_curs, offset, datalen);
unuse_pack(&w_curs);
reused++;
written_delta++;
written++;
if (!pack_to_stdout)
- entry->crc32 = crc32_end(f);
+ entry->idx.crc32 = crc32_end(f);
return hdrlen + datalen;
}
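
The `dheader` loops above write an OBJ_OFS_DELTA base as a distance back from the current object, in the pack format's variable-length offset encoding. The loop body is cut short in this excerpt, so the sketch below fills it in as I understand the encoding; the continuation step (the `--ofs` adjustment) is not visible above and should be treated as an assumption.

----------------
#include <stdio.h>
#include <string.h>
#include <sys/types.h>	/* off_t */

/*
 * Sketch of the OFS_DELTA base-offset encoding: seven bits per byte,
 * most significant group first, continuation bit (0x80) on every byte
 * but the last, with an implicit +1 folded into each continuation step
 * so that no distance has two encodings.
 */
static unsigned encode_ofs(off_t ofs, unsigned char *out)
{
	unsigned char dheader[10];
	unsigned pos = sizeof(dheader) - 1;
	unsigned n;

	dheader[pos] = ofs & 127;
	while (ofs >>= 7)
		dheader[--pos] = 128 | (--ofs & 127);
	n = sizeof(dheader) - pos;
	memcpy(out, dheader + pos, n);
	return n;
}

int main(void)
{
	unsigned char buf[10];
	unsigned i, n = encode_ofs(1000, buf);

	for (i = 0; i < n; i++)
		printf("%02x ", buf[i]);	/* "86 68": two bytes for 1000 */
	printf("\n");
	return 0;
}
----------------
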
unsigned long size;
/* offset is non zero if object is written already. */
- if (e->offset || e->preferred_base)
+ if (e->idx.offset || e->preferred_base)
return offset;
/* if we are deltified, write out base object first. */
return 0;
}
- e->offset = offset;
+ e->idx.offset = offset;
size = write_object(f, e, offset);
if (!size) {
- e->offset = 0;
+ e->idx.offset = 0;
return 0;
}
written_list[nr_written++] = e;
return mkstemp(tmpname);
}
-/* forward declarations for write_pack_file */
-static void write_index_file(off_t last_obj_offset, unsigned char *sha1);
+/* forward declaration for write_pack_file */
static int adjust_perm(const char *path, mode_t mode);
static void write_pack_file(void)
written_list = xmalloc(nr_objects * sizeof(struct object_entry *));
do {
+ unsigned char sha1[20];
+
if (pack_to_stdout) {
f = sha1fd(1, "<stdout>");
} else {
* If so, rewrite it like in fast-import
*/
if (pack_to_stdout || nr_written == nr_remaining) {
- sha1close(f, pack_file_sha1, 1);
+ sha1close(f, sha1, 1);
} else {
- sha1close(f, pack_file_sha1, 0);
- fixup_pack_header_footer(f->fd, pack_file_sha1, pack_tmp_name, nr_written);
+ sha1close(f, sha1, 0);
+ fixup_pack_header_footer(f->fd, sha1, pack_tmp_name, nr_written);
close(f->fd);
}
if (!pack_to_stdout) {
- unsigned char object_list_sha1[20];
mode_t mode = umask(0);
umask(mode);
mode = 0444 & ~mode;
- write_index_file(last_obj_offset, object_list_sha1);
+ idx_tmp_name = write_idx_file(NULL,
+ (struct pack_idx_entry **) written_list, nr_written, sha1);
snprintf(tmpname, sizeof(tmpname), "%s-%s.pack",
- base_name, sha1_to_hex(object_list_sha1));
+ base_name, sha1_to_hex(sha1));
if (adjust_perm(pack_tmp_name, mode))
die("unable to make temporary pack file readable: %s",
strerror(errno));
die("unable to rename temporary pack file: %s",
strerror(errno));
snprintf(tmpname, sizeof(tmpname), "%s-%s.idx",
- base_name, sha1_to_hex(object_list_sha1));
+ base_name, sha1_to_hex(sha1));
if (adjust_perm(idx_tmp_name, mode))
die("unable to make temporary index file readable: %s",
strerror(errno));
if (rename(idx_tmp_name, tmpname))
die("unable to rename temporary index file: %s",
strerror(errno));
- puts(sha1_to_hex(object_list_sha1));
+ puts(sha1_to_hex(sha1));
}
/* mark written objects as written to previous pack */
for (j = 0; j < nr_written; j++) {
- written_list[j]->offset = (off_t)-1;
+ written_list[j]->idx.offset = (off_t)-1;
}
nr_remaining -= nr_written;
} while (nr_remaining && i < nr_objects);
*/
for (j = 0; i < nr_objects; i++) {
struct object_entry *e = objects + i;
- j += !e->offset && !e->preferred_base;
+ j += !e->idx.offset && !e->preferred_base;
}
if (j)
die("wrote %u objects as expected but %u unwritten", written, j);
}
-static int sha1_sort(const void *_a, const void *_b)
-{
- const struct object_entry *a = *(struct object_entry **)_a;
- const struct object_entry *b = *(struct object_entry **)_b;
- return hashcmp(a->sha1, b->sha1);
-}
-
-static uint32_t index_default_version = 1;
-static uint32_t index_off32_limit = 0x7fffffff;
-
-static void write_index_file(off_t last_obj_offset, unsigned char *sha1)
-{
- struct sha1file *f;
- struct object_entry **sorted_by_sha, **list, **last;
- uint32_t array[256];
- uint32_t i, index_version;
- SHA_CTX ctx;
-
- int fd = open_object_dir_tmp("tmp_idx_XXXXXX");
- if (fd < 0)
- die("unable to create %s: %s\n", tmpname, strerror(errno));
- idx_tmp_name = xstrdup(tmpname);
- f = sha1fd(fd, idx_tmp_name);
-
- if (nr_written) {
- sorted_by_sha = written_list;
- qsort(sorted_by_sha, nr_written, sizeof(*sorted_by_sha), sha1_sort);
- list = sorted_by_sha;
- last = sorted_by_sha + nr_written;
- } else
- sorted_by_sha = list = last = NULL;
-
- /* if last object's offset is >= 2^31 we should use index V2 */
- index_version = (last_obj_offset >> 31) ? 2 : index_default_version;
-
- /* index versions 2 and above need a header */
- if (index_version >= 2) {
- struct pack_idx_header hdr;
- hdr.idx_signature = htonl(PACK_IDX_SIGNATURE);
- hdr.idx_version = htonl(index_version);
- sha1write(f, &hdr, sizeof(hdr));
- }
-
- /*
- * Write the first-level table (the list is sorted,
- * but we use a 256-entry lookup to be able to avoid
- * having to do eight extra binary search iterations).
- */
- for (i = 0; i < 256; i++) {
- struct object_entry **next = list;
- while (next < last) {
- struct object_entry *entry = *next;
- if (entry->sha1[0] != i)
- break;
- next++;
- }
- array[i] = htonl(next - sorted_by_sha);
- list = next;
- }
- sha1write(f, array, 256 * 4);
-
- /* Compute the SHA1 hash of sorted object names. */
- SHA1_Init(&ctx);
-
- /* Write the actual SHA1 entries. */
- list = sorted_by_sha;
- for (i = 0; i < nr_written; i++) {
- struct object_entry *entry = *list++;
- if (index_version < 2) {
- uint32_t offset = htonl(entry->offset);
- sha1write(f, &offset, 4);
- }
- sha1write(f, entry->sha1, 20);
- SHA1_Update(&ctx, entry->sha1, 20);
- }
-
- if (index_version >= 2) {
- unsigned int nr_large_offset = 0;
-
- /* write the crc32 table */
- list = sorted_by_sha;
- for (i = 0; i < nr_written; i++) {
- struct object_entry *entry = *list++;
- uint32_t crc32_val = htonl(entry->crc32);
- sha1write(f, &crc32_val, 4);
- }
-
- /* write the 32-bit offset table */
- list = sorted_by_sha;
- for (i = 0; i < nr_written; i++) {
- struct object_entry *entry = *list++;
- uint32_t offset = (entry->offset <= index_off32_limit) ?
- entry->offset : (0x80000000 | nr_large_offset++);
- offset = htonl(offset);
- sha1write(f, &offset, 4);
- }
-
- /* write the large offset table */
- list = sorted_by_sha;
- while (nr_large_offset) {
- struct object_entry *entry = *list++;
- uint64_t offset = entry->offset;
- if (offset > index_off32_limit) {
- uint32_t split[2];
- split[0] = htonl(offset >> 32);
- split[1] = htonl(offset & 0xffffffff);
- sha1write(f, split, 8);
- nr_large_offset--;
- }
- }
- }
-
- sha1write(f, pack_file_sha1, 20);
- sha1close(f, NULL, 1);
- SHA1_Final(sha1, &ctx);
-}
-
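
The function removed above (and its twin removed from index-pack further down) is recreated once in pack-write.c as `write_idx_file()`. Its first-level table is worth a note: as the deleted comment says, the index stores a 256-entry cumulative count keyed on the leading byte, so a reader can binary-search one small bucket instead of the whole sorted list. A minimal, self-contained sketch of that fanout computation; the names are mine, not git's.

----------------
#include <stdint.h>
#include <arpa/inet.h>	/* htonl */

/*
 * Sketch: given object names already sorted by SHA-1, fill the 256-entry
 * fanout table.  fanout[b] holds, in network byte order, the number of
 * objects whose first byte is <= b; fanout[255] is therefore the total.
 */
static void build_fanout(unsigned char (*sorted)[20], uint32_t nr,
			 uint32_t fanout[256])
{
	uint32_t i = 0;
	unsigned b;

	for (b = 0; b < 256; b++) {
		while (i < nr && sorted[i][0] == b)
			i++;
		fanout[b] = htonl(i);
	}
}
----------------
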
static int locate_object_entry_hash(const unsigned char *sha1)
{
int i;
memcpy(&ui, sha1, sizeof(unsigned int));
i = ui % object_ix_hashsz;
while (0 < object_ix[i]) {
- if (!hashcmp(sha1, objects[object_ix[i] - 1].sha1))
+ if (!hashcmp(sha1, objects[object_ix[i] - 1].idx.sha1))
return i;
if (++i == object_ix_hashsz)
i = 0;
object_ix = xrealloc(object_ix, sizeof(int) * object_ix_hashsz);
memset(object_ix, 0, sizeof(int) * object_ix_hashsz);
for (i = 0, oe = objects; i < nr_objects; i++, oe++) {
- int ix = locate_object_entry_hash(oe->sha1);
+ int ix = locate_object_entry_hash(oe->idx.sha1);
if (0 <= ix)
continue;
ix = -1 - ix;
entry = objects + nr_objects++;
memset(entry, 0, sizeof(*entry));
- hashcpy(entry->sha1, sha1);
+ hashcpy(entry->idx.sha1, sha1);
entry->hash = hash;
if (type)
entry->type = type;
ofs += 1;
if (!ofs || MSB(ofs, 7))
die("delta base offset overflow in pack for %s",
- sha1_to_hex(entry->sha1));
+ sha1_to_hex(entry->idx.sha1));
c = buf[used_0++];
ofs = (ofs << 7) + (c & 127);
}
if (ofs >= entry->in_pack_offset)
die("delta base offset out of bound for %s",
- sha1_to_hex(entry->sha1));
+ sha1_to_hex(entry->idx.sha1));
ofs = entry->in_pack_offset - ofs;
if (!no_reuse_delta && !entry->preferred_base)
base_ref = find_packed_object_name(p, ofs);
unuse_pack(&w_curs);
}
- entry->type = sha1_object_info(entry->sha1, &entry->size);
+ entry->type = sha1_object_info(entry->idx.sha1, &entry->size);
if (entry->type < 0)
die("unable to get type of object %s",
- sha1_to_hex(entry->sha1));
+ sha1_to_hex(entry->idx.sha1));
}
static int pack_offset_sort(const void *_a, const void *_b)
/* avoid filesystem trashing with loose objects */
if (!a->in_pack && !b->in_pack)
- return hashcmp(a->sha1, b->sha1);
+ return hashcmp(a->idx.sha1, b->idx.sha1);
if (a->in_pack < b->in_pack)
return -1;
/* Load data if not already done */
if (!trg->data) {
- trg->data = read_sha1_file(trg_entry->sha1, &type, &sz);
+ trg->data = read_sha1_file(trg_entry->idx.sha1, &type, &sz);
if (sz != trg_size)
die("object %s inconsistent object length (%lu vs %lu)",
- sha1_to_hex(trg_entry->sha1), sz, trg_size);
+ sha1_to_hex(trg_entry->idx.sha1), sz, trg_size);
}
if (!src->data) {
- src->data = read_sha1_file(src_entry->sha1, &type, &sz);
+ src->data = read_sha1_file(src_entry->idx.sha1, &type, &sz);
if (sz != src_size)
die("object %s inconsistent object length (%lu vs %lu)",
- sha1_to_hex(src_entry->sha1), sz, src_size);
+ sha1_to_hex(src_entry->idx.sha1), sz, src_size);
}
if (!src->index) {
src->index = create_delta_index(src->data, src_size);
}
if (!prefixcmp(arg, "--index-version=")) {
char *c;
- index_default_version = strtoul(arg + 16, &c, 10);
- if (index_default_version > 2)
+ pack_idx_default_version = strtoul(arg + 16, &c, 10);
+ if (pack_idx_default_version > 2)
die("bad %s", arg);
if (*c == ',')
- index_off32_limit = strtoul(c+1, &c, 0);
- if (*c || index_off32_limit & 0x80000000)
+ pack_idx_off32_limit = strtoul(c+1, &c, 0);
+ if (*c || pack_idx_off32_limit & 0x80000000)
die("bad %s", arg);
continue;
}
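
With the index writer shared, both pack-objects (here) and index-pack (below) parse `--index-version=<version>[,<off32-limit>]` into the common `pack_idx_default_version` and `pack_idx_off32_limit` globals defined in pack-write.c. A small stand-alone illustration of how such an argument decomposes, mirroring the parsing above; the variable names are local to the example.

----------------
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *arg = "--index-version=2,0x7fffffff";
	char *c;
	uint32_t version = strtoul(arg + 16, &c, 10);	/* skip "--index-version=" -> 2 */
	uint32_t off32_limit = 0x7fffffff;		/* default: offsets < 2^31 stay 32-bit */

	if (*c == ',')
		off32_limit = strtoul(c + 1, &c, 0);	/* base 0 also accepts hex like 0x... */
	printf("index version %u, 32-bit offset limit %#x\n",
	       (unsigned)version, (unsigned)off32_limit);
	return 0;
}
----------------
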
path_list_clear(&merge_rr, 1);
return 0;
}
-
return result;
}
-char *get_encoding(const char *message)
+static char *get_encoding(const char *message)
{
const char *p = message, *eol;
return 1;
}
-void stripspace(FILE *in, FILE *out)
+static void stripspace(FILE *in, FILE *out)
{
int empties = -1;
int incomplete = 0;
extern const char git_usage_string[];
extern void help_unknown_cmd(const char *cmd);
-extern int mailinfo(FILE *in, FILE *out, int ks, const char *encoding, const char *msg, const char *patch);
-extern int split_mbox(const char *file, const char *dir, int allow_bare, int nr_prec, int skip);
-extern void stripspace(FILE *in, FILE *out);
extern int write_tree(unsigned char *sha1, int missing_ok, const char *prefix);
extern void prune_packed_objects(int);
extern void reprepare_packed_git(void);
extern void install_packed_git(struct packed_git *pack);
-extern struct packed_git *find_sha1_pack(const unsigned char *sha1,
+extern struct packed_git *find_sha1_pack(const unsigned char *sha1,
struct packed_git *packs);
extern void pack_report(void);
int register_commit_graft(struct commit_graft *graft, int ignore_dups)
{
int pos = commit_graft_pos(graft->sha1);
-
+
if (0 <= pos) {
if (ignore_dups)
free(graft);
return commit_list_insert(item, pp);
}
-
+
void sort_by_date(struct commit_list **list)
{
struct commit_list *ret = NULL;
return item;
}
-int count_parents(struct commit * commit)
-{
- int count;
- struct commit_list * parents = commit->parents;
- for (count = 0; parents; parents = parents->next,count++)
- ;
- return count;
-}
-
void topo_sort_default_setter(struct commit *c, void *data)
{
c->util = data;
next = next->next;
count++;
}
-
+
if (!count)
return;
/* allocate an array to help sort the list */
}
next=next->next;
}
- /*
+ /*
* find the tips
*
- * tips are nodes not reachable from any other node in the list
- *
+ * tips are nodes not reachable from any other node in the list
+ *
* the tips serve as a starting set for the work queue.
*/
next=*list;
if (pn) {
/*
- * parents are only enqueued for emission
+ * parents are only enqueued for emission
* when all their children have been emitted thereby
* guaranteeing topological order.
*/
/** Removes the first commit from a list sorted by date, and adds all
* of its parents.
**/
-struct commit *pop_most_recent_commit(struct commit_list **list,
+struct commit *pop_most_recent_commit(struct commit_list **list,
unsigned int mark);
struct commit *pop_commit(struct commit_list **stack);
void clear_commit_marks(struct commit *commit, unsigned int mark);
-int count_parents(struct commit * commit);
-
/*
* Performs an in-place topological sort of list supplied.
*
free(start);
return 0;
}
-
size_t equal_offset = size, bracket_offset = size;
ssize_t offset;
- for (offset = offset_-2; offset > 0
+ for (offset = offset_-2; offset > 0
&& contents[offset] != '\n'; offset--)
switch (contents[offset]) {
case '=': equal_offset = offset; break;
free(config_filename);
return ret;
}
-
NO_STRLCPY=@NO_STRLCPY@
NO_SETENV=@NO_SETENV@
NO_ICONV=@NO_ICONV@
-
}
if (0 <= matchlen) {
/* core.gitproxy = none for kernel.org */
- if (matchlen == 4 &&
+ if (matchlen == 4 &&
!memcmp(value, "none", 4))
matchlen = 0;
git_proxy_command = xmalloc(matchlen + 1);
Cc: git@vger.kernel.org
Date: Sat, 27 Jan 2007 18:52:38 -0500
Message-ID: <20070127235238.GA28706@coredump.intra.peff.net>
-
self.set_colour(ctx, colour, 0.0, 0.5)
ctx.show_text(name)
-class Commit:
+class Commit(object):
""" This represent a commit object obtained after parsing the git-rev-list
output """
+ __slots__ = ['children_sha1', 'message', 'author', 'date', 'committer',
+ 'commit_date', 'commit_sha1', 'parent_sha1']
+
children_sha1 = {}
def __init__(self, commit_lines):
fp.close()
return diff
-class AnnotateWindow:
+class AnnotateWindow(object):
"""Annotate window.
This object represents and manages a single window containing the
annotate information of the file
self.io_watch_tag = gobject.io_add_watch(fp, gobject.IO_IN, self.data_ready)
-class DiffWindow:
+class DiffWindow(object):
"""Diff window.
This object represents and manages a single window containing the
differences between two revisions on a branch.
fp.close()
dialog.destroy()
-class GitView:
+class GitView(object):
""" This is the main class
"""
version = "0.9"
view = GitView( without_diff != 1)
view.run(sys.argv[without_diff:])
-
-
hooks/post-receive
- --
+ --
$projectdesc
EOF
}
fmt++;
} while (*buf && *fmt);
printf("left: %s\n", buf);
- return mktime(&tm);
+ return mktime(&tm);
}
static int convert_date_line(char *dst, void **buf, unsigned long *sp)
close(ifd);
return 0;
}
-
AA, AA, AA, AA, AA, AA, AA, AA, AA, AA, AA, 0, 0, 0, 0, 0, /* 112-15 */
/* Nothing in the 128.. range */
};
-
{
int sl, ndot;
- /*
+ /*
* This resurrects the belts and suspenders paranoia check by HPA
* done in <435560F7.4080006@zytor.com> thread, now enter_repo()
* does not do getcwd() based path canonicalizations.
int pathlen = strlen(path);
/* The validation is done on the paths after enter_repo
- * appends optional {.git,.git/.git} and friends, but
+ * appends optional {.git,.git/.git} and friends, but
* it does not use getcwd(). So if your /pub is
* a symlink to /mnt/pub, you can whitelist /pub and
* do not have to say /mnt/pub.
}
}
-void fill_in_extra_table_entries(struct interp *itable)
+static void fill_in_extra_table_entries(struct interp *itable)
{
char *hp;
}
/*
- * We've seen a digit. Time? Year? Date?
+ * We've seen a digit. Time? Year? Date?
*/
static int match_digit(const char *date, struct tm *tm, int *offset, int *tm_gmt)
{
num = strtoul(date, &end, 10);
/*
- * Seconds since 1970? We trigger on that for anything after Jan 1, 2000
+ * Seconds since 1970? We trigger on that for any numbers with
+ * more than 8 digits. This is because we don't want to rule out
+ * numbers like 20070606 as a YYYYMMDD date.
*/
- if (num > 946684800) {
+ if (num >= 100000000) {
time_t time = num;
if (gmtime_r(&time, tm)) {
*tm_gmt = 1;
} else if (num > 0 && num < 13) {
tm->tm_mon = num-1;
}
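
The rewritten comment above explains the new cut-over: instead of triggering on a fixed timestamp (anything after 1 Jan 2000), any all-digit token of nine or more digits is now read as seconds since the epoch, which keeps eight-digit strings such as 20070606 available as YYYYMMDD dates. Roughly, as an illustration:

----------------
#include <stdio.h>

/* Illustration of the cut-over above: 8 digits may still be a YYYYMMDD
 * date, 9 or more digits are taken as seconds since 1970. */
int main(void)
{
	unsigned long candidates[] = { 20070606UL, 1177000000UL };
	unsigned i;

	for (i = 0; i < 2; i++)
		printf("%lu -> %s\n", candidates[i],
		       candidates[i] >= 100000000UL
		       ? "epoch seconds" : "maybe YYYYMMDD");
	return 0;
}
----------------
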
-
+
return n;
}
if (!match) {
/* BAD CRAP */
match = 1;
- }
+ }
date += match;
}
/* mktime uses local timezone */
- then = my_mktime(&tm);
+ then = my_mktime(&tm);
if (offset == -1)
offset = (then - mktime(&tm)) / 60;
{ "days", 24*60*60 },
{ "weeks", 7*24*60*60 },
{ NULL }
-};
+};
static const char *approxidate_alpha(const char *date, struct tm *tm, int *num)
{
const char *tree_name;
int match_missing = 0;
- /*
+ /*
* Backward compatibility wart - "diff-index -m" does
* not mean "do not ignore merges", but totally different.
*/
return 1;
}
+static int diff_scoreopt_parse(const char *opt);
+
int diff_opt_parse(struct diff_options *options, const char **av, int ac)
{
const char *arg = av[0];
return (int)((num >= scale) ? MAX_SCORE : (MAX_SCORE * num / scale));
}
-int diff_scoreopt_parse(const char *opt)
+static int diff_scoreopt_parse(const char *opt)
{
int opt1, opt2, cmd;
* entries to the diff-core. They will be prefixed
* with something like '=' or '*' (I haven't decided
* which but should not make any difference).
- * Feeding the same new and old to diff_change()
+ * Feeding the same new and old to diff_change()
* also has the same effect.
* Before the final output happens, they are pruned after
* merged into rename/copy pairs as appropriate.
unsigned old_mode, unsigned new_mode,
const unsigned char *old_sha1,
const unsigned char *new_sha1,
- const char *base, const char *path)
+ const char *base, const char *path)
{
char concatpath[PATH_MAX];
struct diff_filespec *one, *two;
unsigned mode,
const unsigned char *sha1);
-extern int diff_scoreopt_parse(const char *opt);
-
#define DIFF_SETUP_REVERSE 1
#define DIFF_SETUP_USE_CACHE 2
#define DIFF_SETUP_USE_SIZE_CACHE 4
for (i = 0; i < q->nr; i++)
diff_free_filepair(q->queue[i]);
}
- else
+ else
/* Showing only the filepairs that has the needle */
for (i = 0; i < q->nr; i++) {
struct diff_filepair *p = q->queue[i];
struct dirent *de;
char pathbuf[PATH_MAX];
char *name;
-
+
if (!dir)
die("cannot opendir %s (%s)", path, strerror(errno));
strcpy(pathbuf, path);
setup_git_env();
return git_graft_file;
}
-
-
commit->object.flags |= POPPED;
if (!(commit->object.flags & COMMON))
non_common_revs--;
-
+
parents = commit->parents;
if (commit->object.flags & COMMON) {
int get_recover = 0;
static unsigned char current_commit_sha1[20];
-void pull_say(const char *fmt, const char *hex)
+void pull_say(const char *fmt, const char *hex)
{
if (get_verbosely)
fprintf(stderr, fmt, hex);
return 0;
prefetch(obj->sha1);
}
-
+
object_list_insert(obj, process_queue_end);
process_queue_end = &(*process_queue_end)->next;
return 0;
# This tool is copyright (c) 2005, Martin Langhoff.
# It is released under the Gnu Public License, version 2.
#
-# The basic idea is to walk the output of tla abrowse,
-# fetch the changesets and apply them.
+# The basic idea is to walk the output of tla abrowse,
+# fetch the changesets and apply them.
#
=head1 Invocation
- git-archimport [ -h ] [ -v ] [ -o ] [ -a ] [ -f ] [ -T ]
- [ -D depth] [ -t tempdir ] <archive>/<branch> [ <archive>/<branch> ]
+ git-archimport [ -h ] [ -v ] [ -o ] [ -a ] [ -f ] [ -T ]
+ [ -D depth] [ -t tempdir ] <archive>/<branch> [ <archive>/<branch> ]
Imports a project from one or more Arch repositories. It will follow branches
and repositories within the namespaces defined by the <archive/branch>
parameters supplied. If it cannot find the remote branch a merge comes from
-it will just import it as a regular commit. If it can find it, it will mark it
+it will just import it as a regular commit. If it can find it, it will mark it
as a merge whenever possible.
See man (1) git-archimport for more details.
- create tag objects instead of ref tags
- audit shell-escaping of filenames
- hide our private tags somewhere smarter
- - find a way to make "cat *patches | patch" safe even when patchfiles are missing newlines
+ - find a way to make "cat *patches | patch" safe even when patchfiles are missing newlines
- sort and apply patches by graphing ancestry relations instead of just
relying in dates supplied in the changeset itself.
tla ancestry-graph -m could be helpful here...
=head1 Devel tricks
-Add print in front of the shell commands invoked via backticks.
+Add print in front of the shell commands invoked via backticks.
=head1 Devel Notes
my $stage = shift;
while (my ($limit, $level) = each %arch_branches) {
next unless $level == $stage;
-
- open ABROWSE, "$TLA abrowse -fkD --merges $limit |"
+
+ open ABROWSE, "$TLA abrowse -fkD --merges $limit |"
or die "Problems with tla abrowse: $!";
-
+
my %ps = (); # the current one
my $lastseen = '';
-
+
while (<ABROWSE>) {
chomp;
-
+
# first record padded w 8 spaces
if (s/^\s{8}\b//) {
my ($id, $type) = split(m/\s+/, $_, 2);
push (@psets, \%last_ps);
$psets{ $last_ps{id} } = \%last_ps;
}
-
+
my $branch = extract_versionname($id);
%ps = ( id => $id, branch => $branch );
if (%last_ps && ($last_ps{branch} eq $branch)) {
$ps{parent_id} = $last_ps{id};
}
-
+
$arch_branches{$branch} = 1;
$lastseen = 'id';
$ps{type} = 't';
# read which revision we've tagged when we parse the log
$ps{tag} = $1;
- } else {
+ } else {
warn "Unknown type $type";
}
$arch_branches{$branch} = 1;
$lastseen = 'id';
- } elsif (s/^\s{10}//) {
- # 10 leading spaces or more
+ } elsif (s/^\s{10}//) {
+ # 10 leading spaces or more
# indicate commit metadata
-
+
# date
if ($lastseen eq 'id' && m/^(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d)/){
$ps{date} = $1;
} elsif ($lastseen eq 'merges' && s/^\s{2}//) {
my $id = $_;
push (@{$ps{merges}}, $id);
-
+
# aggressive branch finding:
if ($opt_D) {
my $branch = extract_versionname($id);
my $repo = extract_reponame($branch);
-
+
if (archive_reachable($repo) &&
!defined $arch_branches{$branch}) {
$arch_branches{$branch} = $stage + 1;
if (@psets && $psets[$#psets]{branch} eq $ps{branch}) {
$temp{parent_id} = $psets[$#psets]{id};
}
- push (@psets, \%temp);
+ push (@psets, \%temp);
$psets{ $temp{id} } = \%temp;
- }
-
+ }
+
close ABROWSE or die "$TLA abrowse failed on $limit\n";
}
} # end foreach $root
while (my $file = readdir(DIR)) {
# skip non-interesting-files
next unless -f "$ptag_dir/$file";
-
+
# convert first '--' to '/' from old git-archimport to use
# as an archivename/c--b--v private tag
if ($file !~ m!,!) {
my $fq_cvbr = shift; # archivename/[[[[category]branch]version]revision]
return (split(/\//, $fq_cvbr))[0];
}
-
+
sub extract_versionname {
my $name = shift;
$name =~ s/--(?:patch|version(?:fix)?|base)-\d+$//;
}
# convert a fully-qualified revision or version to a unique dirname:
-# normalperson@yhbt.net-05/mpd--uclinux--1--patch-2
+# normalperson@yhbt.net-05/mpd--uclinux--1--patch-2
# becomes: normalperson@yhbt.net-05,mpd--uclinux--1
#
# the git notion of a branch is closer to
sub process_patchset_accurate {
my $ps = shift;
-
+
# switch to that branch if we're not already in that branch:
if (-e "$git_dir/refs/heads/$ps->{branch}") {
system('git-checkout','-f',$ps->{branch}) == 0 or die "$! $?\n";
my $rm = safe_pipe_capture('git-ls-files','--others','-z');
rmtree(split(/\0/,$rm)) if $rm;
}
-
+
# Apply the import/changeset/merge into the working tree
my $dir = sync_to_ps($ps);
# read the new log entry:
parselog($ps, \@commitlog);
if ($ps->{id} =~ /--base-0$/ && $ps->{id} ne $psets[0]{id}) {
- # this should work when importing continuations
+ # this should work when importing continuations
if ($ps->{tag} && (my $branchpoint = eval { ptag($ps->{tag}) })) {
-
+
# find where we are supposed to branch from
if (! -e "$git_dir/refs/heads/$ps->{branch}") {
system('git-branch',$ps->{branch},$branchpoint) == 0 or die "$! $?\n";
}
# allow multiple bases/imports here since Arch supports cherry-picks
# from unrelated trees
- }
-
+ }
+
# update the index with all the changes we got
system('git-diff-files --name-only -z | '.
'git-update-index --remove -z --stdin') == 0 or die "$! $?\n";
# does not handle permissions or any renames involving directories
sub process_patchset_fast {
my $ps = shift;
- #
+ #
# create the branch if needed
#
if ($ps->{type} eq 'i' && !$import) {
# new branch! we need to verify a few things
die "Branch on a non-tag!" unless $ps->{type} eq 't';
my $branchpoint = ptag($ps->{tag});
- die "Tagging from unknown id unsupported: $ps->{tag}"
+ die "Tagging from unknown id unsupported: $ps->{tag}"
unless $branchpoint;
-
+
# find where we are supposed to branch from
if (! -e "$git_dir/refs/heads/$ps->{branch}") {
system('git-branch',$ps->{branch},$branchpoint) == 0 or die "$! $?\n";
}
system('git-checkout',$ps->{branch}) == 0 or die "$! $?\n";
return 0;
- }
+ }
die $! if $?;
- }
+ }
#
# Apply the import/changeset/merge into the working tree
- #
+ #
if ($ps->{type} eq 'i' || $ps->{type} eq 't') {
apply_import($ps) or die $!;
$stats{import_or_tag}++;
# prepare update git's index, based on what arch knows
# about the pset, resolve parents, etc
#
-
- my @commitlog = safe_pipe_capture($TLA,'cat-archive-log',$ps->{id});
+
+ my @commitlog = safe_pipe_capture($TLA,'cat-archive-log',$ps->{id});
die "Error in cat-archive-log: $!" if $?;
-
+
parselog($ps,\@commitlog);
# imports don't give us good info
if (@$ren % 2) {
die "Odd number of entries in rename!?";
}
-
+
while (@$ren) {
my $from = shift @$ren;
- my $to = shift @$ren;
+ my $to = shift @$ren;
unless (-d dirname($to)) {
mkpath(dirname($to)); # will die on err
"Things may be a bit slow\n";
*process_patchset = *process_patchset_accurate;
}
-
+
foreach my $ps (@psets) {
# process patchsets
$ps->{branch} = git_branchname($ps->{id});
#
- # ensure we have a clean state
- #
+ # ensure we have a clean state
+ #
if (my $dirty = `git-diff-files`) {
die "Unclean tree when about to process $ps->{id} " .
" - did we fail to commit cleanly before?\n$dirty";
}
die $! if $?;
-
+
#
# skip commits already in repo
#
my $tree = `git-write-tree`;
die "cannot write tree $!" if $?;
chomp $tree;
-
+
#
# Who's your daddy?
#
close HEAD;
chomp $p;
push @par, '-p', $p;
- } else {
+ } else {
if ($ps->{type} eq 's') {
warn "Could not find the right head for the branch $ps->{branch}";
}
}
}
-
+
if ($ps->{merges}) {
push @par, find_parents($ps);
}
- #
+ #
# Commit, tag and clean state
#
$ENV{TZ} = 'GMT';
$ENV{GIT_COMMITTER_EMAIL} = $ps->{email};
$ENV{GIT_COMMITTER_DATE} = $ps->{date};
- my $pid = open2(*READER, *WRITER,'git-commit-tree',$tree,@par)
+ my $pid = open2(*READER, *WRITER,'git-commit-tree',$tree,@par)
or die $!;
print WRITER $ps->{summary},"\n\n";
print WRITER $ps->{message},"\n";
-
+
# make it easy to backtrack and figure out which Arch revision this was:
print WRITER 'git-archimport-id: ',$ps->{id},"\n";
-
+
close WRITER;
my $commitid = <READER>; # read
chomp $commitid;
}
#
# Update the branch
- #
+ #
open HEAD, ">","$git_dir/refs/heads/$ps->{branch}";
print HEAD $commitid;
close HEAD;
sub sync_to_ps {
my $ps = shift;
my $tree_dir = $tmp.'/'.tree_dirname($ps->{id});
-
+
$opt_v && print "sync_to_ps($ps->{id}) method: ";
if (-d $tree_dir) {
safe_pipe_capture($TLA,'get','--no-pristine',$ps->{id},$tree_dir);
$stats{get_new}++;
}
-
+
# added -I flag to rsync since we're going to fast! AIEEEEE!!!!
system('rsync','-aI','--delete','--exclude',$git_dir,
# '--exclude','.arch-inventory',
mkpath($tmp);
safe_pipe_capture($TLA,'get','-s','--no-pristine',$ps->{id},"$tmp/import");
- die "Cannot get import: $!" if $?;
+ die "Cannot get import: $!" if $?;
system('rsync','-aI','--delete', '--exclude',$git_dir,
'--exclude','.arch-ids','--exclude','{arch}',
"$tmp/import/", './');
die "Cannot rsync import:$!" if $?;
-
+
rmtree("$tmp/import");
die "Cannot remove tempdir: $!" if $?;
-
+
return 1;
}
# get the changeset
safe_pipe_capture($TLA,'get-changeset',$ps->{id},"$tmp/changeset");
die "Cannot get changeset: $!" if $?;
-
+
# apply patches
if (`find $tmp/changeset/patches -type f -name '*.patch'`) {
# this can be sped up considerably by doing
# (find | xargs cat) | patch
# but that can get mucked up by patches
- # with missing trailing newlines or the standard
+ # with missing trailing newlines or the standard
# 'missing newline' flag in the patch - possibly
# produced with an old/buggy diff.
# slow and safe, we invoke patch once per patchfile
# bring in new files
system('rsync','-aI','--exclude',$git_dir,
- '--exclude','.arch-ids',
+ '--exclude','.arch-ids',
'--exclude', '{arch}',
"$tmp/changeset/new-files-archive/",'./');
removed_files => 1,
removed_directories => 1,
);
-
+
chomp (@$log);
while ($_ = shift @$log) {
if (/^Continuation-of:\s*(.*)/) {
}
}
}
-
+
# drop leading empty lines from the log message
while (@$log && $log->[0] eq '') {
shift @$log;
$ps->{summary} = $log->[0] . '...';
}
$ps->{message} = join("\n",@$log);
-
+
# skip Arch control files, unescape pika-escaped files
foreach my $k (keys %want_headers) {
next unless (defined $ps->{$k});
# write/read a tag
sub tag {
my ($tag, $commit) = @_;
-
+
if ($opt_o) {
$tag =~ s|/|--|g;
} else {
$patchname =~ s/.*--//;
$tag = git_branchname ($tag) . '--' . $patchname;
}
-
+
if ($commit) {
open(C,">","$git_dir/refs/tags/$tag")
or die "Cannot create tag $tag: $!\n";
my ($tag, $commit) = @_;
# don't use subdirs for tags yet, it could screw up other porcelains
- $tag =~ s|/|,|g;
-
+ $tag =~ s|/|,|g;
+
my $tag_file = "$ptag_dir/$tag";
my $tag_branch_dir = dirname($tag_file);
mkpath($tag_branch_dir) unless (-d $tag_branch_dir);
or die "Cannot write tag $tag: $!\n";
close(C)
or die "Cannot write tag $tag: $!\n";
- $rptags{$commit} = $tag
+ $rptags{$commit} = $tag
unless $tag =~ m/--base-0$/;
} else { # read
# if the tag isn't there, return 0
# Identify what branches are merging into me
# and whether we are fully merged
# git-merge-base <headsha> <headsha> should tell
- # me what the base of the merge should be
+ # me what the base of the merge should be
#
my $ps = shift;
}
#
- # foreach branch find a merge base and walk it to the
+ # foreach branch find a merge base and walk it to the
# head where we are, collecting the merged patchsets that
# Arch has recorded. Keep that in @have
# Compare that with the commits on the other branch
# between merge-base and the tip of the branch (@need)
# and see if we have a series of consecutive patches
# starting from the merge base. The tip of the series
- # of consecutive patches merged is our new parent for
+ # of consecutive patches merged is our new parent for
# that branch.
#
foreach my $branch (keys %branches) {
next unless -e "$git_dir/refs/heads/$branch";
my $mergebase = `git-merge-base $branch $ps->{branch}`;
- if ($?) {
- # Don't die here, Arch supports one-way cherry-picking
- # between branches with no common base (or any relationship
- # at all beforehand)
- warn "Cannot find merge base for $branch and $ps->{branch}";
- next;
- }
+ if ($?) {
+ # Don't die here, Arch supports one-way cherry-picking
+ # between branches with no common base (or any relationship
+ # at all beforehand)
+ warn "Cannot find merge base for $branch and $ps->{branch}";
+ next;
+ }
chomp $mergebase;
# now walk up to the mergepoint collecting what patches we have
# merge what we have with what ancestors have
%have = (%have, %ancestorshave);
- # see what the remote branch has - these are the merges we
+ # see what the remote branch has - these are the merges we
# will want to have in a consecutive series from the mergebase
my $otherbranchtip = git_rev_parse($branch);
my @needraw = `git-rev-list --topo-order $otherbranchtip ^$mergebase`;
foreach my $needps (@needraw) { # get the psets
$needps = commitid2pset($needps);
# git-rev-list will also
- # list commits merged in via earlier
+ # list commits merged in via earlier
# merges. we are only interested in commits
# from the branch we're looking at
if ($branch eq $needps->{branch}) {
next unless ref $psets{$p}{merges};
my @merges = @{$psets{$p}{merges}};
foreach my $merge (@merges) {
- if ($parents{$merge}) {
+ if ($parents{$merge}) {
delete $parents{$merge};
}
}
sub commitid2pset {
my $commitid = shift;
chomp $commitid;
- my $name = $rptags{$commitid}
+ my $name = $rptags{$commitid}
|| die "Cannot find reverse tag mapping for $commitid";
$name =~ s|,|/|;
- my $ps = $psets{$name}
+ my $ps = $psets{$name}
|| (print Dumper(sort keys %psets)) && die "Cannot find patchset for $name";
return $ps;
}
my $archive = shift;
return 1 if $reachable{$archive};
return 0 if $unreachable{$archive};
-
+
if (system "$TLA whereis-archive $archive >/dev/null") {
if ($opt_a && (system($TLA,'register-archive',
"http://mirrors.sourcecontrol.net/$archive") == 0)) {
return 1;
}
}
-
echo "unknown flag $arg"
exit 1
fi
- new="$rev"
new_name="$arg"
if git-show-ref --verify --quiet -- "refs/heads/$arg"
then
+ rev=$(git-rev-parse --verify "refs/heads/$arg^0")
branch="$arg"
fi
+ new="$rev"
elif rev=$(git-rev-parse --verify "$arg^{tree}" 2>/dev/null)
then
# checking out selected paths from a tree-ish.
esac
# Match the index to the working tree, and do a three-way.
- git diff-files --name-only | git update-index --remove --stdin &&
+ git diff-files --name-only | git update-index --remove --stdin &&
work=`git write-tree` &&
git read-tree $v --reset -u $new || exit
(exit $saved_err)
fi
-#
+#
# Switch the HEAD pointer to the new branch if we
# checked out a branch head, and remove any potential
# old MERGE_HEAD's (subsequent commits will clearly not
#
# Copyright (c) 2005, Linus Torvalds
# Copyright (c) 2005, Junio C Hamano
-#
+#
# Clone a repository into a different directory that does not yet exist.
# See git-sh-setup why.
get_repo_base() {
(
cd "`/bin/pwd`" &&
- cd "$1" &&
+ cd "$1" || cd "$1.git" &&
{
cd .git
pwd
*,--na|*,--nak|*,--nake|*,--naked|\
*,-b|*,--b|*,--ba|*,--bar|*,--bare) bare=yes ;;
*,-l|*,--l|*,--lo|*,--loc|*,--loca|*,--local) use_local=yes ;;
- *,-s|*,--s|*,--sh|*,--sha|*,--shar|*,--share|*,--shared)
+ *,-s|*,--s|*,--sh|*,--sha|*,--shar|*,--share|*,--shared)
local_shared=yes; use_local=yes ;;
1,--template) usage ;;
*,--template)
rm -f "$GIT_DIR/CLONE_HEAD" "$GIT_DIR/REMOTE_HEAD"
trap - 0
-
} >>"$GIT_DIR"/COMMIT_EDITMSG
else
# we need to check if there is anything to commit
- run_status >/dev/null
+ run_status >/dev/null
fi
if [ "$?" != "0" -a ! -f "$GIT_DIR/MERGE_HEAD" -a -z "$amend" ]
then
# ... validate new files,
foreach my $f (@afiles) {
if (defined ($cvsstat{$f}) and $cvsstat{$f} ne "Unknown") {
- $dirty = 1;
+ $dirty = 1;
warn "File $f is already known in your CVS checkout -- perhaps it has been added by another user. Or this may indicate that it exists on a different branch. If this is the case, use -f to force the merge.\n";
warn "Status was: $cvsstat{$f}\n";
}
if ($#ARGV == 0) {
$cvs_tree = $ARGV[0];
} elsif (-f 'CVS/Repository') {
- open my $f, '<', 'CVS/Repository' or
+ open my $f, '<', 'CVS/Repository' or
die 'Failed to open CVS/Repository';
$cvs_tree = <$f>;
chomp $cvs_tree;
my ($self,$fn,$rev) = @_;
my $res;
- my ($fh, $name) = tempfile('gitcvs.XXXXXX',
+ my ($fh, $name) = tempfile('gitcvs.XXXXXX',
DIR => File::Spec->tmpdir(), UNLINK => 1);
$self->_file($fn,$rev) and $res = $self->_line($fh);
sub get_headref ($$) {
my $name = shift;
- my $git_dir = shift;
-
+ my $git_dir = shift;
+
my $f = "$git_dir/refs/heads/$name";
if (open(my $fh, $f)) {
chomp(my $r = <$fh>);
if ($branch eq $opt_o && !$index{branch} && !get_headref($branch, $git_dir)) {
# looks like an initial commit
# use the index primed by git-init
- $ENV{GIT_INDEX_FILE} = '.git/index';
- $index{$branch} = '.git/index';
+ $ENV{GIT_INDEX_FILE} = "$git_dir/index";
+ $index{$branch} = "$git_dir/index";
} else {
# use an index per branch to speed up
# imports of projects with many branches
$xtag =~ s/\s+\*\*.*$//; # Remove stuff like ** INVALID ** and ** FUNKY **
$xtag =~ tr/_/\./ if ( $opt_u );
$xtag =~ s/[\/]/$opt_s/g;
-
+
my $pid = open2($in, $out, 'git-mktag');
print $out "object $cid\n".
"type commit\n".
$? != 0 or $tagobj !~ /^[0123456789abcdef]{40}$/ ) {
die "Cannot create tag object $xtag: $!\n";
}
-
+
open(C,">$git_dir/refs/tags/$xtag")
or die "Cannot create tag $xtag: $!\n";
}
foreach my $git_index (values %index) {
- if ($git_index ne '.git/index') {
+ if ($git_index ne "$git_dir/index") {
unlink($git_index);
}
}
my ( $cmd, $data ) = @_;
$log->debug("req_Root : $data");
+ unless ($data =~ m#^/#) {
+ print "error 1 Root must be an absolute pathname\n";
+ return 0;
+ }
+
+ if ($state->{CVSROOT}
+ && ($state->{CVSROOT} ne $data)) {
+ print "error 1 Conflicting roots specified\n";
+ return 0;
+ }
+
$state->{CVSROOT} = $data;
$ENV{GIT_DIR} = $state->{CVSROOT} . "/";
echo >&2 "GITGUI_VERSION = $VN"
echo "GITGUI_VERSION = $VN" >$GVF
}
-
-
[format { [list source [file join $dir %s]]} \
[file split $scriptFile]] "\n"
}
-
# remove lines that are unique to ours.
orig=`git-unpack-file $2`
sz0=`wc -c <"$orig"`
- diff -u -La/$orig -Lb/$orig $orig $src2 | git-apply --no-add
+ diff -u -La/$orig -Lb/$orig $orig $src2 | git-apply --no-add
sz1=`wc -c <"$orig"`
# If we do not have enough common material, it is not
# Copyright (c) 2006 Theodore Y. Ts'o
#
# This file is licensed under the GPL v2, or a later version
-# at the discretion of Junio C Hammano.
+# at the discretion of Junio C Hamano.
#
USAGE='[--tool=tool] [file to merge] ...'
if stitch == 1:
git.clean_directories()
stitch = 0
-
if ($node_kind eq $SVN::Node::dir) {
$srcpath =~ s#/*$#/#;
}
-
+
my $pid = open my $f,'-|';
die $! unless defined $pid;
if (!$pid) {
} else {
$p = $path;
}
- push(@$new,[$mode,$sha1,$p]);
+ push(@$new,[$mode,$sha1,$p]);
}
close($f) or
print STDERR "$newrev:$newbranch: could not list files in $oldpath \@ $rev\n";
#!/bin/sh
# Copyright (c) 2005 Linus Torvalds
-USAGE='-l [<pattern>] | [-a | -s | -u <key-id>] [-f | -d | -v] [-m <msg>] <tagname> [<head>]'
+USAGE='[-n [<num>]] -l [<pattern>] | [-a | -s | -u <key-id>] [-f | -d | -v] [-m <msg>] <tagname> [<head>]'
SUBDIRECTORY_OK='Yes'
. git-sh-setup
username=
list=
verify=
+LINES=0
while case "$#" in 0) break ;; esac
do
case "$1" in
-f)
force=1
;;
- -l)
- case "$#" in
- 1)
- set x . ;;
+ -n)
+ case $2 in
+ -*) LINES=1 # no argument
+ ;;
+ *) shift
+ LINES=$(expr "$1" : '\([0-9]*\)')
+ [ -z "$LINES" ] && LINES=1 # 1 line is default when -n is used
+ ;;
esac
+ ;;
+ -l)
+ list=1
shift
- git rev-parse --symbolic --tags | sort | grep "$@"
- exit $?
+ PATTERN="$1" # select tags by shell pattern, not re
+ git rev-parse --symbolic --tags | sort |
+ while read TAG
+ do
+ case "$TAG" in
+ *$PATTERN*) ;;
+ *) continue ;;
+ esac
+ [ "$LINES" -le 0 ] && { echo "$TAG"; continue ;}
+ OBJTYPE=$(git cat-file -t "$TAG")
+ case $OBJTYPE in
+ tag) ANNOTATION=$(git cat-file tag "$TAG" |
+ sed -e '1,/^$/d' \
+ -e '/^-----BEGIN PGP SIGNATURE-----$/Q' )
+ printf "%-15s %s\n" "$TAG" "$ANNOTATION" |
+ sed -e '2,$s/^/ /' \
+ -e "${LINES}q"
+ ;;
+ *) echo "$TAG"
+ ;;
+ esac
+ done
;;
-m)
- annotate=1
+ annotate=1
shift
message="$1"
if test "$#" = "0"; then
username="$1"
;;
-d)
- shift
+ shift
had_error=0
for tag
do
shift
done
+[ -n "$list" ] && exit 0
+
name="$1"
[ "$name" ] || usage
prev=0000000000000000000000000000000000000000
fi
git update-ref "refs/tags/$name" "$object" "$prev"
-
sed '/-----BEGIN PGP/Q' |
gpg --verify "$GIT_DIR/.tmp-vtag" - || exit 1
rm -f "$GIT_DIR/.tmp-vtag"
-
%package email
Summary: Git tools for sending email
Group: Development/Tools
-Requires: git-core = %{version}-%{release}
+Requires: git-core = %{version}-%{release}
%description email
Git tools for sending email.
set tagids($name) $commit
lappend idtags($commit) $name
}
- }
+ }
catch {
set tagcontents($name) [exec git cat-file tag $id]
}
Any comment/question/concern to:
Git mailing list <git@vger.kernel.org>
-
}
unlink(obj_req->tmpfile);
if (obj_req->slot) {
- release_active_slot(obj_req->slot);
+ release_active_slot(obj_req->slot);
obj_req->slot = NULL;
}
release_object_request(obj_req);
request->buffer.size = stream.total_out;
request->buffer.posn = 0;
- request->url = xmalloc(strlen(remote->url) +
+ request->url = xmalloc(strlen(remote->url) +
strlen(request->lock->token) + 51);
strcpy(request->url, remote->url);
posn = request->url + strlen(remote->url);
return 0;
}
-#ifdef USE_CURL_MULTI
+#ifdef USE_CURL_MULTI
if (!strcmp("http.maxrequests", var)) {
if (max_requests == -1)
max_requests = git_config_int(var, value);
setup_ident();
if (!name)
name = git_default_name;
- if (!email)
- email = getenv("EMAIL");
if (!email)
email = git_default_email;
+ if (!email)
+ email = getenv("EMAIL");
if (!*name) {
struct passwd *pw;
msg->data[ msg->len ] = 0;
*ofs += msg->len;
- return 1;
+ return 1;
}
static imap_server_conf_t server =
struct object_entry
{
- off_t offset;
+ struct pack_idx_entry idx;
unsigned long size;
unsigned int hdr_size;
- uint32_t crc32;
enum object_type type;
enum object_type real_type;
- unsigned char sha1[20];
};
union delta_base {
unsigned shift;
void *data;
- obj->offset = consumed_bytes;
+ obj->idx.offset = consumed_bytes;
input_crc32 = crc32(0, Z_NULL, 0);
p = fill(1);
while (c & 128) {
base_offset += 1;
if (!base_offset || MSB(base_offset, 7))
- bad_object(obj->offset, "offset value overflow for delta base object");
+ bad_object(obj->idx.offset, "offset value overflow for delta base object");
p = fill(1);
c = *p;
use(1);
base_offset = (base_offset << 7) + (c & 127);
}
- delta_base->offset = obj->offset - base_offset;
- if (delta_base->offset >= obj->offset)
- bad_object(obj->offset, "delta base offset is out of bound");
+ delta_base->offset = obj->idx.offset - base_offset;
+ if (delta_base->offset >= obj->idx.offset)
+ bad_object(obj->idx.offset, "delta base offset is out of bound");
break;
case OBJ_COMMIT:
case OBJ_TREE:
case OBJ_TAG:
break;
default:
- bad_object(obj->offset, "unknown object type %d", obj->type);
+ bad_object(obj->idx.offset, "unknown object type %d", obj->type);
}
- obj->hdr_size = consumed_bytes - obj->offset;
+ obj->hdr_size = consumed_bytes - obj->idx.offset;
- data = unpack_entry_data(obj->offset, obj->size);
- obj->crc32 = input_crc32;
+ data = unpack_entry_data(obj->idx.offset, obj->size);
+ obj->idx.crc32 = input_crc32;
return data;
}
static void *get_data_from_pack(struct object_entry *obj)
{
- unsigned long from = obj[0].offset + obj[0].hdr_size;
- unsigned long len = obj[1].offset - from;
+ unsigned long from = obj[0].idx.offset + obj[0].hdr_size;
+ unsigned long len = obj[1].idx.offset - from;
unsigned long rdy = 0;
unsigned char *src, *data;
z_stream stream;
&result_size);
free(delta_data);
if (!result)
- bad_object(delta_obj->offset, "failed to apply delta");
- sha1_object(result, result_size, type, delta_obj->sha1);
+ bad_object(delta_obj->idx.offset, "failed to apply delta");
+ sha1_object(result, result_size, type, delta_obj->idx.sha1);
nr_resolved_deltas++;
- hashcpy(delta_base.sha1, delta_obj->sha1);
+ hashcpy(delta_base.sha1, delta_obj->idx.sha1);
if (!find_delta_children(&delta_base, &first, &last)) {
for (j = first; j <= last; j++) {
struct object_entry *child = objects + deltas[j].obj_no;
}
memset(&delta_base, 0, sizeof(delta_base));
- delta_base.offset = delta_obj->offset;
+ delta_base.offset = delta_obj->idx.offset;
if (!find_delta_children(&delta_base, &first, &last)) {
for (j = first; j <= last; j++) {
struct object_entry *child = objects + deltas[j].obj_no;
delta->obj_no = i;
delta++;
} else
- sha1_object(data, obj->size, obj->type, obj->sha1);
+ sha1_object(data, obj->size, obj->type, obj->idx.sha1);
free(data);
if (verbose)
display_progress(&progress, i+1);
}
- objects[i].offset = consumed_bytes;
+ objects[i].idx.offset = consumed_bytes;
if (verbose)
stop_progress(&progress);
if (obj->type == OBJ_REF_DELTA || obj->type == OBJ_OFS_DELTA)
continue;
- hashcpy(base.sha1, obj->sha1);
+ hashcpy(base.sha1, obj->idx.sha1);
ref = !find_delta_children(&base, &ref_first, &ref_last);
memset(&base, 0, sizeof(base));
- base.offset = obj->offset;
+ base.offset = obj->idx.offset;
ofs = !find_delta_children(&base, &ofs_first, &ofs_last);
if (!ref && !ofs)
continue;
}
header[n++] = c;
write_or_die(output_fd, header, n);
- obj[0].crc32 = crc32(0, Z_NULL, 0);
- obj[0].crc32 = crc32(obj[0].crc32, header, n);
- obj[1].offset = obj[0].offset + n;
- obj[1].offset += write_compressed(output_fd, buf, size, &obj[0].crc32);
- hashcpy(obj->sha1, sha1);
+ obj[0].idx.crc32 = crc32(0, Z_NULL, 0);
+ obj[0].idx.crc32 = crc32(obj[0].idx.crc32, header, n);
+ obj[1].idx.offset = obj[0].idx.offset + n;
+ obj[1].idx.offset += write_compressed(output_fd, buf, size, &obj[0].idx.crc32);
+ hashcpy(obj->idx.sha1, sha1);
}
static int delta_pos_compare(const void *_a, const void *_b)
free(sorted_by_pos);
}
-static uint32_t index_default_version = 1;
-static uint32_t index_off32_limit = 0x7fffffff;
-
-static int sha1_compare(const void *_a, const void *_b)
-{
- struct object_entry *a = *(struct object_entry **)_a;
- struct object_entry *b = *(struct object_entry **)_b;
- return hashcmp(a->sha1, b->sha1);
-}
-
-/*
- * On entry *sha1 contains the pack content SHA1 hash, on exit it is
- * the SHA1 hash of sorted object names.
- */
-static const char *write_index_file(const char *index_name, unsigned char *sha1)
-{
- struct sha1file *f;
- struct object_entry **sorted_by_sha, **list, **last;
- uint32_t array[256];
- int i, fd;
- SHA_CTX ctx;
- uint32_t index_version;
-
- if (nr_objects) {
- sorted_by_sha =
- xcalloc(nr_objects, sizeof(struct object_entry *));
- list = sorted_by_sha;
- last = sorted_by_sha + nr_objects;
- for (i = 0; i < nr_objects; ++i)
- sorted_by_sha[i] = &objects[i];
- qsort(sorted_by_sha, nr_objects, sizeof(sorted_by_sha[0]),
- sha1_compare);
- }
- else
- sorted_by_sha = list = last = NULL;
-
- if (!index_name) {
- static char tmpfile[PATH_MAX];
- snprintf(tmpfile, sizeof(tmpfile),
- "%s/tmp_idx_XXXXXX", get_object_directory());
- fd = mkstemp(tmpfile);
- index_name = xstrdup(tmpfile);
- } else {
- unlink(index_name);
- fd = open(index_name, O_CREAT|O_EXCL|O_WRONLY, 0600);
- }
- if (fd < 0)
- die("unable to create %s: %s", index_name, strerror(errno));
- f = sha1fd(fd, index_name);
-
- /* if last object's offset is >= 2^31 we should use index V2 */
- index_version = (objects[nr_objects-1].offset >> 31) ? 2 : index_default_version;
-
- /* index versions 2 and above need a header */
- if (index_version >= 2) {
- struct pack_idx_header hdr;
- hdr.idx_signature = htonl(PACK_IDX_SIGNATURE);
- hdr.idx_version = htonl(index_version);
- sha1write(f, &hdr, sizeof(hdr));
- }
-
- /*
- * Write the first-level table (the list is sorted,
- * but we use a 256-entry lookup to be able to avoid
- * having to do eight extra binary search iterations).
- */
- for (i = 0; i < 256; i++) {
- struct object_entry **next = list;
- while (next < last) {
- struct object_entry *obj = *next;
- if (obj->sha1[0] != i)
- break;
- next++;
- }
- array[i] = htonl(next - sorted_by_sha);
- list = next;
- }
- sha1write(f, array, 256 * 4);
-
- /* compute the SHA1 hash of sorted object names. */
- SHA1_Init(&ctx);
-
- /*
- * Write the actual SHA1 entries..
- */
- list = sorted_by_sha;
- for (i = 0; i < nr_objects; i++) {
- struct object_entry *obj = *list++;
- if (index_version < 2) {
- uint32_t offset = htonl(obj->offset);
- sha1write(f, &offset, 4);
- }
- sha1write(f, obj->sha1, 20);
- SHA1_Update(&ctx, obj->sha1, 20);
- }
-
- if (index_version >= 2) {
- unsigned int nr_large_offset = 0;
-
- /* write the crc32 table */
- list = sorted_by_sha;
- for (i = 0; i < nr_objects; i++) {
- struct object_entry *obj = *list++;
- uint32_t crc32_val = htonl(obj->crc32);
- sha1write(f, &crc32_val, 4);
- }
-
- /* write the 32-bit offset table */
- list = sorted_by_sha;
- for (i = 0; i < nr_objects; i++) {
- struct object_entry *obj = *list++;
- uint32_t offset = (obj->offset <= index_off32_limit) ?
- obj->offset : (0x80000000 | nr_large_offset++);
- offset = htonl(offset);
- sha1write(f, &offset, 4);
- }
-
- /* write the large offset table */
- list = sorted_by_sha;
- while (nr_large_offset) {
- struct object_entry *obj = *list++;
- uint64_t offset = obj->offset;
- if (offset > index_off32_limit) {
- uint32_t split[2];
- split[0] = htonl(offset >> 32);
- split[1] = htonl(offset & 0xffffffff);
- sha1write(f, split, 8);
- nr_large_offset--;
- }
- }
- }
-
- sha1write(f, sha1, 20);
- sha1close(f, NULL, 1);
- free(sorted_by_sha);
- SHA1_Final(sha1, &ctx);
- return index_name;
-}
-
static void final(const char *final_pack_name, const char *curr_pack_name,
const char *final_index_name, const char *curr_index_name,
const char *keep_name, const char *keep_msg,
const char *curr_index, *index_name = NULL;
const char *keep_name = NULL, *keep_msg = NULL;
char *index_name_buf = NULL, *keep_name_buf = NULL;
+ struct pack_idx_entry **idx_objects;
unsigned char sha1[20];
for (i = 1; i < argc; i++) {
index_name = argv[++i];
} else if (!prefixcmp(arg, "--index-version=")) {
char *c;
- index_default_version = strtoul(arg + 16, &c, 10);
- if (index_default_version > 2)
+ pack_idx_default_version = strtoul(arg + 16, &c, 10);
+ if (pack_idx_default_version > 2)
die("bad %s", arg);
if (*c == ',')
- index_off32_limit = strtoul(c+1, &c, 0);
- if (*c || index_off32_limit & 0x80000000)
+ pack_idx_off32_limit = strtoul(c+1, &c, 0);
+ if (*c || pack_idx_off32_limit & 0x80000000)
die("bad %s", arg);
} else
usage(index_pack_usage);
nr_deltas - nr_resolved_deltas);
}
free(deltas);
- curr_index = write_index_file(index_name, sha1);
+
+ idx_objects = xmalloc((nr_objects) * sizeof(struct pack_idx_entry *));
+ for (i = 0; i < nr_objects; i++)
+ idx_objects[i] = &objects[i].idx;
+ curr_index = write_idx_file(index_name, idx_objects, nr_objects, sha1);
+ free(idx_objects);
+
final(pack_name, curr_pack,
index_name, curr_index,
keep_name, keep_msg,
return -1;
target = find_sha1_pack(sha1, packs);
if (!target)
- return error("Couldn't find %s: not separate or in any pack",
+ return error("Couldn't find %s: not separate or in any pack",
sha1_to_hex(sha1));
if (get_verbosely) {
fprintf(stderr, "Getting pack %s\n",
fprintf(stderr, " which contains %s\n",
sha1_to_hex(sha1));
}
- sprintf(filename, "%s/objects/pack/pack-%s.pack",
+ sprintf(filename, "%s/objects/pack/pack-%s.pack",
path, sha1_to_hex(target->sha1));
copy_file(filename, sha1_pack_name(target->sha1),
sha1_to_hex(target->sha1), 1);
- sprintf(filename, "%s/objects/pack/pack-%s.idx",
+ sprintf(filename, "%s/objects/pack/pack-%s.idx",
path, sha1_to_hex(target->sha1));
copy_file(filename, sha1_pack_index_name(target->sha1),
sha1_to_hex(target->sha1), 1);
char *hex = sha1_to_hex(sha1);
char *dest_filename = sha1_file_name(sha1);
- if (object_name_start < 0) {
+ if (object_name_start < 0) {
strcpy(filename, path); /* e.g. git.git */
strcat(filename, "/objects/");
object_name_start = strlen(filename);
unlink(lk->filename);
lk->filename[0] = 0;
}
-
splice_tree(hash1, add_prefix, hash2, shifted);
}
-
static int merge_entry(int pos, const char *path)
{
int found;
-
+
if (pos >= active_nr)
die("git-merge-index: %s not in the cache", path);
arguments[0] = pgm;
* The first three lines are guaranteed to be at least 63 bytes:
* "object <sha1>\n" is 48 bytes, "type tag\n" at 9 bytes is the
* shortest possible type-line, and "tag .\n" at 6 bytes is the
- * shortest single-character-tag line.
+ * shortest single-character-tag line.
*
* We also artificially limit the size of the full object to 8kB.
* Just because I'm a lazy bastard, and if you can't fit a signature
-/*
+/*
* The contents of this file are subject to the Mozilla Public
* License Version 1.1 (the "License"); you may not use this file
* except in compliance with the License. You may obtain a copy of
* the License at http://www.mozilla.org/MPL/
- *
+ *
* Software distributed under the License is distributed on an "AS
* IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or
* implied. See the License for the specific language governing
* rights and limitations under the License.
- *
+ *
* The Original Code is SHA 180-1 Reference Implementation (Compact version)
- *
+ *
* The Initial Developer of the Original Code is Paul Kocher of
- * Cryptography Research. Portions created by Paul Kocher are
+ * Cryptography Research. Portions created by Paul Kocher are
* Copyright (C) 1995-9 by Cryptography Research, Inc. All
* Rights Reserved.
- *
+ *
* Contributor(s):
*
* Paul Kocher
- *
+ *
* Alternatively, the contents of this file may be used under the
* terms of the GNU General Public License Version 2 or later (the
- * "GPL"), in which case the provisions of the GPL are applicable
- * instead of those above. If you wish to allow use of your
+ * "GPL"), in which case the provisions of the GPL are applicable
+ * instead of those above. If you wish to allow use of your
* version of this file only under the terms of the GPL and not to
* allow others to use your version of this file under the MPL,
* indicate your decision by deleting the provisions above and
ctx->H[3] += D;
ctx->H[4] += E;
}
-
-/*
+/*
* The contents of this file are subject to the Mozilla Public
* License Version 1.1 (the "License"); you may not use this file
* except in compliance with the License. You may obtain a copy of
* the License at http://www.mozilla.org/MPL/
- *
+ *
* Software distributed under the License is distributed on an "AS
* IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or
* implied. See the License for the specific language governing
* rights and limitations under the License.
- *
+ *
* The Original Code is SHA 180-1 Header File
- *
+ *
* The Initial Developer of the Original Code is Paul Kocher of
- * Cryptography Research. Portions created by Paul Kocher are
+ * Cryptography Research. Portions created by Paul Kocher are
* Copyright (C) 1995-9 by Cryptography Research, Inc. All
* Rights Reserved.
- *
+ *
* Contributor(s):
*
* Paul Kocher
- *
+ *
* Alternatively, the contents of this file may be used under the
* terms of the GNU General Public License Version 2 or later (the
- * "GPL"), in which case the provisions of the GPL are applicable
- * instead of those above. If you wish to allow use of your
+ * "GPL"), in which case the provisions of the GPL are applicable
+ * instead of those above. If you wish to allow use of your
* version of this file only under the terms of the GPL and not to
* allow others to use your version of this file under the MPL,
* indicate your decision by deleting the provisions above and
mark_reachable(refs->ref[i], mask);
}
}
-
-
parse_tag_buffer(tag, buffer, size);
obj = &tag->object;
} else {
+ warning("object %s has unknown type id %d\n", sha1_to_hex(sha1), type);
obj = NULL;
}
+ if (obj && obj->type == OBJ_NONE)
+ obj->type = type;
*eaten_p = eaten;
return obj;
}
void mark_reachable(struct object *obj, unsigned int mask);
-struct object_list *object_list_insert(struct object *item,
+struct object_list *object_list_insert(struct object *item,
struct object_list **list_p);
void object_list_append(struct object *item,
{
struct llist *ret;
struct llist_item *new, *old, *prev;
-
+
llist_init(&ret);
if ((ret->size = list->size) == 0)
}
new->next = NULL;
ret->back = new;
-
+
return ret;
}
#include "cache.h"
#include "pack.h"
+#include "csum-file.h"
+
+uint32_t pack_idx_default_version = 1;
+uint32_t pack_idx_off32_limit = 0x7fffffff;
+
+static int sha1_compare(const void *_a, const void *_b)
+{
+ struct pack_idx_entry *a = *(struct pack_idx_entry **)_a;
+ struct pack_idx_entry *b = *(struct pack_idx_entry **)_b;
+ return hashcmp(a->sha1, b->sha1);
+}
+
+/*
+ * On entry *sha1 contains the pack content SHA1 hash, on exit it is
+ * the SHA1 hash of sorted object names. The objects array passed in
+ * will be sorted by SHA1 on exit.
+ */
+const char *write_idx_file(const char *index_name, struct pack_idx_entry **objects, int nr_objects, unsigned char *sha1)
+{
+ struct sha1file *f;
+ struct pack_idx_entry **sorted_by_sha, **list, **last;
+ off_t last_obj_offset = 0;
+ uint32_t array[256];
+ int i, fd;
+ SHA_CTX ctx;
+ uint32_t index_version;
+
+ if (nr_objects) {
+ sorted_by_sha = objects;
+ list = sorted_by_sha;
+ last = sorted_by_sha + nr_objects;
+ for (i = 0; i < nr_objects; ++i) {
+ if (objects[i]->offset > last_obj_offset)
+ last_obj_offset = objects[i]->offset;
+ }
+ qsort(sorted_by_sha, nr_objects, sizeof(sorted_by_sha[0]),
+ sha1_compare);
+ }
+ else
+ sorted_by_sha = list = last = NULL;
+
+ if (!index_name) {
+ static char tmpfile[PATH_MAX];
+ snprintf(tmpfile, sizeof(tmpfile),
+ "%s/tmp_idx_XXXXXX", get_object_directory());
+ fd = mkstemp(tmpfile);
+ index_name = xstrdup(tmpfile);
+ } else {
+ unlink(index_name);
+ fd = open(index_name, O_CREAT|O_EXCL|O_WRONLY, 0600);
+ }
+ if (fd < 0)
+ die("unable to create %s: %s", index_name, strerror(errno));
+ f = sha1fd(fd, index_name);
+
+ /* if last object's offset is >= 2^31 we should use index V2 */
+ index_version = (last_obj_offset >> 31) ? 2 : pack_idx_default_version;
+
+ /* index versions 2 and above need a header */
+ if (index_version >= 2) {
+ struct pack_idx_header hdr;
+ hdr.idx_signature = htonl(PACK_IDX_SIGNATURE);
+ hdr.idx_version = htonl(index_version);
+ sha1write(f, &hdr, sizeof(hdr));
+ }
+
+ /*
+ * Write the first-level table (the list is sorted,
+ * but we use a 256-entry lookup to be able to avoid
+ * having to do eight extra binary search iterations).
+ */
+ for (i = 0; i < 256; i++) {
+ struct pack_idx_entry **next = list;
+ while (next < last) {
+ struct pack_idx_entry *obj = *next;
+ if (obj->sha1[0] != i)
+ break;
+ next++;
+ }
+ array[i] = htonl(next - sorted_by_sha);
+ list = next;
+ }
+ sha1write(f, array, 256 * 4);
+
+ /* compute the SHA1 hash of sorted object names. */
+ SHA1_Init(&ctx);
+
+ /*
+ * Write the actual SHA1 entries..
+ */
+ list = sorted_by_sha;
+ for (i = 0; i < nr_objects; i++) {
+ struct pack_idx_entry *obj = *list++;
+ if (index_version < 2) {
+ uint32_t offset = htonl(obj->offset);
+ sha1write(f, &offset, 4);
+ }
+ sha1write(f, obj->sha1, 20);
+ SHA1_Update(&ctx, obj->sha1, 20);
+ }
+
+ if (index_version >= 2) {
+ unsigned int nr_large_offset = 0;
+
+ /* write the crc32 table */
+ list = sorted_by_sha;
+ for (i = 0; i < nr_objects; i++) {
+ struct pack_idx_entry *obj = *list++;
+ uint32_t crc32_val = htonl(obj->crc32);
+ sha1write(f, &crc32_val, 4);
+ }
+
+ /* write the 32-bit offset table */
+ list = sorted_by_sha;
+ for (i = 0; i < nr_objects; i++) {
+ struct pack_idx_entry *obj = *list++;
+ uint32_t offset = (obj->offset <= pack_idx_off32_limit) ?
+ obj->offset : (0x80000000 | nr_large_offset++);
+ offset = htonl(offset);
+ sha1write(f, &offset, 4);
+ }
+
+ /* write the large offset table */
+ list = sorted_by_sha;
+ while (nr_large_offset) {
+ struct pack_idx_entry *obj = *list++;
+ uint64_t offset = obj->offset;
+ if (offset > pack_idx_off32_limit) {
+ uint32_t split[2];
+ split[0] = htonl(offset >> 32);
+ split[1] = htonl(offset & 0xffffffff);
+ sha1write(f, split, 8);
+ nr_large_offset--;
+ }
+ }
+ }
+
+ sha1write(f, sha1, 20);
+ sha1close(f, NULL, 1);
+ SHA1_Final(sha1, &ctx);
+ return index_name;
+}
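
The 256-entry table written above is the index "fan-out": entry i holds the number of objects whose first SHA-1 byte is at most i, so a reader can jump straight to the right bucket instead of spending roughly eight extra binary-search iterations narrowing down that first byte. What follows is a minimal standalone sketch of that idea, not code from the patch; the helper name and the three fake object names are made up for illustration.

----------------
#include <stdio.h>
#include <stdint.h>

/* Illustrative only: build the 256-entry fan-out table from object names
 * that are already sorted by their 20-byte SHA-1, which is the ordering
 * write_idx_file() sets up with qsort().  fanout[i] ends up holding the
 * number of objects whose first byte is <= i, so fanout[255] equals the
 * total object count. */
static void build_fanout(unsigned char (*sha1)[20], int nr, uint32_t fanout[256])
{
	int i, j = 0;

	for (i = 0; i < 256; i++) {
		while (j < nr && sha1[j][0] == i)
			j++;
		fanout[i] = j;	/* cumulative, like array[] in the patch */
	}
}

int main(void)
{
	/* three fake object names, sorted by first byte */
	unsigned char names[3][20] = { { 0x03 }, { 0x7f }, { 0x7f } };
	uint32_t fanout[256];

	build_fanout(names, 3, fanout);

	/* a name starting with 0x7f only needs to be searched for in the
	 * slice [fanout[0x7e], fanout[0x7f]), i.e. [1, 3) here */
	printf("bucket 0x7f: %u..%u\n",
	       (unsigned)fanout[0x7e], (unsigned)fanout[0x7f]);
	return 0;
}
----------------
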
void fixup_pack_header_footer(int pack_fd,
unsigned char *pack_file_sha1,
*/
#define PACK_IDX_SIGNATURE 0xff744f63 /* "\377tOc" */
+/* These may be overridden by command-line parameters */
+extern uint32_t pack_idx_default_version;
+extern uint32_t pack_idx_off32_limit;
+
/*
* Packed object index header
*/
uint32_t idx_version;
};
+/*
+ * Common part of object structure used for write_idx_file
+ */
+struct pack_idx_entry {
+ unsigned char sha1[20];
+ uint32_t crc32;
+ off_t offset;
+};
+
+extern const char *write_idx_file(const char *index_name, struct pack_idx_entry **objects, int nr_objects, unsigned char *sha1);
extern int verify_pack(struct packed_git *, int);
extern void fixup_pack_header_footer(int, unsigned char *, const char *, uint32_t);
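
pack_idx_off32_limit exists because a version-2 index stores each object offset in a 32-bit slot, which cannot represent offsets of 2 GiB and beyond. write_idx_file() above therefore writes such an offset as 0x80000000 | n, where n points into a separate table of full 64-bit offsets (split into two network-order words in the real code). Below is a minimal sketch of that encoding and its inverse, with illustrative names and without the byte-swapping:

----------------
#include <stdio.h>
#include <stdint.h>

#define OFF32_LIMIT 0x7fffffffULL	/* mirrors pack_idx_off32_limit */

/* Illustrative only: encode one pack offset the way a version-2 index
 * does.  Small offsets go straight into the 32-bit table; large ones
 * store a flagged index into a separate table of full 64-bit offsets. */
static uint32_t encode_off32(uint64_t offset, uint64_t *large, uint32_t *nr_large)
{
	if (offset <= OFF32_LIMIT)
		return (uint32_t)offset;
	large[*nr_large] = offset;
	return 0x80000000u | (*nr_large)++;
}

static uint64_t decode_off32(uint32_t entry, const uint64_t *large)
{
	if (entry & 0x80000000u)
		return large[entry & 0x7fffffffu];
	return entry;
}

int main(void)
{
	uint64_t large[4];
	uint32_t nr_large = 0;
	uint64_t offsets[2] = { 12, 5ULL * 1024 * 1024 * 1024 };	/* 12 bytes, 5 GiB */
	int i;

	for (i = 0; i < 2; i++) {
		uint32_t e = encode_off32(offsets[i], large, &nr_large);
		printf("offset %llu -> entry 0x%08x -> %llu\n",
		       (unsigned long long)offsets[i], (unsigned)e,
		       (unsigned long long)decode_off32(e, large));
	}
	return 0;
}
----------------
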
generate_id_list();
return 0;
-}
+}
for (i = 0; i < p->nr; i++)
printf("%s:%p\n", p->items[i].path, p->items[i].util);
}
-
# (even though GIT-CFLAGS aren't used yet. If ever)
../GIT-CFLAGS:
$(MAKE) -C .. GIT-CFLAGS
-
* Write a packetized stream, where each line is preceded by
* its length (including the header) as a 4-byte hex number.
* A length of 'zero' means end of stream (and a length of 1-3
- * would be an error).
+ * would be an error).
*
* This is all pretty stupid, but we use this packetized line
* format to make a streaming format possible without ever
p += nb;
}
return 0;
-}
+}
int SHA1_Final(unsigned char *hash, SHA_CTX *c)
{
return (c == '\'' || c == '!');
}
-size_t sq_quote_buf(char *dst, size_t n, const char *src)
+static size_t sq_quote_buf(char *dst, size_t n, const char *src)
{
char c;
char *bp = dst;
fputc('\'', stream);
}
-char *sq_quote(const char *src)
-{
- char *buf;
- size_t cnt;
-
- cnt = sq_quote_buf(NULL, 0, src) + 1;
- buf = xmalloc(cnt);
- sq_quote_buf(buf, cnt, src);
-
- return buf;
-}
-
char *sq_quote_argv(const char** argv, int count)
{
char *buf, *to;
* excluding the final null regardless of the buffer size.
*/
-extern char *sq_quote(const char *src);
extern void sq_quote_print(FILE *stream, const char *src);
-extern size_t sq_quote_buf(char *dst, size_t n, const char *src);
extern char *sq_quote_argv(const char** argv, int count);
/*
changed |= MTIME_CHANGED;
if (ce->ce_ctime.nsec != htonl(st->st_ctim.tv_nsec))
changed |= CTIME_CHANGED;
-#endif
+#endif
if (ce->ce_uid != htonl(st->st_uid) ||
ce->ce_gid != htonl(st->st_gid))
* is being added, or we already have path and path/file is being
* added. Either one would result in a nonsense tree that has path
* twice when git-write-tree tries to write it out. Prevent it.
- *
+ *
* If ok-to-replace is specified, we remove the conflicting entries
* from the cache so the caller should recompute the insert position.
* When this happens, we return non-zero.
write_buffer_len = buffered;
len -= partial;
data = (char *) data + partial;
- }
- return 0;
+ }
+ return 0;
}
static int write_index_ext_header(SHA_CTX *context, int fd,
* size to zero here, then the object name recorded
* in index is the 6-byte file but the cached stat information
* becomes zero --- which would then match what we would
- * obtain from the filesystem next time we stat("frotz").
+ * obtain from the filesystem next time we stat("frotz").
*
* However, the second update-index, before calling
* this function, notices that the cached size is 6
return NULL;
}
-static int check_pattern_match(struct refspec *rs, int rs_nr, struct ref *src)
+static const struct refspec *check_pattern_match(const struct refspec *rs,
+ int rs_nr,
+ const struct ref *src)
{
int i;
- if (!rs_nr)
- return 1;
for (i = 0; i < rs_nr; i++) {
if (rs[i].pattern && !prefixcmp(src->name, rs[i].src))
- return 1;
+ return rs + i;
}
- return 0;
+ return NULL;
}
int match_refs(struct ref *src, struct ref *dst, struct ref ***dst_tail,
/* pick the remainder */
for ( ; src; src = src->next) {
struct ref *dst_peer;
+ const struct refspec *pat = NULL;
+ char *dst_name;
if (src->peer_ref)
continue;
- if (!check_pattern_match(rs, nr_refspec, src))
- continue;
+ if (nr_refspec) {
+ pat = check_pattern_match(rs, nr_refspec, src);
+ if (!pat)
+ continue;
+ }
- dst_peer = find_ref_by_name(dst, src->name);
+ if (pat) {
+ dst_name = xmalloc(strlen(pat->dst) +
+ strlen(src->name) -
+ strlen(pat->src) + 2);
+ strcpy(dst_name, pat->dst);
+ strcat(dst_name, src->name + strlen(pat->src));
+ } else
+ dst_name = strdup(src->name);
+ dst_peer = find_ref_by_name(dst, dst_name);
if (dst_peer && dst_peer->peer_ref)
/* We're already sending something to this ref. */
- continue;
+ goto free_name;
if (!dst_peer && !nr_refspec && !all)
/* Remote doesn't have it, and we have no
* explicit pattern, and we don't have
* --all. */
- continue;
+ goto free_name;
if (!dst_peer) {
/* Create a new one and link it */
- int len = strlen(src->name) + 1;
+ int len = strlen(dst_name) + 1;
dst_peer = xcalloc(1, sizeof(*dst_peer) + len);
- memcpy(dst_peer->name, src->name, len);
+ memcpy(dst_peer->name, dst_name, len);
hashcpy(dst_peer->new_sha1, src->new_sha1);
link_dst_tail(dst_peer, dst_tail);
}
dst_peer->peer_ref = src;
+ free_name:
+ free(dst_name);
}
return 0;
}
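
The effect of the match_refs() change above is that a wildcard refspec no longer just selects source refs, it also rewrites the destination name: the part of the source name that matched the pattern's src prefix is replaced by the dst prefix, so pushing refs/heads/topic through refs/heads/*:refs/remotes/origin/* creates refs/remotes/origin/topic on the other side. Here is a minimal sketch of that string arithmetic, assuming (as the prefixcmp/strcat code does) that the pattern has already been reduced to plain src and dst prefixes; the function name is made up for illustration.

----------------
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: map a source ref name through a wildcard refspec by
 * swapping the src prefix for the dst prefix, the same string arithmetic
 * match_refs() uses to build dst_name.  The two prefixes are assumed to
 * be what "refs/heads/*:refs/remotes/origin/*" reduces to. */
static char *map_ref(const char *name, const char *src, const char *dst)
{
	size_t srclen = strlen(src);
	char *out;

	if (strncmp(name, src, srclen))		/* same test as prefixcmp() */
		return NULL;			/* pattern does not apply */
	out = malloc(strlen(dst) + strlen(name) - srclen + 1);
	if (!out)
		return NULL;
	strcpy(out, dst);
	strcat(out, name + srclen);
	return out;
}

int main(void)
{
	char *dst = map_ref("refs/heads/topic",
			    "refs/heads/", "refs/remotes/origin/");

	if (dst) {
		printf("%s\n", dst);	/* refs/remotes/origin/topic */
		free(dst);
	}
	return 0;
}
----------------
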
}
}
-void add_pending_object(struct rev_info *revs, struct object *obj, const char *name)
-{
- add_pending_object_with_mode(revs, obj, name, S_IFINVALID);
-}
-
-void add_pending_object_with_mode(struct rev_info *revs, struct object *obj, const char *name, unsigned mode)
+static void add_pending_object_with_mode(struct rev_info *revs, struct object *obj, const char *name, unsigned mode)
{
if (revs->no_walk && (obj->flags & UNINTERESTING))
die("object ranges do not make sense when not walking revisions");
(struct commit *)obj, name);
}
+void add_pending_object(struct rev_info *revs, struct object *obj, const char *name)
+{
+ add_pending_object_with_mode(revs, obj, name, S_IFINVALID);
+}
+
static struct object *get_reference(struct rev_info *revs, const char *name, const unsigned char *sha1, unsigned int flags)
{
struct object *object;
options->has_changes = 1;
}
-int rev_compare_tree(struct rev_info *revs, struct tree *t1, struct tree *t2)
+static int rev_compare_tree(struct rev_info *revs, struct tree *t1, struct tree *t2)
{
if (!t1)
return REV_TREE_NEW;
return tree_difference;
}
-int rev_same_tree_as_empty(struct rev_info *revs, struct tree *t1)
+static int rev_same_tree_as_empty(struct rev_info *revs, struct tree *t1)
{
int retval;
void *tree;
#define REV_TREE_DIFFERENT 2
/* revision.c */
-extern int rev_same_tree_as_empty(struct rev_info *, struct tree *t1);
-extern int rev_compare_tree(struct rev_info *, struct tree *t1, struct tree *t2);
extern void init_revisions(struct rev_info *revs, const char *prefix);
extern int setup_revisions(int argc, const char **argv, struct rev_info *revs, const char *def);
const char *name);
extern void add_pending_object(struct rev_info *revs, struct object *obj, const char *name);
-extern void add_pending_object_with_mode(struct rev_info *revs, struct object *obj, const char *name, unsigned mode);
#endif
#ifndef RSH_H
#define RSH_H
-int setup_connection(int *fd_in, int *fd_out, const char *remote_prog,
+int setup_connection(int *fd_in, int *fd_out, const char *remote_prog,
char *url, int rmt_argc, char **rmt_argv);
#endif
if (len) {
int speclen = strlen(path);
char *n = xmalloc(speclen + len + 1);
-
+
memcpy(n, prefix, len);
memcpy(n + len, path, speclen+1);
path = n;
return path;
}
-/*
+/*
* Unlike prefix_path, this should be used if the named file does
* not have to interact with index entry; i.e. name of a random file
* on the filesystem.
*buf++ = hex[val >> 4];
*buf++ = hex[val & 0xf];
}
-
+
return base;
}
*buf++ = hex[val >> 4];
*buf++ = hex[val & 0xf];
}
-
+
return base;
}
unsigned long size;
/*
- * The type can be at most ten bytes (including the
+ * The type can be at most ten bytes (including the
* terminating '\0' that we add), and is followed by
* a space.
*/
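
The comment above describes the header that precedes a loose object's content: a type name of at most ten bytes once the terminating '\0' is added, then a space, followed in the actual header by the object's decimal size. A small sketch of parsing such a header under that assumption; the function and buffer names are made up for illustration.

----------------
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: parse a header of the form "<type> <size>", where
 * the type name must fit in ten bytes once we add the '\0' ourselves,
 * as the comment above requires. */
static int parse_header(const char *hdr, char type[10], unsigned long *size)
{
	int i = 0;

	while (hdr[i] && hdr[i] != ' ') {
		if (i + 1 >= 10)
			return -1;	/* type name too long */
		type[i] = hdr[i];
		i++;
	}
	type[i] = '\0';
	if (hdr[i] != ' ')
		return -1;
	*size = strtoul(hdr + i + 1, NULL, 10);
	return 0;
}

int main(void)
{
	char type[10];
	unsigned long size;

	if (!parse_header("commit 257", type, &size))
		printf("type=%s size=%lu\n", type, size);
	return 0;
}
----------------
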
return 0;
}
-struct packed_git *find_sha1_pack(const unsigned char *sha1,
+struct packed_git *find_sha1_pack(const unsigned char *sha1,
struct packed_git *packs)
{
struct packed_git *p;
namelen = namelen - (cp - name);
if (!active_cache)
read_cache();
- if (active_nr < 0)
- return -1;
pos = cache_name_pos(cp, namelen);
if (pos < 0)
pos = -pos - 1;
}
if (!size)
return -1;
-
+
if (verbose)
fprintf(stderr, "Serving %s\n", sha1_to_hex(sha1));
remote = 0;
-
+
if (!has_sha1_file(sha1)) {
fprintf(stderr, "git-ssh-upload: could not find %s\n",
sha1_to_hex(sha1));
remote = -1;
}
-
+
if (write_in_full(fd_out, &remote, 1) != 1)
return 0;
-
+
if (remote < 0)
return 0;
-
+
return write_sha1_to_fd(fd_out, sha1);
}
sb->eof = 1;
strbuf_end(sb);
}
-
.PHONY: $(T) clean
.NOTPARALLEL:
-
test_expect_success \
'recording branch A tree' \
'tree_A=$(git-write-tree)'
-
+
################################################################
# Branch B
# Start from O
find .git/objects -type f -print >should-be-empty
test_expect_success \
'.git/objects should be empty after git-init in an empty repo.' \
- 'cmp -s /dev/null should-be-empty'
+ 'cmp -s /dev/null should-be-empty'
# also it should have 2 subdirectories; no fan-out anymore, pack, and info.
# 3 is counting "objects" itself
test_expect_failure '-> only packed objects' 'find -type f .git/objects/[0-9a-f][0-9a-f]'
test_done
-
test_expect_success 'value continued on next line' 'cmp result expect'
test_done
-
'test -f path0 && test -d path1 && test -f path1/file1'
test_done
-
-
test ! -h path1/file1 && test -f path1/file1'
test_done
-
git-config remote.local.fetch refs/heads/s:refs/remotes/local/s &&
(git-show-ref -q refs/remotes/local/master || git-fetch local) &&
git-branch --track my5 local/master &&
- ! test $(git-config branch.my5.remote) = local &&
- ! test $(git-config branch.my5.merge) = refs/heads/master'
+ ! test "$(git-config branch.my5.remote)" = local &&
+ ! test "$(git-config branch.my5.merge)" = refs/heads/master'
test_expect_success 'test tracking setup via config' \
'git-config branch.autosetupmerge true &&
(git-show-ref -q refs/remotes/local/master || git-fetch local) &&
git-branch --no-track my2 local/master &&
git-config branch.autosetupmerge false &&
- ! test $(git-config branch.my2.remote) = local &&
- ! test $(git-config branch.my2.merge) = refs/heads/master'
+ ! test "$(git-config branch.my2.remote)" = local &&
+ ! test "$(git-config branch.my2.merge)" = refs/heads/master'
test_expect_success 'test local tracking setup' \
'git branch --track my6 s &&
test $(git-config branch.my6.remote) = . &&
test $(git-config branch.my6.merge) = refs/heads/s'
+test_expect_success 'test tracking setup via --track but deeper' \
+ 'git-config remote.local.url . &&
+ git-config remote.local.fetch refs/heads/*:refs/remotes/local/* &&
+ (git-show-ref -q refs/remotes/local/o/o || git-fetch local) &&
+ git-branch --track my7 local/o/o &&
+ test "$(git-config branch.my7.remote)" = local &&
+ test "$(git-config branch.my7.merge)" = refs/heads/o/o'
+
# Keep this test last, as it changes the current branch
cat >expect <<EOF
0000000000000000000000000000000000000000 $HEAD $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> 1117150200 +0000 branch: Created from master
test_debug 'gitk --all & sleep 1'
test_done
-
'git diff expected check'
test_done
-
git diff ../t4100/t-apply-7.expect current'
test_done
-
'cmp apply.txt patch.txt'
test_done
-
- * arch/x86_64/include/klibc/archsetjmp.h
+ * arch/cris/include/klibc/archsetjmp.h
*/
-
+
#ifndef _KLIBC_ARCHSETJMP_H
#define _KLIBC_ARCHSETJMP_H
-
+
struct __jmp_buf {
- unsigned long __rbx;
- unsigned long __rsp;
+ unsigned long __sp;
+ unsigned long __srp;
};
-
+
typedef struct __jmp_buf jmp_buf[1];
-
+
-#endif /* _SETJMP_H */
+#endif /* _KLIBC_ARCHSETJMP_H */
diff --git a/klibc/arch/x86_64/include/klibc/archsetjmp.h b/include/arch/m32r/klibc/archsetjmp.h
- * arch/x86_64/include/klibc/archsetjmp.h
+ * arch/m32r/include/klibc/archsetjmp.h
*/
-
+
#ifndef _KLIBC_ARCHSETJMP_H
#define _KLIBC_ARCHSETJMP_H
-
+
struct __jmp_buf {
- unsigned long __rbx;
- unsigned long __rsp;
unsigned long __r15;
- unsigned long __rip;
};
-
+
typedef struct __jmp_buf jmp_buf[1];
-
+
-#endif /* _SETJMP_H */
+#endif /* _KLIBC_ARCHSETJMP_H */
EOF
+++ file1+ 2007-02-21 01:07:44.000000000 -0800
@@ -1 +1 @@
-A
-+B
++B
EOF
sed -e 's|file1|sub/&|' gpatch.file >gpatch-sub.file &&
'( git diff test~2 test~1; git diff test~1 test~0 )| git apply'
test_done
-
"test ! -f $rr/preimage && test ! -f $rr2/preimage"
test_done
-
-
'git-archive --format=zip' \
'git-archive --format=zip HEAD >d.zip'
+$UNZIP -v 2>/dev/null
+if [ $? -eq 127 ]; then
+ echo "Skipping ZIP tests, because unzip was not found"
+ test_done
+ exit
+fi
+
test_expect_success \
'extract ZIP archive' \
'(mkdir d && cd d && $UNZIP ../d.zip)'
'
test_expect_success \
- 'pushing rewound head should not barf but require --force' '
+ 'pushing rewound head should not barf but require --force' '
# should not fail but refuse to update.
if git-send-pack ./victim/.git/ master
then
--- /dev/null
+#!/bin/sh
+
+test_description='fetching and pushing, with or without wildcard'
+
+. ./test-lib.sh
+
+D=`pwd`
+
+mk_empty () {
+ rm -fr testrepo &&
+ mkdir testrepo &&
+ (
+ cd testrepo &&
+ git init
+ )
+}
+
+test_expect_success setup '
+
+ : >path1 &&
+ git add path1 &&
+ test_tick &&
+ git commit -a -m repo &&
+ the_commit=$(git show-ref -s --verify refs/heads/master)
+
+'
+
+test_expect_success 'fetch without wildcard' '
+ mk_empty &&
+ (
+ cd testrepo &&
+ git fetch .. refs/heads/master:refs/remotes/origin/master &&
+
+ r=$(git show-ref -s --verify refs/remotes/origin/master) &&
+ test "z$r" = "z$the_commit" &&
+
+ test 1 = $(git for-each-ref refs/remotes/origin | wc -l)
+ )
+'
+
+test_expect_success 'fetch with wildcard' '
+ mk_empty &&
+ (
+ cd testrepo &&
+ git config remote.up.url .. &&
+ git config remote.up.fetch "refs/heads/*:refs/remotes/origin/*" &&
+ git fetch up &&
+
+ r=$(git show-ref -s --verify refs/remotes/origin/master) &&
+ test "z$r" = "z$the_commit" &&
+
+ test 1 = $(git for-each-ref refs/remotes/origin | wc -l)
+ )
+'
+
+test_expect_success 'push without wildcard' '
+ mk_empty &&
+
+ git push testrepo refs/heads/master:refs/remotes/origin/master &&
+ (
+ cd testrepo &&
+ r=$(git show-ref -s --verify refs/remotes/origin/master) &&
+ test "z$r" = "z$the_commit" &&
+
+ test 1 = $(git for-each-ref refs/remotes/origin | wc -l)
+ )
+'
+
+test_expect_success 'push with wildcard' '
+ mk_empty &&
+
+ git push testrepo "refs/heads/*:refs/remotes/origin/*" &&
+ (
+ cd testrepo &&
+ r=$(git show-ref -s --verify refs/remotes/origin/master) &&
+ test "z$r" = "z$the_commit" &&
+
+ test 1 = $(git for-each-ref refs/remotes/origin | wc -l)
+ )
+'
+
+test_done
--- /dev/null
+#!/bin/sh
+
+test_description='test local clone'
+. ./test-lib.sh
+
+D=`pwd`
+
+test_expect_success 'preparing origin repository' '
+ : >file && git add . && git commit -m1 &&
+ git clone --bare . a.git &&
+ git clone --bare . x
+'
+
+test_expect_success 'local clone without .git suffix' '
+ cd "$D" &&
+ git clone -l -s a b &&
+ cd b &&
+ git fetch
+'
+
+test_expect_success 'local clone with .git suffix' '
+ cd "$D" &&
+ git clone -l -s a.git c &&
+ cd c &&
+ git fetch
+'
+
+test_expect_success 'local clone from x' '
+ cd "$D" &&
+ git clone -l -s x y &&
+ cd y &&
+ git fetch
+'
+
+test_expect_success 'local clone from x.git that does not exist' '
+ cd "$D" &&
+ if git clone -l -s x.git z
+ then
+ echo "Oops, should have failed"
+ false
+ else
+ echo happy
+ fi
+'
+
+test_done
cd "$base_dir"
test_done
-
_text=$1
_tree=$2
shift 2
- echo $_text | git-commit-tree $(tag $_tree) "$@"
+ echo $_text | git-commit-tree $(tag $_tree) "$@"
}
# Save the output of a command into the tag specified. Prepend
# a substitution script for the tag onto the front of sed.script
save_tag()
{
- _tag=$1
+ _tag=$1
[ -n "$_tag" ] || error "usage: save_tag tag commit-args ..."
shift 1
- "$@" >.git/refs/tags/$_tag
+ "$@" >.git/refs/tags/$_tag
echo "s/$(tag $_tag)/$_tag/g" > sed.script.tmp
cat sed.script >> sed.script.tmp
mv sed.script.tmp sed.script
}
-# Replace unhelpful sha1 hashes with their symbolic equivalents 
+# Replace unhelpful sha1 hashes with their symbolic equivalents
entag()
{
sed -f sed.script
commit_date()
{
_commit=$1
- git-cat-file commit $_commit | sed -n "s/^committer .*> \([0-9]*\) .*/\1/p"
+ git-cat-file commit $_commit | sed -n "s/^committer .*> \([0-9]*\) .*/\1/p"
}
on_committer_date()
# Execute the test described by the first argument, by eval'ing
# command line specified in the 2nd argument. Check the status code
-# is zero and that the output matches the stream read from
+# is zero and that the output matches the stream read from
# stdin.
test_output_expect_success()
-{
+{
_description=$1
_test=$2
[ $# -eq 2 ] || error "usage: test_output_expect_success description test <<EOF ... EOF"
_name=$(echo $_description | name_from_description)
cat > $_name.expected
- test_expect_success "$_description" "check_output $_name \"$_test\""
+ test_expect_success "$_description" "check_output $_name \"$_test\""
}
# Test if bisection size is close to half of list size within
# tolerance.
- #
+ #
_bisect_err=`expr $_list_size - $_bisection_size \* 2`
test "$_bisect_err" -lt 0 && _bisect_err=`expr 0 - $_bisect_err`
_bisect_err=`expr $_bisect_err / 2` ; # floor
test_sequence()
{
- _bisect_option=$1
-
+ _bisect_option=$1
+
test_bisection_diff 0 $_bisect_option l0 ^root
test_bisection_diff 0 $_bisect_option l1 ^root
test_bisection_diff 0 $_bisect_option l2 ^root
test_bisection_diff 0 $_bisect_option u3 ^U
test_bisection_diff 0 $_bisect_option u4 ^U
test_bisection_diff 0 $_bisect_option u5 ^U
-
+
#
# the following illustrates Linus' binary bug blatt idea.
#
7
8
9" > file &&
-git add file &&
+git add file &&
git commit -m "Initial commit" file &&
git branch A &&
git branch B &&
test $start = $abbrv'
test_done
-
fi
'
+test_expect_success 'checkout with ambiguous tag/branch names' '
+
+ git tag both side &&
+ git branch both master &&
+ git reset --hard &&
+ git checkout master &&
+
+ git checkout both &&
+ H=$(git rev-parse --verify HEAD) &&
+ M=$(git show-ref -s --verify refs/heads/master) &&
+ test "z$H" = "z$M" &&
+ name=$(git symbolic-ref HEAD 2>/dev/null) &&
+ test "z$name" = zrefs/heads/both
+
+'
+
+test_expect_success 'checkout with ambiguous tag/branch names' '
+
+ git reset --hard &&
+ git checkout master &&
+
+ git tag frotz side &&
+ git branch frotz master &&
+ git reset --hard &&
+ git checkout master &&
+
+ git checkout tags/frotz &&
+ H=$(git rev-parse --verify HEAD) &&
+ S=$(git show-ref -s --verify refs/heads/side) &&
+ test "z$H" = "z$S" &&
+ if name=$(git symbolic-ref HEAD 2>/dev/null)
+ then
+ echo "Bad -- should have detached"
+ false
+ else
+ : happy
+ fi
+
+'
+
test_done
Content-length: 4
cba
-
-
tail -n1 log | grep -q "^I HATE YOU$"'
+# misuse pserver authentication for testing of req_Root
+
+cat >request-relative <<EOF
+BEGIN AUTH REQUEST
+gitcvs.git
+anonymous
+
+END AUTH REQUEST
+EOF
+
+cat >request-conflict <<EOF
+BEGIN AUTH REQUEST
+$SERVERDIR
+anonymous
+
+END AUTH REQUEST
+Root $WORKDIR
+EOF
+
+test_expect_success 'req_Root failure (relative pathname)' \
+ 'if cat request-relative | git-cvsserver pserver >log 2>&1
+ then
+ echo unexpected success
+ false
+ else
+ true
+ fi &&
+ tail log | grep -q "^error 1 Root must be an absolute pathname$"'
+
+test_expect_success 'req_Root failure (conflicting roots)' \
+ 'cat request-conflict | git-cvsserver pserver >log 2>&1 &&
+ tail log | grep -q "^error 1 Conflicting roots specified$"'
+
+
#--------------
# CONFIG TESTS
#--------------
mv .git/hooks .git/hooks-disabled
cd "$owd"
}
-
+
test_done () {
trap - exit
case "$test_failure" in
echo >&2 Duplicate Signed-off-by lines.
exit 1
}
-
#. /usr/share/doc/git-core/contrib/hooks/post-receive-email
-
test -x "$GIT_DIR/hooks/pre-commit" &&
exec "$GIT_DIR/hooks/pre-commit" ${1+"$@"}
:
-
free(tree);
return retval;
}
-
continue;
if (S_ISDIR(entry.mode))
obj = &lookup_tree(entry.sha1)->object;
- else
+ else if (S_ISREG(entry.mode) || S_ISLNK(entry.mode))
obj = &lookup_blob(entry.sha1)->object;
+ else {
+ warning("in tree %s: entry %s has bad mode %.6o\n",
+ sha1_to_hex(item->object.sha1), entry.path, entry.mode);
+ obj = lookup_unknown_object(entry.sha1);
+ }
refs->ref[i++] = obj;
}
set_object_refs(&item->object, refs);
return safe_write(fd, data, sz);
}
-FILE *pack_pipe = NULL;
+static FILE *pack_pipe = NULL;
static void show_commit(struct commit *commit)
{
if (commit->object.flags & BOUNDARY)
break;
}
}
-
+
if (i != argc-1)
usage(upload_pack_usage);
dir = argv[i];
val = read_var(argv[1]);
if (!val)
usage(var_usage);
-
+
printf("%s\n", val);
-
+
return 0;
}
read_cache();
}
-void wt_status_print_initial(struct wt_status *s)
+static void wt_status_print_initial(struct wt_status *s)
{
int i;
char buf[PATH_MAX];
size = FIRST_FEW_BYTES;
return !!memchr(ptr, 0, size);
}
-
-
#endif /* #ifdef __cplusplus */
#endif /* #if !defined(XDIFF_H) */
-
xdemitconf_t const *xecfg);
#endif /* #if !defined(XDIFFI_H) */
-
}
-int xdl_emit_common(xdfenv_t *xe, xdchange_t *xscr, xdemitcb_t *ecb,
- xdemitconf_t const *xecfg) {
+static int xdl_emit_common(xdfenv_t *xe, xdchange_t *xscr, xdemitcb_t *ecb,
+ xdemitconf_t const *xecfg) {
xdfile_t *xdf = &xe->xdf1;
const char *rchg = xdf->rchg;
long ix;
return 0;
}
-
#endif /* #if !defined(XEMIT_H) */
-
#endif /* #if !defined(XINCLUDE_H) */
-
#endif /* #if !defined(XMACROS_H) */
-
#endif /* #if !defined(XPREPARE_H) */
-
#endif /* #if !defined(XTYPES_H) */
-
#endif /* #if !defined(XUTILS_H) */
-