pmGenerator doesn't filter anything? That's false. What do you mean? #2
Yeah, my comment was really unfair/inaccurate. T is indeed "any tautology". I think it stemmed from the feeling that the manual heuristics were automatable, but that's too vague to be useful. Here are some ideas that may or may not be useful:
I'm not sure what you mean by (1) and (2). Could you give examples?
Right, potentially useful and easy to add, so I just did that: [3f44a65] `--search -t`: look for formulas of given schemas (typed search). Now, when for example up to
results in
Prior to this, I only implemented what I needed myself, but I'd gladly consider feature requests. [Side note: This repository doesn't have discussions enabled, otherwise I would have posted there.]
As a reminder, I wrote:
I plan to release 1.2.1 around the end of this month, but I certainly cannot address your problems without understanding them.
My original idea was as follows: if we have, say,
So a new filter might be to remove conclusions with the same consequent but reordered or extra antecedents (first idea).

But, except for a very specific use case of manually searching through a list of pmGenerator proofs, I don't see a use case. Maybe if some system had A1, where 50% of statements had an extra antecedent, a search with only the non-extra-antecedent versions could go a few levels deeper; this was my second idea. Even in the manual use case, one can just look at the ending statement, so the total effect is to save half a minute of time per equivalent statement. I don't think I really looked at that many, maybe 20 at most; most of my time was spent trying to extract more usefulness out of non-redundant, potentially useful statements.

Idea 4: inference. This is probably a much better idea. com12 is an inference proof: given some hypothesis H, it makes a conclusion. Here's a random D-proof that may or may not be valid: DDDD111D1DD1111D1D11. Let's say it successfully proves some complicated theorem, and let's also say that if we replace part of it with a hypothesis and unify, DDDD111D1DH11D1D11, we get a much simpler inference statement akin to w2-q or com12. Statements like com12 are likely to be useful in multiple places. It also probably takes longer to prove the
than it takes to use the inference proof (inference form: the first antecedent of the closed form becomes a hypothesis). Examples where the inference form of a proof is shorter to prove than the full statement plus an application of D definitely exist; for example, id has a proof of 1 character in inference form: H.

My suggestion is to generate these inference forms for a specific proof, and see whether these inference forms can be applied to any step in the proof to make it shorter.

Edit: Future ideas/feature requests will go in the discussions page linked. Thanks for the reminder too.
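As a minimal sketch of what "replace part of a D-proof with a hypothesis and unify" involves, here is a toy condensed-detachment evaluator. Everything in it is an assumption for illustration, not pmGenerator's implementation: axiom 1 is taken to be CpCqp, formulas are variable strings or `('C', antecedent, consequent)` tuples, and an `H` step yields an opaque hypothesis constant, mirroring the DDDD111D1DH11D1D11 example.

```python
import itertools

# Toy condensed-detachment ("D") evaluator with a hypothesis step ("H").
# Assumptions: axiom '1' is CpCqp; 'H' is an opaque hypothesis constant.
AXIOM1 = ('C', 'p', ('C', 'q', 'p'))
HYP = ('HYP',)
_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return not is_var(t) and t[0] == 'C' and (occurs(v, t[1], s) or occurs(v, t[2], s))

def unify(a, b, s):
    """Most general unifier extending substitution s, or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return None if occurs(b, a, s) else {**s, b: a}
    if a[0] == 'C' and b[0] == 'C':
        s = unify(a[1], b[1], s)
        return None if s is None else unify(a[2], b[2], s)
    return None

def substitute(t, s):
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if t == HYP:
        return t
    return ('C', substitute(t[1], s), substitute(t[2], s))

def rename(t, names):
    """Fresh variable names, so the two premises of D are variable-disjoint."""
    if is_var(t):
        if t not in names:
            names[t] = f'v{next(_fresh)}'
        return names[t]
    if t == HYP:
        return t
    return ('C', rename(t[1], names), rename(t[2], names))

def evaluate(proof):
    """Evaluate a prefix D-proof string; return (conclusion, unparsed rest)."""
    head, rest = proof[0], proof[1:]
    if head == '1':
        return AXIOM1, rest
    if head == 'H':
        return HYP, rest
    if head == 'D':
        major, rest = evaluate(rest)
        minor, rest = evaluate(rest)
        major, minor = rename(major, {}), rename(minor, {})
        if is_var(major) or major[0] != 'C':
            raise ValueError('major premise is not an implication')
        s = unify(major[1], minor, {})  # antecedent of major vs. minor premise
        if s is None:
            raise ValueError('invalid D-proof: unification failed')
        return substitute(major[2], s), rest
    raise ValueError(f'unknown proof step {head!r}')

def canonical(t, names):
    """Rename variables in first-occurrence order, for comparisons."""
    if t == HYP:
        return t
    if is_var(t):
        if t not in names:
            names[t] = f'v{len(names)}'
        return names[t]
    return ('C', canonical(t[1], names), canonical(t[2], names))
```

Here `evaluate('H')` returns the hypothesis itself, matching the remark that id has the 1-character inference-form proof H, and `evaluate('D11')[0]` is (up to renaming) CpCqCrq.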
Thanks for your suggestions. So I understand them as basically:
Details are still unclear about (2) — would you like to be able to specify a set or schema of formulas to try, or rather use some exhaustive generation or proof files? I'm still not sure if those are really what you meant. You need to be more precise. But:
That's pretty much what I meant. In the case of (2), I imagined the case where a proof file is used. (Very dense text follows:) Given a proof
This seems to be O(n^3) (
Failed manual example: I ran
Technically, I did find that the output of the above command
I elaborated on (2) in the first comment of xamidi/pmGenerator/discussions/3.
(Remaining discussion moved)
Possibly I am missing a problem about pmGenerator stated in ~w2.mm.
One of your comments says
But assuming `T` is a unique formula (i.e. `T` = `T`), that cannot happen (such a duplicate is necessarily filtered out upon proof collection, since conclusions are used as unique keys, and all isomorphic formulas are represented by equal strings). If you interpret `T` as "any tautology", then sure, it shall not be filtered, since for syntactic consequence we need to consider all kinds of tautological structures. Am I missing something, or is it just that?

My tool doesn't yet have any features that look at semantics (which is irrelevant to formal proofs, but might be useful in the future in case I add features to look for new axioms rather than formal proofs).
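The "isomorphic formulas are represented by equal strings" point can be sketched as follows (a hypothetical illustration, not pmGenerator's implementation): renaming variables in first-occurrence order yields a canonical string, and using that string as a dictionary key deduplicates conclusions that differ only by variable naming.

```python
def canonical(formula):
    """Canonicalize a Polish-notation formula (operators 'C'/'N', lowercase
    variables) by renaming variables in first-occurrence order."""
    names = {}
    out = []
    for c in formula:
        if c in 'CN':
            out.append(c)
        else:
            if c not in names:
                names[c] = 'pqrstuvwxyz'[len(names)]  # assumes few distinct variables
            out.append(names[c])
    return ''.join(out)

# Hypothetical proof collection: conclusions act as unique keys, so proofs
# of CqCpq or CaCba are duplicates of an already-collected proof of CpCqp.
database = {}
for proof, conclusion in [('proof A', 'CpCqp'), ('proof B', 'CqCpq'), ('proof C', 'CaCba')]:
    database.setdefault(canonical(conclusion), proof)
```

All three conclusions collapse to the single key `'CpCqp'`, so only the first proof is kept.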
I also don't think it's at all fair to say that "pmGenerator doesn't filter anything". After all, filtering out redundant schemas (via `-m`, or `-g` without `-u`) is a major feature upon generating proof databases. And, of course, upon proof collection, all duplicates (i.e. equal conclusions for different proofs) are also filtered out, and those proofs that do not meet extraction conditions — such as set by `-g -l <limit>` or `-g -k <limit>` — are filtered out as well. And then there is `--transform -z`, which upon morphing a proof summary filters out subproofs that are unnecessary.

Let me show you how much of an impact filtering in pmGenerator can make. Exemplary output (which was recently produced):
This is from an ongoing `pmGenerator -c -n -s CpCCNqCCNrsCptCCtqCrq -e 4 -g -1 -l 19 -v`✻ run. In the example, 1468180428 (1.4… billion) D-proof candidates were filtered to those 728415 which are valid, have new conclusions, and do not prove theorems that exceed 19 symbols (in a mere 6.246305 minutes), which again were filtered to those 1246 D-proofs whose conclusions do not already have schemas in the database representing them (in a mere 13.958495 minutes).
That's filtering out ≈99.999915133% of garbage in order to create a proof file small enough so that generation can continue as quickly and as long as possible.
It does so for thousands of files, and even a single time not filtering like that would basically bring progression to a halt.
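As a quick arithmetic check of the quoted ratio, using the numbers from the run described above:

```python
# Numbers quoted from the generation run above.
candidates = 1_468_180_428  # enumerated D-proof candidates
kept = 1_246                # D-proofs surviving both filtering stages

filtered = (1 - kept / candidates) * 100
print(f'{filtered:.9f}%')  # ≈ 99.999915133%
```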
✻ Probably `-g -v` will be `-g -b` soon, and the current `-g -b` will be made the default, and the current default will be `-g -f` for "full parsing" (which shouldn't normally be used since it tends to be slow).