Support old binary formats. #1899
Maybe it is not the best way: migration takes too long. It is possible to use sometimes, but only in very rare cases. So, for each binary format change we bump the configuration number and use the old cluster and index versions if needed. We can also provide a migration tool that can be run programmatically to migrate db versions without opening them.
Test scripts should be implemented to verify that a new version of Orient can read/write and upgrade old database formats. These test scripts should be incorporated into the Orient build process to ensure that successive versions are backwards compatible.
@jamieb22 +1
This will begin on January 24th.
Luca, please may I have a status update on this project? So far, OrientDB is causing instability to our platform. I am hoping a database upgrade will resolve the matter. Regards, Jamie
Hi Jamie,
Please download the file orientdb.zip from the sftp site.
Thanks @jamieb22! We expect to finish compatibility with 1.5.1 in the middle of next week.
Great... I am hoping this effort will offer a permanent upgrade path.
Middle of next week has arrived. Any update?
We've completed this issue at about 90%; we need today to finish the tests, tomorrow at the latest.
Hi,
Thank you for the update and for your efforts. Please rather take your time.
Regarding the comment: isn't the latest version of OrientDB 1.7? Will this upgrade script take ...
This issue will be deployed in v1.7-rc2, which we will release as soon as this issue is fixed. We'd like to ask you to run some tests against multiple "old" databases with 1.7-rc2-SNAPSHOT as soon as this issue is closed, to be sure we release a bug-free version with 1.7-rc2. WDYT?
@jamieb22 Could you send me your database once again?
The sftp site is still active. You can download it.
I was asked for a password and cannot log in.
The credentials are as follows: sftp mailarchiva.com
I have very good news.
I got the result "databases match". Could you, as @lvca proposed, build the database from the latest sources and run the same comparison procedure too? Please note that rc2 is not released yet (we have several critical issues to fix), so we do not recommend using it in production (yet).
Andrey, that's great news. Thank you. May I ask whether your upgrade ...
Andrey... can this function also be used as a backup procedure? It would be nice to be able to use the JSON export function to back up the DB on occasion. Is the JSON export thread-safe? Are you creating public API functions for upgrading, exporting, etc.?
Don't forget to build the upgrade tests into the Orient build scripts. If we do not do this, it is guaranteed that the upgrade will break at some point in time.
Hi,
As for the binary compatibility tests, we were planning to include them from the beginning; otherwise it would be unprofessional :-). Here is how these tests will run: we will run them on CI every night and, because the setup is a bit tricky, they will be written in Gradle.
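The nightly check described above could be sketched roughly like this. This is only a sketch: it assumes the ODatabaseCompare tool from the OrientDB 1.x tooling with a two-URL constructor, and the CI paths are placeholders.

```java
import com.orientechnologies.orient.core.command.OCommandOutputListener;
import com.orientechnologies.orient.core.db.tool.ODatabaseCompare;

public class BinaryCompatibilityCheck {
  public static void main(String[] args) throws Exception {
    OCommandOutputListener listener = new OCommandOutputListener() {
      @Override
      public void onMessage(String iText) {
        System.out.println(iText);
      }
    };
    // Compare a database created by an old release with the same data
    // re-imported under the current release; paths are illustrative.
    ODatabaseCompare compare = new ODatabaseCompare(
        "plocal:/ci/databases/old-release.db",
        "plocal:/ci/databases/current-release.db",
        listener);
    if (!compare.compare())
      throw new IllegalStateException("Databases do not match: binary compatibility broken");
  }
}
```

If the comparison fails, the CI build fails, which makes a broken upgrade path visible immediately instead of at a user's site.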
Is there a way to determine the current version of Orient in code? Will the latest version of Orient be able to read older formats without an upgrade?
Hi,
That is how it works. During development the database binary format can change several times, so reading the release version is not useful, but you can read the binary format version: it can be taken from com.orientechnologies.orient.core.db.record.OCurrentStorageComponentsFactory#binaryFormatVersion and com.orientechnologies.orient.core.storage.OStorage#getComponentsFactory.
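For illustration, reading the binary format version might look like this. This is a sketch assuming the 1.7.x API named above; the database path and credentials are placeholders.

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class FormatVersionCheck {
  public static void main(String[] args) {
    ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/path/to/db");
    db.open("admin", "admin");
    try {
      // binaryFormatVersion is exposed through the storage components factory.
      int version = db.getStorage().getComponentsFactory().binaryFormatVersion;
      System.out.println("Binary format version: " + version);
    } finally {
      db.close();
    }
  }
}
```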
@valenpo yes, you can use a single name with .gz at the end.
@Laa Andrey, I followed exactly the same code for conversion, and I can't confirm that the conversion is working.
@valenpo Yes, this is too small :-) I will try to convert it too. |
I just ran the same conversion program you provided.

Mar 11, 2014 2:54:46 PM com.orientechnologies.common.log.OLogManager log Started export of database 'archiva.db' to /Library/Application Support/MailArchiva/ROOT/database/archiva.db.json.gz... Exporting database info... Exporting clusters... Exporting schema... Exporting records...
Done. Exported 893834 of total 893834 records Exporting index info...
OK (9 indexes) Exporting manual indexes content...
OK (7 manual indexes) Database export completed in 404346ms Started import of database 'plocal:/Library/Application Support/MailArchiva/ROOT/database/new.archiva.db' from /Library/Application Support/MailArchiva/ROOT/database/archiva.db.json.gz... Non merge mode (-merge=false): removing all default non security classes
Removed 5 classes. Importing database info... Importing clusters...
Rebuilding indexes of truncated clusters ... Cluster content was truncated and index ORole.name will be rebuilt Index ORole.name was successfully rebuilt. Cluster content was truncated and index OUser.name will be rebuilt Index OUser.name was successfully rebuilt. Done 2 indexes were rebuilt. Done. Imported 13 clusters Importing database schema... Importing records...
10000 documents were processed... [progress lines elided] 250000 documents were processed...
10000 documents were processed... [progress lines elided] 540000 documents were processed...
10000 documents were processed... [progress lines elided] 80000 documents were processed... Total links updated: 891338 Done. Imported 891339 records Importing indexes ...
Done. Created 9 indexes. Importing manual index entries...
:-) you did not close it again )) But that is not the issue now; I am looking into what is wrong with the manual indexes import.
Nope, this DB (1.5.1) was closed correctly, 100%, via graph.shutdown().
Also FYI, I see the new converted DB is 2.4 times larger than the old one. But I would expect it to be smaller, since Snappy is used. Regards
@valenpo the correct way is to shut down the JVM before copying the database (it does not matter whether embedded or server); a call to the shutdown() method leads to a connection close, not a storage close. You have an invalid link in your manual index, which is why you get the NPE. Your database is bigger because it contains not only the data but the operations log too. Also, I have changed the code a bit to provide better logging support.

final ODatabaseDocumentTx database = new ODatabaseDocumentTx("local:/home/andrey/Development/orientdb/archiva.db");
database.open("admin", "admin");

// Export the old ("local") database to compressed JSON (a .gz suffix is added).
final ODatabaseExport databaseExport = new ODatabaseExport(database, "/home/andrey/Development/orientdb/archiva.db.json",
    new OCommandOutputListener() {
      @Override
      public void onMessage(String iText) {
        System.out.println(iText);
      }
    });
databaseExport.exportDatabase();
databaseExport.close();
database.close();

// Create a new "plocal" database and import the JSON dump into it.
final ODatabaseDocumentTx newDatabase = new ODatabaseDocumentTx("plocal:/home/andrey/Development/orientdb/new.archiva.db");
newDatabase.create();
final ODatabaseImport databaseImport = new ODatabaseImport(newDatabase, "/home/andrey/Development/orientdb/archiva.db.json.gz",
    new OCommandOutputListener() {
      @Override
      public void onMessage(String iText) {
        System.out.println(iText);
      }
    });
databaseImport.importDatabase();
databaseImport.close();

// Migrate the graph structures of the newly imported database.
OGraphMigration graphMigration = new OGraphMigration(newDatabase, new OCommandOutputListener() {
  @Override
  public void onMessage(String iText) {
    System.out.println(iText);
  }
});
graphMigration.execute();
newDatabase.close();
@Laa Andrey, morning. FYI, I ran some large tests overnight, archiving 100 huge PST files with the full structure of emails and folders; I can see about 100 PST root folders in the tree view (this is stored in the DB). After a server restart with a correct close (graph.shutdown()) I can see only ~32 root folders, so 2/3 are lost. Those are E connections from the ROOT vertex to the USER vertex. Also, the same DB with local is 4 times smaller (1GB local vs 4.4GB plocal); I assume plocal uses some compression? I can provide the generated DB and access to the testing platform. Running the test can take a lot of time, as a lot of business logic is involved, and I think it is better to see it in a real environment.
@valenpo I mentioned several times before, here #1899 (comment) and here #1899 (comment), that graph.shutdown() is not the correct way to close the storage. Could you confirm that you close the storage by shutting down the JVM, not only by calling graph.shutdown()?
@Laa yes, the JVM is closed via the standard Tomcat scripts.
@valenpo when you open the db, do you see the message? Do you see files with a .wal extension in the closed database?
Andrey, the logs are below. The libraries are used in a Tomcat instance, so Tomcat starts up normally and shuts down normally too. I shut down Tomcat when the server performs no actions … Right now I see no such record. Should I upgrade to the nightly build and test again? 2014-03-12 15:34:36.788 WARN - Cannot find default script language for javascript
Andrey, our server always shuts down cleanly. We have spent many hours making sure of this. Jamie
@valenpo @jamieb22 This means that the storage was not closed correctly, and I have seen this message in all instances of the db which you sent to us. So the first step we need to take is to put the following code in the ServletContextListener#contextDestroyed method (an implementation of the interface, of course): Orient.instance().shutdown(); Then check whether we see the given warning again, and check data presence.
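A minimal sketch of such a listener, assuming the standard Servlet API; the class name is illustrative, and it would be registered in web.xml (or with @WebListener on Servlet 3.0+).

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import com.orientechnologies.orient.core.Orient;

public class OrientShutdownListener implements ServletContextListener {
  @Override
  public void contextInitialized(ServletContextEvent event) {
    // Nothing to do on startup; Orient initializes lazily.
  }

  @Override
  public void contextDestroyed(ServletContextEvent event) {
    // Close all storages cleanly before the JVM goes down,
    // otherwise WAL recovery runs on the next open.
    Orient.instance().shutdown();
  }
}
```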
@Override done, so I hope right now it will shut down correctly.
@Laa Andrey, thanks for the clue. After adding that Orient.instance().shutdown() hook, it looks like the DB started fine after a server restart. I see all records. Thanks. Regards
Ok.
Ok, I think we could change it to an automatic index. We have used manual indexes since version 0.9 of Orient because they provide more control over query requests. Also, we don't need to index all fields of V and E. What docs can you suggest reading about converting an index to automatic? Regards,
It is quite simple.
Best regards, Orient Technologies
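For illustration, creating the automatic index being discussed could look like this, sketched with OrientDB SQL through the document API. The class and property names are placeholders; an automatic index is bound to a schema property and is kept up to date by the engine on every insert/update/delete.

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.sql.OCommandSQL;

public class AutoIndexExample {
  public static void main(String[] args) {
    ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/path/to/db");
    db.open("admin", "admin");
    try {
      // Create an automatic NOTUNIQUE index on an existing property;
      // unlike a manual index, no application code has to maintain it.
      db.command(new OCommandSQL("CREATE INDEX Person.name NOTUNIQUE")).execute();
    } finally {
      db.close();
    }
  }
}
```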
Executing such code throws exceptions. The database is very small, so I can give it for testing: http://yadi.sk/d/gaOcvd0EKK4en (19 KB)
It means that nothing was imported.
Mar 25, 2014 5:23:37 PM com.orientechnologies.common.log.OLogManager log
Done. Exported 814 of total 814 records export db:
Database export completed in 916ms
importing db: Database import completed in 130 ms
So nothing is imported. I suggest you pay attention to the log output; it would allow you to identify the db problem.
Andrey, such logging gives nothing. I tested the import code; here you can see a FileInputStream is used, and a correct input stream is provided. I just followed your recommendation for export/import. ODatabaseImport databaseImport; where the files for export and import are the same, checked via the debugger. And the file exists, otherwise it would throw an exception (FileNotFoundException), etc.
I checked it with a String, and it works. So maybe there is some problem with the InputStream.
Good.
To support old binary formats we can use the following approach:
Subtasks: