Drone-migrate's repoActivateQuery is invalid

The activate-repos command activates the result of repoActivateQuery, but that result includes repositories that were originally inactive.
activate-repos should activate only repositories that were active in the source database.

I will send a pull request to fix this bug.
Thank you.

Are you sure? If I remember correctly, Drone does not migrate inactive repositories. It only copies active repositories from the source to the target database [1] (by filtering on repo_user_id; inactive repositories should have a zero value). Therefore all repositories returned by the select statement you referenced are active.

[1] https://github.com/drone/drone-migrate/blob/v1.1.2/migrate/repos.go#L23

Thank you for your reply.

That’s strange.
I upgraded Drone yesterday, and the new database has many repositories whose repo_user_id is 0.
I will investigate the reason.

mysql> select count(*) from repos where repo_user_id = 0;
+----------+
| count(*) |
+----------+
|     2389 |
+----------+
1 row in set (0.01 sec)

// log output of the migrate-repos command
time="2019-07-22T14:04:18Z" level=info msg="migrating 320 repositories"

OK, I understand now.
After running update-repos,
I started the Drone server and agents and clicked the “sync” button in the Web UI
before running activate-repos.
So there were many inactive repositories by the time I ran activate-repos.

Of course this is my mistake, but I think it would be safer to add the condition “repo_active = 1”.


Sure; however, I do not think using an integer as a boolean value (e.g. repo_active = 1) is compatible with all database providers. It may be easier to handle this in the Go code:

	for _, repoV0 := range reposV0 {
+		if !repoV0.Active {
+			continue
+		}

I agree.
Filtering in SQL might be better in terms of performance,
but it is difficult to maintain compatibility across all database providers, and it is easier to filter in Go.
I have updated the pull request to filter in Go instead of in SQL.
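For reference, the Go-side filter discussed above could look like the following minimal sketch. The RepoV0 struct and the activeRepos helper here are simplified assumptions for illustration, not the actual drone-migrate types:

```go
package main

import "fmt"

// RepoV0 is a simplified stand-in for the legacy repository record;
// the real drone-migrate type carries many more fields.
type RepoV0 struct {
	Name   string
	Active bool
}

// activeRepos returns only the repositories that were active in the
// source database, mirroring the filter discussed in this thread.
func activeRepos(repos []RepoV0) []RepoV0 {
	var out []RepoV0
	for _, r := range repos {
		if !r.Active {
			continue // skip repositories that were never activated
		}
		out = append(out, r)
	}
	return out
}

func main() {
	repos := []RepoV0{
		{Name: "octocat/hello", Active: true},
		{Name: "octocat/stale", Active: false},
	}
	for _, r := range activeRepos(repos) {
		fmt.Println(r.Name)
	}
}
```

Keeping the filter in Go sidesteps the question of how each database driver represents booleans, at the cost of fetching rows that are then discarded.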


The above pull request has been merged, so this issue has been resolved.
Thank you.