Discussion:
Restoring bitmaps after failed/cancelled migration
Vladimir Sementsov-Ogievskiy
2018-04-18 14:00:35 UTC
Hi all.

We now have the following problem:

If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate. This works fine: persistence itself
is migrated through the migration channel. But on the source, the
bitmaps become non-persistent, so if for some reason we want to continue
using the source VM, we'll lose the bitmaps on stop/start.

It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.

1. What are possible cases? I think about the following:

a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)


2. Is it safe at all to save bitmaps after inactivate, even without
persistence?

An inactive disk implies that it may be changed by somebody else,
doesn't it? Is it possible that the target changes the disk and we then
return control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
--
Best regards,
Vladimir
Dr. David Alan Gilbert
2018-04-23 18:41:54 UTC
Post by Vladimir Sementsov-Ogievskiy
Hi all.
If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate. This works fine: persistence itself
is migrated through the migration channel. But on the source, the
bitmaps become non-persistent, so if for some reason we want to continue
using the source VM, we'll lose the bitmaps on stop/start.
It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.
a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)
2. Is it safe at all to save bitmaps after inactivate, even without
persistence?
An inactive disk implies that it may be changed by somebody else,
doesn't it? Is it possible that the target changes the disk and we then
return control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
I don't know the full answers; but I do know it's valid to have case
(b) - i.e. from QEMU's point of view the migration succeeds, but the
destination was run with -S and, for some external failure reason, the
management layer decides to kill the destination and restart the source.
And you're right that at that point there's no one holding the lock,
so the source can't be sure that no one snuck in and fiddled with the
disk before restarting.

Dave
Post by Vladimir Sementsov-Ogievskiy
--
Best regards,
Vladimir
--
Dr. David Alan Gilbert / ***@redhat.com / Manchester, UK
John Snow
2018-05-11 21:23:39 UTC
Post by Vladimir Sementsov-Ogievskiy
Hi all.
If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate. This works fine: persistence itself
is migrated through the migration channel. But on the source, the
bitmaps become non-persistent, so if for some reason we want to continue
using the source VM, we'll lose the bitmaps on stop/start.
Sorry for not following along more carefully, which kind of migration
are we talking about in this case?
Post by Vladimir Sementsov-Ogievskiy
It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.
a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)
2. Is it safe at all to save bitmaps after inactivate, even without
persistence?
An inactive disk implies that it may be changed by somebody else,
doesn't it? Is it possible that the target changes the disk and we then
return control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Vladimir Sementsov-Ogievskiy
2018-05-14 09:55:25 UTC
Post by John Snow
Post by Vladimir Sementsov-Ogievskiy
Hi all.
If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate. This works fine: persistence itself
is migrated through the migration channel. But on the source, the
bitmaps become non-persistent, so if for some reason we want to continue
using the source VM, we'll lose the bitmaps on stop/start.
Sorry for not following along more carefully, which kind of migration
are we talking about in this case?
Any migration with the dirty-bitmaps capability enabled.
Post by John Snow
Post by Vladimir Sementsov-Ogievskiy
It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.
a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)
2. Is it safe at all to save bitmaps after inactivate, even without
persistence?
An inactive disk implies that it may be changed by somebody else,
doesn't it? Is it possible that the target changes the disk and we then
return control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
--
Best regards,
Vladimir
Fam Zheng
2018-05-14 06:41:39 UTC
Post by Vladimir Sementsov-Ogievskiy
Hi all.
If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate.
Why do we prevent the source from storing persistent bitmaps by clearing
the flag, instead of making the bitmap code more BDRV_O_INACTIVE-aware,
so that it is _not_ stored when/after inactivation?
Post by Vladimir Sementsov-Ogievskiy
This works fine: persistence itself is migrated through the migration
channel. But on the source, the bitmaps become non-persistent, so if
for some reason we want to continue using the source VM, we'll lose
the bitmaps on stop/start.
It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.
a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
Is this "cont" on the source even though dst is already up? How will this work?
Isn't it expected that dst is using the image?
Post by Vladimir Sementsov-Ogievskiy
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)
2. Is it safe at all to save bitmaps after inactivate, even without
persistence?
This is not safe. No I/O should be done to the image after inactivation.
Post by Vladimir Sementsov-Ogievskiy
An inactive disk implies that it may be changed by somebody else, doesn't it?
Yes.
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Post by Vladimir Sementsov-Ogievskiy
--
Best regards,
Vladimir
Vladimir Sementsov-Ogievskiy
2018-05-14 10:09:24 UTC
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
Hi all.
If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate.
Why do we prevent the source from storing persistent bitmaps by clearing
the flag, instead of making the bitmap code more BDRV_O_INACTIVE-aware,
so that it is _not_ stored when/after inactivation?
Bitmaps are stored exactly on inactivation. So we need some flag saying
that, on the following inactivation, we don't want to save them - and
this flag is the .persistent flag.

Of course, we can't save them if BDRV_O_INACTIVE is already set.
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
This works fine: persistence itself is migrated through the migration
channel. But on the source, the bitmaps become non-persistent, so if
for some reason we want to continue using the source VM, we'll lose
the bitmaps on stop/start.
It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.
a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
Is this "cont" on the source even though dst is already up? How will this work?
Isn't it expected that dst is using the image?
Dr. David described this case; see his answer.
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)
2. Is it safe at all to save bitmaps after inactivate, even without
persistence?
This is not safe. No I/O should be done to the image after inactivation.
Post by Vladimir Sementsov-Ogievskiy
An inactive disk implies that it may be changed by somebody else, doesn't it?
Yes.
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Reload from where? We didn't store them.

So, you agree that dropping all bitmaps after inactivation is a good
idea. The second question: is it possible not to drop them? Is there a
way to check that the disk was not changed between the
inactivate-invalidate pair? I have an idea:
we can create a small bitmap which will not migrate through the
migration channel, but will remain persistent. It will be very small
(large granularity, so as not to increase the migration downtime), so
after invalidate we can check this bitmap. If it is empty, we are happy:
we can "activate" all our bitmaps and make them persistent again. If
not, we have two options:

1. drop all bitmaps
2. merge the small bitmap into all our bitmaps
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
--
Best regards,
Vladimir
--
Best regards,
Vladimir
Vladimir Sementsov-Ogievskiy
2018-05-14 10:23:21 UTC
Post by Vladimir Sementsov-Ogievskiy
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
Hi all.
If the dirty-bitmaps migration capability is enabled, the persistence
flag is dropped for all migrated bitmaps, to prevent them from being
stored to the image on inactivate.
Why do we prevent the source from storing persistent bitmaps by clearing
the flag, instead of making the bitmap code more BDRV_O_INACTIVE-aware,
so that it is _not_ stored when/after inactivation?
Bitmaps are stored exactly on inactivation. So we need some flag saying
that, on the following inactivation, we don't want to save them - and
this flag is the .persistent flag.
Of course, we can't save them if BDRV_O_INACTIVE is already set.
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
This works fine: persistence itself is migrated through the migration
channel. But on the source, the bitmaps become non-persistent, so if
for some reason we want to continue using the source VM, we'll lose
the bitmaps on stop/start.
It's simple to fix: just make the bitmaps persistent again on invalidate
[1]. But I have some questions.
a. migration cancelled or failed, then invalidate
b. migration succeeds, then qmp cont => invalidate
Is this "cont" on the source even though dst is already up? How will this work?
Isn't it expected that dst is using the image?
Dr. David described this case; see his answer.
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
c. migration succeeds, then stop/start (there was no invalidate, so [1]
will not work)
2. Is it safe at all to save bitmaps after inactivate, even without
persistence?
This is not safe. No I/O should be done to the image after inactivation.
Post by Vladimir Sementsov-Ogievskiy
An inactive disk implies that it may be changed by somebody else, doesn't it?
Yes.
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Reload from where? We didn't store them.
So, you agree that dropping all bitmaps after inactivation is a good
idea. The second question: is it possible not to drop them? Is there
a way to check that the disk was not changed between the
inactivate-invalidate pair? I have an idea:
we can create a small bitmap which will not migrate through the
migration channel, but will remain persistent. It will be very small
(large granularity, so as not to increase the migration downtime), so
after invalidate we can check this bitmap. If it is empty, we are happy:
we can "activate" all our bitmaps and make them persistent again. If
not, we have two options:
1. drop all bitmaps
2. merge the small bitmap into all our bitmaps
However, we must not start the source if the disk was changed, as the
memory and device states will no longer correspond to the disk. So such
a small bitmap may be used to check whether we can start the source or not.
Post by Vladimir Sementsov-Ogievskiy
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
--
Best regards,
Vladimir
--
Best regards,
Vladimir
Fam Zheng
2018-05-15 02:03:39 UTC
Post by Vladimir Sementsov-Ogievskiy
Post by Vladimir Sementsov-Ogievskiy
So, you agree that dropping all bitmaps after inactivation is a good
idea. The second question: is it possible not to drop them? Is there a
way to check that the disk was not changed between the
inactivate-invalidate pair? I have an idea:
we can create a small bitmap which will not migrate through the
migration channel, but will remain persistent. It will be very small
(large granularity, so as not to increase the migration downtime), so
after invalidate we can check this bitmap. If it is empty, we are happy:
we can "activate" all our bitmaps and make them persistent again. If
not, we have two options:
1. drop all bitmaps
2. merge the small bitmap into all our bitmaps
However, we must not start the source if the disk was changed, as the
memory and device states will no longer correspond to the disk. So such
a small bitmap may be used to check whether we can start the source or not.
Or it could be a generation number/uuid that is updated when the image is
changed (upon the first write after each open), similar to VMDK's CID and
VHDX GUIDs. We can compare the on-disk value with the value known to the
source QEMU; if they match, the image data was not touched.

Fam
Kevin Wolf
2018-05-16 12:47:55 UTC
Post by Vladimir Sementsov-Ogievskiy
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Reload from where? We didn't store them.
Maybe this just means that it turns out that not storing them was a bad
idea?

What was the motivation for not storing the bitmap? The additional
downtime? Is it really that bad, though? Bitmaps should be fairly small
for the usual image sizes and writing them out should be quick.

Kevin
Vladimir Sementsov-Ogievskiy
2018-05-16 15:10:26 UTC
Post by Kevin Wolf
Post by Vladimir Sementsov-Ogievskiy
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Reload from where? We didn't store them.
Maybe this just means that it turns out that not storing them was a bad
idea?
What was the motivation for not storing the bitmap? The additional
downtime? Is it really that bad, though? Bitmaps should be fairly small
for the usual image sizes and writing them out should be quick.
Kevin
What are the usual ones? A bitmap with the standard 64k granularity for
a 16 TB disk is ~30 MB. If we have several such bitmaps, that may mean
significant downtime.
--
Best regards,
Vladimir
Kevin Wolf
2018-05-16 15:32:03 UTC
Post by Kevin Wolf
Post by Vladimir Sementsov-Ogievskiy
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Reload from where? We didn't store them.
Maybe this just means that it turns out that not storing them was a bad
idea?
What was the motivation for not storing the bitmap? The additional
downtime? Is it really that bad, though? Bitmaps should be fairly small
for the usual image sizes and writing them out should be quick.
What are the usual ones? A bitmap with the standard 64k granularity for
a 16 TB disk is ~30 MB. If we have several such bitmaps, that may mean
significant downtime.
We could have an in-memory bitmap that tracks which parts of the
persistent bitmap are dirty so that you don't have to write out the
whole 30 MB during the migration downtime, but can already flush most of
the persistent bitmap before the VM is stopped.

Kevin
Vladimir Sementsov-Ogievskiy
2018-05-16 15:52:28 UTC
Post by Kevin Wolf
Post by Kevin Wolf
Post by Vladimir Sementsov-Ogievskiy
Post by Fam Zheng
Post by Vladimir Sementsov-Ogievskiy
Is it possible that the target changes the disk and we then return
control to the source? In that case the bitmaps would be invalid.
So shouldn't we drop all the bitmaps on inactivate?
Yes, dropping all live bitmaps upon inactivate sounds reasonable. If the dst
fails to start and we want to resume the VM at the src, we could (optionally?)
reload the persistent bitmaps, I guess.
Reload from where? We didn't store them.
Maybe this just means that it turns out that not storing them was a bad
idea?
What was the motivation for not storing the bitmap? The additional
downtime? Is it really that bad, though? Bitmaps should be fairly small
for the usual image sizes and writing them out should be quick.
What are the usual ones? A bitmap with the standard 64k granularity for
a 16 TB disk is ~30 MB. If we have several such bitmaps, that may mean
significant downtime.
We could have an in-memory bitmap that tracks which parts of the
persistent bitmap are dirty so that you don't have to write out the
whole 30 MB during the migration downtime, but can already flush most of
the persistent bitmap before the VM is stopped.
Kevin
Yes, it looks possible. But how do we control that downtime? Introduce
a migration state with a specific _pending function? However, it may
not be necessary.

Anyway, I think we don't need to store it.

If we decide to resume the source, the bitmap is already in memory, so
why reload it? If someone has already killed the source (which was in
paused mode), it is inconsistent anyway, and the loss of a dirty bitmap
is not the worst possible problem.

So, finally, it looks safe enough just to make the bitmaps on the source
persistent again (or better, introduce another way to skip storing,
maybe with an additional flag, so everybody will be happy, instead of
dropping the persistent flag). And after the source resumes, we have one
of the following situations:

1. the disk was not changed during migration, so all is ok and we have
the bitmaps
2. the disk was changed, and the bitmaps are inconsistent. But not only
the bitmaps: the whole VM state is inconsistent with its disk. This case
is a bug in the management layer and should never happen. And possibly
we need some separate way to catch such cases.
--
Best regards,
Vladimir