xen/gntdev: Fix circular locking dependency

apply_to_page_range() will acquire the PTE lock while priv->lock is held,
and mn_invl_range_start() tries to acquire priv->lock with the PTE lock
already held.  Fix this by not holding priv->lock during the entire map
operation.
This is safe because map->vma is set nonzero while the lock is held,
which will cause subsequent maps to fail and will cause the unmap
ioctl (and other users of gntdev_del_map) to return -EBUSY until the
area is unmapped. It is similarly impossible for gntdev_vma_close to
be called while the vma is still being created.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
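
As an illustration of the inversion described in the message, here is a
minimal userspace sketch.  All names are simplified stand-ins: pthread
mutexes play the roles of priv->lock and the page-table (PTE) lock, and
walk_page_range() plays the role of apply_to_page_range().  This is not
the gntdev code itself.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t priv_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for priv->lock */
static pthread_mutex_t pte_lock  = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the PTE lock */

/* Stand-in for apply_to_page_range(): takes the PTE lock internally. */
void walk_page_range(void)
{
	pthread_mutex_lock(&pte_lock);
	/* ... fill in page-table entries ... */
	pthread_mutex_unlock(&pte_lock);
}

/* Before the patch: priv_lock is held across the walk (A, then B). */
void *mmap_path_broken(void *arg)
{
	pthread_mutex_lock(&priv_lock);
	walk_page_range();		/* pte_lock taken under priv_lock */
	pthread_mutex_unlock(&priv_lock);
	return arg;
}

/* The notifier path: PTE lock already held, then priv_lock (B, then A). */
void *invalidate_path(void *arg)
{
	pthread_mutex_lock(&pte_lock);
	pthread_mutex_lock(&priv_lock);
	/* ... unmap the affected grants ... */
	pthread_mutex_unlock(&priv_lock);
	pthread_mutex_unlock(&pte_lock);
	return arg;
}

/* After the patch: priv_lock is dropped before the walk, so it is
 * never held while pte_lock is taken and no lock cycle can form. */
void *mmap_path_fixed(void *arg)
{
	pthread_mutex_lock(&priv_lock);
	/* ... find the map and set map->vma to fend off racing users ... */
	pthread_mutex_unlock(&priv_lock);
	walk_page_range();
	return arg;
}

int main(void)
{
	pthread_t a, b;

	/* Substituting mmap_path_broken here can deadlock under contention. */
	pthread_create(&a, NULL, mmap_path_fixed, NULL);
	pthread_create(&b, NULL, invalidate_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("both paths completed; lock order is consistent");
	return 0;
}

The patch below applies exactly this change: the new spin_unlock(&priv->lock)
moves the apply_to_page_range() call outside the locked region, and the error
paths past it return directly instead of jumping to unlock_out.
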
Daniel De Graaf, 14 years ago
commit f0a70c882e
1 changed file with 7 additions and 2 deletions:
  drivers/xen/gntdev.c

--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -575,21 +575,26 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
 	if (!(vma->vm_flags & VM_WRITE))
 		map->flags |= GNTMAP_readonly;
 
+	spin_unlock(&priv->lock);
+
 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
 				  vma->vm_end - vma->vm_start,
 				  find_grant_ptes, map);
 	if (err) {
 		printk(KERN_WARNING "find_grant_ptes() failure.\n");
-		goto unlock_out;
+		return err;
 	}
 
 	err = map_grant_pages(map);
 	if (err) {
 		printk(KERN_WARNING "map_grant_pages() failure.\n");
-		goto unlock_out;
+		return err;
 	}
+
 	map->is_mapped = 1;
 
+	return 0;
+
 unlock_out:
 	spin_unlock(&priv->lock);
 	return err;
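
(The unlock_out: label is kept because earlier failure paths in
gntdev_mmap(), above this hunk, still jump to it with priv->lock held;
only the code past the new spin_unlock() now returns directly.)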